PMC11688121
Neuropsychiatric symptoms, also referred to as behavioral and psychological symptoms of dementia, are highly prevalent, affecting approximately 97% of individuals along the dementia continuum. Among these neuropsychiatric symptoms (NPS), psychotic symptoms, characterized by the presence of delusions and/or hallucinations, are a heritable trait found in 40%–60% of individuals with Alzheimer's disease (AD). A systematic review revealed that among AD patients with psychosis, 23% displayed delusions exclusively, 5% had hallucinations alone, and 13% experienced both delusions and hallucinations. Those with AD and psychotic symptoms typically experience more severe cognitive impairment and executive dysfunction and are prone to neuropsychiatric disturbances such as aggression, agitation, and depression. This results in a reduced quality of life, accelerated disease progression, higher mortality rates, increased economic burden, and heightened stress for their caregivers. Recently, Cummings et al. revised the diagnostic criteria for psychosis in individuals with major and mild neurocognitive disorders, addressing a critical need for clearer guidelines to aid clinical practice, research, and treatment development. The revised criteria incorporate feedback from global experts and expand upon previous definitions to improve the accuracy and applicability of diagnoses.

Research on neuroimaging findings in AD patients with psychosis has yielded somewhat inconsistent results. Additionally, the majority of previous studies have primarily focused on delusions rather than hallucinations or the combined presence of delusions and hallucinations. In terms of structural and functional magnetic resonance imaging (MRI) modalities evaluating volumetric measurements, AD patients experiencing delusions have shown increased atrophy in frontotemporal and hippocampal regions. Specifically, assessments of individuals with delusional misidentification syndromes have revealed parahippocampal atrophy. Gray matter atrophy in the cerebellum and parietal lobe, without frontal lobe involvement, has also been reported. However, a study using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) has suggested a potential link between posterior cortical atrophy, frontal circuits, and the default mode network (DMN) and delusional development in AD patients. On the other hand, another study using the ADNI data observed supramarginal atrophy of the parietal lobe when exclusively studying AD patients with hallucinations. Furthermore, in AD patients who exhibited both delusions and hallucinations, there was no evidence of any alteration in functional connectivity within the DMN.

Concerning imaging modalities measuring regional blood flow and glucose metabolism, such as single‐photon emission computed tomography (SPECT) and fluorine‐18 fluorodeoxyglucose positron emission tomography (18F‐FDG‐PET), the majority of studies have consistently identified hypometabolism and hypoperfusion patterns in the right frontal and temporal cortices. However, it is important to note that most of these studies have predominantly focused on delusions. More recent investigations using 18F‐FDG‐PET and SPECT have demonstrated specific findings, including hypometabolism in the orbitofrontal region and hypoperfusion in the right hemisphere encompassing the inferior temporal gyrus, parahippocampal cortex, posterior insula, and amygdala.
Bilateral hypoperfusion in the temporal poles has also been reported in these investigations. Furthermore, in the context of nuclear imaging, research focusing on AD patients with psychosis has been limited and often constrained by small sample sizes. Nonetheless, in one study, no link was found between the 5‐hydroxytryptamine 2A receptor (5‐HT2A) and psychosis symptoms. In contrast, another investigation exploring dopamine receptors revealed an elevated number of striatal D2/3 receptors in AD patients who were experiencing psychosis. Chronic psychosis‐related stress can induce neuroinflammation, contributing to AD progression by promoting amyloid‐beta (Aβ) plaque and tau tangle formation. Conversely, AD‐related neurodegeneration, particularly in the frontal and temporal lobes, can lead to impaired cognitive control, manifesting as psychosis. Additionally, shared genetic factors and vascular dysfunction might predispose individuals to both conditions, further linking psychosis with AD onset and progression. Given the lack of definitive findings and recognizing the importance of studying both individual and combined aspects of delusions and hallucinations, our primary aim was to systematically review neuroimaging findings of delusions and hallucinations in AD patients and to describe the most prominent neuroimaging features. Our results may help clinicians better diagnose AD patients with hallucinations, delusions, or psychosis and predict their future clinical symptoms.

The present study was conducted following the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) statement. In June 2023, we performed a comprehensive search in three online databases (PubMed, Scopus, and Web of Science) using the following terms: "Alzheimer, psychosis, hallucination, Charles Bonnet syndrome, delusion, positron emission tomography, PET, beta‐amyloid, amyloid, amyloid‐β, amyloid deposition, PiB, Pittsburgh, florbetapir, flortaucipir, tau, tau deposition, FDG, fluorodeoxyglucose, DTI, diffusion tensor imaging, microstructure, anisotropy, diffusivity, functional magnetic resonance imaging, functional MRI, fMRI, rsfMRI, resting‐state fMRI, brain mapping, structural MRI, voxel‐based morphometry, gray matter, white matter, VBM, MRI, magnetic resonance imaging, atrophy, hippocampus, EEG, electroencephalography, electroencephalogram." Additional studies were identified via a manual search of reference lists. The full search strategy is available in Supporting Information 1.

We included studies that reported neuroimaging features of AD patients with delusions, hallucinations, or psychosis and had a minimum sample size of five. We excluded reviews, case reports, letters, and non‐English studies. Two independent researchers (S.S., S.M.) first screened titles and abstracts. The same researchers then reviewed the full text of the remaining studies to identify eligible ones. Any disagreements were resolved by consulting a third reviewer (F.N.). Two investigators independently extracted the required data using a predesigned sheet. The following data were extracted: author, year of publication, study design, sample size, age, gender, AD diagnosis criteria, MMSE score, type of psychosis symptoms, number of patients with psychosis symptoms, and findings. We used the Newcastle‐Ottawa scale (NOS), with a highest possible score of 8, to assess the quality of the included studies.
After duplicate removal, a total of 774 studies were identified via database search and manual addition. After title and abstract screening, 586 studies were excluded, and the remaining 188 studies underwent careful full‐text review. Finally, 34 studies were eligible for inclusion in our qualitative synthesis. Among the included studies, 22 were cross‐sectional and 12 were longitudinal (Table 1). The total number of AD patients was 2241, with mean ages ranging between 60 and 82. The quality of the included studies as assessed by the NOS was acceptable, with a mean score of 7.08 (Supporting Information 2).

AD with psychosis was initially linked to a reduced size of the right hippocampus irrespective of frontal region size. A longitudinal study of 109 patients with AD showed a reduction in the cortical volume or thickness of the medial temporal lobe (MTL) in AD patients with psychosis (Table 2). Another longitudinal study of 42 AD patients showed the same results: AD patients with psychosis exhibited greater atrophy in the right inferior temporal lobe (fusiform gyrus) as well as a greater rate of atrophy in the right insula compared with nonpsychotic AD patients. AD confirmation in the first study was based on NINCDS/ADRDA and in the second on NIA‐AA. A cross‐sectional study using structural MRI and diffusion tensor imaging (DTI) in 2015 of 58 patients with AD (mean age: 71, female: 63.7%, MMSE score: 19, AD confirmation: NINCDS/ADRDA) demonstrated various connections between microstructural damage of white matter in four divisions of the corpus callosum and the atrophy of gray matter in AD patients with psychotic symptoms.

Three studies reported structural deviations in the frontal and temporal lobes in AD patients with delusions. The first was a longitudinal study in 2012 of 53 AD patients (mean age: 78, female: 73.5%, MMSE score: 21.1, AD confirmation: NINCDS/ADRDA), of whom 66% were AD patients with psychosis. This study revealed a potential association between delusional symptoms in AD patients and structural deviations in the frontal lobes and MTLs. The second, in 2018, was a cohort study of 59 patients with AD (mean age: 78, female: 80%, MMSE score: 18.0), 40% of whom suffered from delusions. Neuroimaging in those who developed delusions demonstrated a decreased volume of gray matter in the frontal region, but both groups revealed a decrease in gray matter within frontotemporal brain regions over time. The third study, in 2008, of 31 patients with AD (mean age: 77, female: 38.7%, MMSE score: 23.3, AD confirmation: NINCDS‐ADRDA) revealed a significant correlation between delusion severity and decreased gray matter density in the right inferior parietal lobule (IPL) and inferior frontal gyrus as well as the left claustrum and medial and inferior frontal gyri. Moreover, a study by Manca et al. found that AD patients who developed delusions had greater GM volume loss in both the left and right caudate nuclei. Longitudinal analysis further showed greater GM loss in delusional patients in the bilateral medio‐temporal ROIs (bilateral parahippocampal gyri and left hippocampus), in the right anterior cingulate cortex, and in posterior hubs of the DMN (bilateral precuneus and left posterior cingulate cortex). Furthermore, another study demonstrated higher white matter hyperintensity (WMH) volume in the left occipital lobe in AD patients with delusions.
Although the above three studies showed structural changes in the frontal and temporal lobes in AD patients with delusions compared to patients without delusions, a study in 1995 found no difference in structural brain changes between Alzheimer's patients with and without delusions. This was a cross‐sectional study of 32 patients with AD (mean age: 78, female: 65.6%, AD confirmation: DSM‐III‐R and CERAD).

Some studies investigated structural brain changes in Alzheimer's patients according to the type of delusion. A longitudinal study in 2017 investigated structural brain changes associated with specific psychosis subtypes (nonpsychotic, paranoid, misidentification, mixed) in AD patients. Psychotic symptoms were associated with a decrease in left parahippocampal gyrus volume, and a significant reduction in left parahippocampal volume was seen in patients with the mixed and misidentification subtypes in comparison with the paranoid and nonpsychotic groups. A cross‐sectional study in 2011 of 113 AD patients (mean age: 75, female: 65.4%, MMSE score: 19.3, AD confirmation: DSM‐IV and NINCDS/ADRDA) revealed a reduction in cortical thickness within the left superior temporal and left medial orbitofrontal regions in female patients with paranoid delusions compared to male patients with delusions. A cohort study in 2019 using fMRI in 30 patients with AD (mean age: 76, female: 63.3%, AD confirmation: NIA‐AA) demonstrated a significant reduction in connectivity between the left IPL and the remaining regions within the DMN in AD patients with delusions.

A cross‐sectional study in 1995 revealed an association between hallucination symptoms and overall brain atrophy and right and left lateral ventricle size. Patients with AD who experienced visual hallucinations showed a significantly decreased occipital/whole‐brain ratio compared to AD patients without visual hallucinations. Another study demonstrated that AD patients with visual hallucinations had a higher occipital periventricular hyperintensities score in comparison with those without visual hallucinations. In a cross‐sectional study using CT (mean age: 76, MMSE score: 22, AD confirmation: NINCDS‐ADRDA), AD patients with delusions demonstrated a larger right temporal horn and a larger left frontal horn. A longitudinal study in 2011 of 259 AD patients (mean age: 75, female: 67.9%, MMSE score: 21.4, AD confirmation: NINCDS/ADRDA) revealed that delusions and hallucinations were significantly associated with lacunes and changes in the basal ganglia on the left side.

SPECT analysis in AD patients revealed decreased perfusion in the right inferior temporal regions and inferolateral prefrontal cortex in female AD patients with psychotic symptoms in comparison with female AD patients without psychotic symptoms. Conversely, higher perfusion in the right striatum was exhibited in male AD patients with psychosis in comparison with male AD patients without psychotic symptoms. Most studies show hypoperfusion in different regions of the right hemisphere in Alzheimer's patients with delusion symptoms; these studies are reviewed below. A cross‐sectional study of 33 patients with AD revealed a correlation between delusion symptoms and hypoperfusion in the right anterior hemisphere, but no distinct focal regions were indicated.
Another study also exhibited significant hypoperfusion within different brain regions of the right hemisphere, including the anterior cingulate gyri, parietal cortex, prefrontal cortex, and inferior to middle temporal cortex. A further cross‐sectional study revealed hypoperfusion in temporal regions: there was significant hypoperfusion in the bilateral temporal lobes, with a dominance toward the right hemisphere, specifically in the right amygdala and right inferior temporal gyrus, in comparison with AD patients without delusions. AD individuals with delusions of theft exhibited significantly reduced cerebral blood flow in the right medial posterior parietal region in comparison with patients without delusions. In a cross‐sectional study, AD patients with delusions revealed significant hypoperfusion in the right anterior insula. A group of AD patients with autobiographic delusions revealed a notable region of hypoperfusion in the right frontal lobe compared to AD patients without delusions and AD patients with a range of delusions but no autobiographic content; this region of hypoperfusion covers specific parts of Brodmann's areas 9 and 10. In another cross‐sectional study, eight types of delusions were classified by factor analysis and evaluated with a neuropsychiatric inventory. Delusional beliefs about home, the phantom boarder symptom, abandonment, and misidentification were correlated with perfusion changes in the right temporal pole, medial frontal, and precentral regions. Delusions regarding television and persecution corresponded with perfusion changes in the precuneus, insula, and thalamus. Delusions of abandonment and jealousy were related to perfusion changes in the right inferior temporal and frontal regions, middle frontal gyrus, insula, and posterior cingulate gyrus.

Although most studies indicated right‐hemisphere involvement in Alzheimer's patients with delusions, in some studies hypoperfusion was observed in the left hemisphere. In one study, AD patients with delusions revealed hypoperfusion in the left frontal lobe relative to the right frontal lobe (Table 3). Patients presenting with hallucinations revealed hypoperfusion in the parietal lobe. According to the type of hallucination, AD patients who exhibited symptoms of visual and auditory hallucinations revealed an imbalanced dysfunction in the frontal lobes as well as associated subcortical and parietal structures.

PET imaging analysis in AD patients with psychosis exhibited increases in tau pathology in the occipital, frontal, and medial temporal cortices compared to AD patients without psychotic symptoms (a cross‐sectional study of 67 AD patients, mean age: 79.2, female: 40.2%, MMSE score: 20.9, AD confirmation: NIA‐AA). Studies have shown that Alzheimer's patients with delusions have dysfunction in different portions of the frontal and prefrontal cortex. In a cross‐sectional study using FDG‐PET, AD patients with delusions showed dysfunction within specific regions of the frontal and temporal cortices compared to AD patients without delusions: they revealed reduced metabolic activity in the right orbitofrontal cortex, lateral frontal cortex, and bilateral temporal cortex.
Another study using FDG‐PET imaging revealed that the manifestation of delusions among AD patients was correlated with hypometabolism in two distinct regions of the right prefrontal cortex: the inferior frontal pole (Brodmann's area 10) and the superior dorsolateral area (specifically the lateral aspect of Brodmann's area 8). Notably, this study did not compare AD patients with delusion symptoms to AD patients without delusion symptoms. Its result was in accordance with that of another study using SPECT imaging, which revealed hypoperfusion in specific parts of Brodmann's areas 9 and 10 in AD patients with delusions. Moreover, in a cross‐sectional study, AD patients with delusion symptoms had a significant increase in glucose metabolism in the left inferior temporal gyrus, whereas glucose metabolism was reduced in the left medial occipital region in comparison with those without delusions.

A cohort study of 173 AD patients (mean age: 69.4, female: 10.9%, MMSE score: 24, AD confirmation: NINCDS‐ADRDA) revealed that AD patients with hallucinations had lower peak frequency, α2‐functional connectivity, and α2‐ and β‐power but higher δ‐power compared to AD patients without hallucinations. Another study showed that AD patients with hallucinations had slightly higher ventricle‐brain ratios on CT scans and higher mean theta and delta powers on EEG. A cross‐sectional study using MRS in AD patients with psychosis demonstrated a significant reduction in N‐acetyl‐L‐aspartate and significant increases in glycerophosphoethanolamine. These results suggest that excessive deterioration of synaptic integrity and neocortical neurons may be a substrate for psychosis in AD patients.

The current review investigated neuroimaging findings in AD patients suffering from psychosis, hallucinations, and delusions. We reviewed studies using several imaging methods, including MRI, CT, SPECT, PET, MRS, and EEG. On the basis of the previous studies, there are significant changes in the volume and perfusion levels of broad brain areas, including the hippocampus, amygdala, insula, cingulate, occipital, frontal, prefrontal, orbitofrontal, temporal, and parietal cortices in these patients. Moreover, AD patients with psychosis, hallucinations, or delusions exhibited different EEG waves compared to AD patients without these disorders.

The accumulation of Aβ plaques and synapse loss are hallmark underlying mechanisms of AD. Recent studies have reported numerous neuropathological mechanisms for psychosis in AD, which mostly appear more severe than in patients with AD alone. Reduced amounts of Aβ1‐40 were reported in AD patients suffering from psychosis, revealing an increased Aβ1‐42/Aβ1‐40 ratio in the prefrontal cortex of these patients. Moreover, increased neurofibrillary tangle (NFT) area density was reported in the neocortex of patients with AD and psychosis. Loss of synaptic function, considered one of the most important underlying factors in AD, seems to be more severe in AD patients with psychosis. These patients showed higher nucleus accumbens dopamine D3 receptor density but lower serotonin (5‐HT) in the ventral temporal cortex compared to patients with AD only.

The MTL is a part of the brain that plays an important role in episodic memory. This region is the first brain area affected in AD, reflecting volume atrophy and NFT accumulation.
The MTL consists of several subregions, such as the hippocampus, which form two general networks: the anterior‐temporal (AT) and posterior‐medial (PM) networks. Flores and colleagues reported different patterns of alteration in the AT and PM networks in AD patients: their study showed higher tau uptake but lower amyloid uptake in the AT network compared to the PM network. Moreover, decreased functional connectivity in the temporal regions has been shown to explain auditory hallucinations as well as psychosis. Our review, drawing on various imaging methods, identified MTL alterations, including atrophy and hypoperfusion, in AD patients with psychosis or delusions.

The frontal lobe, an essential part of the human brain, is involved in various cognitive processes, including long‐term planning, short‐term memory, and self‐reflection. Some studies have suggested that white matter alterations in AD patients can result from vascular impairments such as ischemia in the frontal lobe. Farber et al. reported a more prominent role for NFTs in the frontal area of these patients. It has been shown that female psychotic AD patients, unlike male patients, may present more rapid tau accumulation in the frontal lobe compared to nonpsychotic AD patients. This study also revealed α‐synuclein‐related pathology in the frontal lobe of male psychotic AD patients. Previous studies demonstrated that α‐synuclein pathology is significantly correlated with delusions and dementia with Lewy bodies (DLB). Incorporating Lewy body (LB) co‐pathology into the understanding of psychosis in AD is crucial, as it significantly influences both the clinical presentation and neuroimaging findings. LB pathology, often present in AD, exacerbates neuropsychiatric symptoms, particularly hallucinations and delusions, by contributing to more pronounced atrophy in frontal and temporal regions. This co‐pathology may also explain some inconsistencies in imaging studies, as patients with LB co‐pathology exhibit distinct atrophy patterns compared to those with pure AD. Recognizing the role of LB co‐pathology is essential for accurate interpretation of neuroimaging results and for tailoring clinical interventions. The results of our review showed alteration of the frontal lobe to be one of the most common findings in AD patients with psychosis. Moreover, structural alterations and dysfunction of the frontal lobe were reflected in AD patients with delusions and hallucinations.

A notable observation is that many studies report findings predominantly in the right hemisphere, including regions such as the right prefrontal cortex, right inferior temporal gyrus, and right parietal lobe. This may suggest a lateralized vulnerability to psychosis‐related symptoms in AD, particularly in the right hemisphere. Although less frequent, there are significant findings in the left hemisphere, especially in cases involving delusions, with reductions in glucose metabolism or structural changes in the left parahippocampal gyrus, frontal lobe, and occipital regions.

The prefrontal cortex is another brain area with an important impact on cognitive processes. Dolotov et al. reported dysfunction of astrocytes and atrophic alterations in the prefrontal cortex as vital underlying mechanisms in both AD and depression.
Moreover, previous studies revealed a significant correlation between atrophy of the prefrontal cortex, as well as the MTL, and dementia, which can be a strong predictor in patients with AD or depression. Reduced prefrontal thickness has also been reported as an indicator of negative symptom progression and cognitive impairment in patients with early psychosis. Some studies suggested an increased amount of phosphorylated tau in the dorsolateral prefrontal cortex (DLPFC) in AD patients with psychosis compared to AD patients without psychosis, and similar results were reported for NFT accumulation in this area in psychotic AD patients. Our review indicated dysfunction and decreased perfusion in the prefrontal region of AD patients with psychosis and delusions on SPECT and PET imaging.

The parietal lobe, especially the left parietal lobe, is engaged in social cognition and language tasks. Recent studies revealed a significant decrease in the metabolism of the parietal lobe in AD patients, and this hypometabolism seems to be more severe in the medial parietal regions. Moreover, some studies reported reduced volume of the inferior parietal lobe in the early stages of AD. Borgwardt et al., in a longitudinal study, showed a significant reduction in the gray matter volume of the parietal lobe, particularly the medial and superior parietal lobe, in patients with psychosis. In addition, a significant correlation was shown between impairments of functional connectivity in the parietal memory network (PMN) and auditory hallucinations in patients with schizophrenia. Our review suggested hypoperfusion and dysfunction of different parietal areas as an underlying mechanism in AD patients with delusions and hallucinations.

The most consistent finding across studies using MRI is the association between psychosis (particularly delusions and hallucinations) and structural abnormalities in the frontal, temporal, and parietal lobes. Specifically, atrophy or decreased gray matter volume in regions such as the inferior frontal gyrus, medial temporal regions, and parietal lobes has been frequently reported. Moreover, several studies point to a reduction in hippocampal and parahippocampal volumes, particularly in the right hemisphere, as being linked to the emergence of psychotic symptoms. These findings suggest that the neurodegenerative process in AD particularly affects regions involved in cognitive control, memory processing, and perception, which could contribute to the development of psychosis. Consistent with the MRI findings, SPECT studies reveal a pattern of reduced cerebral perfusion in regions corresponding to the frontal, temporal, and parietal cortices in AD patients with psychosis. In particular, hypoperfusion in the right frontal lobe and the temporal‐parietal junction appears to be a recurrent finding. These areas are implicated in executive function and sensory integration, which might explain the occurrence of delusions and hallucinations in this patient population.

This review has some limitations worth mentioning. First, most included studies lack longitudinal evaluation to provide sufficient evidence over time. In addition, sex differences in AD patients with psychosis, hallucinations, or delusions were not assessed, so alterations that differ between the two sexes could not be shown. Moreover, there were no adjustments for medications used by patients with psychosis symptoms.
There was also a lack of DTI and functional MRI studies to capture more information on the changes in tracts and connections responsible for psychosis, hallucinations, or delusions in AD patients. There was notable variability across studies regarding the specific regions implicated and the extent of the changes observed. Methodological differences, such as variations in imaging techniques, the severity of AD among patient populations, and differences in how psychosis is defined and measured, likely contribute to these inconsistencies. The heterogeneity of psychosis symptoms in AD, ranging from delusions to hallucinations with different underlying mechanisms, might also explain why some studies emphasize different brain regions. For instance, hallucinations are often linked with occipital lobe abnormalities, which are less consistently reported in studies focused on delusions. Sample size and demographic factors, such as age, sex, and the presence of comorbidities, might also account for the variability in findings. Smaller studies might be underpowered to detect certain effects, whereas differences in patient characteristics can introduce variability.

In conclusion, although there is consistent evidence that frontal and temporal lobe abnormalities are associated with psychosis in AD, the exact patterns of these abnormalities can vary depending on the symptom profile and the imaging modality used. The findings suggest that psychosis in AD may result from a complex interplay of structural, functional, and neurochemical alterations primarily involving the frontal‐temporal network. Our review provided evidence about neuroimaging alterations in AD patients suffering from psychosis, hallucinations, and delusions across different imaging methods. AD patients with psychosis, hallucinations, or delusions show significant differences in the volume and perfusion levels of various brain regions, along with alterations in EEG waves and biological molecules, compared to patients with AD only. Further studies with larger sample sizes are needed to investigate the tracts and connections affected in AD patients with psychosis, hallucinations, or delusions.

Fardin Nabizadeh: conceptualization, investigation, writing–original draft, visualization, methodology, writing–review and editing, validation, software, formal analysis, project administration, resources, supervision, data curation. Shadi Sheykhlou: data curation, resources, investigation. Sara Mahmoodi: investigation, data curation. Elham Khalili: writing–original draft, writing–review and editing, formal analysis. Rasa Zafari: formal analysis, writing–review and editing, writing–original draft. Helia Hosseini: writing–review and editing, writing–original draft.

This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors. This manuscript has been approved for publication by all authors. The authors declare no conflicts of interest. The peer review history for this article is available at https://publons.com/publon/10.1002/brb3.70205
PMC11688124
Conceptual frameworks represent ways of thinking about a problem or a study 1 and provide the justification for why a given study should be conducted. 2 In essence, they articulate the rationale for a study. Specifically, the conceptual framework argues why the study is important, outlines what the study aims to do, and describes how the study aims will be achieved. The ability to describe why we embark on an educational research project and how to communicate the topic's importance beyond the local context are fundamental aspects of educational scholarship. Although ideas for educational studies abound in neurology, careful consideration must be given to how those ideas are translated into impactful educational research projects. A clear description of the rationale for neurology education research studies will allow the field to move beyond descriptive studies of educational interventions (e.g., showing that a curriculum was well received by students or that it resulted in an improved post-test score for a given topic) and focus on more foundational questions, such as why and how the educational intervention works or in what way the learning happens. This article will discuss conceptual frameworks and provide practical examples centered around neurology education to introduce the reader to this foundational educational research concept.

There are important early steps to take in building the arguments that support the rationale when embarking on an education research study (Table). These steps facilitate the formulation of the study's conceptual framework. From a practical standpoint, the conceptual framework should be described in the introduction and methods sections of the article. There are many useful approaches to guide educational researchers in describing their conceptual framework; here, we focus on the problem-gap-hook heuristic. 3 Such a heuristic helps organize a study's rationale by briefly describing an educational problem, summarizing the available literature and knowledge gaps in the area of study, and making the arguments for why narrowing the gap is important. This approach calls for considering the context of a broader scientific conversation we are trying to join and contribute to with the research study. Knowing what is being discussed is essential to understand how to move the conversation forward.

Below are 3 practical examples of how to apply a conceptual framework in different research scenarios commonly encountered in neurology education settings. The examples start with identifying an educational topic of research, which typically emanates from observations during the day-to-day educational activities the researchers are involved in through their work. After the researcher identifies a specific topic, the first step is to clearly state the problem that is being studied and the reasons for its importance. A problem statement is not a description or definition of an educational concept but focuses on an important and specific educational problem that warrants further exploration, examination, or intervention (problem). The second step is to review the literature on this topic to determine whether there are gaps, unanswered or partially answered questions that align with the local problem (gap). The next step is to identify and apply an appropriate conceptual framework that communicates the researchers' purpose, approach, and paradigmatic orientation.
This could include using a theory, an established model, guidelines, or best practices, any of which could provide the foundation underpinning the study. The researchers must then make the argument that their research will indeed fill a knowledge gap and advance the field (hook). Finally, the study aims and questions or hypotheses evolve through a combination of the problem statement, the gap, and the conceptual framework. As a point of distinction, theoretical frameworks relate to the use of theory to inform a given study, whereas conceptual frameworks relate to the overall arguments made justifying the study 2 (these arguments can include how a given theory or theories or models would be used in the study).

A Neurology Residency Program Director (PD) meets with residents in her program at the end of the academic year to discuss potential changes to improve resident education. The residents reflected on their experience receiving feedback and identified a desire to improve the quality of the feedback they receive from the supervising faculty at the end of their rotations. The PD wonders if there might be a broader research opportunity in finding a solution for the specific problem at her institution. More specifically, this project could focus on issues related to resident feedback in the local context, yet the results could provide insights for other neurology programs designing feedback initiatives for their residents. The first step of the framework-building process has been formulated: the residents identified a need to improve feedback quality based on their lack of satisfaction with their faculty feedback. The problem to address in this case is the perceived low quality of feedback.

When thinking about the phenomenon of feedback between supervising faculty and physician trainees, the PD quickly realizes that effective feedback is an inherently complex and dynamic interaction. There is no single recipe for what constitutes effective feedback that can apply to all contexts. Therefore, some elements of her framework should align with assessing the quality or effectiveness of feedback, while acknowledging that the elements she chooses may differ from other projects examining feedback. Her other insight is that the feedback problem could be framed in multiple ways because there are many potential research questions around feedback. She realizes that feedback is multifaceted. Some aspects of feedback analysis may focus on the content or setting of feedback delivery, such as its verbal content (what information is conveyed by the feedback?), how it is communicated (what communication strategies are used to convey that information?), or the context in which the conversation is situated. Other aspects may focus on the experience of the recipient or provider, such as the readiness of the learner to receive feedback, faculty attitudes toward feedback (how do the faculty experience the process of giving feedback?), emotional responses to feedback on either side, or the ability of the learner to understand what was communicated. Feedback may also be examined by its process, outcomes, or impact, asking how or why feedback is or is not effective. Such questions can be addressed by examining the behavioral changes induced within its recipient (did the learner act differently as a result of the feedback?) or the personal response of the recipient (what impact did the feedback have on the learner's professional identity formation?).
These are all potential avenues to be considered while defining a clear problem statement for the research study. Narrowing the scope of the question from "a research project on feedback" to "a research project on a specific aspect of feedback" helps better define what she intends to study. The PD then considers the relevance of her problem by asking why she should conduct a study on feedback involving neurology residents in the first place. She plans her literature search with the help of her institution's librarian. She also discusses with her neurology PD peers in the American Academy of Neurology (AAN)'s Consortium of Neurology Program Directors (CNPD) whether there are issues around feedback encountered at their residency programs (to assess the relevance and potential transferability of her study findings).

A potential research project on feedback involves joining a broader scientific conversation and filling a gap in the literature while staying relevant to the local problem. Finding a gap in the literature requires a careful search and review of the studies around the topic of interest (to determine the existing body of knowledge the study will contribute to). The PD conducts the planned literature search to understand the current state, available evidence, and potential gaps and to understand the theories and theoretical frameworks used by other educators to study feedback. This step helps her move beyond writing a study about a specific problem for the residents at her institution with a narrow scope of interest for others. It allows for framing the research question in the broader context of feedback-related scholarship. Upon examining the different types of questions and theoretical frameworks researchers have used to study feedback, she has more clarity on the research question she wants to ask in her study. After defining the problem and its relevance and reviewing the articles from her literature search, she decided the study would focus on neurology residents' perceptions of feedback informed by the theoretical framework of educational alliance.

Theoretical frameworks define the use of a theory or theories to ground the study. 2 In this case, the elements of the educational alliance framework will shape the way the research study will be designed and interpreted. The educational alliance 4 framework borrows from the therapeutic alliance framework in psychotherapy, which states that behavioral change in a patient undergoing psychotherapy is influenced by 3 key components of the therapist-patient relationship: mutual understanding of the therapy's purpose; agreement about how to reach a goal; and the patient's liking, trusting, and valuing of the therapist. Thus, the analogous components of an educational alliance may focus on the learner's perception of the purpose of the feedback, agreement on a specific learning goal during the feedback conversation along with action plans to achieve the goal, and the relationship qualities between the learner and supervisor. The PD formulates a specific research question: what are the perceptions of neurology residents regarding the purpose of the feedback they receive? She plans to investigate the topic guided by the theoretical framework of educational alliance and continues to build the rationale for her study by choosing the methodology and methods that will best help her address her research question.
Given the specific aspect of feedback she aims to study (trainees' perceptions of the feedback they receive) and the social aspects of the educational alliance theoretical framework underpinning her study, she opts for qualitative methodology because it could yield richer information about the feedback experience than quantitative methodology. Owing to her lack of experience in qualitative research, she reaches out to a qualitative research expert at her institution for advice and potential collaboration. Together, they plan the key research steps, such as sampling, data collection, analysis, and reporting. They conclude that semistructured interviews would be the best data collection method and discuss with members of the CNPD the possibility of involving multiple sites in study recruitment.

The Dean of Assessment at a medical school has received evaluation data from graduates who felt underprepared in hands-on medical procedures. She noticed that lumbar puncture is one of the procedures highlighted in the survey. She meets with the Neurology Clerkship Director, asking him to develop and evaluate an educational program for teaching bedside lumbar puncture to the neurology clerkship students. The Neurology Clerkship Director is aware that it would not be feasible in the clinical environment for all clerkship students to practice or perform actual bedside lumbar punctures, given the variability of clinical experiences across clinical sites. He wonders if, in the process of finding a solution to the specific problem at his institution, there is a broader opportunity to conduct an educational research project on lumbar puncture instruction to provide insights for other neurology clerkship programs training their students in this procedural skill.

The first step is to define the problem. In thinking about the phenomenon of procedural skill development, he remembers that lumbar puncture is a skill described as an objective in the AAN's Core curriculum guidelines for a required clinical neurology experience. 5 Thus, this problem is relevant for neurology clerkships nationwide and for educators facing similar challenges in providing opportunities for procedural skill development. The next step is to identify what is involved in teaching the procedural skill of lumbar puncture. In contrast to the feedback case above, we can assume there would be less deviation in how we define what constitutes an effective lumbar puncture (i.e., performing an effective lumbar puncture is less context dependent because there is more agreement around the steps required to perform one and around determining when it is effective). Nevertheless, questions around lumbar puncture skills training can be formulated in several ways: what are the perceptions of students on the need to learn lumbar punctures? What are the perceptions of the faculty on the need to teach the skill to students vs resident trainees? What are the most common aspects of the procedure where students encounter difficulties? What is the rate of knowledge decay around procedural skills? What effective instructional methods can be used to train many students in lumbar puncture? What is the best way to assess successful attainment of the lumbar puncture skill? He decides to concentrate on developing an instructional program to teach students lumbar puncture and assess their skill in a controlled setting, given the variability of exposure in the clinical environment.
The Neurology Clerkship Director reviews multiple research articles about lumbar puncture as a procedural skill and the various instructional strategies used to teach and assess it. Because he is focused on instruction, and further on how to plan instruction systematically for a large number of participants, he finds good alignment between his research goals and a study published in Neurology 6 that described simulation-based instruction in lumbar puncture for internal medicine and neurology residents. After defining the problem and its relevance and reviewing the literature on lumbar puncture instruction, he searches for theoretical frameworks to underpin his study. The simulation curriculum he plans to implement relies on simulation-based mastery learning (SBML) principles. SBML integrates 3 learning theories: behavioral learning theory, constructivist learning theory, and social cognitive learning theory, 7 in addition to deliberate practice. 8 He sees that behavioral learning theory in SBML is also associated with prolonged skill retention after learning and has been shown to improve advanced cardiac life support skills even after a 1-year delay. 9 Thus, although SBML has informed his lumbar puncture curriculum on the immediate acquisition of skills, using SBML to develop enduring lumbar puncture skills has not yet been explored. To fill this knowledge gap, he decides to study his curriculum by assessing not only immediate improvements in students' lumbar puncture skills but also reassessing them in another simulation-based assessment after a delay of 3 months. Now that he has chosen an instructional program to develop students' lumbar puncture skills informed by the theoretical framework of SBML and deliberate practice, he will assess the immediate and retained skills using a simulation-based assessment with a checklist, which represents a quantitative methodology. He starts planning his lumbar puncture simulation, learner assessments, and program evaluation.

The preclinical Neuroscience Course Director meets with the Neurology Vice-Chair for Education to share faculty evaluation data from students describing low satisfaction with the neurology faculty lecture sessions for the neuroscience block. They discuss potentially developing a faculty development program to help improve the lecturing skills of the faculty. While finding a solution to the specific problem at their institution, they wonder whether there might be a broader opportunity to conduct an educational research project on faculty development for preclinical neuroscience course teachers to provide insights to other institutions on developing and implementing lecturing skills.

The first step is to define the problem. In thinking about lecturing skills, the Course Director realizes that she first has to review the goals and objectives of the neuroscience course and determine which objectives map to each of the faculty lectures. This way, she has specific information about what each lecture should achieve. Having a clear idea of the objectives for each lecture allows her to determine what it means to have an effective lecture anchored in objective data and not solely based on subjective perceptions of learners. She also has to consider the experience of the faculty giving the lecture (i.e., go beyond the student evaluations) to assess whether the issue is about the lecture content, the content delivery, or the misalignment of educational expectations.
When thinking about an approach to the problem, she comes up with several questions: what is the lecture content? Is the content aligned with prior sessions and course learning objectives? How are the lectures delivered (e.g., recorded, live, or hybrid)? What factors influence the perceived benefit of the lectures, for example, timing, required attendance, and lecture length? Is there a misalignment between the educational expectations of faculty and students? To assess the transferability of her findings to other contexts outside of her institution, she reaches out to several neuroscience course directors to ask if they have experienced similar problems with the lecture portions of the neuroscience course. She finds it is a common problem for others and thus relevant to address in a research project.

The Course Director identifies considerable debate in the literature regarding the adequacy of student evaluations of teaching used solely for definitive faculty assessment. 10 This strengthens the idea to focus her study on the development and implementation of a program that involves restructuring the lectures themselves (content and methods of delivery) rather than the perceptions of faculty teaching by learners or the faculty lecturing skills (which was the original intent of the initiative). She then reviews the literature to understand the theoretical frameworks that could be used to inform her study. She reads with interest "The Future of the Lecture in Neurology Education," 11 which describes different learning theories that could help inform her study design and implementation. After defining the problem and its relevance and reviewing the literature, she decides the study will focus on developing a system to structure the lecture content and delivery across a neuroscience course to align with and reinforce the different course objectives. In designing the program, she draws from cognitive load theory 12 to inform the structural aspects of the lecture, such as timing, lecture content (e.g., using clinical cases as schemas to simplify content, lecturing over image-based rather than text-based slides to minimize conflicting processing of information), and content delivery sequencing. She also informs her implementation plan using design-based research principles. 13 Once the Course Director has decided to develop a structure to modify lectures in a neuroscience course informed by cognitive load theory and underpinned by principles of design-based research, she goes forward with a program evaluation plan to judge and plan improvements for her curriculum redesign project. 14

Substantial time and effort should be dedicated to the questions of what we are doing, why we are doing it, and how we plan to do it before embarking on an educational research study. As highlighted in this article, building a conceptual framework ensures that a research study is adequately conceptualized, supported, and rigorously designed. As illustrated in the cases above, the groundwork underpinning a study involves clearly defining the problems being addressed through educational research; situating the research within a broader scholarly conversation, including defining how the research study will contribute to the field by a clear outline of the gap in the literature it intends to fill; formulating specific research questions; deciding on theoretical frameworks or models informed by a careful review of the literature; and finally choosing the appropriate methodology and methods for the research questions.
The examples demonstrate how educational research questions commonly originate from the local learning environment, derived from real-life, concrete educational problems. This pragmatic approach allows the research to have practical applications. Still, it can also rush the process by pushing educators to find a solution before adequately thinking about the problem beyond the local context and how to study it through rigorous educational research with the goal of having broader relevance in the field. In addition, education research often settles on studies of skill- or behavior-based outcomes without asking why and how these outcomes occur. The solutions to local educational problems are also often too narrow and context specific (i.e., only applicable to the specific educational problem at the institution). Educational scholarship should strive to move beyond local education quality improvement projects and aim to join a broader scholarly conversation, which gives it more relevance and enhances transferability to other contexts beyond the local settings. 15

Using the problem-gap-hook heuristic 3 is a helpful way to organize the description of the conceptual framework of a study and facilitates thinking about it intentionally in the early steps of the research process. A clear problem-gap-hook description should anchor the first paragraph or two of the introduction section of education scholarship papers. 16 The scientific story starts in the introduction and must be brought to a close in the discussion section. 17 This means that the conceptual framework(s) used to provide the rationale for the study need to be clearly described in the introduction, with a closing of the loop linking them to the results in the discussion section. Authors often reach out to peers, in addition to the literature review, to assess the potential transferability of prospective study findings to contexts outside their own. Multiple educational forums for idea generation and refinement are available in the neurology field, such as the Consortium of Neurology Clerkship Directors and the CNPD. In addition, the education sessions at neurology professional society meetings are valuable opportunities to discuss and start new collaborations to strengthen the scope and reach of potential education research studies.

In summary, building a solid conceptual framework lays the groundwork to embark on the arduous work of developing and carrying out an educational research study. The critical steps of this groundwork involve defining the educational problem being addressed and the relevance of the problem beyond the local context, carefully reviewing the literature to understand the scholarly conversation around the topic, formulating clear research questions, identifying the theoretical frameworks that inform the study, and finally choosing the appropriate methodology and methods to answer the research questions with rigor. We can be reassured that we are on the right path if we deliberately lay a solid foundation for our future educational research work.
PMC11688129
Antiphospholipid antibody syndrome (APS) is an autoimmune disease characterized by antiphospholipid antibodies (aPLs) and vascular thrombosis or obstetrical complications. The main target antigen in APS is β2-glycoprotein I (β2-GPI), a plasma glycoprotein participating in coagulation and complement regulation. aPLs bind to the cell membrane through β2-GPI and subsequently elicit intracellular signal transduction, resulting in activation of the complement [4–6] and coagulation cascades. Autoimmunity toward β2-GPI is speculated to play a key role in the pathogenesis of APS.

Among the severe manifestations of APS, nephropathy is an important one. One-third of APS patients develop nephropathy, as revealed by biopsy [7–9]. Such nephropathy is histologically characterized by thrombotic microangiopathy and chronic vascular lesions such as fibrous intimal hyperplasia of interlobular arteries, fibrous occlusions, and recanalized thrombi in arteries and arterioles. The end result is acute kidney injury, hypertension, hematuria, proteinuria, and even chronic kidney dysfunction. Furthermore, in patients undergoing renal transplantation, the presence of aPL in the blood is associated with a lower survival rate of the renal graft [10–12]. The underlying mechanism of APS nephropathy remains unclear. Seshan et al. administered human and mouse aPL to mice and produced a murine model of APS nephropathy. Their results implied a contributory role of tissue factor and complement activation in the pathogenesis of APS nephropathy. To date, the treatment of APS relies on the use of antiplatelet agents and anticoagulants. No standard treatment for APS nephropathy has been established.

In recent decades, a contributory role of type I interferon (IFN) in the pathogenesis of autoimmune diseases like APS has been suggested. The Janus kinase (JAK) family is crucial in the signal transduction of a myriad of cytokine receptors, and its inhibition has revolutionized the treatment of autoimmune diseases. Tyrosine kinase 2 (Tyk2) is a JAK family member through which interleukin (IL)-12, IL-23, and IFN-α exert their downstream effects. Deucravacitinib, a Tyk2 inhibitor that binds to the JAK homology 2 (JH2) domain, has been approved for treating psoriasis and reported to be effective in the treatment of systemic lupus erythematosus (SLE). Of note, APS is frequently accompanied by SLE. Since the JAK family members share a similar structure in the JAK homology 1 (JH1) domain, cross-reactivity against other JAKs can occur and may lead to unwanted side effects. BMS-986202 is a novel Tyk2 inhibitor that binds to the JH2 domain; it is expected to be more selective, with greater potency and fewer side effects. We hypothesized that type I IFN is crucial in the generation of APS nephropathy. In addition, we inhibited downstream Tyk2 signaling with BMS-986202 in a murine model of APS nephropathy to examine its therapeutic potential.

Six-week-old female BALB/c mice were obtained from the National Laboratory Animal Center (Taipei, Taiwan). Free access to food and water was provided, and the environment was maintained at 22 ± 2°C and 60% ± 10% humidity. Assuming a difference in renal function (blood urea nitrogen (BUN)) between treated and control mice of 20 mg/dL, a within-group standard deviation (SD) of 10 mg/dL, a power of 0.8, and a type I error of 0.05, five mice are required in each group to be able to reject the null hypothesis.
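As a worked illustration of this power calculation (a minimal sketch, not the authors' code), the stated difference of 20 mg/dL with a within-group SD of 10 mg/dL corresponds to a standardized effect size of 2.0, which yields roughly five animals per group for a two-sided two-sample t-test:

```python
# Minimal sketch of the stated sample-size calculation (not the authors' code).
# Two-sample t-test: difference = 20 mg/dL, within-group SD = 10 mg/dL
# (Cohen's d = 2.0), power = 0.8, two-sided alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

effect_size = 20 / 10  # mean difference divided by within-group SD
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)
print(n_per_group)  # ~5.1, consistent with the five mice per group stated above
```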
Finally, six mice were assigned to each experimental group. All animal procedures were conducted according to institutional guidelines and approved by the Institutional Animal Care and Utilization Committee of National Chung Hsing University, Taiwan (approval protocol no. NCHU-IACUC-110-127). Our experimental procedures were modified from those of Seshan et al. Mice were immunized with β2-GPI (20 μg/mouse; Prospec, Rehovot, Israel) in complete Freund's adjuvant (Chondrex, Redmond, WA) and received a booster immunization (β2-GPI, 20 μg/mouse in complete Freund's adjuvant) on day 21. Blood samples were obtained from the orbital sinus on day 42, and urine samples were collected at the same time. After CO2 asphyxiation, the kidneys were retrieved. Whole blood from each mouse was incubated for 1 h at 4°C and then centrifuged at 2500 rpm. Serum samples were collected and diluted in Tris-buffered saline (1:100; pH 8.0) containing 1% BSA and 0.5% Tween-20. Anti-β2-GPI IgG concentrations in the serum were measured by an in-house ELISA. In brief, wells were coated with human β2-GPI (Prospec, Rehovot, Israel) at 5 μg/mL in carbonate/bicarbonate buffer (pH 9.6) for 24 h. After washes, 1% BSA was added for blocking. Serum samples were diluted 1:1000 in PBS and added to the wells for 24 h. Anti-mouse IgG was then added and incubated for 2 h. After washes, tetramethylbenzidine (TMB) was added, followed by H2SO4 (1 M) to stop the reaction. Results were read at OD 450 nm on a TECAN Sunrise ELISA reader (Männedorf, Switzerland). Ethylenediaminetetraacetic acid (EDTA)-anticoagulated blood and urine samples were analyzed using a Beckman Coulter AU480 (Brea, CA). Based on the prior seminal study by Seshan et al. on murine APS nephropathy and another report showing that BUN is more sensitive than serum creatinine in detecting murine renal injury, BUN and urine creatinine concentrations were determined. Urine microalbumin concentrations were determined with an ELISA kit (ELK Biotechnology, Denver, CO), and the urine albumin-to-creatinine ratio (ACR) was calculated to quantify proteinuria. Blood platelet counts were determined for each mouse in EDTA-anticoagulated blood samples with a HEMAVET hematology analyzer (Drew Scientific, Miami Lakes, FL). The Tyk2 inhibitor BMS-986202 (Bristol-Myers Squibb, NY) was dissolved in olive oil and given daily to mice at an oral dose of 2 mg/kg from day 35 to 42; control APS mice received only the vehicle (olive oil) daily over the same period. Electron microscopy (EM) was performed on retrieved kidney samples. Kidneys were first fixed in 4% formaldehyde and 5% glutaraldehyde in 0.1 M sodium cacodylate buffer for 1.5 h and postfixed with 1% osmium tetroxide for 30 min at room temperature. Kidneys were then dehydrated with ethanol before being embedded in LX112 (EMS). Thin sections (80 nm) were stained with uranyl acetate and lead citrate and viewed under an HT-7700 transmission electron microscope (Hitachi, Tokyo, Japan). Glomerular injury was evaluated semiquantitatively, as described in the literature. The severity of vascular lesions was graded based on the extent of loss of fenestrations, endothelial swelling, and detachment of endothelial cells: lesions involving less than 25% of the glomeruli were scored as 1, 25%–50% as 2, and more than 50% as 3.
In every mouse, at least 5 glomeruli were graded, and scoring was conducted in a treatment-blinded manner. For immunohistochemistry (IHC) staining, kidneys were embedded in paraffin. C3 was detected with specific antibodies, and fibrin deposition was detected using a polyclonal antimouse fibrinogen antibody (Biolegend, San Diego, CA; 1:400). The expression levels of the above proteins in the glomeruli were quantified for each mouse as the percent area stained, using ImageJ software (National Institutes of Health, Bethesda, MD). To determine the IFN signature, we first separated the cortex and medulla of the retrieved kidneys. The collected kidney tissues were homogenized in 1 mL Trizol reagent (Sigma, St. Louis, MO) for total RNA extraction. The homogenate was mixed thoroughly with 200 μL of 1-bromo-3-chloropropane (Sigma–Aldrich, China) and centrifuged at 12000 × g for 15 min. The supernatants were collected and mixed with 500 μL of isopropanol; after precipitation and centrifugation, the supernatants were discarded, and the RNA pellet was washed with ethanol. The pellet was dried and resuspended in 50 μL of double-distilled water (DDW). A total of 2 μg of RNA was mixed with 1 μL of oligo dT (0.5 mM) and DDW and heated at 70°C for 5 min. Afterwards, 1 μL dNTP (10 mM), 5 μL MMLV reverse transcriptase, and DDW were added for cDNA synthesis and subsequent real-time polymerase chain reaction (PCR) in the StepOne Real-Time PCR System (Applied Biosystems, Waltham, MA). Each 10 μL reaction mixture was composed of 5 μL of Fast SYBR Green Master Mix (Applied Biosystems; Thermo Fisher Scientific, UK), 1 μL of 100 μM cDNA, 0.75 μL of primer, and 3.25 μL of DDW. The primers were as follows: Mx1: forward GAATGGGAAAGTTTTGCCGAGT and reverse TGATAAACCGTCCACTTAGTCCT; IFN regulatory factor 7 (IRF7): forward GCGTACCCTGGAAGCATTTC and reverse GCACAGCGGAAGTTGGTCT; and GAPDH: forward CGTGTTCCTACCCCCAATGT and reverse TGTCATCATACTTGGCAGGTTTCT. The data were normalized to GAPDH expression levels. The relative expression level of each target gene was calculated from the comparative threshold cycle (Ct) as 2^(−ΔCt), where ΔCt = Ct(target gene) − Ct(GAPDH). Statistical analyses were performed using GraphPad Prism (version 9.0 for Windows; GraphPad Software). Quantitative data are presented as means and SDs. One-way ANOVA with post hoc Tukey test was performed for intergroup comparisons. A two-tailed p value < 0.05 was considered statistically significant. As shown in Figure 1A,B, elevated levels of BUN and albuminuria were observed in APS mice compared with normal mice, and BMS-986202 alleviated these changes. As shown in Figure 1C, production of serum anti-β2-GPI antibodies was elevated after β2-GPI immunization; BMS-986202, however, did not affect the serum levels of anti-β2-GPI antibodies. As shown in Figure 1D, a decrease in blood platelet count was noted in the APS mice, which was reversed by administration of BMS-986202. We further examined the ultrastructural effects of BMS-986202 on APS nephropathy. As shown in Figure 2, immunization of mice with β2-GPI led to more severe vascular lesions in glomeruli compared with normal mice, and administration of BMS-986202 reversed these abnormalities. In addition, as shown in Figure 3, glomerular deposition of fibrin and C3 was greater in APS mice than in normal mice, and these deposits were reduced by BMS-986202.
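For illustration, the 2^(−ΔCt) normalization described above can be computed as follows; the Ct values in the example are hypothetical and serve only to show the arithmetic.

```python
# Minimal sketch of the 2^(-dCt) normalization described above.
def relative_expression(ct_target: float, ct_gapdh: float) -> float:
    """Expression of a target gene relative to GAPDH via 2^(-dCt)."""
    delta_ct = ct_target - ct_gapdh
    return 2 ** (-delta_ct)

# Hypothetical example: an Mx1 well with Ct 24.0 and a GAPDH Ct of 18.0
print(relative_expression(24.0, 18.0))  # 2^-6 = 0.015625
```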
As shown in Figure 4, mRNA expression levels of genes downstream of type I IFN signaling (IRF7 and Mx1) were upregulated in the renal cortex, but not in the medulla, of murine kidneys; this upregulation was reversed by administration of BMS-986202. This is the first study on the therapeutic potential of Tyk2 inhibitors in a murine model of APS nephropathy. We found that inhibiting Tyk2 suppressed manifestations of APS nephropathy, an effect likely mediated through suppression of the type I IFN response in the kidney. APS nephropathy is prevalent in APS patients but difficult to treat, with no established treatment. Here, we have demonstrated in mice that β2-GPI immunization produced anti-β2-GPI antibodies, which in turn affected the kidney. Consistent with the previous report by Seshan et al., we showed histological damage of murine kidneys, including vascular lesions on EM and fibrin and C3 deposition in the glomeruli. Furthermore, BUN and microalbuminuria levels were higher in APS mice, providing biochemical evidence of deteriorated renal function. These findings support that our murine model represents, at least in part, APS nephropathy. The pathogenesis of APS nephropathy is elusive; tissue factor, the complement pathway, and mTOR activation have been implicated. In recent decades, the type I IFN response has been shown to play a key role in the development of systemic autoimmune diseases such as SLE and Sjogren's syndrome. In addition, a prior study reported that IFN-α augments inflammatory responses at the atherosclerotic plaque, partly through upregulation of Toll-like receptor 4 (TLR4) expression in myeloid dendritic cells. Consistent with these findings, the type I IFN score has been reported to be higher in the blood of APS patients and to correlate with the presence of anti-β2-GPI antibodies. Our murine model likewise demonstrated that the IFN signature was upregulated in the kidneys of APS mice. Interestingly, in patients with multiple sclerosis, type I IFN therapy has been associated with thrombotic microangiopathy (a major pathological finding in APS nephropathy) and a type I IFN signature in renal biopsies. Thrombotic microangiopathy after hematopoietic stem cell transplantation is also associated with an upregulated type I IFN response. Anifrolumab, a human monoclonal antibody against the type I IFN receptor, is currently an approved treatment option for SLE. Further studies on the type I IFN response in patients with APS nephropathy should be conducted. Upon ligand binding, the JAK family phosphorylates the cytokine receptor, which then recruits signal transducer and activator of transcription (STAT) for further phosphorylation. Activated STAT translocates to the nucleus, where it acts as a transcription factor to regulate cellular function. The JAK family includes JAK1, JAK2, JAK3, and Tyk2, and their inhibition is known to be therapeutic for many autoimmune and hematological diseases. Tyk2 is indispensable for signaling of several pro-inflammatory cytokines, including IL-12 and IL-23 as well as IFN-α. BMS-986202 is a novel compound structurally analogous to deucravacitinib, which is approved for treating psoriasis. BMS-986202 binds to the JH2 domain of Tyk2 and thereby inhibits its downstream signaling. Consistent with this, we found that BMS-986202 could suppress the IFN signature in the kidneys of APS mice.
Nevertheless, the production of anti-β2-GPI antibodies and the T helper response in murine spleen cells were not affected (data not shown), despite previous studies showing that type I IFN can enhance B- and T-cell responses [28–30]. This may be related to the growth-inhibitory effect of type I IFN on immune cells. Furthermore, the pathological changes and biochemical abnormalities of APS nephropathy were reversed after BMS-986202 administration, although we did not evaluate renal function (glomerular filtration rate) with more accurate methods such as clearance of inulin, iohexol, or radioactive isotopes. Our results highlight the pathogenic role of the type I IFN response and may provide a novel treatment option for APS nephropathy. BMS-986202, for which no cellular toxicity was reported in a preprint, is a potential therapeutic candidate. The pathogenesis of and appropriate treatment for APS nephropathy are largely unknown. We have demonstrated the potential role of type I IFN in the pathogenesis of APS nephropathy. In addition, the therapeutic efficacy of BMS-986202, a novel Tyk2 inhibitor, has been demonstrated in a murine model of APS nephropathy. Further human studies are needed.
|
Study
|
biomedical
|
en
| 0.999997 |
PMC11688133
|
Papillary breast lesions (PBLs) are common proliferative disorders of the breast, most of which present an arborizing fibrovascular stroma as the core of the papillae. Each papilla is covered by epithelial cells, with or without a myoepithelial cell layer. PBLs span a wide range of lesions, including papilloma, papilloma with atypical ductal hyperplasia/ductal carcinoma in situ (DCIS), papillary DCIS, and solid and encapsulated papillary carcinomas in situ. Among them, intraductal papillomas (with or without atypia) were found in 5.3% of benign breast biopsies from a cohort of > 9000 women. PBLs can occur in women of all ages, usually between their 30s and 50s, and clinically present as mass lesions and/or nipple discharge. There are few specific clinical features differentiating benign from malignant papillary lesions. Immunohistochemical stains, such as CK5, ER, and p63 expression, provide diagnostic indicators. Pure intraductal PBLs have a low upgrade rate (1%–9%), whereas lesions with concomitant atypia have been reported to have an upgrade rate of up to 38%. However, the morphogenesis of PBLs is not well understood. The European Third International Consensus Conference discussed the examination and diagnosis of PBLs as B3 lesions, including core needle biopsy (CNB), vacuum-assisted biopsy (VAB), and open excision (OE). However, the balance between biopsy approach and pathology-based decision-making remains an unresolved question [7–10]. Thus, in this study, we retrospectively analyzed the data of 2964 patients with papillary lesions to identify independent risk factors associated with malignant PBLs among clinical variables, as an adjunct analysis for early PBL diagnosis. The clinicopathologic data of 2995 patients who underwent open breast surgery at the Harbin Medical University Cancer Hospital between January 2010 and December 2016 and were diagnosed with "papillary lesions" in the postoperative pathology results were collected. Inclusion criteria were as follows: (1) pathologic diagnosis of PBLs and (2) imaging interval (between performing surgery and the last preoperative imaging study, by ultrasound and/or mammography) < 1 week. Exclusion criteria were as follows: (1) incomplete clinicopathological and imaging data and (2) concurrent malignant tumors at other sites. A total of 2964 cases were finally analyzed. All patients underwent breast surgery and were re-evaluated by two experienced pathologists. Patients underwent surgery for the following reasons: imaging studies suggesting an intraductal mass; ultrasound suggesting a BI-RADS 4 classification (assessed through features such as shape, orientation, margin, echo pattern, and calcification); or strong patient preference for surgery when the mass exceeded 1 cm. According to the WHO histological classification of breast tumors, papillary lesions of the breast are classified into intraductal papilloma, papillary ductal carcinoma in situ, encapsulated papillary carcinoma, solid papillary carcinoma (in situ and invasive), and invasive papillary carcinoma. Patients were categorized into nonmalignant and malignant groups based on postoperative pathologic findings. The number of masses was determined from ultrasound findings: isolated lesions in the same ductal system were counted as single, and two or more were counted as multiple.
For patients admitted with recurrence after previous treatment, the recurrent lesion was counted as the same mass if it appeared in the same ductal system of the ipsilateral breast; in all other cases (recurrence in the contralateral breast or bilateral concurrent disease), multiple masses were counted, and the number of patients enrolled was therefore smaller than the number of masses. The locality of the lesion was determined from the ultrasound findings: lesions within 1 cm of the nipple were defined as central, and any more distant locations were defined as peripheral. Statistical analyses were performed using IBM SPSS Version 27.0. Categorical variables were presented as frequencies and proportions and compared with Chi-square tests. All factors with p < 0.05 in Chi-square tests were entered into multivariable logistic analysis to explore the independent risk factors for malignant PBLs; p < 0.05 was considered statistically significant. In this study, 2964 cases of PBLs were included, of which 2281 (77.0%) were in the nonmalignant group and 683 (23.0%) were in the malignant group. Among patients aged ≥ 50 years, 42% (405/955) had malignant lesions, and of the 1101 patients with palpable tumors, 453 (41.1%) had malignant PBLs. The remaining clinicopathologic features are shown in Table 1. Univariate analysis showed that age, tumor palpability, nipple discharge, menstruation status, tumor size, distance of the tumor from the nipple, and tumor calcification may be associated with malignant PBLs (all p < 0.05), whereas family history of malignant breast tumors, personal history of malignant breast tumors, and number of tumors were not significantly associated with malignant PBLs (all p > 0.05) (Table 1). As shown in Table 2, the factors associated with malignant PBLs in the multivariate logistic analysis were age ≥ 50 years (OR = 2.724, 95% CI 2.131–3.483), palpable tumor (OR = 1.546, 95% CI 1.131–2.113), postmenopausal status (OR = 1.829, 95% CI 1.425–2.349), tumor size ≥ 15 mm (OR = 3.884, 95% CI 2.839–5.313), peripheral lesion (OR = 2.904, 95% CI 2.241–3.764), and tumor with calcification (OR = 7.013, 95% CI 5.564–8.838) (all p < 0.05). We further validated the ability of these six independent risk factors to predict malignant PBLs using receiver operating characteristic (ROC) curves. As shown in Figure 2, the areas under the ROC curves (AUCs) for age, tumor palpability, menstruation status, tumor size, distance from the nipple, and tumor calcification were 0.674, 0.690, 0.636, 0.726, 0.648, and 0.726, respectively. These results further confirmed the predictive ability of these independent risk factors for malignancy of PBLs. In this study, we explored the independent risk factors for malignant PBLs by analyzing clinical variables in 2964 patients with PBLs, identifying six: age ≥ 50 years, postmenopausal status, palpable tumor, tumor size ≥ 15 mm, peripheral tumor, and tumor with calcification. The ROC curves verify that the six factors can independently predict malignant PBLs. Besides, as shown in Table 3, there were 731 cases without any of the above risk factors, of which only 25 lesions were malignant, a malignancy probability of only 3.4%. For this low-risk group, patients would perhaps benefit more from active surveillance instead of surgical treatment.
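The analysis pipeline described above (univariate Chi-square screen, multivariable logistic regression for odds ratios, then per-factor ROC curves) can be sketched as follows. The authors used SPSS; pandas, SciPy, statsmodels, and scikit-learn are substituted here, and the file name and 0/1-coded column names are hypothetical.

```python
# Sketch of the reported pipeline under the stated assumptions; the CSV
# file and column names are hypothetical stand-ins for the SPSS dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency
from sklearn.metrics import roc_auc_score

df = pd.read_csv("pbl_cohort.csv")  # one row per lesion, 0/1-coded variables
factors = ["age_ge_50", "palpable", "postmenopausal",
           "size_ge_15mm", "peripheral", "calcification"]

# Univariate screen: retain factors with Chi-square p < 0.05
kept = [f for f in factors
        if chi2_contingency(pd.crosstab(df[f], df["malignant"]))[1] < 0.05]

# Multivariable logistic regression; exponentiated coefficients are ORs
model = sm.Logit(df["malignant"], sm.add_constant(df[kept])).fit()
print(np.exp(model.params.drop("const")).round(3))  # odds ratios per factor

# AUC of each retained factor against the outcome, as in Figure 2
for f in kept:
    print(f, round(roc_auc_score(df["malignant"], df[f]), 3))
```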
In addition, the risk of malignancy in patients with PBLs rises progressively as risk factors accumulate; thus, patients with one or more risk factors are more likely to benefit from surgical excision. In previous studies, age was considered an independent risk factor for predicting malignant PBLs [12–14]. Similarly, our study found that the odds of malignancy in patients aged ≥ 50 years were 2.724 times those of patients aged < 50 years (p < 0.001). As shown in Figure 2, the ROC curve confirms age's independent discriminative and predictive capacity for malignant PBLs, with an AUC of 0.674. Brennan et al. concluded that the probability of malignant PBLs was higher in postmenopausal women. In this study, 74.5% of patients in the nonmalignant group were premenopausal, and 52.7% of patients in the malignant group were postmenopausal (360/683), in line with that report. Statistical analysis showed an 82.9% increase in the odds of malignancy of PBLs in postmenopausal women (OR = 1.829, 95% CI 1.425–2.349, p < 0.001). In our study, 59.1% of patients had palpable tumors, and the odds of malignant PBLs with palpable tumors were 1.546 times those of nonpalpable tumors (OR = 1.546, 95% CI 1.131–2.113, p = 0.006). In Li et al.'s study, 748 of 2290 patients with PBLs and palpable tumors developed cancer, whereas only 69 of 2160 patients with nonpalpable tumors did, a statistically significant difference (p < 0.01). Similarly, a study of 250 cases from Korea also demonstrated that a palpable tumor is an independent risk factor for malignant PBLs. Whether nipple discharge predicts malignant PBLs is not uniformly agreed upon by researchers. Most scholars have reported that nipple discharge does not increase the risk of malignancy in PBLs, which is supported by our study (p = 0.599). According to NCCN guideline v2.2024, abnormal nipple discharge is defined as persistent, spontaneous, uniductal, unilateral, bloody or clear nipple discharge. Although family history and personal history are associated with breast cancer, we did not find a clear association between either and malignant PBLs in our study, similar to the findings of some other studies. However, Abbassi-Rahbar et al. reported that 28.6% of malignant cases had a previous diagnosis of ipsilateral breast cancer, and further studies confirmed that the risk of upgrade was significantly higher when PBLs were located in the same quadrant as the ipsilateral malignancy (p = 0.023). The difference from our findings may be because the present study explored only the overall effect of personal history on the malignancy of PBLs and did not break down the associations of ipsilateral and contralateral breast cancer history with malignant PBLs, respectively. Glenn et al. reported that the probability of malignancy in patients with PBLs with a mass < 15 mm was only 4.7%; mass size was determined from the ultrasound data, taking the longest diameter. Similarly, Kil et al. found that most malignant PBLs were ≥ 15 mm, whereas most benign PBLs were < 15 mm. Our study confirmed that when the tumor size was ≥ 15 mm, the odds of malignancy were 3.903 times those of patients with tumors < 15 mm, in line with several other studies confirming that the risk of malignant PBLs increases with tumor size.
PBLs are categorized into central and peripheral lesions, and previous studies have shown that peripheral PBLs have a higher probability of malignancy than the central type. Several studies exploring the effect of tumor location on the malignancy of PBLs have likewise found that tumors in the malignant group were located farther from the nipple. In our study, there were 1213 patients with central lesions, of whom 124 (10.2%) had malignancies, and 1751 patients with peripheral lesions, of whom 559 (31.9%) had malignancies; the difference was statistically significant. A study by Oyama and Koerner showed that multiple PBLs carry a higher risk of malignancy, in contrast to Gutman et al., who concluded that the number of PBLs does not affect the risk of malignancy. Similarly, our study found that the number of tumors does not affect malignant PBLs. We also explored the effect of tumors accompanied by calcification on malignant PBLs. Of the 2964 cases enrolled, 655 lesions were accompanied by calcification, of which 391 (59.7%) proved malignant, whereas only 292 (12.6%) of the 2309 cases without calcification were malignant. Statistical analysis showed that the odds of malignancy in the calcified group were 7.021 times those in the noncalcified group, and the same result has been confirmed in other studies. This study is a single-center retrospective analysis, but it is the most extensive sample study on PBLs to date, and six independent risk factors were screened to predict malignant PBLs; prospective studies are still needed to validate our findings. This study examined the clinical variables of patients with malignant and nonmalignant PBLs. The findings revealed that older age (50 years and above), postmenopausal status, a palpable tumor, a tumor size of 15 mm or greater, a peripheral tumor, and a tumor with calcification were independent risk factors associated with malignant PBLs. These findings support the implementation of early surgical excision in patients exhibiting the identified risk factors, whereas patients presenting with a minimal number of risk factors (0–3) may benefit from regular imaging surveillance. Furthermore, the probability of malignant PBLs in the absence of the aforementioned risk factors was only 3.4%, substantiating the reliability of the present study.
|
Study
|
biomedical
|
en
| 0.999996 |
PMC11688136
|
Newborn eye screening involves examining the eyes to detect ocular abnormalities that may require referral to an ophthalmologist. The sine qua non of screening is the red reflex test at birth or shortly thereafter, as stated by the World Health Organization. Ocular pathologies such as cataract, glaucoma, and retinoblastoma can be identified earlier through these screenings, and in most cases, visual potential and school performance can be optimized by surgical or nonsurgical means. Our national eye screening program recommends that all term infants undergo an ophthalmic examination by their family physician or pediatrician, including the pupillary red reflex test and assessment of strabismus, by 0–3 months of age, with prompt referral to an ophthalmologist if a pathology is suspected. However, studies have shown that the red reflex examination may fail to detect a significant proportion of posterior segment diseases. Detection of diseases in the first months is of paramount importance in that the prognosis for some conditions, such as familial exudative vitreoretinopathy (FEVR) or retinoblastoma, depends on early intervention. Recently, the use of digital fundus photography for retinopathy of prematurity (ROP) screening has become popular, even for full-term screening, providing support for consultations in challenging cases and also addressing medicolegal concerns. Taking a retinal photograph of a baby is not easy without special devices like RetCam and Optos, which are both expensive and unavailable in most hospitals. In comparison to conventional fundus imaging systems, smartphone-based fundus imaging is a cost-effective, accessible, and easy-to-use approach. Moreover, retinal irradiance from modern smartphones has been shown to be lower than that from an indirect ophthalmoscope, making them a safe tool for eye examination. The aim of our study was to describe the posterior segment findings in term infants examined using a do-it-yourself smartphone-based fundus camera in a tertiary care center. The study was designed and conducted in accordance with the Declaration of Helsinki, with the approval of the ethics committee. Written informed consent was obtained from the parent(s) before examination. This was a retrospective observational study reviewing all non-premature infants who underwent neonatal ophthalmological examination at Ankara Bilkent City Hospital between October 2021 and October 2023. Our hospital provides tertiary and quaternary neonatal intensive care services and is the largest certified diagnosis, treatment, and training center for ROP in Turkey. The inclusion criteria were (1) full-term newborns with birth weight ≥ 2000 g; (2) post-menstrual age between 36 and 42 weeks; and (3) APGAR score ≥ 9. Infants with a history of intensive care unit admission, infants with anterior segment pathologies, and infants whose parents refused to participate in the study were excluded. Complete medical information about the gestational period and delivery, gestational age at birth, birth weight, and time of examination was recorded. All newborns meeting the criteria underwent a detailed eye examination, including the red reflex test and dilated posterior segment examination. Newborns were examined within the first three months of life by experienced ophthalmologists (DEA or AO), with a nurse present. Pupillary light reflex examinations were carried out using a direct ophthalmoscope.
The pupils were then dilated with 2.5% phenylephrine (Mydfrin, Alcon, USA) and 0.5% tropicamide (Tropamid, Bilim Pharmaceuticals, Turkey) eye drops, instilled three times in the hour before the fundus examination. A topical anesthetic (Alcaine, 0.5% proparacaine hydrochloride, Alcon, USA) was instilled into the conjunctival sac a few minutes before placement of a sterile pediatric eyelid speculum. A smartphone and/or a do-it-yourself smartphone-based fundus imaging device with condensing lenses (iPhone 11, Apple, USA, with Volk® Digital ClearField and Ocular MaxField® 20D condensing lenses) was used by the examiners (DEA and AO) to document posterior segment findings. The smartphone's video mode was used to record clear images. For all analyses, IBM SPSS Version 25.0 was used, and data were presented as frequency and percentage or mean ± SD. Between October 2021 and October 2023, 16,684 term infants were born at our hospital, and 5041 of these (30.2%) underwent eye screening in our clinic. Together with 486 newborns from other hospitals, a total of 5527 term infants, 2869 female (51.91%) and 2658 male (48.09%), underwent eye screening during the study period; the parents of 48 newborns refused examination. Mean gestational age (GA) was 39.40 ± 0.83 (range 38–41) weeks, and mean birth weight (BW) was 3475.70 ± 281.72 g. Of all infants, 4720 (85.4%) were examined between the ages of one week and two months, while 807 (14.6%) were examined between the ages of two and three months. Following the examinations, 19 newborns experienced a mild fever and 11 newborns exhibited conjunctival hemorrhage; these complications were transient. Of all the infants examined, 1031 (18.7%) showed an abnormality in at least one eye. Hypopigmented retinal white lesions (Figures 1(a), 1(b), and 1(c)) were the most common finding, present in 722 (13.1%) infants; they were of varying size and shape, appearing as spots (Figure 1(a)), stripes (Figure 1(b)), or patches (Figure 1(c)). Fundus hemorrhage was diagnosed in 243 infants (4.4%), making it the second most frequent ocular finding (Figures 2(a)–2(f)). The hemorrhages were located in the optic disc (Figure 2(a)), retina (Figures 2(b) and 2(c)), subhyaloid area (Figure 2(d)), and vitreous (Figures 2(e) and 2(f)). Of the 243 cases, the mid-peripheral retina was affected in 153 (63%), followed by the peripheral retina in 68 (28%) and the entire retina in 22 (9%). Bilateral involvement was observed in 69.1% of cases (n = 168), and the foveal region was involved in four babies (Figures 2(c) and 2(d)).
Other findings included congenital hypertrophy of the retinal pigment epithelium (CHRPE) (n = 14) (Figures 3(a) and 3(b)), choroidal nevus (Figure 3(c)) (n = 11), idiopathic peripheral retinal scar (Figure 4(a)) (n = 9), chorioretinal coloboma (Figures 4(b) and 4(c)) (n = 6), optic nerve coloboma (Figures 5(a) and 5(b)) (n = 4), FEVR (n = 4) (Figure 4(d)), retinal calcification (Figure 4(e)) (n = 2), large optic nerve cup (Figure 5(c)) (n = 2), optic nerve hypoplasia (Figure 5(d)) (n = 2), optic nerve pit (Figure 5(e)) (n = 2), morning glory disc anomaly (Figure 5(f)) (n = 1), vascular loop on the optic disc (Figure 5(g)) (n = 1), retinoblastoma (Figures 6(a)–6(e)) (n = 1), X-linked retinoschisis (Figure 4(f)) (n = 1), congenital toxoplasmosis (Figure 4(g)) (n = 1), thread-shaped white lesion (Figure 4(h)) (n = 1), combined hamartoma of the retina and the retinal pigment epithelium (Figure 4(i)) (n = 1), foveal hypoplasia (Figure 4(j)) (n = 1), retinal dystrophy (Figure 4(k)) (n = 1), and astrocytic hamartoma (Figure 4(l)) (n = 1). These findings are summarized in Table 1. A red reflex abnormality was found in only a small minority of infants (Table 2). Two patients were diagnosed with FEVR, and one patient was diagnosed with CHARGE syndrome, based on genetic evaluation. Two patients with hemorrhage were diagnosed with bleeding diathesis, one patient with inactive retinitis was diagnosed with toxoplasmosis, and one patient with optic nerve hypoplasia was found to have endocrine abnormalities. An eye screening program for healthy term newborns is not common in most developing countries, or even in some developed countries. In our country, a national eye screening program has been implemented since 2016, based on red reflex testing performed by a neonatologist or pediatrician shortly after birth, followed by formal visual function assessment around three years of age. Performing the red reflex test to detect ocular pathologies is relatively straightforward; however, this test may be insufficient for detecting small lesions such as foveal hemorrhage, retinoblastoma, or FEVR. The high cost and the lack of indirect viewing while capturing fundus photos may restrict the use of the RetCam system in neonatal eye screening. We designed a do-it-yourself, low-cost, handheld, smartphone-based fundus imaging device to capture photos of fundus pathologies, thereby causing less stress and minimizing infection risks. We also observed that thorough documentation plays a key role in enhancing communication with parents, resulting in better adherence to follow-up. Ocular findings may range from innocuous signs to serious signs that may threaten vision and/or life. In our study, the most frequent findings were retinal white lesions, considered an innocuous finding, present in 13% of all screened newborns; these lesions were found in 17% of all screened babies in one study. The classic lesion phenotype was characterized by discrete small patches of varying sizes at the level of the retinal pigment epithelium and inner retinal layers near the ora serrata; spot and stripe shapes were also observed. Unless there is a suspicion of retinoblastoma, ROP, or FEVR, further examination and follow-up are generally not required.
Fundus fluorescein angiography and handheld optical coherence tomography would be helpful for differential diagnosis; however, these devices were not available in our clinic. We observed that the majority of retinal white lesions resolved without sequelae before two years of age, so the need for long-term follow-up is debatable. Although the exact cause of this pathology remains unclear, the most often suggested mechanism is developmental delay of retinal vascular epithelial cells in the peripheral retina, resulting in retinal exudation due to immature development of the blood–retinal barrier. The second most common pathology in our study was posterior segment hemorrhage, accounting for 4.4% of all screened newborns. The prevalence of neonatal posterior segment hemorrhages varies widely across studies (2%–50%); the earlier the examination, the higher the prevalence. The mean postnatal examination time in our study was relatively late compared with other studies. Studies have indicated that vision impairment is more likely to occur with prolonged foveal hemorrhages; however, in our study, all hemorrhages resolved spontaneously within 4–12 weeks, and no permanent damage was observed. Severe hemorrhages warranted a systemic workup; no significant hematologic abnormality was observed. Yanli et al. put forth a theory to explain the mechanism behind retinal hemorrhages in newborns after spontaneous vaginal delivery: compression of the fetal head as it descends causes a sudden increase in intracranial pressure, and obstruction of venous return simultaneously raises pressure in the central retinal vein and dilates the scalp and intracranial veins. Where the retinal vascular structures are thin, this pressure increase may cause retinal hemorrhages. The results of our study and of the studies in the literature on this subject are summarized in Table 3 [5, 11, 16–21]. Ocular pathologies such as retinoblastoma are time-sensitive; delayed intervention may lead to irreversible damage to vision, through amblyopia or anatomic defect, and may threaten life. FEVR is an inherited vitreoretinopathy characterized by congenital abnormal retinal vascularization; retinal exudates, neovascularization, retinal folds, preretinal membranes, and retinal detachment may result in visual loss. We found an avascular area as well as peripheral fibrovascular proliferation in one eye of a baby born at 38 weeks and 3150 g, and prompt laser treatment was carried out. We ruled out incontinentia pigmenti and Norrie's disease because of the absence of hearing problems and skin symptoms and requested genetic consultation. The family members of these cases underwent retinal evaluation, and no retinal pathology was detected. Early management of retinoblastoma may improve visual and survival outcomes. In general, retinoblastoma is diagnosed above 1 year of age in the absence of a family history. We diagnosed a patient with no family history in the first month of life; the leukocoria was so faintly visible (Figure 6(a)) that we believe this case could not have been noticed by red reflex testing alone. Neuroimaging, endocrine investigations, or detailed genetic tests are recommended for pathologies such as congenital anomalies of the optic disc. We diagnosed several different congenital anomalies in our study, enabling early intervention with reasonable outcomes. Our study has a limitation.
It was not a multicenter study; therefore, we cannot generalize our findings to the entire population. Our study indicates that a considerable portion of infants exhibit ocular anomalies, pointing toward the possibility of hundreds of thousands of infants being affected at the national level. In conclusion, detailed eye examinations of term infants can reveal a range of ocular and/or systemic abnormalities that would not be caught by the pupillary red reflex test alone. Smartphone-based fundus imaging is a simple and effective method for documenting findings.
|
Other
|
biomedical
|
en
| 0.999996 |
PMC11688140
|
Acute liver failure (ALF) is a fulminant clinical syndrome characterized by extensive liver cell necrosis, resulting in severe liver dysfunction. This condition manifests with ascites, coagulation abnormalities, and rapid progression to complications such as hepatorenal syndrome, hepatic encephalopathy, and multiorgan failure in patients without prior liver disease. ALF poses a significant threat to human life, with a mortality rate of 60%–80%, and its incidence continues to rise annually. In the United States, for instance, reported estimates of ALF incidence range from about one case per million individuals annually to ~2000–3000 cases per year, with much higher rates observed in developing countries. The etiology of ALF varies by region, with viral hepatitis being prevalent in the Asia-Pacific region, while acetaminophen poisoning and other drug-induced liver injuries are more common in Western countries [5–8]. Current treatment options for ALF include addressing the underlying cause, providing general supportive care, managing extrahepatic organ failure, utilizing artificial liver support systems, and performing orthotopic liver transplantation (OLT) [9–12]. Since 1983, OLT has been considered the most effective treatment for ALF, significantly improving patient survival rates. However, the limited availability of suitable donors, the inherent risks, the high costs associated with OLT, and the need for lifelong immunosuppressive therapy restrict its widespread application [16–18]. Additionally, as the incidence of ALF continues to rise, the gap between patients in need of transplants and the availability of donor organs is growing, highlighting the need for alternative therapeutic interventions. Recent research suggests that immunotherapy targeting macrophages may offer a promising alternative for treating ALF. As the first line of defense against environmental changes and damage, the innate immune system responds significantly faster than the adaptive immune system. This rapid response is particularly crucial in ALF, where the host has little time to mount an effective adaptive immune response; the innate immune system may therefore play a more crucial role than the adaptive immune system in the progression of ALF. Among the components of the liver's innate immune system, Kupffer cells (KCs) account for ~80%–90% of the tissue-resident macrophages in the liver [23–26]. Hepatic macrophages are key mediators of hepatocyte injury and are essential in regulating the initiation, amplification, and resolution of inflammatory responses, thereby playing a pivotal role in the onset and progression of ALF. In response to various stimuli within the microenvironment, macrophages can be activated and reprogramed into different functional subtypes, typically polarizing toward classical (M1) or alternative (M2) activation. These subtypes are not fixed states but exist within a dynamic continuum, allowing interconversion to meet the changing demands and conditions of the environment. M1 macrophages are pro-inflammatory and play a key role in pathogen defense, while M2 macrophages exhibit anti-inflammatory properties and promote tissue repair. In the pathogenesis of acute liver injury, M1 macrophages predominantly secrete pro-inflammatory factors, while M2 macrophages primarily produce anti-inflammatory factors.
By balancing these pro-inflammatory and anti-inflammatory mediators through suppression of M1 macrophage activation and promotion of M2 macrophage activation, liver damage can be effectively minimized, thereby enhancing tissue repair in ALF [30–32]. Moreover, studies have demonstrated that macrophages can dynamically alter their metabolic patterns and cellular functions in response to various environmental stimuli. This process involves changes in metabolic enzymes, metabolites, and pathways, enabling macrophages to obtain the energy and metabolic intermediates necessary for biosynthesis and cellular function. Immune cell activation and energy metabolism are closely interlinked: immune cells undergo metabolic changes upon stimulation and activation, and the regulation of cellular energy metabolism in turn shapes the immune response. Immune cells rely on six main metabolic pathways: glycolysis, the tricarboxylic acid (TCA) cycle, the pentose phosphate pathway (PPP), fatty acid oxidation (FAO), fatty acid synthesis (FAS), and amino acid metabolism. M1 macrophages predominantly derive their energy from glycolysis and the PPP, while M2 macrophages depend on oxidative phosphorylation (OXPHOS) and FAO. This article explores the impact of glucose metabolism reprograming on the polarization of liver macrophages in ALF from a metabolic immunology perspective. Additionally, we summarize several traditional Chinese medicine (TCM) monomers that effectively regulate macrophage metabolism and reshape their polarization states, aiming to provide insights into potential therapeutic strategies for ALF. Macrophages, first identified by Ilya Metchnikoff in the late 19th century, function as vital immune "sentinels" within the body. They bridge innate and adaptive immunity and play crucial roles in host defense, maintenance of tissue integrity, and combating invading pathogens. Hepatic macrophages represent the largest population of innate immune cells in the liver, essential for sustaining overall body homeostasis and immune tolerance. Studies [43–45] have demonstrated that hepatic macrophages are heterogeneous in origin, including liver-resident KCs, monocyte-derived macrophages (MDMs), liver capsular macrophages (LCMs), and splenic macrophages. Macrophage polarization refers to the process by which macrophages are activated and differentiate into various subtypes in response to changes in the microenvironment, triggered by factors such as pathogenic microorganisms, inflammatory responses, cytokines, and certain physicochemical stimuli. Macrophages exhibit high plasticity and can polarize into different phenotypes when stimulated by pathogen-associated molecular patterns (PAMPs) or damage-associated molecular patterns (DAMPs). The M1 and M2 subtypes, analogous to the Th1 and Th2 classifications, are widely recognized. However, the M1/M2 dichotomy represents only the two extremes of the macrophage activation spectrum; in reality, macrophages can exist in multiple activation states [50–52]. Hepatic macrophages can be activated by various agents, including interferon γ (IFN-γ), lipopolysaccharide (LPS), granulocyte-macrophage colony-stimulating factor (GM-CSF), or tumor necrosis factor (TNF), leading to their differentiation into classically activated M1 macrophages. M1 macrophages possess a strong antigen-presenting capacity and produce large amounts of pro-inflammatory cytokines such as IL-1β, TNF-α, IL-6, and IL-12.
They also release nitric oxide (NO), reactive oxygen species (ROS), and reactive nitrogen species (RNS). Additionally, M1 macrophages secrete various chemokines that recruit other immune cells into the liver tissue, thereby promoting pro-inflammatory responses, clearance of pathogenic microbes, and antitumor effects. Hepatic macrophages can also polarize into M2 macrophages under specific stimuli; these can be further subdivided into four subtypes: M2a, M2b, M2c, and M2d. M2a macrophages, induced by IL-4 and IL-13, are involved in wound healing; they upregulate the mannose receptor (MR), secrete profibrotic factors, and contribute to tissue repair. M2b macrophages, induced by immune complexes and LPS, express high levels of IL-1β, IL-6, TNF-α, and IL-10; they secrete both pro-inflammatory and anti-inflammatory cytokines, playing roles in protective and inflammatory processes. M2c macrophages, activated by IL-10, TGF-β, and glucocorticoids, inhibit inflammation, promote tissue repair, induce regulatory T cells, and engulf apoptotic cells. M2d macrophages, stimulated by IL-6, TLR ligands, and adenosine, upregulate vascular endothelial growth factor (VEGF) and IL-10; they share phenotypic and functional similarities with tumor-associated macrophages, thereby promoting angiogenesis and tumor cell metastasis. The current consensus is that the "endotoxin-macrophage-cytokine storm" is the core pathogenic mechanism of liver failure, with immune damage being the initiating factor, particularly in the early stages of the condition. Endotoxin, chemically known as LPS, is recognized by Toll-like receptor 4 (TLR4), which is widely expressed on the surface of macrophages. As key components of the innate immune system, macrophages play essential roles in microbial elimination, immune regulation, and tissue repair. Hepatic macrophages, as first-line immune responders, significantly influence the progression of liver disease; they are crucial for maintaining liver homeostasis and are actively involved in acute and chronic liver injury and repair. Hepatocyte damage or death triggers the release of DAMPs, which in turn stimulate KCs and recruit MDMs to release pro-inflammatory cytokines and chemokines. KCs and MDMs can rapidly adjust their polarization states in response to local stimuli, influencing the progression and resolution of ALF. The M1 and M2 phenotypes of macrophages typically play opposing roles in disease regulation, and maintaining a balance between them is crucial for resolving inflammation and preserving tissue homeostasis. In the early stages of ALF, activated hepatic macrophages, particularly M1 macrophages, increase significantly, releasing pro-inflammatory factors that exacerbate liver injury. At this stage, inducing M2 macrophages can counteract the pro-inflammatory effects of M1 macrophages, exerting anti-inflammatory, reparative, and immunomodulatory functions. In the later stages of ALF, however, excessive activation of M2 macrophages may lead to an exaggerated anti-inflammatory response, resulting in deactivation of hepatic monocytes/macrophages. Potential treatment options in these cases include plasma exchange to remove IL-10 and secretory leukocyte protease inhibitor and thus restore immune balance. Additionally, studies have confirmed the alternation and imbalance between systemic inflammatory response syndrome and compensatory anti-inflammatory response syndrome during the course of ALF, leading to immune dysregulation.
Therefore, identifying the immune status in ALF, determining the optimal timing for intervention, and redirecting macrophage functions to reduce liver damage, facilitate tissue repair, and promote liver regeneration are critical goals for immunomodulatory therapy of ALF. All cells require energy metabolism to produce ATP and metabolic intermediates, which are essential for survival, proliferation, and differentiation. Recent studies have shown that cellular energy metabolism can influence the polarization of macrophages, thereby regulating their functional roles. The metabolic profile of M1 macrophages is primarily characterized by enhanced glycolysis, PPP activity, and FAS, with a concurrent reduction in TCA cycle and OXPHOS activity. In contrast, M2 macrophages exhibit enhanced OXPHOS, FAO, and glutamine metabolism. Depending on the stimulus, however, OXPHOS may also be upregulated to promote inflammation. Research has shown that OXPHOS can generate ROS through Complex I, and excessive ROS production can lead to tissue damage and chronic inflammation; inhibiting Complex I has been found to reduce the activity of the IFN-γ receptor. Furthermore, studies have revealed that prolonged exposure of macrophages to degradation products of polylactic acid increases both OXPHOS and glycolysis, heightening the expression of proteins such as IL-6, MCP-1, TNF-α, and IL-1β and thereby enhancing the production of pro-inflammatory cytokines. Under steady-state conditions, cells primarily rely on the OXPHOS pathway for energy production, with each glucose molecule producing 36 molecules of ATP through the TCA cycle. In a hypoxic environment, however, pyruvate produced by glycolysis is converted into lactate instead of entering the TCA cycle, yielding only 2 ATP molecules per glucose. In the early 20th century, Warburg, Wind, and Negelein discovered that tumor cells exhibit active glycolytic metabolism and consume large amounts of glucose even in the presence of sufficient oxygen, a phenomenon known as the "Warburg effect." Although glycolysis is less efficient in ATP production and requires increased glucose consumption, its rate of glucose metabolism is 10–100 times faster than that of OXPHOS. Notably, the study by Dengler et al. demonstrated that excessive aerobic glycolysis can lead to cell death. In 1970, Hard identified a Warburg effect similar to that of tumor cells in LPS-activated M1 macrophages. During this process, M1 macrophages rely primarily on aerobic glycolysis for energy production, resulting in reduced OXPHOS metabolism, increased glucose consumption, and lactate synthesis. Hypoxia-inducible factor 1α (HIF-1α) upregulates the expression of glycolysis-related enzymes such as glucose transporter 1 (GLUT1), hexokinase-2 (HK2), phosphofructokinase 1/2 (PFK-1/2), and pyruvate kinase isozyme M2 (PKM2). Studies have further demonstrated that modulating macrophage energy metabolism can regulate macrophage polarization: inhibiting glycolysis while promoting OXPHOS and FAO suppresses M1 macrophage activation and enhances M2 macrophage activation.
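As an illustrative back-of-the-envelope comparison only: taking the per-glucose yields quoted above (36 ATP via the TCA cycle/OXPHOS, 2 ATP via glycolysis) and assuming a hypothetical glycolytic flux 20 times that of OXPHOS (a point within the 10–100-fold range cited), fast glycolysis can match or exceed OXPHOS in ATP throughput despite its far lower yield per glucose, one rationale for the Warburg-like metabolism of M1 macrophages.

```python
# Illustrative arithmetic only; the yields are the figures quoted in the
# text, and the 20x relative glycolytic flux is a hypothetical example.
atp_per_glucose_oxphos = 36
atp_per_glucose_glycolysis = 2
relative_glycolytic_flux = 20  # glucose processed per unit time vs. OXPHOS

atp_rate_oxphos = atp_per_glucose_oxphos * 1
atp_rate_glycolysis = atp_per_glucose_glycolysis * relative_glycolytic_flux
print(atp_rate_oxphos, atp_rate_glycolysis)  # 36 vs 40 ATP per unit time
```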
Moreover, with the advancement of research on metabolic reprograming, it has been demonstrated that intermediate metabolites generated through these processes, such as acetyl-coenzyme A (acetyl-CoA), α-ketoglutarate (α-KG), and NAD+, serve as substrates or cofactors that play critical roles in the epigenetic modification of tumors. These metabolites are similarly important in regulating macrophage polarization. Studies have revealed that citrate plays an influential role in maintaining the macrophage inflammatory response: in LPS-activated macrophages, citrate exported from mitochondria by the citrate carrier supports histone acetylation, which upregulates glycolytic genes. Itaconate and its derivatives generally inhibit macrophage activation; they not only suppress pro-inflammatory M1 macrophages but also prevent M2 polarization by inhibiting JAK1 phosphorylation. These results may seem contradictory, as itaconic acid primarily exerts anti-inflammatory effects, and certain studies have shown that it limits inflammatory responses through activation of the Nrf2 pathway. Additionally, succinate can upregulate the transcription of metastasis-associated genes via HIF-1α and enhance production of the pro-inflammatory cytokine IL-1β through HIF-1α induction. α-KG is essential for M2 macrophage activation, including involvement in FAO and Jmjd3-dependent epigenetic reprograming of M2 genes. M1/M2 macrophage activation is a highly regulated process, and these macrophage subsets exhibit distinct metabolic profiles. As previously discussed, macrophage activation involves metabolic reprograming, and modulating macrophage energy metabolism can influence their polarization state. The metabolic profile of macrophage polarization is regulated by multiple signaling pathways. In this review, we focus on four principal signaling pathways: the phosphatidylinositol 3-kinase/protein kinase B (PI3K/AKT) pathway, the mammalian target of rapamycin (mTOR)/HIF-1α signaling pathway, the nuclear factor-κB (NF-κB) pathway, and the AMP-activated protein kinase (AMPK) signaling pathway. These pathways are pivotal not only for macrophage polarization but also for the regulation of macrophage metabolism. PI3K signaling is crucial for regulating cell growth, proliferation, metabolism, inflammation, survival, motility, and tumor progression. Upon activation, PI3K promotes the phosphorylation of Akt; activated p-Akt can further activate mTOR and regulate HIF-1α, playing a central role in glycolysis, cancer metabolism, and cancer cell proliferation [95–97]. The PI3K/Akt signaling pathway regulates the survival, migration, and proliferation of macrophages and coordinates their responses to various metabolic and inflammatory signals. Studies [98–100] have demonstrated that Akt1 promotes macrophage polarization toward the M2 phenotype while inhibiting polarization toward the M1 phenotype, whereas Akt2 stimulates M1 polarization and inhibits M2 polarization. mTOR is a serine/threonine kinase that plays a crucial role in regulating cell metabolism, including growth, proliferation, and survival, and is pivotal in macrophage polarization. It functions through two distinct protein complexes: mTOR complex 1 (mTORC1) and mTOR complex 2 (mTORC2). Wu et al.
reported that activation of mTORC1 induces glycolysis through a HIF-1α-dependent mechanism, promoting M1 polarization, while mTORC2 activation stimulates peroxisome proliferator-activated receptor γ (PPARγ), enhancing FAO and promoting M2 polarization. Furthermore, Byles et al. found that deletion of tuberous sclerosis complex 1 (TSC1), which leads to mTORC1 hyperactivation, results in enhanced M1 polarization and impaired M2 polarization. Recent studies have confirmed that mTOR-induced glycolysis is mediated through activation of HIF-1α and stimulation of glycolytic enzymes; conversely, the mTOR inhibitor rapamycin has been shown to inhibit HIF-1α expression. HIF-1α, an essential regulator of aerobic glycolysis, facilitates the transition from OXPHOS to glycolysis, thus promoting M1 macrophage activation [90, 106–108]. Specifically, HIF-1α transcriptionally activates genes involved in oxygen homeostasis and metabolic activation. It also upregulates various glycolysis-related proteins, such as GLUT1, hexokinase 3, and 6-phosphofructo-2-kinase, which increase glycolysis and influence macrophage polarization. In a mouse model of ALF, Cai et al. demonstrated that downregulation of HIF-1α expression inhibits glycolysis, thereby reducing macrophage infiltration and M1 polarization in liver tissue. The NF-κB signaling pathway, a well-known pro-inflammatory pathway, plays a central role in regulating gene transcription during immune and inflammatory responses. NF-κB is intricately linked to metabolic processes, influencing the balance between glycolysis and mitochondrial respiration, which is essential for managing the energy metabolism network. Specifically, the TLR4/NF-κB signaling pathway is involved in inducing acute liver injury under LPS stimulation. TLR4 is a signal-transducing transmembrane receptor located on the cell membrane, and NF-κB is an important molecule downstream of the TLR4 signaling pathway. After TLR4 specifically recognizes and binds LPS, downstream signaling proceeds in a manner dependent on myeloid differentiation protein 88. Upon activation, NF-κB dissociates from its inhibitor, translocates to the nucleus, and binds to the promoter regions of target genes, thereby regulating their expression and promoting the secretion of inflammatory factors such as TNF-α, IL-1β, and IL-6. NF-κB is also crucial in the metabolism and polarization of M1 macrophages: its activation initiates the transcription of pro-inflammatory genes and facilitates the transition from M2 to M1 macrophages. Studies have shown that upon activation by LPS, the NF-κB signaling pathway increases GLUT6 expression, thereby enhancing glycolysis and the secretion of inflammatory mediators in M1 macrophages. Additionally, NF-κB acts as a direct regulator of HIF-1α expression during inflammation and hypoxia: NF-κB activation upregulates HIF-1α transcription, which promotes macrophage glycolysis and M1 polarization, thus enhancing the host's immune defense responses. Conversely, HIF-1α provides a feedback mechanism that inhibits NF-κB transcriptional activity both in vivo and in vitro during inflammatory states, thereby preventing excessive and destructive pro-inflammatory responses. AMPK, a protein kinase that monitors changes in energy molecules, is involved in signal transduction of multiple signaling pathways and is essential for maintaining cellular energy homeostasis.
It consists of three subunits: α, β, and γ, with the γ subunit containing binding sites for AMP, ADP, and ATP, enabling it to sense the AMP/ATP ratio. AMPK is activated when the intracellular AMP/ATP ratio increases or when calcium ion flux rises. Once activated, AMPK phosphorylates pivotal proteins through various pathways, promoting catabolism to generate more ATP and inhibiting anabolism to reduce ATP consumption [123–125]. In terms of macrophage polarization, AMPK is considered a negative regulator of M1 macrophage activation induced by LPS. Sag et al. found that stimulation of macrophages with anti-inflammatory factors such as IL-10 and TGF-β activated AMPK, whereas stimulation with pro-inflammatory factors such as LPS inactivated it. Suppressing AMPK activity, whether through siRNA knockdown or introduction of a dominant-negative mutant, increased the synthesis of TNF-α and IL-6 while decreasing the production of IL-10; conversely, constitutively active AMPKα1 decreased LPS-induced TNF-α and IL-6 production while increasing IL-10 levels, indicating that AMPK activation inversely regulates the macrophage inflammatory signaling pathway. Furthermore, studies have demonstrated that AMPK activation antagonizes the Warburg effect by inhibiting HIF-1α and its associated glycolytic effectors. Importantly, the AMPK and mTOR signaling pathways are interconnected and exert opposing effects on nutrient sensing, energy availability, and cell growth regulation. They function as balance-regulating switches in the M1/M2 macrophage transformation, detecting changes in cell metabolites and promoting signals for M1 or M2 activation, thus maintaining the equilibrium between pro-inflammatory and anti-inflammatory responses. Melittin, the principal bioactive component of bee venom, constitutes approximately 40%–50% of bee venom's total dry weight. Despite its inherent toxicity, recent studies have highlighted its potential benefits, including anti-inflammatory, anticancer, antibacterial, and antiviral properties. Naji et al. demonstrated that melittin could mitigate liver injury induced by isoniazid and rifampicin in rats; their findings indicated that melittin improved biochemical indicators and liver histopathology, suggesting its potential in preventing antituberculosis drug-induced ALF. Similarly, Park et al. observed in a mouse model of liver failure induced by LPS/D-GalN that melittin reduced the release of inflammatory cytokines and prevented hepatocyte apoptosis, likely through inhibition of NF-κB activation. These studies provide evidence supporting melittin's protective effects against acute liver injury. Moreover, Fan et al. investigated the underlying mechanism from an immunometabolism perspective, elucidating the effects of melittin. Their study indicated that in a mouse model of ALF induced by LPS/D-GalN, the expression of PKM2 and HIF-1α was upregulated in liver tissues, and treatment with bee venom significantly reduced the levels of PKM2 and HIF-1α. LPS-induced activation elevated the glycolytic rate and the levels of glycolytic products in RAW264.7 cells, but melittin intervention inhibited PKM2 and induced a shift in energy metabolism, resulting in an anti-inflammatory effect. Further mechanistic analysis suggested that the Akt/mTOR/PKM2/HIF-1α signaling pathway is upregulated during ALF progression; this activation was suppressed by melittin.
Additionally, melittin inhibited PKM2 activity and mitigated the PKM2-mediated Warburg effect, thereby controlling the inflammatory response associated with macrophage activation. Quercetin, a flavonoid found in various dietary sources such as vegetables, fruits, nuts, and tea, possesses notable anti-inflammatory, antioxidant, and anticancer properties. Previous studies indicate that quercetin exerts anti-inflammatory effects by regulating immune cell activation, inhibiting the release of pro-inflammatory factors, and downregulating inflammatory gene transcription; these effects involve the modulation of several signaling pathways, including the NF-κB, mitogen-activated protein kinase (MAPK), and arachidonic acid (AA) pathways. Mendes et al. explored the metabolic reprograming of glucose in THP-1-derived macrophages treated with LPS + IFN-γ. Their findings revealed that quercetin inhibited glycolysis and promoted the TCA cycle, demonstrating superior pharmacological efficacy compared with other flavonoids in reversing the reprograming of macrophage glucose metabolism. In another study, Tsai et al. pretreated RAW264.7 cells and adult mouse microglia with quercetin before stimulation with LPS. The results demonstrated that quercetin significantly reduced the expression of M1 macrophage markers such as IL-6, IL-1β, and TNF-α; it also reduced the release of chemokines linked to M1 polarization and suppressed the production of NO. Additionally, in these cells quercetin inhibited the expression of inducible NO synthase (iNOS) and cyclooxygenase 2 (COX-2) while enhancing the expression of the M2 macrophage marker IL-10 through activation of the AMPK and Akt signaling pathways. Quercetin also upregulated the endogenous antioxidant system and reduced ROS production. Moreover, glycyrrhizin-mediated liver-targeted alginate nanogels have been developed to deliver quercetin directly to the liver; this novel pharmaceutical formulation demonstrated promising effects in treating acute liver injury, as evidenced by improved biochemical indexes, reduced peroxide parameters, and downregulation of TNF-α, IL-6, iNOS, and monocyte chemotactic protein-1 (MCP-1). Salvianolic acid B, a prominent active compound derived from Salvia miltiorrhiza root, exhibits a spectrum of pharmacological activities, including antioxidant, anti-inflammatory, antiapoptotic, and antifibrotic effects. Studies have demonstrated its hepatoprotective and antifibrotic actions, including its ability to inhibit hepatocyte apoptosis and reduce oxidative stress-induced hepatocyte damage. Huang et al. further confirmed the beneficial impact of salvianolic acid B on cellular energy metabolism, showing that it upregulates PPARα expression and promotes the phosphorylation of AMPK and acetyl-CoA carboxylase (ACC) in liver tissues. Moreover, Zhao et al. found that salvianolic acid B could shift macrophage polarization from the pro-inflammatory M1 phenotype to anti-inflammatory M2 macrophages by inhibiting mTORC1-induced glycolysis. Additionally, Wei et al. elucidated the ability of salvianolic acid B to inhibit glycolysis and correct abnormal glucose metabolism through modulation of the PI3K/AKT/HIF-1α signaling pathway. LBP (Lycium barbarum polysaccharide), the primary bioactive constituent of wolfberry, possesses antioxidative, antitumor, immune-modulating, neuroprotective, and hepatoprotective effects. Ding et al.
used LPS to induce the polarization of RAW264.7 cells toward the M1 phenotype, noting an upregulation of PKM2 and HIF-1α expression, increased glycolysis, and elevated secretion of inflammatory mediators such as IL-1β and TNF-α. Upon LBP intervention, however, macrophage polarization toward the M1 phenotype was inhibited, accompanied by reduced expression of PKM2 and HIF-1α, suppressed glycolysis, and decreased secretion of inflammatory mediators. Remarkably, the therapeutic effects of LBP closely resembled those achieved through PKM2 knockdown and could be reversed by PKM2 overexpression, leading to the hypothesis that LBP suppresses glycolysis and modulates macrophage polarization by downregulating PKM2. Liu et al. conducted a similar study, demonstrating that LBP could attenuate the expression of inflammatory mediators (TNF-α, IL-1β, and IL-6) and NO by regulating macrophage polarization and NF-κB translocation; notably, these effects were primarily mediated through inhibition of the TLR4 and NF-κB signaling pathways. The dried seeds of Cassia obtusifolia, a cassia plant of the Leguminosae family, possess various therapeutic properties, including anti-inflammatory, hepatoprotective, antidiarrheal, antidiabetic, antibacterial, and neuroprotective activities, and have been used clinically to treat acute liver injury, constipation, Alzheimer's disease, hypertension, and hyperlipidemia. Among the primary active components, cassiaside C, a naphthopyrone compound, has garnered significant attention. Kim et al. found that cassiaside C could inhibit LPS/IFN-γ-induced polarization of RAW264.7 and peritoneal macrophages to the M1 phenotype by downregulating the PI3K/AKT/mTORC1 signaling pathway, thereby reducing the transcription and secretion of pro-inflammatory factors such as TNF-α, IL-1β, and IL-6; cassiaside C was also shown to decrease glycolysis levels and lactate production. Cynaroside, a flavonoid isolated from Bidens parviflora Willd., exhibits anti-inflammatory, antioxidant, and antitumor biological effects and has been demonstrated to hinder glycolysis by inhibiting HK2. In a study by Pei et al., cynaroside was observed to suppress hepatic PKM2 expression and nuclear translocation. This suppression led to reduced PKM2 binding to HIF-1α, inhibition of glycolysis-related enzymes, promotion of the transformation from M1 to M2 macrophages, attenuation of pro-inflammatory factor release, and, ultimately, positive anti-inflammatory effects. Ginsenoside Rg3, a member of the ginsenoside family of saponins, exhibits a range of biological activities according to modern pharmacological studies, including anti-inflammatory, hepatoprotective, antiallergic, and antitumor effects. Ginsenoside Rg3 has been shown to inhibit M1 macrophage polarization and induce M2 macrophage polarization, leading to anti-inflammatory effects. Through proteomic and metabolomic analyses, Ni et al. found that ginsenoside Rg3 could activate AMPK to inhibit glycolysis and significantly modulate various metabolites, such as pyruvate, acetyl-CoA, isocitrate, and succinate, within glycolysis and the TCA cycle. Ginsenoside Rg3 also accelerates the resolution of inflammation by inducing M2 macrophage polarization, which may depend on regulation of the NF-κB pathway. The aforementioned Chinese herbal monomers are summarized in Table 1. ALF is a rare yet life-threatening condition characterized by complex etiologies and high mortality rates.
While OLT (orthotopic liver transplantation) remains the primary treatment for ALF, its clinical application is limited. Hence, there is an urgent need to explore and develop novel drugs with innovative mechanisms of action for ALF; this remains a critical clinical challenge requiring immediate attention. Macrophages, as vital components of the innate immune system, are pivotal in the initiation, progression, and resolution of ALF. During different stages of ALF, macrophages can polarize into either the classically activated M1 pro-inflammatory phenotype or the alternatively activated M2 anti-inflammatory phenotype, each exerting distinct biological functions. Research indicates that promoting the polarization of macrophages from the M1 to the M2 phenotype can effectively alleviate liver damage and promote liver tissue repair. Current research on macrophage polarization, based on immunometabolism, mainly focuses on key signaling pathways such as PI3K/Akt, mTOR/HIF-1α, NF-κB, and AMPK. These pathways play vital roles in the metabolic regulation and polarization of macrophages and are closely interrelated, as depicted in Figure 3. Modern pharmacological studies indicate that TCM compounds, such as cassiaside C, melittin, LBP, and cynaroside, can effectively regulate the polarization of M1/M2 macrophages in ALF through glucose metabolism reprograming, thereby exerting anti-inflammatory and hepatoprotective effects. It is important to note that current research on glucose metabolic reprograming by TCM primarily focuses on isolated components or monomers of Chinese herbal compounds. This approach is crucial for elucidating the material basis and mechanisms of action of TCM. However, Chinese herbal formulations, which are the primary form used in clinical practice, inherently possess the characteristics of multiple components, targets, and effects, and leveraging these characteristics can offer the advantage of integrated therapeutic effects. Therefore, it is essential not only to explore individual components of TCM but also to intensify and deepen research on Chinese herbal formulations. It is also worth noting that although many reports highlight the use of TCM in treating ALF, studies focusing on immunometabolism and metabolic reprograming are lacking. Future experimental designs that incorporate perspectives from immunometabolism and metabolic reprograming hold significant promise for advancing herbal medicine in ALF treatment. This novel and promising approach can be implemented through several key aspects: 1. Preliminary screening of target components: Methods such as serum medicinal chemistry, "spectrum-effect" correlation analysis, in vitro cell models, near-infrared spectroscopy, TCM chromatographic fingerprinting, real-time cell-based assays, and computational virtual screening can be employed to identify potential target components. Subsequently, the pharmacodynamic substance basis of TCM can be elucidated using target-component knockout/knock-in technology. Furthermore, innovative techniques such as mass spectrometry, chromatography, small-molecule probes, network pharmacology, and bioinformatics can be used to determine the specific targets of TCM in the treatment of ALF. 2. Structural modification and optimization: The structural modification and optimization of active ingredients, along with the use of nanoparticle technology, can enhance drug targeting. For instance, Zhao et al.
successfully delivered the antioxidant quercetin to the liver for ALF treatment by employing glycyrrhizin-mediated liver-targeted alginate nanogels. This approach significantly improves drug targeting and bioavailability, demonstrating the potential of exploring TCM resources in developing new drugs for ALF. 3. Systematic and networked integration analysis: A comprehensive analysis of anti-ALF active components, mechanisms of action, and drug metabolism of TCM can be conducted using multi-omics technologies such as genomics, transcriptomics, proteomics, and metabolomics. Notably, Xu et al. have completed the whole-genome sequencing of Salvia miltiorrhiza, providing a crucial genetic background for understanding the biosynthesis and molecular regulation of its main pharmacologically active components and marking the beginning of the genomics era of TCM. 4. Construction of specific disease models: Disease models can be constructed based on TCM syndrome patterns. For instance, Fu et al. used serum from patients with liver-depression and spleen-deficiency syndrome to stimulate HepG2 cells, building a cellular model that reflects the biological basis of the syndrome to a certain extent. 5. High-quality clinical trials: High-quality clinical trials of TCM should be designed and conducted to ensure safety and efficacy; these trials should provide high-level evidence-based data to support clinical decision-making. In conclusion, targeting hepatic macrophages as a therapeutic strategy for treating ALF holds considerable potential. Encouraging the shift of hepatic macrophages from the M1 to the M2 phenotype appears promising for reducing liver damage and fostering liver regeneration. However, the polarization state of liver macrophages is intricately intertwined with the reprograming of glucose metabolism: transitioning the energy metabolism profile from glycolysis to OXPHOS can robustly regulate macrophage polarization, thereby exerting anti-inflammatory and hepatoprotective effects. Additionally, TCMs have shown favorable results in modulating the energy metabolism of liver macrophages. Therefore, our future direction will involve the design and execution of rigorous and comprehensive investigations at multiple levels, including biochemical, cellular, animal, and human studies, to delve further into this domain. Figure 1: Macrophages can differentiate into classically activated M1 macrophages in response to stimuli such as IFN-γ, LPS, GM-CSF, or TNF. M1 macrophages secrete IL-1β, TNF-α, IL-6, IL-12, NO, ROS, and RNS, contributing to pro-inflammatory responses, pathogenic microbial clearance, and antitumor effects. M2a macrophages, induced by IL-4 and IL-13, secrete IL-10 and play a role in tissue repair. M2b macrophages, activated by immune complexes and LPS, exhibit high expression levels of IL-1β, IL-6, and IL-10 and are involved in protective and inflammatory responses. M2c macrophages, activated by IL-10, TGF-β, and glucocorticoids, express high levels of IL-10 and TGF-β1, contributing to the inhibition of inflammation and the promotion of tissue repair. M2d macrophages, activated by IL-6, TLR ligands, and adenosine, display upregulated VEGF and IL-10, with anti-inflammatory and angiogenic effects. Figure 2: Macrophages in different polarization states exhibit significant differences in their glucose metabolism pathways. M1 macrophages primarily derive energy from glycolysis, with a disrupted TCA cycle.
Key enzymes involved in the energy metabolism of M1 macrophages include GLUT1, HK1/2, GPI1, PFK1/2, TPI1, PGK1, PGK2, ENO1, PKM2, and LDHA. This metabolic process results in the production of lactate, which is expelled from the cell by the MCT4 transporter; some glucose also enters the PPP. In contrast, M2 macrophages rely predominantly on OXPHOS and FAO for energy. Pyruvate in M2 macrophages is metabolized through the TCA (Krebs) cycle, producing intermediates such as acetyl-CoA, citrate, and α-KG. These intermediates further contribute to FAS and epigenetic regulation, ultimately generating ATP and CO2.
|
Review
|
biomedical
|
en
| 0.999997 |
PMC11688142
|
Metaphyseal distal ulnar fractures are those lying within 5 cm of the dome of the ulnar head. Distal ulnar fractures are relatively uncommon and usually involve the ulnar styloid; unlike styloid fractures, metaphyseal ulnar fractures are associated with distal radius fractures in 5%–6% of cases, typically in elderly patients with osteoporotic bone, while isolated fractures can occur after direct trauma to the ulnar side of the wrist. The distal ulna is a fixed point around which the hand and radius rotate during various activities of daily living, and the ulnar head is a keystone structure in maintaining the stability of the distal radioulnar joint (DRUJ) and the triangular fibrocartilage complex (TFCC) of the wrist; improper treatment of distal ulnar fractures can therefore lead to limited forearm rotation, persistent pain, DRUJ instability, and arthritis. According to the OTA/AO classification, distal ulnar fractures (2U3A) are classified into type A1, representing ulnar styloid fractures, and type A2, representing extra-articular fractures, which is subclassified into 2U3A2.1, 2U3A2.2, and 2U3A2.3 (the spiral, oblique, and transverse fractures, respectively), while type A3 denotes multifragmentary fractures. Unlike distal radius fractures, distal ulnar fractures are usually underappreciated and often inadequately treated. A distal ulnar fracture may require fixation either as an isolated injury or in combination with a distal radius fracture; angulation of more than 10°, more than 3 mm of ulnar variance, or translation of more than one-third of the diameter of the ulna are the main indications for fixation, and these criteria should be assessed once the distal radius is intact or anatomically reduced. Several fixation options have been described, such as tension band wiring, the minicondylar blade plate, locking plates, and headless compression screws; the anatomical distal ulnar hook plate has gained considerable popularity in the last few years. However, no fixation option has proven superior to the others, and the cost and availability of these options remain an issue, inviting a more economical, stable, and readily available alternative. The purpose of this study was to investigate the outcomes of a 2.7 mm semitubular hook plate for internal fixation of unstable metaphyseal ulnar fractures. This prospective case series included 30 consecutive patients with a recent unstable distal ulnar fracture treated between January 2015 and July 2019. The inclusion criteria were adult patients with a closed unstable distal ulnar fracture, either associated with a distal radius fracture or as an isolated injury; a fracture was considered unstable if there was angulation of more than 10° in any plane, displacement of the distal fragment of more than one-third of the ulnar diameter, more than 3 mm of ulnar variance, or severe comminution. The exclusion criteria were nondisplaced fractures or fractures that reduced after fixation of the distal radius fracture, ulnar styloid fractures, pathological fractures, open fractures, patients with peripheral vascular disease, and patients younger than 18 years. All cases underwent open reduction and internal fixation with a 2.7 mm semitubular hook plate, fixed distally by two screws and proximally by three to four bicortical 2.7 mm screws.
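For reference, the OTA/AO coding quoted above can be captured in a small lookup table. The sketch below is illustrative only: the codes and descriptions are taken from the classification as summarized in this section, and the helper function is hypothetical.

```python
# Illustrative lookup of the OTA/AO distal ulna (2U3A) codes quoted in the text.
AO_2U3A = {
    "2U3A1": "ulnar styloid fracture",
    "2U3A2": "extra-articular metaphyseal fracture",
    "2U3A2.1": "spiral fracture",
    "2U3A2.2": "oblique fracture",
    "2U3A2.3": "transverse fracture",
    "2U3A3": "multifragmentary fracture",
}

def describe(code: str) -> str:
    """Return the description for an OTA/AO 2U3A code, if known."""
    return AO_2U3A.get(code, "unknown code")

print(describe("2U3A2.3"))  # -> transverse fracture
```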
Preoperative planning included history taking and thorough clinical examination, including neurovascular assessment; radiographic evaluation with plain radiographs of the wrist (posteroanterior and lateral views) and elbow (anteroposterior and lateral views), supplemented by computed tomography in cases of associated distal radius fracture; and routine preoperative investigations to confirm fitness for anesthesia. Ethical approval was obtained from the institutional ethics committee, and informed consent was obtained from all patients. All procedures were performed under general anesthesia with the use of a tourniquet. In cases of associated injuries, the distal radius fracture was fixed first, and the distal ulnar fracture was then confirmed to be still unstable. Seven distal radius fractures were fixed by percutaneous pinning with K-wires, and nine were fixed with volar locked plates. A longitudinal skin incision was made on the ulnar border of the distal forearm, and the subcutaneous tissue was dissected carefully to identify and protect the dorsal sensory branch of the ulnar nerve, which is encountered about one inch from the ulnar styloid. In multifragmentary fractures, care was taken to preserve the soft tissue envelope so as to protect the blood supply of the small fragments and their soft tissue attachments; once proper reduction was achieved by gentle manual traction or the use of small bone levers, a preliminary K-wire could be used to maintain the reduction. After exposure of the fracture site and proper reduction, a 2.7 mm semitubular plate was prepared by cutting its upper edge and bending both pillars into a hook. The plate was applied on the dorsoulnar surface, with its hook embracing the ulnar styloid so as not to jeopardize the TFCC; some proximally directed force was applied to engage the hook into the ulnar head, and fixation of the fracture was completed with 2.7 mm screws. Postoperatively, an arm splint was applied for three weeks, followed by a removable splint for another three weeks, with emphasis on mobilizing the fingers and then the wrist. Afterward, patients were encouraged to mobilize the wrist and to perform passive rather than active pronation and supination. All patients were followed for 12 months postoperatively, recording time to union; range of motion (wrist flexion, extension, supination, and pronation); pain using the Visual Analog Scale (VAS); and functional outcome using the quick Disabilities of the Arm, Shoulder, and Hand (DASH) score and the Mayo wrist score. Preoperative and final follow-up posteroanterior wrist radiographs were evaluated to measure radial height, radial inclination, and ulnar variance. Radial inclination was measured on a posteroanterior X-ray as the angle formed between the long axis of the radius and a line drawn from the distal tip of the radial styloid to the ulnar corner of the lunate fossa. Radial height was measured by identifying the long axis of the radius and extending a line perpendicular to it at the tip of the radial styloid on a posteroanterior radiograph; the distance between this line and the most distal point of the ulnar dome was recorded. Ulnar variance was measured on a posteroanterior radiograph using the method of perpendiculars.
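As an illustration of the angle measurement just described, the sketch below computes the angle between the radial long axis and the styloid-to-lunate-fossa line from digitized landmark coordinates. All coordinates here are invented for demonstration; in practice, the landmarks would be digitized on calibrated radiographs.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Hypothetical digitized landmarks (image coordinates, arbitrary units):
shaft_proximal = (50.0, 200.0)  # point on the long axis of the radius
shaft_distal = (52.0, 60.0)     # second point defining that axis
styloid_tip = (30.0, 40.0)      # distal tip of the radial styloid
lunate_corner = (75.0, 55.0)    # ulnar corner of the lunate fossa

long_axis = (shaft_distal[0] - shaft_proximal[0],
             shaft_distal[1] - shaft_proximal[1])
inclination_line = (lunate_corner[0] - styloid_tip[0],
                    lunate_corner[1] - styloid_tip[1])

print(f"angle: {angle_between(long_axis, inclination_line):.1f} degrees")
```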
Data were analyzed using the Statistical Package for the Social Sciences (SPSS), Version 15.0 (SPSS Inc., Chicago, Illinois). Quantitative data were expressed as mean ± standard deviation after confirmation of normal distribution; data that were not normally distributed were expressed as medians and interquartile ranges. Qualitative data were expressed as frequencies and percentages. A p value < 0.05 was considered statistically significant. A t-test was performed to study the association of outcomes with the demographic and injury variables (age, associated distal radius fracture, and fracture type according to the AO classification). The study included thirty patients with a mean age of 45.3 ± 10 (range 29–61) years. There were eighteen males (60%) and twelve females (40%), and 16 patients (53.33%) had an associated distal radius fracture. Among the distal radius fractures, nine were type C2 (56.25%), four were type A3 (25%), and three were type C1 (18.75%). According to the AO classification of distal ulnar fractures, three fractures were type A2.1 (10%), nine were type A2.2 (30%), eight were type A2.3 (26.67%), and ten were type A3 (33.33%) (Table 1). All fractures united, with a mean time to union of 9 ± 1.4 (range 7–12) weeks. Regarding range of motion, the mean supination was 81.4 ± 3.5 (range 75–88) degrees, the mean pronation was 81.3 ± 4.5 (range 70–88) degrees, the mean flexion was 71.7 ± 3.6 (range 65–78) degrees, and the mean extension was 81.7 ± 3.4 (range 75–88) degrees; there were no cases of residual DRUJ instability at the end of follow-up. The mean VAS was 1.1 ± 1 points (range 0–3). At 12 months postoperatively, the mean quick DASH score was 9.3 ± 5.6 points (range 0–20.5), and the mean Mayo wrist score was 88.5 ± 7.2 points (range 75–100), with 17 patients rated excellent (56.67%), 10 good (33.33%), and three satisfactory (10%) (Table 2). Regarding the radiographic wrist parameters, the mean radial height improved from 10.13 ± 2.8 mm (range 3–14) preoperatively to 12.2 ± 0.96 mm (range 10–14) postoperatively, a statistically significant change (p=0.04); the mean radial inclination improved from 17.6 ± 7.18° (range 3–25) preoperatively to 23.1 ± 1.95° (range 20–27) postoperatively (p=0.87); and the ulnar variance was +0.8 ± 1.86 mm (range −4 to +2.3) preoperatively and +0.36 ± 0.96 mm (range −2 to +1.7) postoperatively (p=0.3) (Table 3). Subgroup analysis: when the correlations between age, fracture type, and associated distal radius fracture and the functional outcomes were analyzed, no statistically significant correlation was found (p > 0.05 in all tests). Complications: superficial wound infection occurred in two cases (6.67%), one in each group, and both resolved with local wound care and broad-spectrum antibiotics by three weeks postoperatively; three patients (10%) had limited wrist flexion with minor limitation of functional activities of daily living; and one patient reported hardware prominence with crepitus, which required plate removal 10 months postoperatively after bony union was confirmed (Table 4).
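As a rough, hedged cross-check of the radial-height comparison just reported, the sketch below recomputes a t statistic from the published summary statistics using Welch's unpaired formula. Because it ignores the paired pre/post design the authors presumably analyzed, it will not reproduce their exact p-value; it is intended only to show the arithmetic.

```python
from math import sqrt
from scipy import stats

n = 30  # patients with pre- and postoperative radiographs
pre_mean, pre_sd = 10.13, 2.80    # radial height, preoperative (mm)
post_mean, post_sd = 12.20, 0.96  # radial height, postoperative (mm)

se = sqrt(pre_sd**2 / n + post_sd**2 / n)
t = (post_mean - pre_mean) / se

# Welch-Satterthwaite degrees of freedom
df = (pre_sd**2 / n + post_sd**2 / n) ** 2 / (
    (pre_sd**2 / n) ** 2 / (n - 1) + (post_sd**2 / n) ** 2 / (n - 1)
)
p = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.2f}, df = {df:.1f}, p = {p:.4g}")
```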
Metaphyseal distal ulnar fracture fixation is challenging because of the short distal segment available for fixation, associated comminution, the frequent presence of osteoporosis, and a thin soft tissue envelope; in addition, the triangular shape of the distal ulna makes plate placement poorly tolerated by patients. Volar placement of plates is more convenient but carries a considerable risk of injury to the ulnar nerve and vessels. Distal ulnar fractures associated with distal radius fractures are usually well tolerated, making fixation unnecessary in many cases; however, displaced and unstable fractures, if neglected, can lead to devastating complications such as longitudinal forearm instability, DRUJ instability, and arthritis. In the literature, fixation of metaphyseal ulnar fractures has been achieved by different methods: percutaneous pinning carries the risk of loosening and pin tract infection, tension band wiring cannot be utilized in comminuted fractures, and implant-related complications are not uncommon. Headless compression screws have been used in intra-articular fractures, either as isolated injuries or in association with a distal radius fracture; recently, their use has been expanded to metaphyseal fractures as an intramedullary screw by Oh and Park. Their study included 11 patients with a mean follow-up of 26.6 weeks (about six months); the mean union time was 6.5 weeks, the mean quick DASH score was 14.6, and the mean VAS score was 1.09. However, the patients were elderly (70 years old, and all were women), suggesting low functional demand, and rotational stability in comminuted cases remains a concern. The use of anatomical locked distal ulnar plates has gained popularity in recent years. Lee et al. evaluated the results of a distal ulnar LCP in 25 patients with a mean age of 62.3 years: all patients achieved bony union at a mean of 12.5 weeks with a good range of motion, the average modified Mayo wrist score was 87 points, and the average quick DASH score was 14 points. In the retrospective study by Han, Hong, and Kim, 17 patients with a mean age of 58.9 years were included, all with associated distal radius fractures, and the distal ulna was fixed with a distal ulnar LCP; over an average follow-up of 15 months, all fractures achieved bony union at an average of 11.7 weeks with a very good range of motion and radiographic measurements, the mean DASH score was 11 points, and six patients had excellent and 11 had good results according to the modified Sarmiento score of the wrist. Meluzinová et al. used the same plate in eighteen patients, all with associated distal radius fractures and a mean age of 58 years; over a follow-up of nine months, the average Mayo wrist score was 84 points and the quick DASH score was 7.4 points, with a good postoperative range of motion. Recently, Gauthier et al. retrospectively evaluated the outcomes of 48 patients with distal ulnar fractures combined with distal radius fractures, in whom the distal ulnar fractures were fixed with an anatomical distal ulnar hook plate LCP; over a follow-up of 28 months, the functional outcomes were very good to excellent as regards the Q-DASH score, Mayo wrist score, and range of motion. However, a high complication rate was observed (45%), most commonly pain and discomfort, requiring plate removal in 14 patients (29%). Stock et al.
investigated a different design of anatomical LCP, applied volarly and without a hook in the ulnar head; they included 20 patients with a mean age of 70 years, and the mean DASH score, PRWE score, and VAS after one year showed no significant difference from the uninjured side (Table 5). The sample size of the present study is comparable to published articles in the same area, which could be explained by the fact that many distal ulnar fractures are stable, especially after reduction and fixation of the distal radius fracture; the follow-up period (12 months) is also comparable to the referenced articles. The mean age of patients in this study was lower than in other studies, which could explain the relatively shorter time to union. The functional outcomes in this study were comparable to those of all studies: some variables were similar (Mayo wrist score) while others varied (VAS and Q-DASH score), which could be due to the nature of the outcome measures themselves; overall, however, the results were broadly similar in terms of functional outcome, and the rate of complications was higher with the anatomical LCP and the minicondylar plate (Table 5). Nevertheless, the cost and availability of the distal ulnar LCP may limit its use in all cases, and a high rate of complications is still observed; modifications are ongoing to produce a lower-profile plate and to minimize soft tissue irritation. The immobilization method and duration have been studied in the literature (Table 5). The immobilization method is usually a short arm splint, and the immobilization time shows gross variability, from two to six weeks postoperatively, depending on bone quality, the degree of fracture comminution, the stability of fixation, and patient compliance; it is recommended to use a short arm rather than a long arm splint, kept at a minimum until stitch removal, and a removable splint can be used during rehabilitation. Hook plates have been utilized at different skeletal sites, such as ankle fractures, the olecranon, the base of the fifth metatarsal, and the radial styloid. Heim and Niederhauser described the successful use of a 3.5 mm one-third semitubular plate in the management of distal ulnar fractures; however, the use of 3.5 mm plates may lead to inadequate fixation of the distal fragment in several types of distal ulnar fractures. The use of 2.7 mm semitubular plates for fixation of distal ulnar fractures, as in this study, has not previously been described in the literature. The one-quarter tubular plate is a low-profile plate, 1 mm thick and 7 mm wide, with a hole spacing of 8 mm, which allows fixation of the short distal fragment with a minimum of two screws, plus the hook, which adds further stability to the construct when it anchors the ulnar head; accordingly, this plate does not lead to more soft tissue disruption or hardware prominence. In this study, only one patient required hardware removal; analysis of his radiographs showed that the plate had been placed dorsally rather than at the recommended dorsoulnar site, which avoids interference with the function of the extensor carpi ulnaris tendon. The use of a minicondylar blade plate for fixation of distal ulnar fractures was associated with hardware prominence in seven of twenty-four patients (29.16%).
Subanalysis using a t-test found no statistically significant difference between isolated cases and cases with an associated distal radius fracture. To avoid unnecessary surgery, it should be emphasized that not all ulnar fractures associated with distal radius fractures require fixation; most become stable after radius fixation and need no further treatment. Once the distal radius fracture was reduced and fixed, the stability of the distal ulnar fracture was evaluated, and proper reduction and fixation of the distal radius fracture restored the radiological parameters of the wrist (Table 4). To minimize bias, a subgroup analysis was performed, which found no differences between the two groups; the radiological wrist parameters were evaluated for all cases, and the postoperative parameters were comparable to normal ranges, emphasizing the recommendation to pay particular attention to the reduction and fixation of distal radius fractures in the context of distal forearm fractures. There were no cases of residual DRUJ instability. This could be explained by the fact that anatomical restoration of the bony relationship between the sigmoid notch of the distal radius and the ulnar head usually leads to excellent outcomes after distal radius reduction and fixation, with or without fixation of the ulna. Even when DRUJ stability was evaluated by CT scan in earlier studies, no residual instability was encountered, indicating that clinical intraoperative evaluation is the most important means of detecting and addressing such injury. Despite the novel and successful use of the 2.7 mm semitubular plate in treating unstable distal ulnar fractures, this study has some limitations: there was no control group, particularly for comparison with the anatomical distal ulnar LCP; the cost-effectiveness of both implants remains to be studied, as does the use of both plates in older patients with more osteoporotic fractures. In conclusion, the 2.7 mm semitubular hook plate is a successful option for internal fixation of unstable distal ulnar fractures, either isolated or associated with distal radius fractures, with favorable union time, functional outcome, and range of motion, and minimal complications. This plate is a suitable alternative to the anatomical LCP, with greater availability, lower cost, and a comparable outcome.
|
Other
|
biomedical
|
en
| 0.999998 |
PMC11688157
|
After lung cancer, breast cancer is the most frequent cause of cancer deaths among women worldwide. It accounts for substantial morbidity and mortality in women globally and imposes a huge burden on healthcare. The morphologic distinction between benign and malignant (in situ and invasive) diseases of the breast can be challenging, particularly in the setting of core needle biopsy. Although morphology alone can diagnose the majority of breast lesions, significant variation in the interpretation of challenging lesions based on histological analysis has been reported. Immunohistochemical staining can, therefore, help diagnose and determine the prognosis of these lesions. The cells of the ductal epithelium are of three types: luminal, basal, and myoepithelial cells. Different cytokeratins are described in the luminal and basal cell types, whereas myoepithelial cells express basal-type cytokeratins, smooth muscle actin, calponin, and p63. Retention of the myoepithelial layer is typical of benign and in situ lesions, whereas loss of this layer is considered a diagnostic feature of invasive cancer. Werling et al. conducted a comparative analysis of p63 against calponin and smooth muscle myosin heavy chain (SMMHC) and found that p63 was the most specific of the three markers and could replace calponin and SMMHC, as the latter two showed affinity for myofibroblasts as well as myoepithelial cells. p63 is a specific nuclear marker for myoepithelial cells, as it stains neither stromal fibroblasts nor vascular smooth muscle cells, and it can be very helpful in revealing invasion; it is also easily appreciated, even in cytologic preparations. p63 staining is observed around normal ducts and benign tumors but is absent in invasive carcinomas, and it has a complementary role in distinguishing in situ from invasive lesions. This study aimed to investigate p63 expression in benign and malignant breast lesions and to assess whether loss of p63 expression is consistently associated with invasive disease. This cross-sectional study was conducted in the Department of Pathology, Great Eastern Medical School and Hospital, Srikakulam, Andhra Pradesh, over one year, from July 2023 to June 2024, after ethical clearance from the Great Eastern Medical College Institutional Ethical Committee. A total of 98 breast disease cases were collected, in the form of trucut biopsies, lumpectomies, and mastectomies. Clinical details were procured from the patients' histopathology requisition forms and the hospital information management system (HIMS). All cases underwent standard processing and were stained with hematoxylin and eosin (H&E) for analysis. Inclusion criteria: All incisional biopsies, trucut biopsies, lumpectomy specimens, and mastectomy specimens of both benign and malignant breast lesions were included in the study. Exclusion criteria: All congenital breast diseases, inflammatory breast lesions, metastatic deposits to the breast, cases with prior treatment, and inadequate biopsies were excluded from the study. Sample size calculation: The sample size was calculated using the sensitivity estimation formula, n = Z²(1−α/2) × Sensitivity × (1 − Sensitivity)/(d² × prevalence), where Z(1−α/2) is the standard normal value at confidence level 1 − α/2 and d is the absolute precision, with 20% sensitivity and a 90% prevalence of the outcome in the population; the minimum sample size needed was 36. Study procedure: Tissues were routinely fixed in 10% neutral buffered formalin, embedded in paraffin blocks, sectioned at three to five microns, and stained with H&E. They were studied under a light microscope.
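As an aside on the sample-size estimate just quoted, a minimal sketch of the formula follows. The confidence level and the absolute precision d are not fully stated in the text, so the values used here are assumptions, and the sketch therefore does not reproduce the reported minimum of 36 exactly.

```python
from math import ceil

z = 1.96           # standard normal value for 95% confidence (assumed)
sensitivity = 0.20 # as stated in the text
prevalence = 0.90  # as stated in the text
d = 0.20           # absolute precision: assumed, not stated in the text

n = z**2 * sensitivity * (1 - sensitivity) / (d**2 * prevalence)
print(ceil(n))  # about 18 with these inputs; the reported minimum of 36
                # implies a different (unstated) precision d
```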
For a positive control of p63 staining, a histological section from a benign prostate biopsy was included in each staining batch. For the negative control, FLEX ready-to-use monoclonal mouse universal negative control (code IR750; Agilent Dako, California, United States) was used. The procedure involved heat-induced epitope retrieval with a Tris/ethylenediaminetetraacetic acid (EDTA) buffer at pH 9.0, followed by inactivation of endogenous peroxidase using aqueous hydrogen peroxide. The samples were then incubated with the primary antibody (mouse anti-human p63 monoclonal antibody, clone 4A4, Vitro Master Diagnóstica, Madrid, Spain), followed by diaminobenzidine (DAB) chromogen application and counterstaining. All histopathology sections were evaluated, and immunohistochemical staining was performed on 86 cases. The IHC-stained slides of these cases were studied in detail and categorized into benign lesions and non-invasive and invasive carcinomas. p63 expression assessment: The intensity of p63 expression was categorized as continuous positive, discontinuous positive, or negative. The extent was quantified by the percentage of positive cells, with scores assigned as follows: zero (negative), one (1-25%), two (26-90%), and three (91-100%). Scores of one and two indicated discontinuous positivity, while a score of three indicated continuous positivity. Based on p63 immunostaining, sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were calculated. Fisher's exact test was used for the analysis of categorical data. Data were summarized as mean ± standard deviation (SD) and percentages and analyzed using IBM SPSS Statistics for Windows, Version 20, with p-values calculated accordingly. A p-value of less than 0.05 was deemed statistically significant. A total of 98 cases were evaluated, of which 12 were excluded; thus, 86 cases were included in the study. Among these, there were 35 lumpectomy specimens (40.69%), 25 modified radical mastectomy specimens (29.0%), 24 trucut biopsies (27.9%), and two simple mastectomy specimens (2.3%). The patients ranged in age from 18 to 80 years, with the majority of cases occurring in the fifth decade of life, followed by the fourth and third decades. The average age at presentation was 40.8 years, and the study included 81 females and five males. Of the total cases analyzed, 46 (53.48%) were benign and 40 (46.51%) were malignant. The most prevalent histological variant among the benign breast lesions was fibroadenoma, followed by sclerosing adenosis and gynecomastia (Table 1). In contrast, infiltrating ductal carcinoma was the most common variant among the malignant lesions. p63 immunostaining was conducted on all cases. Among the 46 benign cases, continuous p63 positivity with a score of three was observed in 24 fibroadenomas, four cases of fibrocystic change, one benign phyllodes tumor, and one intraductal papilloma (Table 3). Discontinuous positivity with a score of two was noted in five cases of sclerosing adenosis, two cases of usual ductal hyperplasia, four cases of atypical ductal hyperplasia, and five cases of gynecomastia. Of the 40 malignant cases, 29 cases of invasive ductal carcinoma, two cases of invasive lobular carcinoma, one case of mucinous carcinoma, one case of solid papillary carcinoma, one case of medullary carcinoma, and one case of Paget's disease (with underlying invasive breast carcinoma) showed negative p63 immunostaining (score zero; Table 4).
The remaining five malignant cases exhibited discontinuous p63 positivity, with a score of one observed in one case of metaplastic carcinoma and four cases of ductal carcinoma in situ. Among the 86 cases, all 46 benign cases demonstrated positive p63 expression (scores of two or three), a 100% positivity rate. In contrast, of the 40 malignant cases, 35 (87.5%) showed no p63 expression (score of zero), while five cases (12.5%) exhibited positive p63 expression (score of one; Table 5). A significant difference in p63 expression was observed between the benign and malignant breast lesions, with a p-value of less than 0.00001. p63 was consistently positive in benign breast lesions, indicating that the immunohistochemical expression of p63 can serve as a reliable marker for distinguishing between benign and malignant breast lesions (Table 6). Based on p63 positivity and negativity across all benign and malignant breast lesions, sensitivity was 100%, specificity 87.50%, positive predictive value 90.20%, negative predictive value 100%, and accuracy 94.19% (Table 7). Breast lesions do not constitute a single entity; they encompass a diverse range of diseases characterized by significant clinical and morphological variability. Invasive carcinomas are treated based on clinical, radiological, and pathological findings. The presence or absence of invasion is an important histopathological feature that guides treatment and prognosis. Myoepithelial cells have a tumor-suppressive function, secreting protease inhibitors and releasing tumor-suppressive proteins; the transition of benign lesions to invasive carcinoma involves the loss of myoepithelial cells and hence the loss of this tumor-suppressive function. Myoepithelial markers are valuable tools for differentiating invasive carcinoma from benign lesions that share similar morphological features. Invasive carcinomas are characterized by the absence of the myoepithelial layer that typically encases benign breast ducts. p63 is both a sensitive and specific myoepithelial cell marker, as it is expressed exclusively in myoepithelial cells without cross-reactivity with myofibroblasts. The assessment of invasion on routine H&E staining posed challenges in some core needle biopsies because peritumoral inflammation-associated fibrosis obscured the detection of myoepithelial cells; however, negative p63 staining on immunohistochemistry together with clinico-radiological correlation confirmed the diagnosis of invasive carcinoma in these cases. Saini et al. found that most breast cases occurred in the fifth decade of life, followed by the fourth decade, with a mean age at presentation of 37.1 years, close to our finding of 40.8 years. There were 86 cases in the present study, of which 46 (53.4%) were categorized as benign lesions, similar to the findings of Thakkar et al. (54.16%) and Stefanaou et al. (52.63%). The remaining 40 cases (46.51%) were classified as malignant, which aligns closely with the findings of Wang et al. (41.17%) but is higher than the results reported by Stefanaou et al. (36.09%) and Verma et al. (32.4%). In our study, fibroadenoma was the predominant benign lesion, accounting for 52.17% of cases, in line with existing literature in which fibroadenomas represent 46.6% to 55.6% of all benign breast lumps.
Invasive ductal carcinoma emerged as the most prevalent malignant lesion in our study, constituting approximately 72.5% of all malignant cases, a finding consistent with the studies by Verma et al. (87.5%) and Tiwari et al. (84.3%). Notably, all benign tumors in our study tested positive for p63. Conversely, the majority of malignant tumors (35 cases, or 87.5%) were negative for p63, while five cases (12.5%) exhibited p63 expression, including four cases of ductal carcinoma in situ and one case of metaplastic carcinoma. Similar findings were reported by Barbareschi et al., who noted p63 positivity in all benign lesions, while invasive breast carcinomas displayed negative p63 staining in 95% of cases. Among the 86 cases in our study, 30 (34.88%) benign lesions showed continuous p63 positivity with a score of three, while five (5.8%) cases of sclerosing adenosis, two (2.3%) cases of usual ductal hyperplasia, four (4.65%) cases of atypical ductal hyperplasia, and five (5.8%) cases of gynecomastia showed discontinuous p63 positivity with a score of two. Of the 40 (46.5%) malignant cases, four (10%) cases of ductal carcinoma in situ and one (2.5%) case of metaplastic carcinoma showed discontinuous p63 positivity with a score of one, while 35 (87.5%) invasive carcinomas were p63 negative with a score of zero. These findings are similar to those of Saini et al. Metaplastic breast carcinoma represents a rare and heterogeneous category of primary breast malignancies, comprising less than 1% of all invasive breast carcinomas; in our study, we identified one case of metaplastic carcinoma that exhibited discontinuous positive p63 immunostaining, consistent with the findings of Saini et al. Mammary Paget's disease is a type of breast cancer usually associated with an underlying breast malignancy. In this condition, atypically large cells with abundant, pale-staining cytoplasm can mimic keratinocytes; however, these Paget's cells are negative for p63, in sharp contrast to the surrounding keratinocytes. This was consistent with our case, where Paget's cells stained negative for p63 against a background of p63-positive keratinocytes (control). The identification of a peripheral rim of myoepithelial cells is essential in the differential diagnosis of breast lesions, especially when working with limited core biopsy samples. This feature is valuable because loss of p63 expression has been linked to the progression of ductal breast carcinoma, indicating that p63 immunostaining plays a critical role in confirming invasive growth patterns. In the present study, p63 demonstrated high sensitivity and specificity for diagnosing benign breast lesions, consistent with the findings of Barbareschi et al., Reis-Filho et al., and Cheung et al., who reported a sensitivity of 100% and a specificity of 95%. Therefore, p63 proves to be a highly reliable myoepithelial marker and could be effectively incorporated into immunohistochemical panels to aid in the identification of myoepithelial cells in challenging breast lesions. When p63 immunostaining results were compared between benign and malignant breast lesions using Fisher's exact test, a statistically significant difference was observed, suggesting that p63 is a reliable marker for differentiating between benign and malignant lesions.
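The reported diagnostic metrics and the Fisher's exact test can be reconstructed from the stated 2×2 table (46 p63-positive benign lesions, five p63-positive and 35 p63-negative malignant lesions). The short check below treats p63 positivity as the call for "benign".

```python
from scipy.stats import fisher_exact

# 2x2 table reconstructed from the reported results:
tp = 46  # benign lesions, p63 positive
fn = 0   # benign lesions, p63 negative
fp = 5   # malignant lesions, p63 positive (4 DCIS + 1 metaplastic carcinoma)
tn = 35  # malignant lesions, p63 negative

sensitivity = tp / (tp + fn)                # 1.0    -> 100%
specificity = tn / (tn + fp)                # 0.875  -> 87.50%
ppv = tp / (tp + fp)                        # 0.902  -> 90.20%
npv = tn / (tn + fn)                        # 1.0    -> 100%
accuracy = (tp + tn) / (tp + fn + fp + tn)  # 0.9419 -> 94.19%

_, p_value = fisher_exact([[tp, fn], [fp, tn]])
print(f"Se={sensitivity:.4f} Sp={specificity:.4f} PPV={ppv:.4f} "
      f"NPV={npv:.4f} Acc={accuracy:.4f} Fisher p={p_value:.2e}")
```

The p-value returned for this table is far below 0.00001, in line with the significance level reported in the study.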
Our findings emphasize the utility of p63 expression in identifying myoepithelial cells in ambiguous breast lesions, in distinguishing various complex epithelial breast lesions, and in challenging core biopsies. The results further imply that loss of p63 expression is associated with the invasive progression of breast carcinoma. Consequently, p63 immunostaining may assist in differentiating invasive ductal carcinoma from ductal carcinoma in situ or atypical ductal hyperplasia, thereby aiding clinical decision-making and ensuring appropriate therapeutic interventions. The findings of this study must be interpreted in light of several limitations. First, the small sample size may not provide a fully representative view of the general population, potentially limiting the broader applicability of the results. Additionally, the variability in p63 expression, intensity, and staining patterns observed in this and other studies suggests that p63 may have complex molecular roles in tumorigenesis, warranting further investigation. Furthermore, differences in staining protocols and interpretation criteria across studies can lead to inconsistent results. To deepen our understanding of p63 expression across various tumor subtypes and normal tissues, future research should involve a larger and more diverse sample set examined under rigorously standardized conditions. Although large immunohistochemical panels have been used in differential diagnosis, there is still no consensus on the most sensitive and specific antibodies or the optimal combination of markers. This ambiguity points to an ongoing need for novel markers and refined diagnostic tools to improve accuracy and reliability in distinguishing between benign and malignant lesions. In our study, the pattern of p63 expression was examined across a total of 86 cases. A positive correlation was observed between histopathological features and p63 scoring in all lesions, yielding 100% sensitivity, 87.50% specificity, a 90.20% positive predictive value, a 100% negative predictive value, and 94.19% accuracy. Among benign lesions, non-proliferative cases demonstrated continuous positivity, while proliferative lesions exhibited less consistent positivity for p63. Premalignant lesions showed minimal positivity, and most malignant lesions were devoid of p63 staining, with a few exceptions. p63 expression proves beneficial for diagnosis in challenging trucut biopsy specimens preoperatively and in excisional lumpectomy specimens postoperatively. This study concludes that p63 is a highly valuable immunohistochemical marker for cases that are difficult to classify based solely on histomorphological features.
|
Study
|
biomedical
|
en
| 0.999998 |
PMC11688158
|
The integration of robotic technology into medicine marks a revolutionary leap in the evolution of surgical practices. Initially designed to enhance surgeons' precision and technical capabilities, robotics has now become indispensable across multiple surgical disciplines. The da Vinci surgical system (Intuitive Surgical, Sunnyvale, California, United States), a pioneering example, has redefined the surgical landscape with advancements such as superior ergonomics, three-dimensional (3D) imaging, and increased dexterity. These features enable surgeons to perform highly complex procedures with unparalleled accuracy and efficiency. Such breakthroughs have been made possible by progress in computer processing, miniaturization, and artificial intelligence (AI). The advent of robotic surgery has brought profound and transformative changes to cardiac treatment, diverging significantly from traditional approaches. Known for its intricate nature and critical precision requirements, cardiac surgery has increasingly embraced robotic innovations. Minimally invasive procedures have replaced conventional open-heart surgeries, which were previously associated with substantial trauma and extended recovery times. This shift has been further propelled by robotic technology. Today, robotic cardiac surgery encompasses a broad range of interventions, including coronary artery bypass grafting (CABG) and valve repairs. Renowned for its exceptional accuracy, reduced patient trauma, and accelerated recovery periods, robotic-assisted surgery is rapidly becoming the preferred choice in modern cardiac care. The adoption of robotic systems has significantly improved surgical outcomes while broadening the scope of possibilities in cardiac procedures. The ability of robotic systems to execute precise movements within confined spaces, such as the chest cavity, has been instrumental in addressing the challenges posed by complex cardiac surgeries. Given the delicate nature of cardiac tissues and the need for meticulous intervention, this technological advancement holds exceptional significance. The field of robotic cardiac surgery is continually evolving as researchers and clinicians explore and implement novel techniques. Applications of robotic systems have expanded beyond mitral valve surgery and coronary revascularization, addressing diverse and complex cardiac conditions. Accumulating empirical evidence supports the effectiveness and safety of robotic cardiac surgery, reinforcing its potential to become a standard practice in the field. However, this progress is not without challenges. Key obstacles include issues with cardiopulmonary bypass (CPB) management, myocardial protection during robotic procedures, and the requirement for specialized training. Additionally, the financial burden of implementing and maintaining robotic systems poses significant challenges, particularly for healthcare systems in middle-income countries like Colombia. Despite these barriers, the trajectory of robotic cardiac surgery remains optimistic, driven by technological advances and increasing global acceptance. This innovative approach has the potential to revolutionize cardiac care by enhancing safety, reducing invasiveness, and improving patient outcomes. While the short-term benefits of robotic cardiac surgery, such as reduced hospital stays and quicker recoveries, are well documented, there remains a lack of extensive longitudinal research on its long-term outcomes and cost-effectiveness.
Critical areas like mortality rates, quality of life, and cost-efficiency over time remain underexplored, particularly as robotic technology continues to evolve . As new equipment and procedures are introduced, consistent evaluation of their long-term efficacy and impact is vital . Addressing this research gap is crucial for providing comprehensive insights into the value of robotic cardiac surgery. Such knowledge would benefit not only patients and healthcare providers but also policymakers tasked with resource allocation and technology investment . By examining these aspects, this study aims to support informed decision-making, ensuring that medical technology investments yield tangible and sustainable benefits for patients and healthcare systems . This research is motivated by the need to bridge the gap between technological advancements and their clinical integration. While robotics in cardiac surgery has made significant technological strides, clinical evidence supporting its widespread adoption remains limited. The primary goal of this study is to generate empirical data to inform clinical practices and policy frameworks. Additionally, with the healthcare industry increasingly prioritizing value-based care, understanding the cost-effectiveness of robotic surgery is imperative. By assessing whether its higher initial costs are offset by improved long-term outcomes and healthcare delivery efficiency, this study seeks to provide meaningful contributions to the dynamic field of cardiac surgery . The objectives of this study include a comprehensive synthesis of existing data and advancements in robotic cardiac surgery. This analysis focuses on cutting-edge developments, clinical outcomes, and areas for future exploration. Specifically, the study evaluates the immediate- and long-term impacts of robotic technology on cardiac surgical procedures, identifies the key technological drivers behind this progress, and examines the implications for surgical techniques and patient care. It also highlights the challenges and limitations associated with robotic cardiac surgery, such as financial, technological, and training-related issues. Materials and methods Review Design and Search Strategy The systematic review was conducted in strict adherence to the criteria outlined by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The protocol was collaboratively developed and approved by all authors involved. The primary objective of this study was to comprehensively examine the role of robots in cardiac surgery, with an emphasis on the latest achievements, outcomes, and potential future developments in the field. The search strategy was designed to capture a wide range of relevant scientific literature. A comprehensive search of the PubMed and Cochrane databases covered the period from 2015 to December 2022; the final search was run in 2023. Two independent researchers, A and B, conducted the investigation using a combination of Medical Subject Headings (MeSH) terms, phrases, and free-text terms. Primary search terms included "robotic cardiac surgery", "robot-assisted cardiac procedures", and "minimally invasive cardiac surgery". Boolean operators (OR and AND) were employed to combine these terms, and truncation was used to broaden the retrieval of relevant records.
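The authors do not report their full query strings, so the following is only a minimal sketch of how the stated terms might be combined with Boolean operators and truncation; the grouping and the truncated term are illustrative assumptions, not the review's actual query.

```python
# Illustrative assembly of a Boolean search query from the primary terms
# named above. Synonyms are OR-ed within a concept; concepts are AND-ed.
# The exact grouping and the truncated term 'robot*' are assumptions.
robotics_terms = [
    '"robotic cardiac surgery"',
    '"robot-assisted cardiac procedures"',
    'robot*',  # truncation retrieves robot, robotic, robotics, ...
]
approach_terms = [
    '"minimally invasive cardiac surgery"',
    '"cardiac surgical procedures"',
]

query = "({}) AND ({})".format(" OR ".join(robotics_terms),
                               " OR ".join(approach_terms))
print(query)
# ("robotic cardiac surgery" OR "robot-assisted cardiac procedures" OR robot*)
# AND ("minimally invasive cardiac surgery" OR "cardiac surgical procedures")
```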
Besides electronic searches, a manual search of references in influential articles and review papers was conducted to identify any missed studies. Quality Assessment The selected articles' quality was evaluated using the checklists provided by the Critical Appraisal Skills Programme (CASP). These checklists were employed to assess several study designs, including randomized controlled trials, observational studies, and qualitative research. The initial evaluation was carried out independently by three authors, who used a scoring system where a score of 2 indicated full compliance, 1 indicated partial compliance, and 0 indicated non-compliance or inapplicability. After completing their individual assessments, the authors convened to compare evaluations and engage in discussions. This collaborative approach ensured a thorough and unbiased assessment of each study. Following this rigorous assessment, a total of 475 papers were excluded across the screening and quality-appraisal stages, leaving nine studies that met the moderate- or high-quality criteria. Data Extraction and Categorization Data extraction followed a standardized template, collecting crucial information such as the primary author's name, publication year, study country, research design, specific robotic technology used, study location, participant characteristics, and significant outcomes. The extracted data were coded and classified into subcategories. Through an analysis of the relationships among these codes, broader categories were formed. A series of iterative discussions among the authors determined the final categories and subcategories. The goal was to reach a consensus on the core topics and findings of the review. Results Study Selection The initial database search retrieved 484 records. Of these, 26 were excluded as duplicates and 20 as review articles. The remaining 438 articles were screened by title and abstract, and 32 publications were selected for full-text screening. Of these, two publications were excluded because their full text was not accessible. Full-text screening of the remaining 30 articles yielded nine articles for inclusion in the review. The detailed screening process and the reasons for exclusion are illustrated in the PRISMA diagram in Figure 1 (these counts are also reconciled arithmetically in the short sketch below). Study Characteristics This review includes nine studies of various types, including one randomized , two prospective , one observational , and five retrospective studies . These studies were conducted in different regions, including London , the USA , Japan , Finland , Turkey , Korea , and the Netherlands . Depending on the study, overall sample sizes range from eight to 605 patients, including a mix of male and female participants. The mean ages of the participants, as reported, exhibit considerable variation, ranging from 43.4 to 69 years. Two investigations reported follow-up data . Table 1 summarizes the characteristics of the nine studies on robotic-assisted surgical procedures in cardiac surgery. Surgical Type and Surgical System Surgical procedures performed in different studies included CABG , left atrial ablation , atrial septal defect (ASD) closure , and robotic mitral valve repair (RMVP) . Various surgical systems were utilized in each of the investigations. Arujuna et al. utilized NavXTM (St. Jude Medical Inc., Saint Paul, Minnesota, United States) and CARTO XP (Biosense Webster Inc., Irvine, California, United States), while Giambruno et al.
employed the Automated Endoscopic System for Optimal Positioning (AESOP) and the Zeus telemanipulation system (Sunnyvale, California, United States). The da Vinci surgical system was used in seven studies; one study did not report the surgical system used . Atrial fibrillation (AF) was documented in two studies, Arujuna et al. and van der Heijden et al. , with reported prevalence rates of 33.1% and 2.8%, respectively. Left atrial (LA) diameter varied across studies, with Spanjersberg et al. reporting a mean diameter of 15.4±4.6 mm. Ejection fraction (EF) data were available in five studies, with the majority reporting values above 50%. Notably, Kesävuori et al. reported a mean LA diameter of 3.4±3.5 cm, while Yun et al. did not provide specific information on AF, LA diameter, or EF. The values reported in the different studies are detailed in Table 2 . Comorbidities Patients undergoing robotic-assisted cardiac surgery exhibited a diverse range of comorbidities, with varying frequencies reported across the studies outlined in Table 3 . Across the included studies, the reported comorbidities were diabetes (n=7), hypertension (n=6), chronic lung disease (n=6), peripheral vascular disease (n=1), coronary artery disease (n=1), thyroid disorder (n=1), smoking history (n=1), myocardial infarction (n=5), stroke (n=3), transient ischemic attack (n=2), percutaneous coronary intervention (n=1), and previous cardiac surgery (n=1). Operative Time A majority of studies (n=7) reported a comparable or reduced operative time in robotic-assisted cardiac surgeries compared to traditional approaches . CPB time was measured in five studies . In all studies, the robotic platform demonstrated efficiency in complex procedures such as mitral valve repair and CABG. Evidence suggests a trend towards reduced blood loss in robotic surgeries, contributing to improved postoperative recovery . Mortality Rates The overall mortality rates were similar between robotic and traditional cardiac surgery groups, indicating the safety of robotic interventions. Notably, a few studies reported a zero mortality rate . Complications and Conversion Fewer postoperative complications, including myocardial infarction, AF, and wound complications, were observed in the robotic-assisted surgeries. Few conversions were reported in three studies, while three studies reported no conversions. Reoperation due to bleeding was also reported in two studies in a limited number of patients. Hospital Stay Robotic surgeries were associated with a shorter hospital stay compared to conventional approaches, suggesting potential economic benefits . A lower number of readmissions was reported in only two studies. Learning Curve Only one study acknowledged an initial learning curve for surgeons adopting robotic techniques, with proficiency gained over time . Quality Assessment The quality assessment of the articles was conducted using the CASP checklist. While all the studies addressed a clearly focused issue (Q1), only one study was a randomized controlled trial . This study met the criteria for Q2 and Q3, while most other studies were either unclear (NC) or did not meet the criteria. Several studies did not provide sufficient data regarding group assignment or blinding, resulting in lower scores for Q3 and Q4. Two studies provided comprehensive data on important clinical outcomes, while others reported relevant but less comprehensive outcomes.
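The screening counts reported in the Results can be reconciled arithmetically; the short sanity check below (numbers taken directly from the text) also shows that 475 total exclusions equal the 484 retrieved records minus the nine included studies.

```python
# Sanity check on the PRISMA flow counts reported in the text.
retrieved = 484
duplicates = 26
review_articles = 20

screened = retrieved - duplicates - review_articles
assert screened == 438                      # title/abstract screening pool

full_text_selected = 32
excluded_title_abstract = screened - full_text_selected   # 406
inaccessible_full_text = 2
full_text_screened = full_text_selected - inaccessible_full_text
assert full_text_screened == 30

included = 9
excluded_at_full_text = full_text_screened - included     # 21

total_excluded = (duplicates + review_articles + excluded_title_abstract
                  + inaccessible_full_text + excluded_at_full_text)
assert total_excluded == 475 == retrieved - included
print("PRISMA counts are internally consistent")
```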
Overall, the quality of the papers was mixed: some studies met most criteria while others were unclear in certain areas, but on balance the included papers were judged to be of good quality. A summary has been provided in Table 4 . Discussion Robotic surgery has consistently demonstrated promising outcomes across various cardiac procedures, including valve repair, CABG, and ASD repair. These advancements highlight the growing acceptance and effectiveness of robotic technology in complex cardiac interventions. The successful implementation of two-port robotic cardiac surgery for ASD treatment, as reported by Yun et al., underscores the potential of minimally invasive techniques to achieve clinical goals with minimal patient trauma and quicker recovery times . Similarly, robotic mitral valve surgery has proven to be both safe and efficacious, particularly in patients with comorbidities like obesity, where traditional sternotomy might pose higher risks. This finding emphasizes that obesity should not be a contraindication for robotic procedures . Remarkably, the rate of valve repair in robotic groups was observed to be 98.6%, comparable to 97.9% in sternotomy groups, as reported by Kesävuori et al., further validating the reliability of robotic approaches . Moreover, the use of transesophageal echocardiography (TEE) has enhanced procedural precision by enabling the accurate selection of annuloplasty ring sizes and artificial chordae lengths, especially in conditions like Barlow's disease . In the realm of ablation techniques, robotic systems have shown the potential to improve long-term outcomes. For instance, robotic ablation has been associated with increased late gadolinium enhancement, potentially reducing the likelihood of future reoperations . Another notable advancement is the integration of unilateral left-sided thoracoscopic AF ablation with minimally invasive direct coronary artery bypass (MIDCAB). This technique, specifically involving left internal mammary artery to left anterior descending artery (LIMA-LAD) grafting, has been shown to be both feasible and reliable. It offers a tailored solution for patients with concurrent AF and significant coronary artery narrowing, highlighting the versatility of robotic systems in addressing multifaceted cardiac conditions . Robot-assisted CABG has emerged as a compelling alternative to traditional CABG, combining safety, feasibility, and efficacy. According to Giambruno et al., robotic CABG procedures not only match the clinical outcomes of conventional approaches but also bring additional advantages such as reduced blood loss, shorter hospital stays, and faster recovery times . Furthermore, Spanjersberg et al. reported that robotic CABG significantly increased the likelihood of early discharge to home compared to off-pump CABG, underscoring its potential for improving patient throughput and reducing healthcare costs . Despite these advancements, robotic cardiac surgery is not without challenges. One of the main hurdles is the steep learning curve associated with adopting robotic systems. Proficiency in robotic techniques requires significant training and experience, which can initially limit the widespread adoption of these technologies. Additionally, the high upfront costs of robotic systems, such as the da Vinci surgical system, and their maintenance present financial barriers, particularly for healthcare systems in resource-limited settings.
Addressing these challenges will require a multifaceted approach, including enhanced surgeon training programs, financial incentives for hospitals, and further technological innovations to make robotic systems more affordable. Furthermore, while the current evidence highlights excellent short-term outcomes, there remains a paucity of data on the long-term benefits of robotic cardiac surgery. Future studies should focus on long-term patient outcomes, such as survival rates, quality of life, and cost-effectiveness, to provide a more comprehensive understanding of its value in clinical practice. The continued refinement of robotic technology, combined with efforts to address its financial and educational barriers, has the potential to establish robotic-assisted surgery as a gold standard for various cardiac procedures. Robotic cardiac surgery is a transformative innovation with proven safety and efficacy across a range of procedures. It offers a minimally invasive alternative with significant benefits for patient recovery and hospital efficiency. As the field continues to evolve, sustained research and development will be critical to overcoming existing limitations and ensuring the broader adoption of robotic technologies in cardiac care. The utilization of robots in cardiac surgery has shown encouraging results concerning effectiveness, safety, and recovery time for patients. The efficacy of robotic systems is shown by shorter operative times and reduced blood loss, as well as comparable or lower death rates when compared to conventional procedures. The low rate of conversions to conventional techniques and of postoperative problems is further evidence of these technologies' dependability. Shorter hospital stays are indicative of both better patient outcomes and possible financial savings for healthcare organizations. However, the research also underscores the significance of the learning curve related to robotic surgery and the need for specific education and expertise. Based on the overall quality of the evaluated papers, robotic cardiac surgery is a promising field that requires ongoing study and monitoring to be optimized. Robotic-assisted cardiac surgery seems to have a promising future with an emphasis on method refinement, improved training for surgeons, and improved patient outcomes.
|
Other
|
biomedical
|
en
| 0.999997 |
PMC11688159
|
Cardiovascular disease (CVD) refers to a group of disorders affecting the heart and blood vessels, including atherosclerotic peripheral arterial disease (PAD) and coronary artery disease (CAD) . CVD stands as a global health crisis, being the leading cause of morbidity and mortality in an aging world population . In 2019, an estimated 523 million people were living with CVD, and 18.6 million deaths were attributed to these conditions . Conventional risk factors such as hypertension, diabetes mellitus, smoking, and high lipid levels have played a pivotal role in shaping risk prediction models and advancing treatment methods in conjunction with biomarkers . Biomarkers are "characteristics that are objectively measured as indicators of health, disease, or a response to an exposure or intervention, including therapeutic interventions" . Biomarkers can be categorized as diagnostic, prognostic, predictive, or pharmacodynamic markers. Diagnostic biomarkers are used to detect or confirm the presence of a disease or a condition. A prognostic biomarker provides information about the progression of a disease in an untreated individual or one getting routine treatment. A predictive biomarker aids in the identification of individuals who are most likely to benefit from a specific therapy or distinguishes those who are suitable for targeted therapies. Pharmacodynamic biomarkers, in turn, assess the impact of a drug on the disease itself; essentially, they reflect how a target organism changes in response to both the disease and its treatment. Over the past 30 years, advances in CVD biomarker research and innovation have resulted in more sensitive screening techniques, an increased focus on early detection and diagnosis, and better treatments that have improved clinical outcomes in the community . Biomarkers are considered valuable when they fulfill certain criteria, such as (a) accuracy, or the capacity to recognize people who are at risk, (b) reliability, or the consistency of outcomes upon repetition, and (c) the therapeutic effect of early intervention . Assessing cardiovascular risk in high-risk asymptomatic individuals is now a common practice in preventive medicine. Predictive tools and risk scores, developed from comprehensive cohort studies and randomized trials, help to pinpoint those vulnerable to CVD . However, traditional risk-assessment models, such as the Framingham risk score, may no longer fully reflect current health patterns due to significant changes across generations . This comprehensive search aims to uncover novel biomarkers that not only complement existing risk-assessment models but also offer additional insights into cardiovascular health. Methodology An in-depth literature review was conducted to improve risk prediction tools, utilizing keywords such as "risk prediction", "biomarker", "lab tests", and CVD-related terms such as "acute coronary syndrome", "coronary artery disease", "myocardial infarction", "heart failure", and "stroke", on PubMed and Google Scholar. Inclusion and Exclusion Criteria This review included studies published exclusively in English, without any specific geographic restriction, covering the period from 2000 to 2024. Additionally, only free full-text peer-reviewed journal articles were included. Studies were excluded if they did not contain the required biomarker information .
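The stated criteria translate directly into a screening predicate; the sketch below is illustrative only, and the metadata field names on `record` are hypothetical assumptions rather than real PubMed or Google Scholar fields.

```python
# Hypothetical record-screening predicate for the stated criteria; the
# dictionary keys are illustrative assumptions, not real database fields.
def meets_criteria(record: dict) -> bool:
    return (
        record.get("language") == "English"
        and 2000 <= record.get("year", 0) <= 2024
        and record.get("free_full_text", False)
        and record.get("peer_reviewed", False)
        and record.get("reports_biomarker_data", False)  # exclusion rule
    )

example = {"language": "English", "year": 2015, "free_full_text": True,
           "peer_reviewed": True, "reports_biomarker_data": True}
print(meets_criteria(example))  # True -> retained for review
```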
Cardiovascular biomarkers are classified according to their pathophysiological processes, such as (1) myocardial injury, (2) myocardial stress, (3) inflammation, (4) blood coagulation factors (platelet activation), (5) plaque instability, and (6) metabolic abnormalities . Biomarkers indicating metabolic abnormalities Glycated Hemoglobin A1c (HbA1c) HbA1c is a blood test that reflects average blood glucose concentrations over the past two to three months . High HbA1c levels indicate a higher occurrence of carotid arterial plaque in both pre-diabetic and diabetic patients. Arterial plaque leads to increased development of coronary heart disease (CHD) . A study by Ceriello et al. found that individuals with a mean HbA1c level of 53 mmol/mol had a higher risk of myocardial infarction, stroke, and mortality compared to those with a lower HbA1c level . According to Khaw et al., a 1% increase in HbA1c was linked to a 21% increase in cardiovascular risk in both men and women . Chronically hyperglycemic individuals are at higher CVD risk; however, maintaining HbA1c levels below 70 mmol/mol (8.6%) may reduce CVD risk in diabetes patients (a worked conversion between mmol/mol and percentage units is sketched at the end of this review). However, findings regarding the association between abnormal HbA1c levels and cardiovascular complications are inconsistent among healthy populations . Thyroid-Stimulating Hormone (TSH) TSH plays a crucial role in regulating metabolism, influencing thermogenesis, and controlling energy expenditure . A population-based study indicates that hypothyroidism (a high TSH value) causes hyperlipidemia and vascular inflammation and is accompanied by systolic hypertension, atrial fibrillation, and hypercoagulability. The effects of dyslipidemia directly contribute to the development of atherosclerotic CVD . Serum TSH levels higher than 10 mIU/L among young populations with subclinical hypothyroidism increase the risk of CHD and mortality . Therefore, participants with high TSH levels have a significantly higher risk of CVD mortality. TSH levels play a crucial role in identifying the high-risk population in the community . Homocysteine Homocysteine is an amino acid produced as a byproduct of metabolizing protein-rich foods such as meat and dairy products. With the help of vitamins B12 and B6 and folate, it is converted into the proteins needed by the body. Normally, very little homocysteine remains in a healthy individual's blood. Homocysteine plays a crucial role in endothelial clot formation by inhibiting protein C and heparan sulfate, which in turn increases blood viscosity. Furthermore, serum homocysteine interacts with low-density lipoprotein (LDL) to form LDL-homocysteine thiolactone, which forms atherosclerotic plaques and thrombosis in arteries. The risk of atherosclerosis increases with age due to an increase in homocysteine levels . A study by Lühmann et al. showed that lowering homocysteine levels with folate or vitamin B does not reduce the risk of cardiac events in high-risk populations. Homocysteine might not be a strong predictor individually, but in combination with standard risk factors, it can give a stronger predictive value in healthy individuals . Plasma Ceramides Plasma ceramides are lipids (sphingolipids) present in all tissues and blood that play a role in cellular signaling. They accumulate when there is inflammation, dyslipidemia, or metabolic malfunction. Elevated ceramide levels modify cell membrane permeability, leading to leaky blood vessels and plaque accumulation. In addition, ceramides form 30% of circulating LDL-cholesterol, which plays a pivotal role in atherosclerosis . Mishra et al.
reported that increased plasma ceramides correlate with a 9% rise in the risk of impaired left ventricular function. The impaired left ventricle reduces the ejection fraction, which increases CVD mortality. Nevertheless, plasma ceramides could serve as a stand-alone predictive biomarker for high carotid intima-media thickness (CIMT), indicating their potential in identifying subclinical atherosclerosis . Biomarkers indicating plaque instability Lipid Profile Lipoproteins are particles made of protein and fats, categorized into atherogenic and anti-atherogenic. Atherogenic factors such as LDL and very low-density lipoprotein (VLDL) tend to promote atherosclerosis. Anti-atherogenic lipoproteins, such as high-density lipoprotein (HDL), inhibit atherosclerosis. Increased atherogenic levels increase macrophage absorption of cholesterol, which in turn causes inflammation and plaque formation. This mechanism has made dyslipidemia the major risk factor for CVD . Notably, non-HDL, which consists of atherogenic particles, is emerging as a stronger predictor of CVD risk irrespective of triglyceride (TG) levels . Tanabe et al. suggested that non-HDL cholesterol levels are significant indicators for assessing the risks. The findings suggest that considering non-HDL cholesterol, which includes various atherogenic lipoproteins, can enhance the accuracy of cardiovascular risk prediction compared to total cholesterol alone . TGs play a pivotal role in lipid metabolism due to their association with atherogenic particles, which increase the chance of plaque formation. Increased TG causes pancreatitis and other conditions, which are linked to increased atherosclerotic risk . However, some studies suggested that TG is not an independent risk factor for CVD risk prediction . Moreover, the Atherosclerosis Risk in Communities (ARIC) study revealed strong associations of total cholesterol, LDL, and TG with increased CVD risk, while HDL is linked to decreased risk. Evidence from the studies suggested that every 1 mmol/L reduction in LDL level reduces the risk of CVD by about 40% , and a similar study mentioned that a 12% increase in CVD risk is associated with every 10 mg/dL rise in LDL . Therefore, total cholesterol/HDL and LDL/HDL ratios are better predictors of CVD risk than LDL-cholesterol levels alone . Additionally, a recent study by Gentile et al. revealed an independent association between VLDL and CIMT , where a positive association was observed between VLDL and vessel thickness, arterial stiffening, and loss of elasticity, which account for major cardiovascular risk factors . Therefore, including whole lipid particles will increase accuracy in predicting CVD risk better than using only LDL and HDL. Apolipoprotein B (ApoB) ApoB is the protein component of atherogenic lipoproteins, and its measurement reflects the number of atherogenic particles , including LDL. It is an accurate predictor of CVD events , and combining ApoB with traditional lipids significantly improves long-term CVD risk assessment . Studies show that increased ApoB particles lead to the development of ischemic heart disease, myocardial infarction, and other CVD events . Su et al. demonstrated that adding ApoB information to LDL and HDL measurements does not significantly enhance CVD risk prediction based on China's atherosclerotic CVD (ASCVD) risk score; overall, however, most evidence suggests that adding ApoB improves CVD risk assessment . Lipoprotein A (Lp A) Lp A binds to oxidized phospholipids to transport cholesterol particles.
It also plays a critical role in macrophage foam cell formation, which in turn aids in the development of thrombosis and atherosclerosis . This process ultimately results in the buildup of plaques, leading to atherosclerosis. The accumulated lipid plaques cause aortic valve stenosis, leading to valve calcification . A study by Mohammadi-Shemirani et al. showed that a 50 mg/dL increase in Lp A will increase the risk of atrial fibrillation by 3% and the risk of CVD by 20% . Therefore, Lp A has been found to be a significant CVD risk predictor, particularly among low-risk individuals . Biomarkers indicating inflammation C-reactive Protein (CRP) CRP levels increase in response to tissue damage caused by trauma, infection, malignancy, or chronic inflammatory conditions . Studies indicate a linear association between CRP level and the risk of CVD, stroke, and CHD , making CRP a useful inflammatory marker for predicting CVD events. Specifically, high CRP levels in acute myocardial infarction indicate inflammation and thrombosis in the infarcted myocardium . In contrast, a study by Han et al. showed that the use of CRP in preventive care is still uncertain because the levels can be influenced by multiple factors, such as chronic inflammatory diseases, lifestyle, and obesity . Due to these influences, CRP is not used as a sole marker in asymptomatic individuals. Instead, CRP is often considered alongside other risk factors and diagnostic tools to provide a more comprehensive CVD assessment . Myeloperoxidase (MPO) MPO is a protein released by circulating leukocytes during CVD events, and upon oxidation, macrophages enriched in MPO produce hypochlorous acid (HOCl) . This chemical contributes to increased inflammation and plaque formation with thinner fibrous caps, which is the leading cause of coronary thrombosis, infarction, and sudden cardiac death events between 30 days and six months . Elevated MPO levels in the coronary circulation suggest localized tissue injury due to pathophysiological processes of cardiac failure. Therefore, MPO can be used as a diagnostic and monitoring marker but not for prediction . Procalcitonin (PCT) PCT is a protein released by macrophages associated with sepsis, progressive atherosclerosis, obesity, and insulin resistance. Elevated PCT indicates the severity of bacterial infection. Patients with severe myocardial damage post-infarction showed increased PCT concentrations, leading to increased mortality within 48 hours to six months. PCT enhances outcome and prognosis information but is not as accurate as CRP in predicting outcomes . Serum PCT levels were significantly higher among CVD high-risk populations than among low- to intermediate-risk populations . High PCT values are associated with a twofold increased risk of cardiovascular death. PCT has a lesser association with the prediction of stroke development than of myocardial infarction . Blood coagulation factors and thrombosis Complete Blood Count (RBC, WBC, Platelet, and Hematocrit) A complete blood count (CBC) is a laboratory test that measures the levels of red blood cells (RBC), white blood cells (WBC), platelets, hemoglobin, and hematocrit in an individual's blood. Most studies reported a consistent association between elevated RBC, WBC, hematocrit, and CVD. Higher RBC counts and hematocrit levels contribute to increased blood thickness and increased platelet clumping by releasing adenosine diphosphate.
The thickened blood and platelet clumps have an increased chance of atherosclerotic plaque formation, increasing CVD risk . The above-mentioned mechanism was also reported to cause vascular smooth muscle dysfunction and abnormal vascular structure . Moreover, higher RBC and hemoglobin levels are associated with increased risk factor development, such as hypertension, diabetes, dyslipidemia, and obesity, which potentially augments plaque formation in coronary arteries. In addition, platelets are essential in clotting and inflammatory mechanisms, encouraging the attachment of inflammatory cells such as neutrophils and monocytes and thereby promoting plaque and thrombosis formation. Elevated platelet counts (thrombocythemia) increase the frequency of thrombosis formation . Neutrophils and monocytes, types of WBC, act as the first line of defense in the immune mechanism. A prospective cohort study conducted across Europe, which aimed to study the etiology of chronic diseases, revealed that elevated WBC counts, particularly among active smokers, were linked to increased CVD risk, with a stronger effect on stroke than on CHD. Although individual WBC subtypes showed weaker associations with CVD, total WBC count showed a strong association with CVD risk . WBC count was found to predict future CVD and mortality in patients with, or at high risk of, CVD, independently of conventional risk factors. The greatest predictive ability was provided by high neutrophil (N) or low lymphocyte (L) counts . The N/L ratio measures the balance of inflammation and immunity in the body. The higher the N/L ratio, the higher the chance of CVD. These findings have important implications for CVD risk assessment . Fibrinogen Fibrinogen, also known as coagulation factor I, is a glycoprotein that plays a crucial role in blood clotting. Fibrinogen concentration is affected by factors that regulate synthesis and by genetic factors. It has a half-life of three to five days . Increased fibrinogen leads to the formation of thrombosis, which blocks the blood vessels and reduces blood supply, causing CVD. According to the Framingham trial, a greater frequency of CVD events was associated with higher fibrinogen plasma concentrations . Furthermore, a study by Surma et al. reported that individuals who experienced a myocardial infarction event showed higher plasma fibrinogen concentrations (≥343 mg/dL) than healthy individuals . A study by Maresca et al. showed that arterial thrombosis occurs irrespective of fibrinogen levels . Overall, these findings highlight the significance of fibrinogen as a predictor of CVD risk and its potential role in refining risk-assessment strategies . Biomarkers indicating myocardial injury Creatine Kinase (CK) CK biomarkers are known to be used to diagnose muscle disorders. CK is an enzyme generally found in skeletal muscle, the heart, and the brain . When heart muscle is damaged, CK is released into the bloodstream, but its levels rise relatively slowly, making it inadequate for early diagnosis of myocardial infarction within six hours of the event . A study by Wu et al. noted that CVD mortality is higher among groups with high CK values than in the low CK group . Therefore, CK serves better as a diagnostic marker than as a predictor of CVD risk. Cardiac Troponin Cardiac troponin I (cTnI) is a protein that is released into the bloodstream from myocardial cells when they are permanently damaged due to acute heart muscle injury.
cTnI levels in the bloodstream are usually elevated four to nine hours after myocardial injury, peaking at 12-24 hours. Increased cTnI levels may not appear for two to three hours after a myocardial injury due to several physiological processes involved in the release. It takes time for myocyte membranes to break down and release detectable levels of troponins into the blood. This process typically begins within hours of the onset of myocardial injury, making it difficult to predict cardiovascular events . Higher troponin levels indicate more damage to the heart muscle and are associated with CHD, mortality, and heart failure events . Notably, a study by Huynh and Blankenberg et al. mentioned that high-sensitivity troponin I (hs-TnI) is an advanced diagnostic that aids in detecting low troponin concentrations, enabling earlier and more accurate diagnosis of myocardial infarction. Therefore, the hs-TnI test helps in early diagnosis, prognosis, and potential treatment strategies for CVD. Biomarkers indicating myocardial stress N-terminal Prohormone of Brain Natriuretic Peptide (NT-proBNP) NT-proBNP is a peptide released by the ventricles of the heart. It is released in response to cardiac wall stress during heart failure events as a protective mechanism for reducing the workload on the heart and improving its efficiency. The mechanism leads to vasodilation and diuresis to minimize cardiac workload . Increased cardiac muscle damage correlates with higher NT-proBNP levels, which are associated with increased cardiovascular mortality rates . A study by Hussain et al. reported that participants with hypertension and elevated NT-proBNP levels had greater cardiovascular risk compared with those with hypertension but lower NT-proBNP levels . Pulmonary and renal dysfunction can also increase NT-proBNP levels . Therefore, the NT-proBNP test can predict morbidity and mortality of cardiovascular events, which benefits heart failure screening, identifying people at high risk of heart failure, and helping manage disease progression . Others Serum Creatinine Serum creatinine is a marker of kidney function. Elevated creatinine levels and decreased glomerular filtration rate (GFR) are indicators of impaired kidney function. Impaired kidney function is associated with fluid retention and atherosclerotic plaque formation . This fluid retention causes hypertension and increases cardiac afterload. This association suggests that even mild impairment of renal function can contribute to an elevated risk of CVD events and mortality . A study by Chen et al. mentioned that higher creatinine levels are recognized among elderly, diabetic, and hypertensive individuals and those with a history of myocardial infarction or stroke . Therefore, serum creatinine can be used as a marker to assess high-risk populations for CVD . Vitamin D Vitamin D, also known as calciferol, is primarily produced in the body by the action of sunlight on the skin, although it may also be obtained from food sources and supplements. Vitamin D plays a crucial role in calcium absorption and metabolism, which is essential for bone mineralization . Vitamin D is essential for maintaining endothelial function, which prevents platelet aggregation, and for controlling inflammation. Vitamin D deficiency (<20 ng/mL) can cause endothelial dysfunction, which is an early stage in the development of atherosclerosis and is linked to a twofold increase in the risk of CVD events .
Additionally, vitamin D deficiency was associated with the development of various CVD risk factors, such as hypertension, diabetes, high body mass index (>30), and elevated triglyceride levels . In contrast, excessively high vitamin D levels can lead to irregular heartbeat and atrial fibrillation . Plasma Trimethylamine N-oxide (TMAO) TMAO is a small, colorless amine oxide generated by gut microbial metabolism from choline, betaine, and carnitine, precursors that are abundant in many fruits, vegetables, nuts, dairy products, and meat . Elevated levels of TMAO (>6 μM) alter metabolism, influencing bile acid synthesis and cholesterol absorption and causing dysfunction in vascular cells and cardiomyocytes. Elevated TMAO also promotes the accumulation of cholesterol in macrophages, leading to foam cell formation. The dysfunction in vascular cells leads to inflammation and cellular apoptosis, which contributes to the development of atherosclerosis, cardiomyopathy, and heart failure . Moreover, in a European study, CVD patients had higher plasma TMAO levels than healthy individuals. The study revealed that gut microbiota-related mechanisms contributed to CVD progression, but the predictive value of TMAO needs further evaluation . Table 1 presents biomarkers and their normal ranges, along with their relevant references. Limitation Articles in languages other than English, such as Mandarin and Spanish, may have been excluded. This language restriction might have led to a "Tower of Babel" bias. Because English-language articles are readily accessible and interpretable, only these were included in our narrative review. Biomarkers are often used for diagnosis, prognosis, and risk prediction. This extensive literature review focused on identifying biomarkers whose altered levels are useful for identifying high-risk populations and for the early diagnosis of CVD events. Increased levels of biomarkers such as atherogenic lipoproteins, fibrinogen, homocysteine, and TSH showed a greater association with CVD risk factor development. Risk factors such as dyslipidemia, hypertension, diabetes, and obesity cause atherosclerotic CVD. Inflammatory markers such as CRP do not provide specific risk estimates on their own, but they can enhance risk prediction when added to conventional biomarkers. Biomarkers such as hs-TnI, NT-proBNP, MPO, PCT, CK, and plasma ceramides act as better diagnostic markers than screening tools, as some of these proteins are released from cardiac myocytes into the bloodstream only one to two hours following the onset of a cardiovascular event. Although there is evidence that combining biomarkers can improve the accuracy of specific tests, the best combinations for early diagnosis or prognosis remain to be identified.
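Several thresholds in this review are quoted in mixed units: HbA1c in IFCC units (mmol/mol) and NGSP units (%), and LDL in mmol/L and mg/dL. As a hedged illustration, the sketch below applies the commonly cited IFCC-to-NGSP master equation and the standard LDL-cholesterol conversion factor; it reproduces the 70 mmol/mol ≈ 8.6% figure quoted above. The helper names are ours, not from the cited studies.

```python
def hba1c_ifcc_to_ngsp(mmol_per_mol: float) -> float:
    """Convert HbA1c from IFCC (mmol/mol) to NGSP (%) using the commonly
    cited master equation: NGSP = 0.09148 * IFCC + 2.152."""
    return 0.09148 * mmol_per_mol + 2.152

def ldl_mmol_to_mg_dl(mmol_per_l: float) -> float:
    """Convert LDL cholesterol from mmol/L to mg/dL (1 mmol/L = 38.67 mg/dL)."""
    return 38.67 * mmol_per_l

print(f"{hba1c_ifcc_to_ngsp(70):.2f} %")   # ~8.56 %, reported above as 8.6 %
print(f"{hba1c_ifcc_to_ngsp(53):.2f} %")   # ~7.00 %
print(f"{ldl_mmol_to_mg_dl(1.0):.1f} mg/dL per 1 mmol/L of LDL")  # 38.7
```

This also puts the two LDL risk figures quoted above on a common scale: a 1 mmol/L change corresponds to roughly four 10 mg/dL increments.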
|
Review
|
biomedical
|
en
| 0.999997 |
PMC11688161
|
Thalassemia, the most common form of hereditary anemia, is caused by the impaired synthesis of one of the two globin chains in hemoglobin . This disorder has been found to be highly prevalent in tropical and sub-tropical regions of the world (e.g., Southeast Asia, the Mediterranean area, the Indian subcontinent, and Africa), where the estimated prevalence rates are 12%-50% in the case of alpha thalassemia and 1%-20% for beta thalassemia . The overall prevalence of the thalassemia trait in India is 2.78%, with higher burdens in northern and western states . Beta thalassemia syndromes result from a decrease in beta-globin chains, which results in a relative excess of alpha-globin chains. The life expectancy of thalassemic patients has been dramatically extended by the availability of multiple transfusions and chelation therapy. The majority of children with severe forms of thalassemia, such as thalassemia major, are transfusion-dependent, requiring blood transfusions every 15-30 days . The natural course of the disease is affected by transfusion side effects, which need to be monitored and treated . Iron overload resulting in end-organ damage and blood-borne infectious agents still represent the principal causes of morbidity and mortality. Due to repeated blood transfusions, iron accumulates in tissues such as the liver, heart, and endocrine glands, as these organs have high levels of transferrin receptors. Serum ferritin is a valuable parameter in managing thalassemia, particularly for monitoring iron overload and guiding chelation therapy. A serum ferritin level >1000 ng/ml indicates iron overload, which is associated with harmful consequences such as organ damage and increased mortality . Endocrinopathies are among the most frequently observed complications in thalassemia. Endocrine organs affected by iron deposition include the pituitary, adrenal, pancreatic, thyroid, and parathyroid glands, with hormone secretion disorders leading to glucose intolerance and gonadal, thyroid, and parathyroid dysfunction . Growth retardation in thalassemic children is seen not only in the pubertal period but also in the infantile and pre-pubertal periods . Gonadal dysfunction is a major complication occurring in thalassemia due to gonadal iron deposition, as confirmed by multiple gonadal and pituitary-gonadal function studies . Factors contributing to the development of impaired glucose tolerance or overt diabetes include chronic iron overload in the pancreas, causing impaired insulin secretory function . Both primary effects as a result of iron deposition in the thyroid gland and secondary effects due to pituitary dysfunction have been observed in affected children. Overt clinical hypothyroidism develops in a minority of patients, that is, around 5%, whereas a larger percentage develops subclinical primary hypothyroidism, as depicted by normal thyroxine (T4) and triiodothyronine (T3) levels but high thyroid-stimulating hormone (TSH) levels . The development of hypoparathyroidism is mainly attributed to poor compliance with chelation therapy and elevated serum ferritin levels. Hematopoietic stem cell transplantation (HSCT) is the only curative treatment for transfusion-dependent thalassemia, but it is not widely available in our country . Thus, medical management of transfusion-dependent thalassemia, which includes multiple transfusion therapies and the management of endocrine dysfunctions appearing as a result of iron overload, is a pressing medical challenge.
Since endocrine complications have become a major problem in adolescence and adulthood, early recognition and treatment are important in order to prevent late irreversible sequelae and improve the quality of life of patients. With this background, the present study was conducted with the primary objective of evaluating growth parameters and endocrine function in children diagnosed with thalassemia major. The secondary objective was to investigate the impact of elevated serum ferritin on growth metrics and endocrine functions, focusing on markers of pubertal development and thyroid and parathyroid functions. By identifying relationships between serum ferritin levels and growth or endocrine abnormalities, the study aimed to inform early interventions and improved management strategies for transfusion-dependent thalassemia major patients, ultimately enhancing their overall quality of life and clinical outcomes. This prospective observational study was conducted in the Department of Pediatrics, Uttar Pradesh University of Medical Sciences (UPUMS) Saifai, Etawah. This study included all patients between the ages of six months and 14 years with transfusion-dependent thalassemia admitted to the Department of Pediatrics at UPUMS, Saifai, Etawah. Sample size We included 62 children admitted during the study period who fulfilled the eligibility criteria. Thalassemia major is a relatively rare condition, especially among pediatric populations, so a sample size of 62 is practical and feasible within this specific population and region. Studies in similar domains have reported moderate effect sizes between ferritin levels and clinical outcomes, indicating that a sample size of 62 should be sufficiently sensitive to detect these relations. Ensuring comprehensive data collection on growth and endocrine markers may limit the scope for larger sample sizes, as each participant requires detailed testing for a range of outcomes (e.g., hormone levels and growth measurements). Previous studies on similar topics (e.g., the relationship between iron overload and endocrine or growth abnormalities in thalassemia patients) have often used sample sizes in the range of 50-100 participants . This study's sample size ensures optimal use of available resources while achieving meaningful insights. Inclusion and exclusion criteria Children aged six months to 14 years with confirmed diagnoses of thalassemia major, undergoing regular blood transfusions, and willing to give informed and written consent for participation in the study were included. Children with other genetic or chronic illnesses and children on hormonal therapy that could affect growth and endocrine function were excluded. Data collection Data collection was conducted at the Thalassemia Care Center, UPUMS, Saifai, Etawah in North India, from December 2022 to May 2024. Informed written consent was obtained from the parents/legal guardians of the patients eligible for participation in the study after approval from the institutional ethics committee of UPUMS, Saifai, Etawah. They were informed about the procedure in detail before the commencement of the study. Confidentiality and privacy were ensured. Demographic information such as age and gender was collected. Growth assessment, comprising measurements of height, stature, and bone age, was performed. Thyroid and parathyroid function was evaluated, with serum hormone levels assessed via blood tests. Serum ferritin levels of participants were obtained as a surrogate marker for iron overload.
Statistical analysis The data collected were entered into Microsoft Excel (Microsoft Corp, Redmond, WA) and coded. The analysis was done in IBM SPSS Statistics version 23 (IBM Corp, Armonk, NY). The data were analyzed using descriptive statistics and comparisons among various groups. Categorical data were summarized as proportions and percentages, and quantitative data were summarized as mean ± standard deviation (SD). The Shapiro-Wilk test showed that the data were not normally distributed. Spearman correlation analysis was used to study the correlation between serum ferritin and thyroid hormones. Differences across groups were tested with the Mann-Whitney U test for two groups and the Kruskal-Wallis test for more than two groups (this test sequence is sketched, on synthetic data, at the end of this article). A two-sided p<0.05 was considered statistically significant. This observational study included 62 transfusion-dependent thalassemic children aged between six months and 14 years. Of the 62 patients, the largest age group consisted of patients aged one to three years, comprising 25 (40.3%), and 49 (79%) were boys, as shown in Figures 1 , 2 . Table 1 shows the clinical and biochemical characteristics of the participants. The most common endocrinopathy in transfusion-dependent thalassemia patients was short stature (37.1%), followed by impaired glucose tolerance (28.6%), subclinical hypothyroidism (14.5%), and parathyroid dysfunction (14.5%). Overt diabetes and pubertal delay were not detected in any of the patients. Serum ferritin showed a strong positive correlation with age (r=0.688, p<0.001) and a weak positive correlation with TSH level (r=0.303, p=0.017), both statistically significant (Table 2 ). In contrast, T3 and T4 demonstrated weak, non-significant correlations with ferritin. Table 3 presents the mean serum ferritin levels (ng/ml) across participant characteristics, along with their associated p-values. There was a statistically significant association of ferritin with age (p<0.001), stature (p=0.001), TSH (p=0.004), and parathyroid function (p=0.006). An increased level of serum ferritin was observed with increasing age. The relationship between serum ferritin and bone age was not statistically significant. Beta-thalassemia major is a severe hemolytic anemia requiring regular blood transfusions . In the present study of transfusion-dependent beta-thalassemia major children between the ages of six months and 14 years, the mean age was 5.66 ± 3.77 years. The largest group consisted of children aged one to three years, making up 40.3% of the participants. The majority of participants were boys, which is similar to another study . A considerable proportion of short stature (37.1%) was found in our study. Chronic anemia, transfusion-related iron overload, and noncompliance with chelation therapy may be the responsible factors. Chronic anemia may lead to hypoxia and poor growth, whereas iron overload in endocrine glands impairs hormone synthesis and release; hypothyroidism, growth hormone deficiency, micronutrient deficiency, undernutrition, and psychological stress may all contribute to growth failure. Short stature was detected to be the most common endocrine abnormality, occurring in 40.2% of participants, in a study by Tan et al. in Malaysian children with transfusion-dependent thalassemia . In another study of multi-transfused Indian thalassemia patients, it was found that 57.14% of patients were short . The prevalence of short stature in our study was lower than observed in some previous studies.
The reason could be different genetic make-up and different classification criteria used for defining short stature. The prevalence of thyroid dysfunction was found to be 14.5%. All nine patients found to have thyroid dysfunction had subclinical hypothyroidism, that is, elevated TSH with normal T3 and T4 values. There were no cases of secondary hypothyroidism. The results of this study are comparable to previous studies. Tan et al. demonstrated subclinical hypothyroidism in 13.4% of patients and overt hypothyroidism in 4.9% of patients . In another study by Bordbar et al., it was found that 10.7% of patients had subclinical hypothyroidism . Thyroid dysfunction frequently develops in thalassemia, with subclinical primary hypothyroidism occurring in the majority, mainly attributed to free radical release and oxidative stress due to iron overload. Overt clinical hypothyroidism occurs in a minority . Endocrine dysfunctions in thalassemia major are multifactorial. Chronic anemia, hypoxia, oxidative stress, repeated transfusions, and inflammation independently impact endocrine function. These factors can disrupt hormone production and the conversion of T4 to T3. Therefore, the weak correlations suggest that while iron toxicity is significant, other pathophysiological processes might also play substantial roles in endocrine dysfunction. Out of 14 patients who were more than 10 years old, five (28.6%) tested impaired on the oral glucose tolerance test (OGTT), while there were no cases of overt diabetes. In the study by Tan et al. , pre-diabetes mellitus and overt diabetes were present in 8.6% and 5.2% of the patients, respectively. Pancreatic dysfunction due to chronic iron overload takes time to develop and is thus a late complication usually observed in the second decade of life. In our study, the number of patients more than 10 years of age was small, so more patients in this age group and further follow-up are required to comment on the exact prevalence of dysglycemia in thalassemic patients. In transfusion-dependent thalassemic patients, parathyroid dysfunction is evident after the first decade of life. In this study, parathyroid dysfunction was evaluated in all 62 patients, with a 14.5% prevalence. In a similar study, hypoparathyroidism was observed in 12.3% of the patients . In another study, 13.2% of the patients were found to have hypoparathyroidism . Pubertal development was assessed in children more than 10 years of age in this study. Pubertal delay was defined as the absence of breast development in girls by the age of 13 years and of testicular development in boys by the age of 14 years. No patient in our study was old enough to meet these criteria for pubertal delay. Impaired puberty was present in 71% of patients in the study by Najafipour et al. . In a study by Merchant et al., it was found that 60% of patients had not attained puberty . Sexual underdevelopment is a cause of serious concern in beta-thalassemia patients as it represents the delayed onset of puberty . Our study shows that the prevalence of short stature, parathyroid dysfunction, and thyroid dysfunction increases with rising serum ferritin levels. A study on Turkish children did not demonstrate ferritin levels to be significantly correlated with endocrine complications . Thus, it can be said that patients with transfusion-dependent thalassemia can have endocrine complications irrespective of normal serum ferritin levels and vice versa.
Therefore, close monitoring for endocrine dysfunction is essential irrespective of serum ferritin levels to prevent long-term adverse outcomes and to improve quality of life . These findings underscore the need for early interventions to manage iron overload in thalassemia major patients and prevent or mitigate growth and endocrine complications. Improving chelation therapy compliance is another major target for managing complications of iron overload . The study has a few noticeable strengths. It provides valuable insights into the relationship between serum ferritin levels and both growth and endocrine function among a well-defined cohort of transfusion-dependent thalassemia major children. The study provides region-specific data for North India, where thalassemia prevalence is significant, which can be helpful in tailoring clinical practices. The use of robust statistical analyses to establish correlations adds to the reliability of the findings, making them applicable for broader clinical considerations. There are a few limitations in the current study. There may be a possibility of biases in data collection, such as recall bias for transfusion histories, and of confounders not adjusted for. The small sample size is another potential limitation. The observational design limits the certainty of causal inference, and a longer follow-up would capture more robust data on complications. The prevalence of endocrinopathies in the present transfusion-dependent thalassemic cohort was considerably high, presenting as short stature, impaired glucose tolerance, hypoparathyroidism, and subclinical hypothyroidism. Overt diabetes and pubertal delay were not detected in any of the patients. The study showed a weak positive correlation of endocrinopathies with serum ferritin levels. Hence, irrespective of serum ferritin levels, patients with transfusion-dependent thalassemia can have a considerably high prevalence of endocrine complications. Therefore, close monitoring for endocrine dysfunctions is essential, irrespective of serum ferritin levels, to prevent long-term adverse outcomes. Early interventions, including chelation therapy to manage transfusion-related iron overload, may mitigate complications. Further multicentric studies with larger sample sizes and more robust designs are necessary to validate these findings and inform clinical guidelines.
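As a hedged illustration of the Methods described above, the sketch below runs the same test sequence (Shapiro-Wilk for normality, then Spearman correlation, Mann-Whitney U, and Kruskal-Wallis) using scipy on synthetic placeholder data; the values and groupings are invented for demonstration and are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ferritin = rng.lognormal(mean=7.5, sigma=0.5, size=62)  # synthetic ferritin, n=62
tsh = 2.0 + 0.001 * ferritin + rng.normal(0.0, 0.8, size=62)  # synthetic TSH

# 1) Normality check; non-normal data motivates nonparametric tests.
shapiro_stat, shapiro_p = stats.shapiro(ferritin)
print(f"Shapiro-Wilk: W={shapiro_stat:.3f}, p={shapiro_p:.4f}")

# 2) Spearman rank correlation (e.g., ferritin vs. TSH).
rho, p = stats.spearmanr(ferritin, tsh)
print(f"Spearman: rho={rho:.3f}, p={p:.4f}")

# 3) Two groups (e.g., short vs. normal stature): Mann-Whitney U.
short_stature, normal_stature = ferritin[:23], ferritin[23:]
u_stat, u_p = stats.mannwhitneyu(short_stature, normal_stature)
print(f"Mann-Whitney U: U={u_stat:.1f}, p={u_p:.4f}")

# 4) More than two groups (e.g., three age bands): Kruskal-Wallis.
h_stat, h_p = stats.kruskal(ferritin[:20], ferritin[20:40], ferritin[40:])
print(f"Kruskal-Wallis: H={h_stat:.3f}, p={h_p:.4f}")
```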
|
Study
|
biomedical
|
en
| 0.999996 |
PMC11688163
|
Zoon's vulvitis, also termed vulvitis chronica plasmacellularis or plasma cell vulvitis (PCV), is a rare, chronic, benign idiopathic inflammatory condition of the vulva, characterized by a bright-red mucosal lesion with significant chronicity. Typically, it presents as thinning of the mucosal epidermis or atrophic mucosa with shiny, orange-red plaques, which can affect any part of the vulva and can spread symmetrically and bilaterally, with a propensity for gradual coalescence. It can involve the oral cavity, lips, or palate. Patients with coexisting autoimmune diseases have been documented to experience PCV, suggesting an autoimmune etiology . Histologically, it is called PCV because of the increased number of chronic inflammatory cells, chiefly plasma cells, observed in the skin biopsy. There may be fewer eosinophils and neutrophils. It appears as a dense, subepithelial mononuclear cell infiltrate largely composed of plasma cells, along with diamond- or lozenge-shaped keratinocytes, hemosiderin deposition, and red cell extravasation . Zoon's vulvitis is an under-recognized condition; its diagnosis is often delayed, and it can be refractory to topical treatment. The diagnosis is important because, although symptoms of PCV may improve with topical therapy, signs of the disease can be quite refractory, causing a significant impact on the patient's quality of life. Lichen planus (LP) is an uncommon inflammatory dermatosis that can affect the skin, nails, and mucosa. Approximately 10% of those affected have LP of the nails, while half of those affected have oral lichen planus (OLP), which is more common in women than in men. Mucosal LP runs a more chronic course than cutaneous LP. There is a strong association between the vulvovaginal and oral types . The clinical presentation is similar to PCV, usually characterized by pain, burning, or itching, and associated with dysuria, dyspareunia, and postcoital bleeding. However, the potential for malignant transformation in LP ranges from 0% to 5.6%, which underscores the importance of its early diagnosis and treatment . Female genital LP has several clinical forms, including erosive, papulosquamous, and the rare hypertrophic form. Diagnosis is clinical in classic cases with white interlacing striae called Wickham striae. However, a biopsy is needed for atypical presentations. The exact cause of OLP is not fully understood, but lymphocytic infiltration suggests that OLP may be a cell-mediated immune response or an autoimmune reaction targeting specific skin cells called keratinocytes. Histology classically shows hyperkeratosis, saw-tooth acanthosis, wedge-shaped hypergranulosis, and a lymphohistiocytic infiltrate obscuring the dermo-epidermal junction in mucosal LP . A 60-year-old diabetic female had a nine-month history of progressive vulvar irritation and itching, associated with dyspareunia, urinary stream disturbance, and occasional mouth soreness, significantly affecting her quality of life. Pelvic examination revealed well-demarcated red-orange areas of erythema covering the clitoral hood with effacement, obvious peri-urethral scarring, and fusion of the labia, as shown in the clinical images. The oral buccal mucosa revealed classic bilateral buccal mucosal lichenoid changes with erosive gums. The Wickham striae were classical of LP, as shown in the clinical images. Investigations including FBC, ESR, LFTs, TSH, ANA, and Hep C serology were all unremarkable. Hair, nails, and the rest of the full skin examination were normal.
There had been no relief with topical antifungal creams, antibiotics, or estrogen suppositories. She found that topical tacrolimus caused stinging and could not tolerate it. The potent topical steroid clobetasol propionate provided relief initially, but the patient became intolerant of it with long-term use and remained unresponsive, which prompted a skin biopsy. A 4 mm punch biopsy, with the patient's consent, was taken from the right clitoral site. The biopsy showed erosion of the surface epithelium with a dense band-like infiltrate of plasma cells in the underlying sub-epithelium, along with lymphocytes and neutrophils, as shown in Figures 4 and 5. The presence of a 90% plasma cell-rich lichenoid inflammatory infiltrate, with less than 1% lymphocytes and neutrophils, was suggestive of PCV. Periodic acid-Schiff (PAS) staining was negative for fungi. The clinico-pathological diagnosis was Zoon's vulvitis with coexisting reticular LP, given the clinical finding of interlacing white lines (Wickham striae). As the patient was unresponsive to topical steroids, and given the coexisting LP, systemic treatment options were pursued. She was offered methotrexate for long-term treatment with dermatology follow-up. Her symptoms started improving after three months, and at the six-month follow-up visit her symptoms had completely settled, with clinical signs showing minimal inflammation. She was continued on methotrexate, with blood monitoring of LFTs and FBC, to keep her disease under control. Any type of lichenoid dermatosis raises the chance of developing another one. Our patient, in addition to PCV, had coexisting LP, as evidenced by the white interlacing Wickham striae and their correlation with the histological findings. An estimated 2% of women develop LP, with the oral cavity being the site most typically affected. In postmenopausal women, vulvovaginal LP accounts for 6% of chronic vaginal complaints, affects 25% to 57% of OLP cases, and is histologically verified in 3.7% of cases involving women who visit a multidisciplinary vulvar clinic. The percentage of plasma cells appears to be the most crucial factor in PCV diagnosis; a plasma cell count of ≥50% is sufficient for the diagnosis. Numerous plasma cells are often observed in conditions such as eroded contact dermatitis of the genital area, as well as in infections like syphilis or HIV. Special stains and serological testing can be valuable when there is clinical suspicion of an infectious etiology. The presence of a band-like lymphocytic infiltrate is considered a key histopathological feature in the pathogenesis of OLP. Additionally, the involvement of B cells in the pathogenesis of OLP is supported by findings from Mattila et al., who reported the presence of B cells in 74.3% of OLP lesions, further highlighting their potential role in the disease process. Chan and Zimarowski described basal keratinocyte crowding as a novel finding in their study. This should be considered in future histopathologic investigations, since it could become a valuable criterion for identifying PCV (Table 1). There is no gold-standard treatment for PCV; the primary treatment is a topical corticosteroid, tacrolimus, or topical estrogen. Patients are often refractory to this primary treatment, which significantly affects their quality of life, as reflected in our case, where PCV was recalcitrant to potent topical steroids and tacrolimus.
Other alternative therapies are topical imiquimod, estrogen, interferon, lasers, cryotherapy, intralesional corticosteroids, and surgical excision. Systemic treatments include methotrexate and ciclosporin. Another alternative treatment for recalcitrant PCV, demonstrated by Paras Oil et al., is platelet-rich plasma, with significant improvement two weeks after the first session and complete resolution within six weeks (Table 2). Nevertheless, these topical medicines do not consistently provide an effective treatment response. One known challenge in treating PCV is the limited knowledge of vaginal drug absorption, which depends on penetration across the membrane and the solubility of the drug in the vaginal lumen. Vaginal epithelial thickness, mucus viscosity, and the pH and volume of vaginal fluid all vary from patient to patient, influencing individual drug absorption. Medications meant to be delivered vaginally must be somewhat water-soluble. Additionally, compared with large-molecular-weight lipophilic or hydrophilic medications like testosterone and hydrocortisone, low-molecular-weight lipophilic options like progesterone and estrone are more readily absorbed through the vagina. The suppository form of hydrocortisone nevertheless prolongs the drug's contact with the vaginal epithelium, compensating for hydrocortisone's low absorption. Local estrogen therapy with estradiol vaginal tablets has a low-risk safety profile, as studies have not demonstrated an increased risk of endometrial cancer, breast cancer, or cardiovascular events. Vaginal cream containing 0.01% estradiol is also well tolerated; however, compared with low-dose vaginal tablets, the cream is absorbed more systemically. Dermatologists should discuss treatment with the patient's oncologist if the patient is receiving treatment for breast cancer or has a history of breast cancer. Although PCV has very distinct clinical features, it is a rare inflammatory vulvar dermatosis, and because healthcare providers are often unfamiliar with it, diagnosis is sometimes delayed. Symptoms of PCV are common and often quite severe. The frequent delays in diagnosis and the use of inappropriate treatments suggest that misdiagnosis is a common issue. Correct diagnosis is important, as the presentation mimics other genital conditions such as Bowen's disease, squamous cell carcinoma, candidiasis, syphilis, herpes simplex, bullous disorders, and extramammary Paget's disease, which require specific treatment. Similarly, it is essential to differentiate PCV histologically from erosive LP and vulval intraepithelial neoplasia. While lesions with moderate dysplasia have been reported, there have been no reports of malignant alterations associated with Zoon's vulvitis, in contrast to Zoon balanitis and LP. Therefore, in order to rule out human papillomavirus (HPV) infection or other dysplastic conditions such as neoplasia and bullous disorders, patients are advised to have periodic gynecologic evaluations, repeat biopsies of persistent lesions, and regular dermatological follow-up. In addition, factors related to infection, hormones, and irritation have been linked to the condition. To the best of our knowledge, very few cases of coexisting PCV and lichen sclerosus have been reported in the literature, and none to date have involved LP, making our case exceptionally rare. The above case expands our knowledge of the challenging clinical and histological diagnosis of these two conditions together.
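As a toy illustration of the histological criterion cited above (a plasma cell fraction of ≥50% in the infiltrate being sufficient for PCV), consider the following sketch; the threshold comes from the text, while the function and example values are ours and purely illustrative.

```python
# A toy sketch of the histological criterion discussed above: a plasma cell
# fraction of >= 50% in the lichenoid infiltrate supports PCV. The 90% figure
# mirrors the present case; the function itself is illustrative only.
def supports_pcv(plasma_cell_pct: float, threshold: float = 50.0) -> bool:
    return plasma_cell_pct >= threshold

print(supports_pcv(90.0))  # True - as in the present case
print(supports_pcv(30.0))  # False - consider alternative diagnoses
```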
Given the disease's rarity and broad differential diagnosis, the multitude of clinical manifestations must be carefully considered when discussing treatment options with patients. Our patient received methotrexate, with improvement by three months and complete resolution at six months. Early and correct diagnosis is important: unlike Zoon's balanitis, PCV is benign, but the coexisting LP warrants regular dermatology follow-up and monitoring for methotrexate side effects.
|
Review
|
biomedical
|
en
| 0.999995 |
PMC11688165
|
Optic neuritis (ON) is a neuro-ophthalmic emergency. It is a rare but important cause of unilateral or bilateral acute-onset visual loss in children. However, children tend to have good visual recovery despite initially presenting with more profound visual loss than adults. Secondary causes of ON are more common in children; thus, it is important to investigate the presence of associated disease states. In recent years, there has been increased awareness of the need to differentiate the ON phenotypes, namely, multiple sclerosis (MS), neuromyelitis optica spectrum disorder (NMOSD), and myelin oligodendrocyte glycoprotein antibody-associated disease (MOGAD). Anti-MOG and anti-aquaporin-4 (AQP4) antibody assays are the laboratory investigations mainly used to distinguish these ON phenotypes. Apart from these laboratory investigations, a comprehensive evaluation is vital in the management of pediatric ON, including history taking, optic nerve examination, imaging, and serology testing. An eight-year-old girl presented to the emergency department with acute bilateral visual loss of one day's duration. The child complained to her parents that she was unable to do her school homework because she could not see her books properly. The parents were also unable to get the child to focus on objects in front of her. The visual loss was associated with pain during eye movement. The parents reported a brief episode of viral fever around two weeks before the onset of the visual complaint. The viral fever was self-limiting and did not require a visit to a health facility. She also experienced intermittent severe headaches over a two-week period, during which the child was less active; the headache was relieved with rest. Otherwise, there was no history of red eyes or trauma to the eyes. There was no recent travel, swimming in a river or pool, or pets at home. There was no history of seizures to indicate an associated neurological disorder. On examination, visual acuity was 1/120 in the right eye and perception of light in the left eye. A positive relative afferent pupillary defect (RAPD) was noted in the left eye. There was pain on eye movement in general, not associated with any specific gaze direction. Pupil examination revealed a mid-dilated left pupil that was not responsive to light. No lid swelling, ptosis, extraocular movement limitation, conjunctival injection, or corneal pathology was noted. Fundoscopic examination demonstrated bilateral optic disc swelling. There were no signs of posterior uveitis such as vitritis, retinitis, vasculitis, or choroiditis. The macula was flat, with no macular star or exudate. We were unable to perform or document other optic nerve function tests, such as color vision, visual field testing, and contrast sensitivity, because the patient presented with marked visual impairment. The rest of the neurological examination, including cerebellar signs, was normal. Initial blood work was negative for infection and inflammation, with a normal white blood cell count of 10 × 10⁹/L (normal range: 4–11 × 10⁹/L), an erythrocyte sedimentation rate of <29 mm/hour, a C-reactive protein of <0.5 mg/dL, and negative serologies (cytomegalovirus (CMV), herpes simplex virus (HSV), rubella, and toxoplasma). Urinalysis ruled out urinary tract infection.
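Before moving on to imaging and treatment, a brief aside: the Snellen-style acuities reported in this case (1/120 at presentation, and 6/12 and 6/6 later in the course) can be placed on a common logMAR scale using the standard formula logMAR = -log10(Snellen fraction). The snippet below is purely illustrative.

```python
# A hedged aside: converting the Snellen acuities in this report to logMAR
# (logMAR = -log10(Snellen fraction)) to place them on a single scale.
# "Perception of light" has no Snellen value and is excluded here.
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    return -math.log10(numerator / denominator)

for label, num, den in [("1/120 (presentation, right eye)", 1, 120),
                        ("6/12 (day 10, right eye)", 6, 12),
                        ("6/6 (six months, both eyes)", 6, 6)]:
    print(f"{label}: logMAR {snellen_to_logmar(num, den):.2f}")
```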
Magnetic resonance imaging (MRI) was performed the next morning and revealed bilateral ON with tiny foci of hyperintense signal in the bilateral white matter. Cerebrospinal fluid (CSF) analysis revealed a normal composition, with a normal lumbar puncture opening pressure. CSF culture also came back negative for any growth. Intravenous methylprednisolone therapy of 30 mg/kg/day in three divided doses for five days was initiated, then continued with oral prednisolone 1 mg/kg/day. The oral prednisolone was tapered down over a period of three months. At day 10 of illness, the patient's vision had improved to a visual acuity of 6/12 in the right eye and 2/120 in the left eye. An internationally standardized cell-based assay (CBA) later returned positive for anti-MOG and negative for anti-AQP4 antibodies. A diagnosis of MOG-associated demyelinating ON was made. A follow-up MRI of the brain and orbit at six months after the first presentation showed evidence of resolving bilateral ON with a similar appearance of the white matter lesions. The patient also underwent MRI of the spine as part of the routine workup to rule out transverse myelitis; the result was not suggestive of transverse myelitis, as no white matter lesions were present on the spinal MRI. At the six-month follow-up, the patient showed full recovery, with a final visual acuity of 6/6 and intact optic nerve function tests in both eyes. The child was scheduled for follow-up at six-monthly intervals for the first two years after completion of visual recovery. Pediatric ON has not been extensively studied due to its rarity. The common presentation of pediatric ON is bilateral papillitis with marked visual impairment following a viral illness. However, despite poor visual acuity at presentation, the visual prognosis is good, with complete visual recovery. When dealing with a case of ON, the question that should be addressed is the risk of conversion to MS. Of late, there is increased awareness of the need to differentiate the phenotypes of demyelinating ON, namely, NMOSD and MOGAD. Positive anti-AQP4 antibodies indicate that a patient is at high risk for NMOSD. In MOGAD, anti-MOG antibodies are positive while anti-AQP4 antibodies are negative. One marked difference that sets MOG antibody-associated demyelination apart from both MS and AQP4 antibody-positive demyelination is prognosis, which determines how aggressively each is managed and treated. Anti-MOG antibodies target oligodendrocytes, causing acute demyelinating lesions associated with good recovery potential. In NMOSD, by contrast, anti-AQP4 antibodies target astrocytes, leading to lesions with a poorer prognosis. Other distinct characteristics of MOGAD are an earlier age of onset, an equal female-to-male ratio, and a thicker retinal nerve fiber layer. In terms of imaging, practitioners should perform an MRI of the brain and orbit with contrast enhancement in all cases, if available. It is the gold standard to confirm the presence of ON, rule out other intracranial pathology, and look for features of demyelination. Analysis of the blood and CSF excludes signs of infection and inflammation. When suspecting demyelinating disease, practitioners should order serum testing for anti-AQP4 and anti-MOG antibodies. Another modality to investigate is optical coherence tomography (OCT), to assess the thickness of the retinal nerve fiber layer.
Visual evoked potential (VEP) is also a useful investigative tool; however, it was not performed in this case as it is not readily available. For a more systematic approach, Ramanathan et al. have proposed a comprehensive algorithm for the investigation and diagnosis of a first episode of ON. To our knowledge, no clinical trial has been conducted on the treatment of ON in pediatric populations, given the rarity of the presentation. Much of the current practice in treating pediatric ON comes from the Optic Neuritis Treatment Trial in adults. The commonly described regime involves treatment with intravenous methylprednisolone 30 mg/kg/day for 3-5 days. This is followed by a course of oral prednisolone at 1 mg/kg/day, which is slowly tapered over 4-6 weeks. Practitioners should monitor for steroid side effects during this period. Alternative treatments include intravenous immunoglobulin and plasma exchange, for example, in steroid-resistant patients. By now, we know that pediatric ON patients have good visual recovery despite having poor visual acuity at presentation. Visual recovery begins during the initial 2-3 weeks of treatment and is largely complete by 4-6 months, although recovery can continue for up to two years. In 70%-85% of cases, patients achieve a final visual acuity of 6/12 or better. In conclusion, although pediatric ON is rare, it is a neuro-ophthalmic emergency that almost always presents with more profound initial visual loss than in adults. Although it presents in such a grave manner in children, it is usually followed by encouragingly good visual recovery. This case report highlights the significance of investigating secondary causes, emphasizing the need for a comprehensive evaluation and the diagnostic value of anti-MOG and anti-AQP4 antibodies.
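Purely to illustrate the arithmetic of the regimen described above (and emphatically not as dosing guidance), the sketch below computes the weight-based doses; the 25 kg example weight and the linear weekly taper are hypothetical assumptions, since real tapers are individualised.

```python
# Illustrative arithmetic only (not clinical guidance): the regimen described
# above, computed for a hypothetical patient weight. The linear weekly taper
# is an assumption for illustration.
def steroid_plan(weight_kg: float, taper_weeks: int = 6):
    iv_daily = 30 * weight_kg            # mg/day IV methylprednisolone
    iv_per_dose = iv_daily / 3           # given in three divided doses
    oral_start = 1 * weight_kg           # mg/day oral prednisolone
    taper = [round(oral_start * (1 - w / taper_weeks), 1)
             for w in range(taper_weeks)]
    return iv_daily, iv_per_dose, taper

iv_daily, iv_per_dose, taper = steroid_plan(25.0)  # 25 kg is hypothetical
print(f"IV: {iv_daily:.0f} mg/day ({iv_per_dose:.0f} mg x 3) for 3-5 days")
print("Oral taper (mg/day, weekly steps):", taper)
```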
|
Review
|
biomedical
|
en
| 0.999998 |
PMC11688237
|
RSV causes 200,000 deaths yearly, with a disproportionate burden in low-income countries 1 ; in the last few years, there have been dramatic advances in the development and deployment of RSV vaccines 2 . For infants, RSV vaccines provide passive protection following maternal immunisation 3 ; the alternative approach is passive protection through monoclonal antibody administration 4 . In the UK, a decision has been made to implement the maternal vaccine 5 , but other countries, for example, Spain, are using universally administered monoclonal antibody 6 . Both are attractive strategies and will serve to reduce infection in the first 6 months of life. But they are not the complete solution to protecting children from RSV disease: the major challenge of both approaches is that, because they rely on passive immunity, protection will wane. In the absence of vaccination, levels of transferred maternal antibody decline rapidly in the infant; one study modelled the approximate half-life to be 35 days, projecting a duration of protection of 4.7 months 7 ; this is similar to other antigens 8 . So even if the vaccine elevates maternal antibody titres 9 , there is still a window of susceptibility in the infant; this is of particular concern in premature infants, who may receive less maternal antibody and could still be developmentally immature as the antibody wanes. Questions remain about whether delaying RSV infection to the second year of life will simply delay the peak of the disease. For monoclonal antibody therapy, there is an additional risk of viral mutation and escape 10 . Therefore, it could be beneficial to boost immune protection in the child. One approach to extend protection against RSV would be to boost the antibody response by vaccinating at the low ebb of antibodies. However, maternal vaccination can attenuate subsequent antibody responses in the child, for example, after measles 11 or pertussis 12 vaccination. An alternative strategy is to induce protective T cells with a vaccine, because T cells have been shown to correlate with protection against viral lung infection 13 . A tissue-resident subset of T cells (TRM) has recently been identified as a key component of protective cellular immunity 14 . TRM cells act as sentinels at mucosal sites, responding rapidly to infection and mounting an antiviral response 15 . These cells are derived from circulating effector T cells that migrate into tissues and, under key transcription factors (Hobit/Blimp), lose receptors that enable tissue egress (CCR7, S1PR1) and gain integrins (αE/β7) that enable tissue retention 14 . TRM cells can be defined by the expression of cell surface markers, including the activation marker CD69 and the integrin CD103. TRM cells are found in the lungs after human RSV infection, and their numbers correlate with protection against challenge 16 . We 17 and others 18 have demonstrated that TRM cells are sufficient to protect against RSV infection, and it has been shown that vaccine-induced RSV-specific TRM are protective against viral infection 19 . Based on this, we believe that tissue-resident memory T cells could be a critical target for an RSV vaccine administered between 6 and 12 months of life. However, in order to generate protective T cells after vaccination, we need to understand more about the induction of TRM and, specifically, the requirements to induce TRM in early life.
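The cited decay modelling implies simple first-order kinetics; a minimal sketch, assuming the reported 35-day half-life and a hypothetical protective threshold of 6% of the birth titre (chosen so that protection lasts roughly the reported 4.7 months), is:

```python
# A minimal exponential-decay sketch of waning maternal antibody, assuming
# first-order kinetics with the cited ~35-day half-life. The protective
# threshold is a hypothetical value, not a figure from the studies cited.
HALF_LIFE_DAYS = 35.0

def titre_fraction(days: float) -> float:
    """Fraction of the birth antibody titre remaining after `days`."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

threshold = 0.06  # hypothetical protective fraction of the birth titre
days = 0
while titre_fraction(days) > threshold:
    days += 1
print(f"Titre falls below threshold after ~{days} days (~{days/30.4:.1f} months)")
```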
Here we explored the immune response in very early life; the immune system of neonates is developmentally adapted and differentially regulated. We have shown that RSV infection in neonatal (7-day-old) mice induces antigen-specific T cells, but the neonatal memory T cell response is different from that in adult mice 20 – 22 . A similar long-term imprinting effect has been seen on human T cells after RSV infection in the first year of life 23 . A recent study observed significantly fewer TRM in the lungs of neonatal mice following RSV infection compared to adult mice 24 . Whilst the mechanisms by which TRM develop in the adult lung are beginning to be established 15 , why or how the primary immune response fails to generate robust TRM in children remains unclear. There is also extensive evidence, from both mouse models 25 and human studies 26 , for the involvement of lung chemokines such as CCL2 and CCL5 in selectively recruiting inflammatory cells. Here, we show that mice primed at 7 days of age with RSV plus CCL5 or CXCL10 made more TRM when rechallenged with RSV as adults. This could be a potential way to boost TRM recruitment through vaccination. To confirm previous studies 24 , we compared the effect of age on levels of lung CD8 TRM after RSV infection in a mouse model. Seven-day-old (neonate) or 6-week-old (adult) BALB/c mice were infected with RSV A2 virus and sacrificed 21 days after infection. There was a significantly greater proportion of CD69+/CD103+ CD8+ T cells in the lungs of adult mice, and significantly more of these were specific for the RSV M2 82–90 pentamer. Previously, we observed that transferring airway cells from RSV-infected adult mice to naïve mice was protective against subsequent RSV infection 17 . When we transferred cells from the airways of mice infected with RSV as neonates, there was no protective effect, indicating that neonatal RSV infection generated no localised T cell protection. Fig. 1 Differential lung response induced by RSV in neonates compared to adults. Female 6–7-week-old (adult) or mixed-sex 7-day-old (neonate) BALB/c mice were infected with RSV. Infection schedule made with BioRender (A). Percentage of all CD8 TRM (B) and RSV-specific TRM (C). Airway cells were transferred from neonatal RSV-infected mice into naïve adult mice prior to the RSV challenge of the recipient mouse; weight at d7 (D). In a separate study, neonatal and adult mice were infected with RSV; on day 7, lungs were collected from the infected animals and RNAseq was carried out on the extracted RNA. PCA of genes in the different groups (E), pathways related to each PC (F), and the loading genes driving PC2 and PC3 (G). Volcano plots of DEGs from adult (H) and neonatal (I) mice. Bars in B and C represent mean ± SEM of n = 5 mice. * p < 0.05. Having observed reduced TRM following neonatal RSV infection, we used RNAseq to identify whether there were transcriptomic differences in the lung that might explain the differential recruitment of TRM. We first evaluated global transcriptomic responses in lung RNA following RSV infection in neonatal and adult mice. Principal component analysis (PCA) demonstrated clear separation between the adult and neonatal groups; this was driven predominantly by PC1 (which accounted for 92.8% of the variance). There was also a distinct separation in the transcriptomic profile induced upon RSV infection in adult mice (a minimal sketch of this PCA step is given below).
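The study itself used R's prcomp on variance-stabilised counts (see Methods); the Python analogue below simulates a count matrix and an adult-specific expression shift, so only the workflow, not the numbers, reflects the analysis.

```python
# A minimal Python analogue of the PCA step; matrix values, sample layout and
# the "adult shift" are all simulated for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# rows = samples (adult/neonate, infected/naive), cols = genes (hypothetical)
counts_vst = rng.normal(loc=8.0, scale=1.0, size=(12, 500))
counts_vst[:6] += 1.5  # crude stand-in for an adult-specific expression shift

pca = PCA(n_components=3)
scores = pca.fit_transform(counts_vst)
for i, var in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"PC{i}: {var * 100:.1f}% of variance")
# Loading genes driving a component correspond to the largest-magnitude
# entries of pca.components_, analogous to Gzmb/Ifi47 in the paper.
```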
This separation was predominantly driven by PC2, marked by cytokine production (p < 7 × 10⁻⁵), immune response-regulating signalling pathway (p < 5 × 10⁻⁵), and defence response (p < 4 × 10⁻⁵). There was minimal separation between the infected and uninfected neonatal groups by PCA. Loading genes contributing to PC2 included granzyme B (Gzmb), interferon-gamma inducible protein 47 (Ifi47), the ubiquitin ligase Trim40, an immunity-related GTPase, and developmental pluripotency-associated protein 3 (Dppa3). There were clear differences in the numbers of differentially expressed genes (DEGs) between adult and neonatal mice: adult mice had a total of 7536 DEGs, whereas only 189 DEGs were identified in the neonatal lung transcriptome. Having observed significantly different transcriptomic profiles in response to RSV infection in the lungs of different-aged mice, we explored the types of genes associated with the differences. When the two groups were compared, 32 genes were upregulated in both adults and neonates; these related to complement pathways (C1s1, C1ra), apoptosis (Casp12), platelet homoeostasis (Nos2, Nos3, Nos1, Gucy1b1, Gucy1a1, Gucy1a2, Atp2b4, Clu) and the interferon signalling pathway (Usp18, Oas2, Ifit1, Ifit3). When the DEGs were grouped by GO terms, there was a clear increase in cytokine-associated pathways in adult mice. Upregulated pathways in adult lung RNA following RSV infection included positive regulation of interleukin-1 production, positive regulation of cytokine production, and regulation of antigen receptor-mediated signalling pathway. In comparison, the top enriched pathways in neonates were interferon-beta and interferon-gamma focused. Fig. 2 Neonatal RSV infection induces a less pronounced cytokine response than adults. Venn diagram showing the overlap between neonatal and adult sequencing data (A). KEGG pathways upregulated in neonates and adults (B). Relative expression levels of selected genes (C). Cytokines measured in the lung in adult or neonatal mice 24 h after RSV infection; * p < 0.05, ** p < 0.01 compared between Ad RSV and NN RSV (D). We next explored the cytokine gene transcripts from the lung transcriptomics, focusing on genes associated with TRM recruitment (the cytokine genes Il15 and Tgfb, and the chemokine genes Cxcl10 (IP-10), Ccl5 (RANTES) and Cxcl16) and retention (the integrins Itgae, Itga1 and Itgb7). Of these, Tgfb, Cxcl10, Ccl5 and Itgb7 were significantly increased in adult lungs after RSV infection compared to age-matched controls. Both Cxcl16 and Itgae were decreased significantly in adults. Il15 and Itga1 did not change in either adults or neonates. No significant increase in any of the TRM-related genes was observed in neonates. We then looked at protein levels of cytokines in the lung at an acute time point after infection. There was significantly more GM-CSF, CCL2, CXCL2, IL-6, CXCL1, CXCL10 and CCL3 in the lungs of infected adult mice compared to infected neonatal mice; CCL5 protein levels were elevated after infection at both ages, but there was no difference between the ages. Having seen age-related differences in the murine response to RSV, we wanted to compare the human response. As a model of neonatal human immune responses, we used cord blood; we compared the responses to those of the mothers post-partum and of non-pregnant women, to account for the immuno-modulatory effect of pregnancy.
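As an aside before the human data: the DEG calls above reduce to a simple filter over a DESeq2-style results table, using the thresholds stated in the Methods (adjusted p < 0.05 and absolute log2 fold change > 0.5); the rows below are invented for illustration.

```python
# A sketch of the stated DEG filter; column names follow DESeq2 conventions,
# but the rows here are invented, not results from the study.
import pandas as pd

results = pd.DataFrame({
    "gene":           ["Gzmb", "Ifi47", "Trim40", "Ccl5", "Actb"],
    "log2FoldChange": [3.2,    2.1,     0.9,      0.6,    0.1],
    "padj":           [1e-8,   4e-6,    0.03,     0.20,   0.91],
})

deg = results[(results["padj"] < 0.05) & (results["log2FoldChange"].abs() > 0.5)]
print(deg["gene"].tolist())  # ['Gzmb', 'Ifi47', 'Trim40']
```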
PBMC isolated from cord blood (n = 18), blood from mothers post-partum (n = 13) or non-pregnant women (n = 9) were incubated with RSV for 24 h. The cord blood and maternal blood came from the same mother–baby pairs. Supernatants from these stimulations were collected and analysed by Luminex or pan-IFNα ELISA to investigate the immune response profile. To compare overall patterns of response, principal component analysis (PCA) was used to compress and transform the multivariate Luminex data. There were no significant differences in the global cytokine profiles between cord blood, mothers, and non-pregnant women. All three populations had moderate to high levels of inflammatory cytokine expression after RSV infection in vitro. There was no significant difference between the three groups of donors in individual inflammatory cytokines; however, there was a trend towards lower CCL5 in the cord blood. Fig. 3 Cytokine responses to RSV stimulation in cells from mothers, babies and non-pregnant women. PBMC isolated from cord blood, maternal blood post-partum and non-pregnant women were stimulated with RSV for 24 h. Cytokines were measured in the supernatant by Luminex. PCA analysis of all data points (A). Heat map of cytokine responses (B). Individual cytokine responses (C). Number of donors: cord blood n = 21, maternal n = 13 and non-pregnant n = 9. Whilst we saw differences in the expression levels of a range of cytokines, previous studies have shown associations between CCL5 and CXCL10 and the recruitment of TRM 14 ; therefore, we investigated the effect of administering recombinant CCL5 and CXCL10 into the neonatal lungs of mice at the peak of the T cell response, days 7–9 after RSV infection. We focussed on this time point because it is the peak of T cell recruitment after RSV infection 21 . Seven-day-old BALB/c mice were infected intranasally with RSV; 20 µg of CCL5 or CXCL10 in 20 μl was given on days 7, 8 and 9 after infection. Mice were culled on day 21, and flow cytometry was performed on the harvested lungs. Mice treated with CXCL10 had significantly more cells recruited into the lungs after infection. Although there was no difference in the number of CD8 T cells, central memory T cells or non-epithelial TRM cells (RSV+CD8+CD69+CD103−), there was a significant increase in the proportion of tissue-resident TRM cells (RSV+CD8+CD69+CD103+) after CXCL10 and CCL5 treatment. Fig. 4 Boosting primary RSV infection with chemokines enhances tissue-resident TRM production and confers protection. Seven-day-old BALB/c mice were infected intranasally with RSV, and 20 µg of CCL5 or CXCL10 in 20 μl was given on days 7, 8 and 9 after infection. Mice were culled on day 21, and flow cytometry was performed on the lungs harvested from the animals. Lung cell count (A), CD8 (B), TRM epithelial CD8 T cells (C). In a separate study, neonatal mice were infected with RSV and subsequently received chemokines intranasally on days 7, 8 and 9 of infection before re-challenge on day 21. Mice were followed for 4 days after infection, and weight loss (D), lung cell count (E), CD8 TRM% (F), viral load (G) and antibody (H) were recorded. The same set-up was repeated except that CCL5 or CXCL10 was given only on day 7 post primary infection. Mice were followed for 4 days after infection, and weight loss (I), lung cell count (J), CD8 TRM% (K), viral load (L) and antibody (M) were recorded.
* p < 0.05, ** p < 0.01, *** p < 0.001; statistical analysis by one-way ANOVA, except for panels D and I, where two-way ANOVA was used. N = 5 mice per study. We and others have previously observed that RSV infection in neonatal mice primes for more severe disease on re-infection, driven by the recruitment of CD8 T cells during secondary infection 21 . We speculate that this is, in part, caused by the absence of TRM following neonatal RSV infection. Having seen increased CD8 TRM in the lung after the addition of chemokines, we investigated whether the administration of chemokines had an impact on disease following re-challenge with RSV. As previously seen, the control RSV group lost 15–20% of their original body weight on day 4 after re-challenge and also recruited significantly more cells to the lung compared to CXCL10- or CCL5-treated mice. Although there was no difference in the percentage of CD8+ T cells in the lung, mice treated with CXCL10 had significantly increased tissue-resident TRM cells. The reduced disease was reflected in a significantly lower viral load. Interestingly, the addition of CCL5 significantly increased the amount of RSV-specific antibody in the sera. Having seen that the addition of chemokines over 3 days during the peak of T cell recruitment altered the response to RSV infection and protection against secondary re-infection, we explored whether a single dose of CCL5 or CXCL10 on day 7 after infection would be sufficient to enhance TRM recruitment. As with dosing over 3 days, CXCL10 given on day 7 after neonatal RSV infection significantly reduced weight loss on RSV rechallenge of neonatally primed mice. There was no difference in lung cell recruitment between the three groups. CXCL10 treatment, but not CCL5, enhanced the tissue-resident TRM count in the lung. Both treatment groups had significantly fewer RSV L gene copies in the lung. As with three doses, there was more RSV-specific IgG in the serum collected from CCL5-treated mice, suggesting a different mechanism of protection against weight loss in those mice. We wanted to see whether the failure to generate TRM following neonatal infection was a conserved response to viral infection in early life. A similar phenotype of reduced lung TRM induction has recently been observed following neonatal influenza infection 27 . As with RSV, we infected 7-day and 7-week-old mice with H1N1 influenza virus intranasally and measured the immune response in the lung 21 days later. The proportion of CD8 T cells in the lung was significantly greater in infected adult mice than in naïve mice. There was a significant increase in total CD8 TRM as a proportion of cells recovered from the lung, and these were influenza-specific. We also evaluated the cytokine response in the lung 24 h after infection. The levels of GM-CSF, TNF, CCL2, CXCL2, IL-6, CCL5, CXCL1, IL-1β, CXCL10 and CCL3 in the lungs were significantly higher in infected adult mice than in infected neonatal mice. We then compared the response to influenza in human PBMC, using the same system described above. There was a significantly higher level of CCL5 and GM-CSF after live influenza virus stimulation of PBMC from non-pregnant women compared to cord blood. Fig. 5 Neonatal response to influenza virus is also blunted. Female 6–7-week-old (adult) or mixed-sex 7-day-old (neonate) BALB/c mice were infected with influenza virus. Percentage of all CD8 (A), TRM (B) and influenza-specific cells (C).
Cytokines were measured in the lungs of adult or neonatal mice 24 h after influenza virus infection (D). PBMC isolated from cord blood, maternal blood post-partum and non-pregnant women were stimulated with influenza virus for 24 h; cytokines were measured in the supernatant by Luminex (E). One of the confounding factors in studying the cytokine response to viral infection might be pre-existing antibodies. The majority of adults have encountered RSV and influenza infection at some point. This could also affect neonates, because neonatal antibody is mostly of maternal origin due to transplacental transfer. Therefore, we tested whether pre-existing antibodies have a significant impact on cytokine responses to the influenza virus in this experimental setup. H1N1-specific IgG was measured in plasma samples by antigen-specific ELISA. No correlation was found between IFNβ or IL-2 responses and H1N1-specific IgG in either cord or maternal blood (IFNβ: r² cord = 0.221, r² mother = 0.258; IL-2: r² cord < 0.01, r² mother = 0.202). There was no correlation between either the CCL5 or the CXCL10 response to the influenza virus and pre-existing H1N1-specific IgG in either cord or maternal blood samples (CCL5: r² cord = 0.132, r² mother = 0.068; CXCL10: r² cord = 0.117, r² mother = 0.296). The GM-CSF response to the influenza virus was also not correlated with pre-existing H1N1-specific IgG titre in neonates or their mothers (r² cord = 0.097, r² mother = 0.011). In the current study, we explored the role of chemokines after neonatal respiratory viral infection in mice, in order to understand the reduced generation of TRM. As in previous studies, reduced levels of TRM after neonatal RSV 24 or influenza virus 27 infection were observed. RNA-Seq analysis of lungs from RSV-infected mice showed a significantly different gene expression profile, with a greater-magnitude response in the adult mice, as observed previously 28 . It was notable that the neonatal mice had a much lower number of DEGs; further exploration of why this occurs is needed. The DEGs in the lungs after adult infection clustered in cytokine pathways. This was reflected by increased protein levels of some of the cytokines in the lungs, though for CCL5 the transcript was increased without a corresponding difference in protein. Whilst we saw no difference after stimulation of cord blood and adult-derived PBMC with RSV, there was significantly less CCL5 and GM-CSF produced following influenza virus stimulation of cord blood cells. When the TRM-associated chemokine CXCL10 was delivered intranasally at the peak of T cell recruitment to the lung during RSV infection, there was a change in the profile of memory CD8 cells that was associated with reduced weight loss following RSV re-challenge of neonatally primed mice. This suggests that there is a key deficit in one or more chemokines required to recruit TRM to the lungs of neonatal mice after viral infection. Why this deficit in chemokine production and cell recruitment occurs needs more investigation. Respiratory (CD103+) dendritic cells, but not CD11b+ DC, have been shown to be important in TRM programming, and a recent study described a defect in this key DC subset in neonatal lungs 29 . Boosting the CD103 DC population with Flt3 ligand increased CD69 expression on neonatal CD8 cells after RSV infection 28 ; likewise, co-administering RSV with CpG increased the numbers of CD8 TRM in the lungs 24 . This mirrors the necessity of type I IFN for the induction of TRM during adult infection 30 .
Another possibility is that the neonatal lungs lack adhesion molecules for the retention of TRM; we saw differential expression of integrins between adult and neonatal lungs, and lower levels of Icam1 have been observed in a previous study 31 . Another possible mechanism that may affect TRM survival is altered cell metabolism. Resident memory cells utilise different energy pathways compared to central memory cells; for example, TRM have been shown to require free fatty acids to survive 32 . Whether neonatal lungs provide this environment needs further investigation. One way to test whether the effect acts at recruitment or retention would be to transfer adult TRM into neonatal mice prior to infection; we have seen this to be protective when performed between adult mice 17 . This effect is also likely to be affected by host genotype; we have previously observed that mouse haplotype plays a key role in the delayed effects of neonatal RSV infection 22 . Why the CD8 T cell–DC interaction in some inbred strains leads to a more potent response needs further evaluation. The pauci-responsiveness of neonatal DC to viral infection may be a protective adaptation to the acute onset of novel antigens in the transition from the womb to breathing air. Previous studies have shown that by weaning age the effect is less marked, so it may also be influenced by changes in diet 33 . Recent studies have observed dynamic changes in the proteome in the first weeks of life, reflecting adaptation to the post-partum environment 34 . Notably, levels of CXCL10 increased in the plasma of an infant cohort over the first 7 days of life 35 . When we supplemented the response with CXCL10, there was increased TRM recruitment to the lungs and protection against viral re-infection. This indicates that it is possible to recruit TRM to the lungs in early life and retain them there. CCL5 and CXCL10 have previously been identified as important in the recruitment of TRM to the lungs in an influenza model 14 . Whilst we have previously observed that CD8 cells are recruited to the neonatal lung after RSV infection, this occurred at significantly lower levels than in adult mice 21 . How this could be applied to future vaccination strategies is an important question. Mucosal T cells have been proposed as a means to reduce infection and onward transmission of viral infections, or at least to act as part of a layered adaptive immune response 36 . More generally, the question of how to recruit mucosal T cells following vaccination needs addressing. Given these findings, intranasal live-attenuated viral vaccines could have great potential as a booster strategy for RSV vaccines. In the development of RSV vaccines, there have been challenges with getting the balance right: generating a virus that is sufficiently attenuated not to cause disease, but not so attenuated that it cannot replicate in the human airways. One approach to generating live-attenuated RSV is to recode the genome using codon pair deoptimisation 37 ; this has been applied to RSV 38 . But the main approach has been gene deletion and the adaptation of temperature-sensitive mutants; a number of these are in clinical trials in young children aged 6–24 months 39 , 40 . Ensuring that these vaccines trigger the right type of response to recruit TRM will be important in maximising their protective efficacy. Adult (6–8-week-old) or neonatal (7-day-old) BALB/c mice were obtained from Charles River Ltd. (St Mary’s, UK) and maintained according to institutional and Home Office guidelines.
All experiments were performed in the SPF room of the animal facility at Imperial College London, St Mary’s Hospital Campus, on a 12-h light/dark cycle at 20–24 °C with 55% ± 10% humidity. All work was approved by the Animal Welfare and Ethical Review Board at Imperial College London, and studies were in accordance with the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Mice were housed in groups of five animals per cage. Sample sizes were calculated using the G*Power software package, based on previously generated data for weight loss in the same model; weight loss was the primary outcome measure. No criteria were set for including or excluding animals. For RSV infection studies, mice were anaesthetised using 2.5–3% isoflurane and intranasally (i.n.) infected with 10⁶ PFU in 100 μl (adults) or 2 × 10⁵ PFU in 20 μl (neonates) of RSV subgroup A2, as indicated in the figure legends. For chemokine boosting, mice received 20 μg in 20 μl of CCL5 or CXCL10 i.n. under anaesthesia on days 7, 8 and 9 after initial RSV infection, as indicated in the figure legends. In re-challenge studies, mice were re-infected with 5 × 10⁶ PFU of RSV 21 days after the initial infection. For influenza infection, mice were infected under anaesthesia with 5 × 10⁴ PFU in 100 μl (adults) or 400 PFU in 20 μl (neonates) of H1N1/Eng/195. On day 4 of the re-challenge, mice were culled 10 min after intravenous (i.v.) injection with 2 μg (in 200 μl) of PE-labelled anti-CD45 antibody (Cat!). Mice were culled using 100 μl intraperitoneal pentobarbitone (20 mg dose, Pentoject, Animalcare Ltd. UK). Lung tissue and BAL were collected as previously described 41 . Lungs were homogenised by passage through 100 μm cell strainers, then centrifuged at 1500 rpm for 5 min. Supernatants were removed, and the cell pellet was treated with red blood cell lysis buffer (ACK; 0.15 M ammonium chloride, 1 M potassium hydrogen carbonate, and 0.01 mM EDTA, pH 7.2) before centrifugation at 1500 rpm for 5 min. The remaining cells were resuspended in RPMI 1640 medium with 10% foetal calf serum, and viable cell numbers were determined by trypan blue exclusion. Live lung cells and cells from BAL were plated onto a U-bottom 96-well plate and spun down at 2000 rpm for 2 min at 4 °C. In total, 100 µl of Live/Dead violet dye was added for 20 min at 4 °C in the dark; the plate was then centrifuged at 2000 rpm for 2 min, and the supernatant was removed. The cell pellet was resuspended in Fc block (clone 2.4G2) in PBS-1% BSA and stained for one hour in the dark with the following surface antibodies: FITC anti-mouse CD3, APC-H7 anti-CD8, BV605 anti-CD103, APC anti-mouse CD69, PerCP-Cy5.5 anti-mouse CD4, BV711 anti-mouse CD44, and PE-Cy7 anti-mouse CD62L. Excess antibody was washed off with 1% BSA in PBS three times before samples were transferred into FACS tubes and acquired on an LSR Fortessa flow cytometer (BD). Fluorescence-minus-one (FMO) controls were used for surface stains. Analysis was performed using FlowJo, with gating as shown in Fig. S1. For cell transfer, cells were collected from BAL, washed and resuspended in sterile PBS; mice were anaesthetised, and 10⁶ cells in 100 μl were delivered intranasally with a Gilson pipette. RNA was extracted from the left lung lobe by first homogenising the tissue using a TissueLyzer (Qiagen, Manchester, UK) at 50 oscillations for 4 min, followed by TRIzol and chloroform extraction.
RNA concentrations were determined using a Nanodrop before conversion into cDNA using a GoScript reverse transcription system, with 2 µg of RNA for all samples. qPCR for the RSV L gene was performed on a Stratagene Mx 3005p (Agilent Technologies, Santa Clara, CA, USA) using the primers 5′-GAACTCAGTGTAGGTAGAATGTTTGCA-3′ and 5′-TTCAGCTATCATTTTCTCTGCCAA-3′ and the probe 5′-6-carboxyfluorescein (FAM)-TTTGAACCTGTCTGAACAT-6-carboxytetramethylrhodamine (TAMRA)-3′. RNA copy number per mg of lung RNA was determined using an RSV L gene standard, and RSV L gene expression levels were normalised to the GAPDH copy number. Lung RNA was extracted using QIAzol (Qiagen) and chloroform extraction. RNA QC and library preparation were performed by Novogene using the Illumina HiSeq at a target depth of 50 million 100–150 bp paired-end reads per sample. The quality of the raw RNAseq reads was assessed using FastQC (v0.11.9) to ensure good quality scores and GC content and the absence of adaptor reads, with appropriate trimming performed using Trimmomatic (v1.0.40). Reads were then mapped to the mouse reference genome (GRCm38) using STAR (Spliced Transcripts Alignment to a Reference, v6.2.0), and count data for each gene were generated using Salmon (v1.2.0). Principal component analysis (PCA) and heatmap visualisation of normalised sequence data were performed after variance-stabilising transformation. The R function prcomp() was used for PCA in the package devtools (v2.4.2), and heatmap visualisation was performed with the heatmap.2 function in the gplots package (v3.1.1). Differential gene expression analysis was performed using DESeq2 42 to obtain a list of genes, P values, adjusted P values and log2 fold changes, with positive log fold change values indicating increased gene expression and negative values decreased gene expression. The false discovery rate (FDR) was calculated by applying the weighted Benjamini–Hochberg method for multiple hypothesis testing. A gene was considered differentially expressed if the absolute log2 fold change was above 0.5 with an adjusted P-value < 0.05. clusterProfiler 43 was used to assess the enrichment of Gene Ontology (GO) pathways in each gene list. Network analysis of the GO terms was performed using the emapplot function in the clusterProfiler package, and KEGG pathways were analysed using the gseKEGG function. Gene lists analysed included genes identified as significantly differentially expressed and genes belonging to specific GO terms. Gene set enrichment analysis (GSEA) was performed, with the enrichment score calculated as −log10(P-value) 44 . The network analysis of KEGG pathways (network visualisation and clustering) was performed with NetworkAnalyst 45 . R code is available upon request. Lung cytokine levels were measured using commercial multi-spot U-PLEX kits from Meso Scale Discovery (MSD), performed according to the manufacturer’s instructions. Data were analysed, and lower limits of quantification (LLOQ) determined, using MSD DISCOVERY WORKBENCH software. This study was nested within a larger study investigating maternal pertussis vaccination; it was an opportunistic study using samples from the same individuals and was not specifically powered to address these questions. Healthy pregnant women were recruited antenatally. Exclusion criteria were: twin pregnancy, maternal infection, chronic maternal pathology, pregnancy pathology, and babies with chromosomal or structural abnormalities.
The study was approved by the National Research Ethics Service (NRES), NHS, UK, and samples were stored under the Imperial College Healthcare Tissue Bank (ICHTB). Written informed consent was obtained. Cord blood and maternal blood were collected in sodium heparin-anticoagulated Vacutainer tubes (BD Biosciences). Cord blood was obtained from the cords of healthy neonates at the Maternity Unit of St Mary’s Hospital, London. Through collaboration with Dr Beth Holder and Professor Beate Kampmann (Department of Paediatrics, Imperial College London), all participants had provided written informed consent, and the study was approved by the ethics committee of the Faculty of Medicine, Imperial College London. Blood samples from non-pregnant women were also collected as a comparator; these samples were obtained from volunteers in the Department of Infectious Disease. Maternal blood samples were collected from mothers at or within 2 days of delivery, and umbilical cord blood samples were collected at delivery. Blood samples were processed no more than 8 h after collection. Blood samples were diluted with PBS in a 1:2 ratio. This mixture was then layered over an equal volume of Histopaque (density: 1.077 g/mL, Sigma-Aldrich) and centrifuged at 1000 rpm for 20 min with the brake off. The cloudy interphase layer, which contains the PBMC, was collected with a Pasteur pipette into a new Falcon tube. PBMC were then washed with 15 mL PBS and centrifuged at 1750 rpm for 10 min with the brake on. The supernatant was discarded, and washing was repeated with 15 mL PBS to remove platelets. After the second wash, the supernatant was discarded and 2 mL ACK buffer was added to resuspend the pellet and remove any contaminating red blood cells. After incubation at room temperature for no more than 5 minutes, 10 mL of R10 medium was added and the cells were centrifuged for 5 min at 1750 rpm. The pellet was resuspended in R10, ready for counting. PBMC from cord, maternal and non-pregnant women’s blood samples were isolated, and 2 × 10⁵ PBMC were incubated with either influenza virus or RSV at an MOI of 3 in a 96-well U-bottom plate. The samples were incubated for 24 hours at 37 °C (5% CO₂) before 200 µL of supernatant was harvested into Eppendorf tubes and stored at −80 °C for further analysis. An in-house Luminex kit was used 46 . The standards were purchased from R&D Systems with corresponding antibody pairs. Individual Luminex bead sets (Luminex, Riverside, CA) were coupled to cytokine-specific capture antibodies according to the manufacturer’s recommendations. Nineteen analytes were measured in this assay, including IL-1α, IL-1β, IL-6, IL-12, TNF, IFNβ, IFNγ, IL-2, IL-4, CCL2, CCL4, CCL5, CCL8, CXCL8, CXCL10, GM-CSF and TGFβ. Magnetic beads conjugated with capture antibody were diluted in Luminex assay buffer (PBS supplemented with 1% goat serum, 1% mouse serum, 0.05% Tween 20 and 20 mM Tris-HCl). 50 µL of bead mix was added to a 96-well flat-bottomed plate with 50 µL of undiluted sample or pre-diluted standard. Plates were incubated for 1.5 h on a plate shaker, then washed three times with washing buffer on a magnetic platform before 50 µL of pre-diluted detection antibody cocktail was added. Plates were incubated for 1 h on a plate shaker as in the previous incubation step. The washing step was repeated, followed by the addition of 50 µL streptavidin-PE and incubation for 30 min on a plate shaker. The washing step was repeated, and 100 µL of washing buffer was added to each well.
The plate was shaken for another 10 min to thoroughly disperse the beads before being read on the Bio-Plex® 100 Luminex machine (BIO-RAD). All statistical analyses were performed in GraphPad Prism v9 (GraphPad Software, San Diego, CA) and R version 3.5.0. A statistically significant difference was defined as a P-value < 0.05 by one-way analysis of variance (ANOVA).
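A minimal sketch of that significance test, with hypothetical cytokine values for three donor groups, is:

```python
# Illustrative one-way ANOVA across three donor groups; the cytokine values
# below are hypothetical, not data from the study.
from scipy.stats import f_oneway

cord = [120, 95, 140, 110, 130]          # hypothetical pg/ml
maternal = [150, 160, 145, 170, 155]
non_pregnant = [148, 165, 152, 172, 160]

f_stat, p_value = f_oneway(cord, maternal, non_pregnant)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}; significant if p < 0.05")
```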
|
Review
|
biomedical
|
en
| 0.999997 |
PMC11688615
|
Pseudomonas aeruginosa is a common opportunistic pathogen in hospital settings that causes a wide range of acute and chronic infections, both of great concern to human health. Infection of epithelial tissue by P. aeruginosa is often associated with injuries such as burns or chronic wounds. The latter are of particular concern, since these non-healing wounds are a serious socioeconomic burden on healthcare, with an estimated cost of over 12 billion dollars. Bacterial colonization is a major contributor to wound chronicity, and P. aeruginosa is one of the species most frequently isolated from such wounds. Virulence factors produced by colonizing P. aeruginosa can result in epithelial damage and may contribute to impaired wound healing. A severe outcome of wound colonization that can put the patient's life at risk is the translocation of bacteria from the wound to the bloodstream. P. aeruginosa bloodstream infection leads to a high mortality rate compared with other bacterial species, due in part to the greater virulence potential of the species. Even though P. aeruginosa strains isolated from bacteremia cases tend to be more virulent than isolates from peripheral sites such as wounds or the respiratory tract, expression of virulence factors is not uniform between different strains of P. aeruginosa and is heavily influenced by the microenvironment of the infected tissue. Thus, we analyzed the virulence potential of 74 strains of P. aeruginosa from our culture collection, previously isolated from chronic wounds or bloodstream infections. Among the vast array of tools that P. aeruginosa can use to harm the host, biofilm formation is one of the most important for chronic wound persistence. Estimates project that over 90% of chronic wound infections are impacted by biofilms, which contribute to healing impairment. P. aeruginosa can form highly structured biofilms that protect the species from host defenses and environmental stresses, hindering the treatment of P. aeruginosa infections and contributing to colonization and persistence in human tissues. Biofilm analysis showed that, in general, adherence was lower in strains isolated from chronic wounds than in strains isolated from blood. Using the biofilm classification formula described by Stepanovic et al., we observed that among strains isolated from blood, 2 were weakly adherent, 6 were moderately adherent and 27 were strongly adherent. Among chronic wound strains, 2 were non-adherent, 2 were weakly adherent, 24 were moderately adherent and 11 were strongly adherent. Biofilms play a major role in persistent infection, while acute infections are usually associated with cells adopting a planktonic lifestyle. However, our evaluation of the ability of P. aeruginosa to produce biofilm showed that strains isolated from blood had a higher biofilm-forming capacity than strains isolated from chronic wounds. A study carried out with 96 strains of different species isolated from bloodstream infections showed that most of them were weak biofilm producers; the vast majority of the P. aeruginosa strains among those isolates, however, were strong biofilm producers, a result similar to what we observed in our study. Host colonization and biofilm formation are highly influenced by motility. P. aeruginosa exhibits three major forms of motility: swimming, swarming, and twitching, which allow movement in aqueous media, on viscous surfaces, and on solid surfaces, respectively.
These different motility mechanisms are mediated by the species' flagellum and/or type IV pilus. Analysis of the different motility mechanisms showed that, overall, chronic wound strains were less motile than strains isolated from blood in terms of swarming and twitching motility, but showed no statistically significant difference in swimming. The increased ability of P. aeruginosa isolated from blood to move by swarming and twitching might be related to their increased biofilm formation, as both kinds of motility have previously been associated with biofilm formation. Spearman's rank correlation analysis between biofilm and each kind of motility showed a similar association at the individual strain level: biofilm formation was positively and significantly correlated with both swarming and twitching, but not with swimming. Several proteolytic enzymes, such as elastase B and alkaline protease, are produced by P. aeruginosa to colonize and persist in host tissues. In this work, we show that P. aeruginosa proteolytic activity on skim milk agar was significantly higher in strains isolated from blood than in chronic wound strains. When bacteria reach the bloodstream, they have to survive the host's innate immune system. The higher proteolytic activity of bloodstream isolates might be an evasion mechanism that helps P. aeruginosa escape the host immune system. Besides contributing to biofilm formation, proteases can also contribute to the disruption of host defense mechanisms and compromise host epithelial junctions, enabling bacterial migration to tissues that are usually inaccessible. Production of pyocyanin, a green-blue pigment that plays an important role in iron metabolism, showed a high degree of variability in our analysis. Even though the mean value was lower in chronic wound strains, no significant difference in pyocyanin production was seen between the two groups. Pyocyanin is produced by 90-95% of P. aeruginosa strains and has been shown to increase microbial virulence. A longitudinal study has shown that, over time, P. aeruginosa virulence factors are selected against during chronic cystic fibrosis infection, due to genetic adaptation of the pathogen to the host airways. This could explain why, in our study, P. aeruginosa strains that colonize chronic wounds have a lower virulence potential compared to strains isolated from acute infections such as those in the bloodstream. However, our conclusions might still be preliminary, as further analysis (e.g., analyzing other phenotypes or strains isolated in different regions, increasing the number of strains tested, etc.) could affect the difference in virulence between these groups. For example, one caveat of our experiments was the lack of a quantitative growth rate analysis. Even though we did not see visible differences in growth between cultures, growth curves could reveal subtle differences between strains that could have impacted some of the phenotypes tested and that should have been considered during the analysis. Also, new approaches to analyzing biofilms that do not rely on static cultures (or that use other types of surfaces for the bacteria to attach to) could show unexpected differences in biofilm-forming ability between the groups. Nevertheless, we believe that our data can contribute to other studies on the virulence of P. aeruginosa.
Bacterial strains and growth conditions
We used 74 strains of P. aeruginosa in this study, 35 isolated from bloodstream infections and 39 isolated from chronic wounds.
The strains were part of the culture collection of the Controle Microbiológico laboratory of the Universidade Federal Fluminense, Brazil. P. aeruginosa strains were routinely grown in LB culture medium at 37 °C.

Biofilm formation
Quantification of total growth and biofilm formation was performed as described previously. Cells were inoculated into 96-well polystyrene plates containing LB broth and incubated at 37 °C for 24 hours. After incubation, planktonic bacteria were removed from the microplate and the wells were washed three times with PBS (pH 7.4). The plates were then dried at 60 °C for 1 hour and stained with 200 µl of 0.1% crystal violet for 30 minutes at room temperature. Excess stain was removed by rinsing the wells twice with PBS. The dye was then solubilized in 200 µl of a 95% ethanol solution, and the OD of each well was measured by spectrophotometry (SpectraMax M2e) at 570 nm. Comparative analysis of the results was performed according to Stepanovic et al. Strains were classified into four categories based on the optical density (OD) of their biofilms. The cut-off OD value in our biofilm analysis was 0.253. Strains were considered non-adherent if the OD value was lower than or equal to 0.253; weakly adherent if the OD value was higher than 0.253 and lower than or equal to 0.505; moderately adherent if the OD value was higher than 0.505 and lower than or equal to 1.010; and strongly adherent if the OD value was higher than 1.010.

Motility assays
P. aeruginosa motility was assessed by analysis of swimming, swarming, and twitching motility. Swimming and swarming assays were performed by touching a single colony of each P. aeruginosa strain with the tip of a sterile toothpick and using it to inoculate the surface of LB agar plates, followed by incubation at 37 °C for 24 hours. The main difference between the two protocols was the agar concentration of the plates: swimming was carried out in LB with 0.3% agar and swarming in LB with 0.6% agar. After incubation, the plates were photographed for precise measurement of bacterial growth using Digimizer image analysis software (version 5.4.5). Swimming was quantified as the diameter of bacterial growth and swarming as the area of bacterial growth. Twitching analysis was carried out in LB with 1% agar and was performed by touching a single colony of each P. aeruginosa strain with the tip of a sterile needle and using it to inoculate the bottom of the plate by stabbing the agar. After 24 hours of incubation at 37 °C, the agar was carefully removed and 0.1% crystal violet was added to the plates to stain the twitching zone. The plates were photographed and the diameter of the stained area was measured using Digimizer image analysis software (version 5.4.5).

Protease activity
Differences in proteolytic activity were assessed by touching a single colony of each P. aeruginosa strain with the tip of a sterile toothpick and using it to inoculate the surface of LB agar plates supplemented with 2% skim milk. After incubation at 37 °C for 24 hours, proteolytic activity was evidenced by the clearance zone around the colony on equivalent-depth poured plates and was determined as the difference between the clearance halo diameter and the colony diameter, measured using Digimizer image analysis software (version 5.4.5).

Pyocyanin production
Production of pyocyanin by each P. aeruginosa strain was evaluated as previously described, with some modifications. A single colony was inoculated into 2 mL of LB broth and incubated with shaking (200 rpm) at 37 °C.
After 48 hours, 1 ml of chloroform was added to the bacterial culture and homogenized by vortexing for 1 minute. The chloroform layer was then transferred to a 1.5 ml tube and centrifuged at 13,000 rpm for 2 minutes. After centrifugation, 20 µl was collected from the lower phase, and pyocyanin production was quantified in a spectrophotometer using a 50 µl quartz cuvette at an optical density of 690 nm.

Statistical analysis
Results for each phenotypic assay were obtained from three independent replicates, and the data were analyzed using GraphPad Prism version 8.0.2 (GraphPad Software, San Diego, California, USA). The Mann–Whitney test was used to compare the two groups, with significance established at a p-value lower than or equal to 0.05. Spearman's rank correlation was used to analyze the correlation between biofilm and motility phenotypes.
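For readers who wish to reproduce this style of analysis, the short sketch below shows how the Stepanovic et al. OD cut-offs and the two statistical tests described above could be applied. The OD and motility values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the biofilm classification and group comparisons
# described above. All numeric values are hypothetical placeholders.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

def classify_biofilm(od: float) -> str:
    """Assign a Stepanovic et al. adherence category from a biofilm OD570,
    using the cut-offs reported in this study (ODc = 0.253)."""
    if od <= 0.253:
        return "non-adherent"
    if od <= 0.505:
        return "weakly adherent"
    if od <= 1.010:
        return "moderately adherent"
    return "strongly adherent"

# Hypothetical per-strain means (three replicates would be averaged first).
blood_od = np.array([1.8, 1.2, 0.9, 2.1, 0.4])
wound_od = np.array([0.6, 0.3, 0.8, 0.2, 0.7])
swarming_area = np.array([410, 260, 190, 520, 90])  # mm^2, blood strains

print([classify_biofilm(od) for od in wound_od])

# Non-parametric comparison of the two groups, as in the study's analysis.
u_stat, p_value = mannwhitneyu(blood_od, wound_od, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# Strain-level association between biofilm and a motility phenotype.
rho, p_corr = spearmanr(blood_od, swarming_area)
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.3f}")
```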
| Study | biomedical | en | 0.999998 | PMC11693096 |
Rates of local recurrence after surgery for UTUC vary depending on risk factors, with recurrences usually occurring within 2 years of surgery. Local recurrence 10 years postoperatively is extremely rare. Herein, we report a rare case of retroperitoneal recurrence as SCC 10 years after nephroureterectomy. A 67‐year‐old female was referred to our urology department with a left ureteral tumor detected on CT. Contrast‐enhanced CT confirmed the diagnosis of left renal pelvic and ureteral cancer (cT3N0M0). A laparoscopic left nephroureterectomy was performed. The pathological diagnosis was UC at the pT3 stage, with the tumor extending from the renal pelvis to the lower ureter, and with negative surgical margins. No cellular component of this lesion displayed differentiation into squamous epithelium. Pathological examination revealed a Grade 3 component with a disorganized nuclear arrangement and increased cell density, nuclear chromatin, and nuclear division. There was lymphatic invasion but no venous invasion, and the margins were negative (pT3 Invasive UC Grade2>3 INFb ly1 v0 RM0). Subsequently, she received six courses of adjuvant chemotherapy with gemcitabine/nedaplatin instead of gemcitabine/cisplatin because of decreased renal function, and no subsequent evidence of recurrence was noted. Then, 10 years after nephroureterectomy, MRCP incidentally revealed a mass lesion in the left retroperitoneum, and a PET‐CT scan showed abnormal FDG uptake at the same site (SUVmax = 12.08). A CT‐guided biopsy was performed for diagnosis. The histology of the recurrent retroperitoneal lesion showed SCC with keratinization and intercellular bridges. Immunohistochemical staining was diffusely positive for cytokeratin 5/6, p40, and p63. There was no typical UC component in this lesion, such as that observed in the renal pelvic tumor. Blood biochemical findings were as follows: white blood cell count, 7200/mm³; hemoglobin (Hb), 13.3 g/dL; platelet count, 22.4 × 10⁴/mm³; blood urea nitrogen, 16.5 mg/dL; creatinine, 1.01 mg/dL; aspartate aminotransferase, 23 IU/L; alanine aminotransferase, 13 IU/L; alkaline phosphatase, 102 IU/L; lactate dehydrogenase, 229 IU/L; HbA1c, 8.4% (normal range: <6.3%); SCC antigen, 39.2 ng/mL (normal range: <1.5 ng/mL); alpha‐fetoprotein, 8.65 ng/mL; and protein induced by vitamin K absence or antagonist‐II, 24 mAU/mL. Despite suspicion of distant metastasis from a tumor of another organ, examinations such as digestive endoscopy and bronchoscopy did not reveal any tumor lesions. The patient was diagnosed with recurrent invasive UC with the pathological features of SCC, and CF therapy was initiated. After two courses, assessment showed an increase in the retroperitoneal metastatic lesions, leading to a determination of PD. Consequently, pembrolizumab was administered as second‐line treatment, maintaining a PR for 11 months and decreasing the SCC antigen level to 2.5 ng/mL. We encountered a rare case of retroperitoneal recurrence as SCC 10 years after nephroureterectomy. Local recurrence after surgery for UTUC varies depending on risk factors (tumor in both the renal pelvis and ureter, T stage >2, lymph node involvement, grade 3 histology, and positive surgical margins), with recurrence usually occurring within 2 years of surgery.
1 Local recurrence 10 years postoperatively is extremely rare. Approximately 5–10% of UTUCs exhibit SqD, 2 which is closely associated with chronic irritation, infection, and inflammation. 3 In the present case, the pathological diagnosis after nephroureterectomy was PUC without variants; however, the histological subtype at the time of recurrence differed from the initial diagnosis of renal pelvic and ureteral cancer. No lesions suggestive of a primary tumor were found, and considering that it was a retroperitoneal recurrence, it was diagnosed as a recurrence of the renal pelvic and ureteral cancer. Within the scope of our literature search, we found no reported pattern of local recurrence showing a histological appearance different from PUC more than 10 years after surgery. Moreover, it is unlikely that an original PUC would recur as pure SCC 10 years later; therefore, this case is more likely a UC recurrence with SqD. Standard drug therapy for SCC of the urothelium has not been established, 2, 4 and only prospective studies of combination chemotherapy (ITP therapy: paclitaxel, ifosfamide, and cisplatin) have been reported. 5 In addition, pembrolizumab has been reported to be highly effective in the treatment of VUC. 6 Other reports have suggested that the response of VUC to treatment with pembrolizumab is not inferior to that of PUC. 7 In particular, the presence of SqD did not affect the response to pembrolizumab compared with PUC or non‐squamous VUC. 8 In this case, following the standard treatment for SCC at other sites, 9 the patient underwent CF therapy, resulting in a PD assessment, but then received pembrolizumab, maintaining a PR. There are no confirmed reports regarding the effectiveness of radiotherapy for local recurrence after nephroureterectomy. 1 Considering the proximity of the recurrence site to the intestine, radiotherapy was not performed in this case. Moving forward, follow‐up with CT scans and tumor markers will be continued. If the efficacy of pembrolizumab diminishes, paclitaxel will be used as the next therapy, in accordance with the standard treatment for SCC at other sites. We encountered a rare case of retroperitoneal recurrence as SCC 10 years after a total nephroureterectomy. A recurrence pattern showing a different histological appearance as SCC more than 10 years after surgery has not been reported in the literature. Koichiro Uehara: Writing – original draft. Tatsuaki Onuki: Writing – review and editing. Yukari Ishibashi: Data curation. Sayuki Matsunuma: Data curation. Hiroaki Ishida: Data curation. Jiro Kumagai: Writing – review and editing. Takayuki Murakami: Supervision. The authors declare no conflict of interest. Not applicable. Written informed consent for publication was obtained from the patient. Not applicable.
| Clinical case | biomedical | en | 0.999995 | PMC11693102 |
Intravesical BCG immunotherapy is used for the treatment of NMIBC after TURBT. 1 The therapy is generally safe but sometimes causes complications. An 80‐year‐old man underwent TURBT, and the pathological examination revealed high‐grade NMIBC with carcinoma in situ. He subsequently received weekly alternating intravesical BCG/epirubicin infusion therapy for 8 weeks, consisting of four infusions of BCG (Tokyo strain; 40 mg) and four infusions of epirubicin (40 mg), with no traumatic catheterizations. However, 2 months after treatment completion, he experienced discomfort in his lower abdomen and slightly painful defecation. He initially received oral antibiotics, but the symptoms did not improve and instead gradually worsened. Therefore, he was referred to our hospital. He had no fever or apparent urinary symptoms. He also had no specific comorbidities, including diabetes mellitus. His blood biochemical findings were as follows: white blood cell count (WBC), 5400/μL; C‐reactive protein (CRP), 1.1 mg/dL; prostate‐specific antigen, 3.39 ng/mL. Urinary sediment analysis showed no significant findings. Bacterial culture, acid‐fast bacteria (AFB) culture, and AFB PCR in the urine were negative. A digital rectal examination detected a slightly hard and tender nodule on the left side of the rectum. Contrast‐enhanced CT revealed an incidental gallbladder tumor and an irregular pelvic mass, suggesting either bladder cancer invasion or an abscess involving the prostate and rectum. MRI showed contrast enhancement and reduced diffusion in the same area. Cystoscopic examination indicated a small recurrent papillary bladder tumor. A colonoscopy revealed a mucous membrane bulge and purulent mucus discharge in the lower rectum, and a biopsy showed inflammatory granulation with no malignant findings. We subsequently performed a CT‐guided needle biopsy, and the pathological examination revealed epithelioid granuloma containing Langhans giant cells. Thus, he was clinically diagnosed with a small recurrent bladder cancer, a gallbladder tumor, and a BCG‐related tuberculous prostatic abscess spreading to the rectum, although AFB culture and PCR of the biopsy samples were negative. After shared decision-making involving the patient and hepatobiliary and pancreatic physicians, we started treatment of the BCG abscess with isoniazid (300 mg/day), rifampicin (600 mg/day), and ethambutol. At 1 month after treatment initiation, the painful defecation and lower abdominal discomfort were relieved, and he underwent TURBT for high‐grade NMIBC and staging laparoscopy for the gallbladder tumor (cholecystectomy was initially planned), with the pathological examination revealing gallbladder cancer with peritoneal dissemination. The patient subsequently received isoniazid, rifampicin, and ethambutol for an additional 1 month and isoniazid and rifampicin for an additional 4 months, simultaneously with chemotherapy for the gallbladder cancer. After treatment completion, the tuberculous prostatic abscess had almost completely disappeared, and the gallbladder cancer was stable. Intravesical BCG immunotherapy can be used to treat NMIBC, especially in patients with carcinoma in situ. 1 The therapy is generally safe but can cause several complications. A large observational study found infrequent complications of the urinary tract, including granulomatous prostatitis, epididymitis, ureteral obstruction, contracted bladder, and renal abscesses (all ≤1%).
2 Remarkably, pathological examinations revealed that almost 80% of bladder cancer patients had pathological granulomatous prostatitis in their radical cystoprostatectomy specimens after intravesical BCG treatment. 3, 4 These observations suggest that intravesical BCG treatment causes granulomatous prostatitis in the majority of patients, but only a few patients develop clinically symptomatic granulomatous prostatitis. Tuberculous prostatic abscess is a rare complication of intravesical BCG immunotherapy and sometimes induces a rectal fistula. 5, 6, 7, 8, 9 Thus far, only five cases of BCG‐related tuberculous prostatic abscess have been reported, 5, 6, 7, 8, 9 and all cases with CT images predominantly showed abscess formation in the peripheral zone of the prostate. Notably, pathological granulomatous prostatitis also predominantly or exclusively occurs in the peripheral zone of the prostate, probably due to the microanatomical distribution of the prostatic ducts, 4 implying that aggravation of granulomatous prostatitis leads to tuberculous prostatic abscess development, although the specific causal conditions remain unknown. Furthermore, as a prostatic abscess in the peripheral zone worsens, it may rupture the prostatic capsule, spread into the periprostatic and perirectal areas, and finally rupture the rectal wall, forming a rectal fistula. Bacterial prostatic abscess usually develops in immunocompromised patients, including diabetic patients, as a consequence of acute bacterial prostatitis. 10 Clinical symptoms are typically apparent, including lower urinary tract irritation symptoms in most cases, fever in up to 72%, and perineal pain in 20%. 10 Intriguingly, all five patients with BCG‐related tuberculous prostatic abscess had continuous lower urinary tract symptoms or pain, but three of the five had no fever before diagnosis. 5, 6, 7, 8, 9 These findings suggest pathophysiological features of BCG‐related tuberculous prostatic abscess distinct from those of bacterial prostatic abscess, in which bacteremia and systemic infection quickly occur. Indeed, the present patient had no fever or serum WBC elevation and only slight serum CRP elevation, suggesting an absence of systemic inflammation and infection; the formation of the rectal fistula may also have prevented worsening of the infection by draining the abscess. Because of its infrequent occurrence, the risk factors for BCG‐related tuberculous prostatic abscess remain unknown. However, one study identified large prostate size as an independent predictor of BCG‐related prostatitis. 11 Assuming that tuberculous prostatic abscess is a worsened stage of granulomatous prostatitis, urologists should pay attention to clinical symptoms in NMIBC patients with benign prostatic hyperplasia who have received intravesical BCG immunotherapy. Notably, the prostate volume in our patient was 32 mL, suggesting that he may have had this risk factor for BCG‐related tuberculous prostatic abscess. Furthermore, the patient simultaneously suffered from double cancer, including disseminated gallbladder cancer, which may have influenced his immunity against BCG, although most patients with solid tumors are not significantly immunocompromised relative to patients with hematologic malignancies. 12 Age might also have influenced the development of our patient's BCG‐related prostatic abscess, although several retrospective studies showed that the toxicity of intravesical BCG therapy was not associated with age.
13, 14 Finally, our patient received sequential intravesical BCG/epirubicin therapy, which might also have affected the prostatic abscess formation. However, a previous report demonstrated that sequential intravesical BCG/epirubicin therapy did not increase local and systemic toxicities compared with BCG monotherapy, 15 suggesting that the sequential treatment was not associated with BCG‐related tuberculous prostatic abscess formation. The antitubercular agents that can be used against BCG include isoniazid, rifampicin, and ethambutol, although BCG strains are notably insensitive to pyrazinamide. 16, 17 All six cases of BCG‐related tuberculous prostatic abscess (including the present case) were successfully treated with combined antitubercular drugs, with or without drainage surgery. These results indicate that conservative treatment with antitubercular drugs is effective and safe for the treatment of tuberculous prostatic abscess. Tatsuhiro Sawada: Conceptualization; data curation; writing – review and editing; visualization; investigation. Ayaka Igarashi: Conceptualization; writing – original draft; writing – review and editing; investigation; data curation. Seiji Arai: Writing – original draft; writing – review and editing; visualization; validation; supervision; resources; funding acquisition; conceptualization; investigation; data curation; project administration. Akira Ohtsu: Writing – review and editing; data curation. Yuji Fujizuka: Data curation; writing – review and editing. Shun Nakazawa: Data curation; writing – review and editing. Yoshitaka Sekine: Data curation; writing – review and editing. Hidekazu Koike: Data curation; writing – review and editing. Yosuke Furuya: Data curation; writing – review and editing. Kazuhiro Suzuki: Data curation; writing – review and editing. The authors declare that they have no competing interests. Not applicable. Written informed consent for the publication of this case report was obtained from the patient. Not applicable. The authors have no funding to declare for this article.
| Other | biomedical | en | 0.999998 | PMC11693112 |
Hemorrhage resulting from RAP is an uncommon yet significant complication that may arise following renal trauma, biopsy, percutaneous nephrostomy, PCNL, and partial nephrectomy. Although the occurrence of this potentially life‐threatening complication is below 1%, its incidence is expected to rise with the growing adoption of endoscopic renal procedures. 1 The risk of RAP is higher when PCNL is performed in a solitary kidney because of hypertrophy of the renal parenchyma. 2 Renal angiography can be used to diagnose RAP of an interlobar artery. Ultrasound guidance presents a distinctive alternative to fluoroscopy for percutaneous renal access. In addition to avoiding ionizing radiation exposure for both the patient and intraoperative staff, it provides numerous benefits, such as improved visualization of the posterior renal calyx and adjacent visceral structures. 1, 3 We present a 41‐year‐old male with a solitary right kidney who developed hematuria and a RAP post‐PCNL, and describe its comprehensive management. A 41‐year‐old male with a solitary right kidney presented with hematuria and episodic fever 3 months after PCNL. He had a percutaneous nephrostomy tube in place, with a daily hemorrhagic fluid output of 200–300 mL and reduced urine output. The tube was retained to ensure adequate kidney drainage, monitor for residual fragments or complications, and manage delayed healing of the nephrostomy tract. The patient had received 13 units of blood transfusion post‐PCNL. No other significant medical history was reported. The patient appeared fatigued, with stable vital signs and right flank tenderness. The percutaneous nephrostomy tube site was clean. Laboratory results showed hemoglobin of 8.4 g/dL, serum creatinine of 3.2 mg/dL, and a urine creatinine ratio of 22.9 mg/dL. Radiological imaging and ultrasonography revealed a hypertrophied right kidney and a perinephric collection of 100 mL. Additionally, the presence of a PCN tube was noted. Most importantly, an anechoic cystic lesion at the midpole of the kidney was identified. Doppler ultrasound subsequently confirmed turbulent flow within this anechoic lesion, strongly suggesting a RAP, as shown in Figure 1. These findings collectively guided the clinical evaluation and treatment approach for the patient's hematuria and related symptoms. After consultation with the interventional radiology team, management of this complex case aimed to address the RAP following the recent PCNL procedure. A 5F vascular sheath and a 5F SIMS catheter were used to access the right femoral artery, allowing catheter advancement to the right renal artery for angiography, as shown in Figure 2. However, complications arose as the renal vessels went into spasm, preventing further catheter progression. Despite the administration of intravascular vasodilators, the arterial spasm persisted, necessitating abandonment of the procedure due to the heightened risk of arterial dissection. A more successful approach was then pursued: a direct percutaneous embolization procedure was meticulously planned under the guidance of ultrasound and digital subtraction angiography (DSA). This involved percutaneous puncture of the RAP using an 18‐G Vygon needle under ultrasonographic guidance. An angiogram, conducted under DSA guidance with a water‐soluble radiographic contrast agent (Visipaque), confirmed the presence of the contrast‐filled pseudoaneurysm.
Subsequently, 0.1 mL of 1:2 N‐butyl cyanoacrylate glue, reconstituted with lipiodol, was slowly and precisely injected into the pseudoaneurysm under DSA guidance, with careful fluoroscopy monitoring as shown in Figure 3 . Remarkably, this approach minimized the utilization of Visipaque contrast, thereby reducing radiation exposure compared to conventional computed tomography or angiography. Post‐procedure vitals remained stable, and there were no discernible complications observed in the postoperative period. A follow‐up color Doppler ultrasound 24 h post‐procedure showed no flow in the previously problematic pseudoaneurysm, confirming the successful resolution of the RAP and validating the effectiveness of the management approach. PCNL is the preferred method for removing kidney stones but can occasionally lead to complications such as renal arteriovenous fistulas or pseudoaneurysms, which are typically asymptomatic or present with temporary symptoms. 4 RAP is a rare but recognized complication following PCNL, where an artery is partially severed or punctured, causing blood to leak into a confined hematoma. 5 This complication often involves a significant arterial branch, such as a third‐order branch of the renal artery, which may be difficult to detect during the procedure due to thrombosis or spasm. 6 Over time, dislodgement of the occluding clot can result in delayed hematuria. In this case report, we describe a patient with RAP in a solitary kidney who presented with hematuria and decreased urine output following PCNL. Delayed bleeding after significant percutaneous procedures, often from arteriovenous fistulas or arterial pseudoaneurysms, can be managed effectively with selective angioembolization. Continuous bleeding usually points to an arteriovenous fistula, while intermittent bleeding suggests an arterial pseudoaneurysm. Both conditions are treated similarly with angioembolization, which has high success rates. Hospital admission and angiography are necessary for any post‐procedure bright red urine, as angiography is diagnostic in over 90% of cases. 7 Direct ultrasound‐guided percutaneous embolization has emerged as a novel method for treating renal pseudoaneurysms. The percutaneous embolization technique involves the following steps: First, under local anesthesia and imaging guidance (typically fluoroscopy), a catheter is introduced into the vascular system, usually through the femoral artery. The catheter is then navigated to the target vessel, supplying the area of interest. Embolic agents, which can include coils, particles, or liquid embolics, are carefully introduced through the catheter to occlude the target vessel. The progress and effectiveness of the embolization are monitored in real‐time using angiographic imaging. Once the desired level of occlusion is achieved, the catheter is withdrawn, and hemostasis is achieved at the entry site. This approach eliminates the need for contrast media, reduces radiation exposure hazards, and minimizes complications associated with angiographic catheterization. Notably, it also reduces the risk of surgical intervention, such as partial or total nephrectomy, which is particularly important in patients with solitary kidneys. 8 Numerous studies and case reports have investigated the utility of ultrasound‐guided embolization for treating RAP in solitary kidneys post‐PCNL. 9 , 10 , 11 For instance, Shah et al . documented a successful coil embolization case in a young female with RAP after PCNL. 
1 Additionally, various studies have examined different aspects of ultrasound‐guided techniques in PCNL procedures. Usawachintachit and Tzou emphasized the benefits of ultrasound guidance, such as real‐time imaging and reduced radiation exposure, during renal access and tract dilation in PCNL. 3 These cases collectively demonstrate the effectiveness of ultrasound‐guided embolization for RAP in solitary kidneys. While percutaneous embolization presents several advantages, including its minimally invasive nature and precise targeting of the affected vessels, it also carries potential risks. These include non‐target embolization, post‐embolization syndrome, and complications such as infection, bleeding, or vessel injury. Despite these risks, the benefits often outweigh the disadvantages, particularly in cases where surgical options pose higher risks. Moreover, research suggests that ultrasound‐guided PCNL is a safe and feasible procedure with a low complication rate in patients with solitary kidneys. Long‐term follow‐up data revealed that over 90% of these patients experienced either improvement or stabilization of renal function after undergoing ultrasound‐guided PCNL. 12 Diagnosis and management of RAP in patients with solitary kidneys pose significant challenges. Percutaneous ultrasound‐guided embolization should be considered, especially for patients with solitary kidneys and post‐PCNL hematuria. We report a case in which a patient with a solitary kidney and severe hematuria required multiple blood transfusions. Because of renal insufficiency, pre‐operative CT angiography was not feasible, making ultrasound the only available guidance method. The procedure initially failed due to renal vessel spasm during selective catheterization but was ultimately managed with a minimally invasive super‐selective embolization technique. Percutaneous embolization is particularly advantageous in scenarios involving a single kidney due to the critical need to preserve renal function; however, it is also effective in a wide range of cases, providing a valuable treatment option for patients with vascular abnormalities in various organs. The case was successfully managed with ultrasound‐ and fluoroscopy‐guided direct injection of cyanoacrylate glue into the pseudoaneurysm. Herein, we have also discussed the unanticipated events during embolization of the pseudoaneurysm in the solitary kidney and their management. In conclusion, our case report demonstrates the effective management of a RAP in a solitary kidney using ultrasound‐guided embolization. This non‐surgical approach successfully controlled bleeding and preserved renal function, highlighting the value of angioembolization in high‐risk patients. Ultrasound guidance proved to be a safe and precise method, minimizing complications and optimizing outcomes. Further research is needed to confirm the efficacy and safety of this technique in similar cases. Sana Augustine: Writing – original draft. Mitwa Patel: Writing – original draft. Pugazhendi Inban: Data curation; formal analysis. Sk Sadia Rahman Synthia: Supervision; validation. Ummul Z. Asfeen: Investigation; methodology. Aliza Yaqub: Methodology; visualization. Aadil Mahmood Khan: Writing – review and editing. Mansi Singh: Writing – review and editing. The authors declare no conflict of interest. Not applicable. Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the editor-in-chief of this journal.
Not applicable.
| Clinical case | biomedical | en | 0.999998 | PMC11693430 |
Viral outbreaks impose major public health threats on the human population and result in a heavy disease burden. COVID-19, a respiratory viral disease outbreak caused by SARS-CoV-2, has significantly affected the global population. Some viral outbreaks are seasonal, occurring simultaneously in the population and complicating treatment and disease management. For instance, viral infections such as chikungunya and dengue affect the population from the onset of the monsoon, owing to the rise in vector populations, and present with overlapping symptoms; with the emergence of the COVID-19 pandemic, management of these diseases has become further confounded. In such a scenario, novel strategies such as the use of medicinal supplements and diet management may aid in disease management. In recent years, scientific interest has been growing in nutraceuticals under the concept of "Food is medicine; medicine is food". Nutraceuticals are involved in the maintenance of well-being and modulation of the immune system, thus enhancing health. Nutraceuticals interact with components of the immune system and improve the immune response to viral pathogens. Siddha is a traditional indigenous medical system widely recognized as an effective strategy for the prevention and treatment of a variety of diseases. Moreover, several studies have demonstrated the use of the Siddha medical system for the effective treatment of viral diseases. MAM Granules is a provisionally patented Siddha nutraceutical supplement made up of three ingredients, viz. Curcuma longa, Withania somnifera, and Piper nigrum, and has been evaluated for COVID-19. Moreover, the ingredients of MAM have previously been explored for their activity as ROS inhibitors and for their anti-inflammatory and antiviral activities. For example, Curcuma longa, Withania somnifera, and Piper nigrum have been reported to effectively reduce the increase in intracellular ROS and to demonstrate anti-inflammatory activity. Similarly, the antiviral activity of Withania somnifera, Curcuma longa, and Piper nigrum against different viruses has been reported. Here, we explored their activity as a formulation, i.e., MAM granules. At the cellular level, one of the first consequences of virus infection is the induction of oxidative stress in the infected cell, characterized by an increase in the generation of free radicals and reactive oxygen/nitrogen species (ROS/RNS). The excessive production of these molecules further damages cellular components. This kick-starts the cellular homeostasis machinery, which employs different antioxidants to suppress the production of ROS; among these, superoxide dismutase (SOD) acts as the first-line defence mechanism. The sudden spike in ROS and free radicals then activates inflammatory pathways and initiates cellular inflammatory responses. These responses are mediated by two key cellular factors, i.e., nitric oxide (NO) and PGE2, which activate previously silent signalling pathways, leading to translocation of transcription factors into the nucleus, where they drive the transcription of cellular genes to produce interferons and pro-inflammatory cytokines. Use of traditional medicines for the treatment of infectious diseases has proven to be an effective alternative to modern medicines.
These medicines are well reported to strengthen the immune system, thereby helping an infected individual fight the pathogen as well as building the immunity of a healthy individual against an impending pathogen, especially during viral outbreaks in a community. With respect to the latter, considerable impetus has recently been placed on the concept of nutraceuticals as a preventive measure to combat infectious diseases. One such nutraceutical, MAM, has been developed using three ingredients, viz. Curcuma longa, Withania somnifera, and Piper nigrum, and has been evaluated for its use as an effective supplement against viral diseases. In the present study, we first evaluated the acute toxicity of the MAM granules in Wistar rats. Using an aqueous extract, we further evaluated its cytotoxicity using in-vitro cell-based assays in Vero-E6 and RAW264.7 cells. Using the maximum non-toxic dose (MNTD), we investigated the antiviral activity of the aqueous extract of MAM using different types of antiviral assays on two RNA viruses, i.e., SARS-CoV-2 and chikungunya virus (CHIKV). It was observed that co-incubating the MAM aqueous extract with both of these viruses significantly reduced viral titers. Next, we systematically evaluated the aqueous extract of this nutraceutical and its ingredients for their antioxidant and anti-inflammatory properties in RAW264.7 cells. We observed that the aqueous extract of MAM exhibited strong antioxidant potential and was able to suppress inflammatory mediators, suggesting potent anti-inflammatory capacity. The three ingredients of the MAM extract, i.e., Curcuma longa, Withania somnifera, and Piper nigrum, also showed significant antioxidant and anti-inflammatory capacity in a dose-dependent manner without imposing significant cytotoxicity. As per the Siddha literature, all three ingredients have anti-inflammatory and antioxidant properties. They are time-tested medicinal drugs, and the therapeutic benefits of each ingredient are individually indicated for the management of fever, mentioned as Suram in the Siddha text "Agathiyar Gunavagadam". Based on the taste concept of Siddha, they are also potent drugs to treat Surams (Vida Suram/Suram/Seetha Suram; viral fevers). Based on these Siddha basic concepts and therapeutic references, the three ingredients were selected and made into the formulation in the proper ratio (MAM, 1:4:1). The MAM granules were prepared based on an analysis of the properties of the individual components as per the existing Siddha text literature, Gunapadam Mooligai. Curcuma longa (Manjal), Withania somnifera (Amukkara), and Piper nigrum (Milagu) were mixed in a ratio of 1:4:1 in this preparation. The root tuber of Ashwagandha, Curcuma, and the dried fruits of Pepper were used to prepare the MAM Granules formulation. The raw drugs were purchased from a raw drug store. For the in-vitro studies, 100 g of the polyherbal formulation was taken in 800 ml of water and a decoction was prepared as per the Gunapadam text literature. The decoction was filtered through muslin cloth to obtain 200 ml of aqueous extract, concentrated under vacuum using a rotary evaporator, and lyophilized to remove the water content. An aqueous solution of 83.88 mg/ml was then prepared in deionized water, vortexed, and kept for 30 min at 40 °C. Subsequently, the solution was centrifuged at 3000 rpm for 10 min at RT, and the supernatant was collected, filtered, and used for the experiments.
The raw drugs used in the preparation of MAM were procured from the local market of Chennai, Tamil Nadu, India. They were properly processed after authentication by the Department of Pharmacognosy, Siddha Central Research Institute (SCRI), Arumbakkam, Chennai, Tamil Nadu, India (Certificate of Authentication No. 214.06021001, dated January 06, 2021). A voucher specimen was deposited in the museum of the Department of Pharmacognosy, SCRI, Chennai. Organoleptic parameters such as color, odor, taste, shape, and size were analysed and recorded. Powder microscopy of the shade-dried powder was carried out using a Nikon ECLIPSE E200 trinocular microscope attached to a Zeiss ERc5s digital camera under bright-field light. Physico-chemical characteristics of MAM were analysed by quantitative analysis of total ash, water-soluble ash, acid-insoluble ash, water-soluble extractives, alcohol-soluble extractives, loss on drying, and pH (10% aqueous solution) as per standard techniques. Preliminary phytochemical screening of MAM Granules was carried out to identify phytochemical constituents by HPTLC methods. Inductively coupled plasma optical emission spectroscopy (ICP-OES) is one of the most common techniques for elemental analysis and is useful for standardization as well as for developing an analytical profile.

An acute oral toxicity study of MAM granules was performed on Wistar rats (Rattus norvegicus) using the classical acute toxicity protocol. The animals were procured from Gentox Bio Services Pvt. Ltd., Hyderabad (CPCSEA Registration No.: 1242/PO/RcBiBt/S/08/CPCSEA). The animals were acclimatized for a period of 9, 11, 13, and 15 days for Set I, Set II, Set III, and Set IV, respectively, in the experimental animal room before the start of treatment. The animals were observed once daily for any abnormalities. All rats were maintained on a 12-h light/dark cycle at a room temperature of approximately 25 ± 5 °C with constant humidity. Four groups of female rats (nulliparous and non-pregnant), aged 11–13 weeks and weighing 160.91–210.35 g at the time of dosing, were used, with 3 animals per group (Supplementary method 1 and Supplementary Table 1). The study was approved by the institutional IAEC and was conducted based on the requirements of the OECD Guideline for Testing of Chemicals, Test No. 423, "Acute Oral Toxicity – Acute Toxic Class Method", adopted on December 17, 2001. As there was no toxicological information available about MAM Chooranam, a starting dose of 300 mg/kg body weight was selected for this study.
1. A dose of 300 mg/kg body weight was administered orally to Set I, and the animals were observed at 30 min, 1 hour, 2 hours, and 4 hours post-treatment; treatment-related clinical signs, symptoms, and mortality were observed in Set I until the scheduled treatment termination on day 14.
2. A dose of 300 mg/kg body weight was administered orally to Set II, and the animals were observed at 30 min, 1 hour, 2 hours, and 4 hours post-treatment; treatment-related clinical signs, symptoms, and mortality were observed in Set II until the scheduled treatment termination on day 14.
3. A dose of 2000 mg/kg body weight was administered orally to Set III, and the animals were observed at 30 min, 1 hour, 2 hours, and 4 hours post-treatment; treatment-related clinical signs, symptoms, and mortality were observed in Set III until the scheduled treatment termination on day 14.
4.
A dose of 2000 mg/kg body weight was administered orally to Set IV, and the animals were observed at 30 min, 1 hour, 2 hours, and 4 hours post-treatment; treatment-related clinical signs, symptoms, and mortality were observed in Set IV until the scheduled treatment termination on day 14.

Vero E6 (African green monkey kidney epithelial) cells, Vero cells, and RAW 264.7 cells (an immortalised murine macrophage cell line) were maintained in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% inactivated fetal bovine serum (FBS) at 37 °C and 5% CO2 in an incubator. The concentration of FBS was reduced to 2% for the cytotoxicity and antiviral assays. RAW 264.7 cells were used for assessing antioxidant and anti-inflammatory potential, whereas Vero E6 cells were used for the cell viability assay and SARS-CoV-2 antiviral assays, and Vero cells for the CHIKV antiviral assay. For the antiviral assays, the Washington strain of SARS-CoV-2 was used at an MOI of 0.1 to infect Vero E6 cells. Viral amplification was performed in Vero E6 cells at 37 °C until the appearance of full cytopathic effects. The amplified virus was quantified using plaque assay. The ECSA strain CHIKV isolate (27) was used for the present study; it was propagated in C6/36 cells and used at 50 pfu/well of a 96-well plate in Vero cells. The MTT assay was used to assess the cytotoxicity of the MAM extract in Vero E6 and RAW 264.7 cells, and of its individual ingredients in RAW 264.7 cells. Cells were seeded at a density of 10,000 cells per well in 96-well plates and incubated overnight. The next day, cells were rinsed with phosphate-buffered saline (PBS) and then treated with the maximum non-toxic dose (MNTD) of either the MAM extract or an individual ingredient, with subsequent 1:2 dilutions per well in 2% MEM, followed by incubation at 37 °C in a 5% CO2 incubator for 48 hours. After the incubation, the medium was removed, the cells were washed, and 100 μl of freshly prepared MTT solution (0.5 mg/ml) was added to each well; the plates were incubated for 4 hours at 37 °C in a 5% CO2 incubator, followed by removal of the supernatant and addition of DMSO (100 μl/well). The plates were then incubated at RT for 30 min to allow dissolution of the formazan complexes, the absorbance was recorded at 570 nm, and cell survival was plotted using GraphPad Prism 6. To assess the antiviral activity of the MAM extract, the following types of antiviral assays were performed using previously published protocols, with some modifications depending on the virus tested. The CHIKV antiviral assay was performed as per the published protocol using Vero cells. The following modifications were made to the antiviral assay protocol for SARS-CoV-2: Vero E6 cells were seeded at 10,000 cells per well in 96-well plates and incubated overnight. The next day, for the pre-incubation assay, the MAM extract was serially diluted (1:2), mixed with SARS-CoV-2, incubated for 2 hours, and then added to the seeded Vero E6 cells; after a further 2-hour incubation, the medium was replaced and the cells were incubated for 48 hours. For the co-incubation assay, Vero E6 cells were first incubated with SARS-CoV-2 for 2 hours, washed to remove unbound virus, and incubated with serially diluted MAM extract in 10% DMEM for 48 hours. After 48 hours of incubation, supernatants were collected for both assays and subjected to viral quantification using plaque assay. A monolayer of Vero E6 (for SARS-CoV-2) or Vero cells (for CHIKV) was formed by seeding 20,000 cells per well in 96-well plates and incubating overnight.
The next day, the medium was aspirated and the cells were washed with PBS. For SARS-CoV-2, 4 μl of the collected sample was mixed with 96 μl of medium (total volume 100 μl) and added to the first column, which already contained 100 μl of medium, making the virus dilution 1:50, followed by 1:2 serial dilutions. The last column was used as a negative control. The cells were then incubated for 2 hours for viral adsorption, followed by removal of the medium and addition of 2% CMC overlay medium. In the case of CHIKV, the antiviral assays were first performed as described above, immediately followed by addition of 2% CMC overlay medium. The plates were then incubated for either 96 hours (SARS-CoV-2) or 72 hours (CHIKV). After the incubation, the CMC overlay was removed and the cells were fixed with 10% formaldehyde for 2–3 hours. After fixation, the plates were rinsed with PBS, and 0.25% crystal violet prepared in 30% methanol was added and incubated for 30 min at room temperature. The plates were then washed and plaques were counted to calculate the viral titre.

The radical scavenging capacity of the extracts was evaluated by the in vitro ABTS assay and by total cellular SOD activity in cultured RAW264.7 cells. The ABTS•+ (2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)) radical scavenging assay was performed in 96-well microplates according to the manufacturer's instructions. For SOD activity, cells were washed with ice-cold PBS and lysed with lysis buffer, and the supernatants were assayed using a Sigma SOD assay kit according to the manufacturer's instructions. SOD activity was measured at 450 nm using a microplate reader (Molecular Devices SpectraMax M3, San Jose, CA, USA). Intracellular ROS levels were quantified using 2′,7′-dichlorofluorescein diacetate (DCFH-DA) dye. RAW 264.7 cells were plated at a density of 0.5 × 10⁵ cells/mL in 96-well plates, pre-treated with the extracts for 4 hours, and stimulated with 2 μg/ml of lipopolysaccharide (LPS) for 24 hours. After incubation, the cells were washed with PBS, and 10 μM DCFH-DA dye was added for another 30 min at 37 °C in the dark. Fluorescence intensity was then measured at 485 nm excitation and 535 nm emission; the intensity relative to the LPS group is reported here. The inhibitory effects of the extracts on inflammatory mediator production were assessed via nitrite release and PGE2 production. RAW 264.7 cells were seeded in 96-well plates at a density of 0.5 × 10⁵ cells per well. Cells were pre-treated with various concentrations of the extracts for 4 hours, followed by LPS stimulation for 24 h at 37 °C in an incubator with 5% CO2. After incubation, the cell supernatants were collected and the nitrite level was analysed with the Griess reagent according to the manufacturer's instructions (Himedia, CCK061). Similarly, PGE2 levels were assessed using a PGE2 ELISA kit (R&D Systems, KGE004B) in accordance with the manufacturer's instructions. The standard curve was generated from the solutions provided in the kit, and sample concentrations were calculated from the equation obtained from the standard curve. Pro-inflammatory cytokines present in the supernatant collected from treated cells were evaluated by ELISA (R&D Systems, KGE004B), following the protocol provided in the kit manual; cytokine levels in the cell culture supernatant were quantified from the equation obtained from the standard curve.

Data are presented as mean ± standard deviation (SD). The statistical significance of differences between the sample and LPS groups was analysed by one-way ANOVA, with differences considered significant at p < 0.05.
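As a companion to the plaque assay described above, the short sketch below illustrates the standard back-calculation of a viral titre from plaque counts. The plaque count, dilution scheme, and inoculum volume are hypothetical placeholders, not values from this study.

```python
# Minimal sketch of back-calculating a viral titre (pfu/ml) from a plaque
# assay, assuming hypothetical numbers: titre = plaques / (dilution * volume).
def titre_pfu_per_ml(plaque_count: int, dilution: float, inoculum_ml: float) -> float:
    """Return the titre of the undiluted sample in pfu/ml."""
    return plaque_count / (dilution * inoculum_ml)

# Example: 23 plaques counted in a well inoculated with 0.1 ml of sample
# diluted 1:50 and then through three 1:2 steps (overall 1/400), mirroring
# the dilution scheme described above.
dilution = (1 / 50) * (1 / 2) ** 3
print(f"{titre_pfu_per_ml(23, dilution, 0.1):.2e} pfu/ml")  # ~9.20e+04
```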
The pharmacognostic study confirmed the authenticity of the herbal components used for the MAM Granule preparation. The characteristic features observed in the powder microscopy studies confirm the presence of the authenticated components: parenchyma cells, oleoresin cells, spiral vessels, and oil cells confirm the presence of Curcuma longa in the formulation; perisperm cells, beaker-shaped stone cells, isodiametric stone cells, and starch grains confirm the presence of Piper nigrum; and parenchyma cells, bordered pitted vessels, tracheids, and starch grains confirm the presence of Withania somnifera. The TLC photodocumentation is presented in Fig. 2.

Fig. 1. Microscopy analysis of MAM powder. (A–D) Parenchyma cells, oleoresin cells, spiral vessels, and oil cells indicating Curcuma longa. (E–H) Perisperm cells, beaker-shaped stone cells, isodiametric stone cells, and starch grains indicating Piper nigrum. (I–L) Parenchyma cells, bordered pitted vessels, tracheids, and starch grains indicating Withania somnifera.

Fig. 2. TLC photodocumentation of MAM granules. TLC photos of MAM granules at UV 254 nm, 366 nm, and 520 nm after derivatization with vanillin–sulphuric acid, showing the presence of Curcuma longa, Piper nigrum, and Withania somnifera, respectively, in the formulation.

The physico-chemical parameters showed a loss on drying at 105 °C of 13.363 ± 0.62%, which may be due to the moisture as well as the volatile oil content of the ingredients of the MAM granules. The total ash of 3.636 ± 0.020% showed that the drug contains little inorganic content, of which 2.380 ± 0.042% was water-soluble ash, with no foreign matter in the form of siliceous matter. The water-soluble extractive (14.27 ± 0.102%) was higher than the alcohol-soluble extractive (6.49 ± 0.13%), indicating the presence of highly polar phytocompounds. The drug was slightly acidic, with a pH value of 6.0. The colour and Rf values of the spots visualized under UV and after derivatization with vanillin–sulphuric acid are presented in Table 1.

Table 1. Rf values and colours of spots (the three column pairs correspond to the three detection conditions of Fig. 2: UV 254 nm, UV 366 nm, and 520 nm after derivatization).

UV 254 nm        UV 366 nm          520 nm (derivatized)
0.23  Green      0.06  Blue         0.16  Pale pink
0.28  Green      0.16  Yellow       0.41  Pink
0.39  Green      0.40  Yellow       0.46  Pink
0.44  Green      0.45  Blue         0.51  Orange
0.52  Green      0.52  Blue         0.58  Orange
0.58  Green      0.66  Green        0.66  Brown
0.66  Green      0.68  Dark blue    0.70  Yellow
0.73  Green      0.76  Yellow       0.77  Violet
0.80  Green      0.83  Yellow       0.87  Violet
0.85  Green      0.94  Violet       —
0.92  Green      —                  —

ICP-OES analysis of MAM reveals a great deal about its elemental composition: heavy metals such as arsenic, lead, mercury, and cadmium were either not detected or were within permissible limits. Phytochemical screening of MAM Granules for the identification of phytochemical constituents by HPTLC methods showed various peaks and Rf values, as shown in Fig. 3.

Fig. 3. HPTLC fingerprint profiling.

MAM granules were evaluated for toxicity in Wistar rats, and no treatment-related clinical signs or symptoms were observed in any of the animals until the scheduled termination on day 14. No mortality was observed at the dose levels of 300 mg/kg b.wt. and 2000 mg/kg b.wt. during the experimental period. All the animals exhibited a progressive increase in body weight throughout the experimental period. Further, no external or internal abnormalities were observed during gross pathological evaluation at the end of day 14.
At the dose level of 2000 mg/kg body weight, all the animals of Set III (animal nos. 7, 8, and 9) and Set IV (animal nos. 10, 11, and 12) were terminally sacrificed on day 14. No external or internal abnormalities were observed (Supplementary Tables 2–4). Based on the above findings, it was concluded that the LD50 cut-off value of MAM granules in the acute oral toxicity study in Wistar rats was greater than 2000 mg/kg b.wt. As per the Globally Harmonized System for the classification of chemicals (GHS), MAM Granules in Wistar rats are classified in Category 5 or Unclassified, for which the Acute Toxicity Estimate (ATE) value is 5000 mg/kg b.wt. As a first step, we analysed the cytotoxicity of the aqueous extract of MAM granules using an in vitro MTT assay. MAM showed 50% cell cytotoxicity (CC50) at a concentration of approximately 22 mg/ml at 48 hours of exposure. The maximum non-toxic dose (MNTD) was approximately 0.65 mg/ml. We then assessed its capability to restrict virus infection. For this purpose, we chose two viruses, namely SARS-CoV-2 and CHIKV, and systematically evaluated the ability of the MAM nutraceutical extract to restrict virus growth. These assays were performed using the maximum non-toxic dose (MNTD), and the IC50 was calculated using the MNTD as the starting concentration. We performed a pre-incubation assay (SARS-CoV-2) and a pre-treatment assay (CHIKV) to investigate the protective effect against viral infection. Further, we employed a co-treatment assay (CHIKV) and a co-incubation assay (SARS-CoV-2) to study the possible inhibition of viral replication.

Fig. 4. Antiviral activity of MAM aqueous extract against SARS-CoV-2 and CHIKV. (A) Cytotoxicity of MAM aqueous extract after 48 hours of incubation, measured by MTT assay in Vero-E6 cells. Control represents untreated Vero-E6 cells. Bars represent mean ± SD. (B, C) Antiviral activity of MAM aqueous extract in the co-incubation and pre-incubation assays, respectively, with SARS-CoV-2 in Vero-E6 cells. VO represents untreated SARS-CoV-2-infected Vero-E6 cells and served as a positive control for comparing viral inhibition upon MAM treatment. Bars represent mean ± SD. (D–F) Antiviral activity against CHIKV in the co-incubation, pre-treatment, and post-treatment assays, respectively, in Vero cells. Control represents untreated CHIKV-infected Vero cells. Bars represent mean ± SD.

Our investigation of the antiviral activity of MAM showed an inhibition of viral infectivity of approximately 60% in the co-incubation assay for both viruses. We further determined the IC50 values: approximately 50 μg/ml for SARS-CoV-2. For CHIKV, we observed a similar reduction in viral infectivity, with an IC50 of approximately 20.34 μg/ml. Surprisingly, we did not observe any significant reduction in viral infectivity when the MAM aqueous extract was pre-incubated with SARS-CoV-2 before infection of Vero-E6 cells. Likewise, when we performed a similar experiment as well as a post-treatment assay with CHIKV, we again observed no significant reduction in viral infectivity. In order to investigate the immune-modulating properties of the MAM aqueous extract, we evaluated its antioxidant and anti-inflammatory activity in RAW264.7 cells. As a first step, we assessed the cytotoxicity of the MAM extract in RAW264.7 cells and observed that the extract did not display significant cytotoxicity even at concentrations as high as 4 mg/ml.
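To make the dose-response readouts above concrete, the sketch below shows one common way to estimate a 50% endpoint (CC50 or IC50) by log-linear interpolation between the two doses flanking 50%. The concentration series and response values are hypothetical placeholders, not data from this study; they merely illustrate how a CC50 near 22 mg/ml could be read out of a 1:2 dilution series.

```python
# Minimal sketch of estimating a 50% endpoint (CC50/IC50) from a
# dose-response series by log-linear interpolation. All numbers are
# hypothetical placeholders, not data from this study.
import numpy as np

def fifty_percent_endpoint(conc, response):
    """Interpolate the concentration giving a 50% response.

    conc: concentrations in ascending order (e.g., mg/ml).
    response: % viability (for CC50) or % infectivity (for IC50),
              assumed to decrease as concentration increases.
    """
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    for i in range(len(conc) - 1):
        hi, lo = response[i], response[i + 1]
        if hi >= 50 >= lo:  # the 50% crossing lies between doses i and i+1
            # interpolate on log10(concentration), linear in response
            frac = (hi - 50) / (hi - lo)
            log_c = np.log10(conc[i]) + frac * (np.log10(conc[i + 1]) - np.log10(conc[i]))
            return 10 ** log_c
    raise ValueError("response never crosses 50% in this range")

# Hypothetical 1:2 dilution series starting from an MNTD-like top dose.
conc = [0.65, 1.3, 2.6, 5.2, 10.4, 20.8, 41.6]   # mg/ml
viability = [98, 95, 90, 81, 66, 52, 31]          # % of untreated control
print(f"CC50 ~ {fifty_percent_endpoint(conc, viability):.1f} mg/ml")  # ~22 mg/ml
```

A full analysis would typically fit a four-parameter logistic curve (as GraphPad Prism does) rather than interpolate, but the interpolation above conveys the same idea in a few lines.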
Based on these results, three concentrations, namely 0.04, 0.4, and 4.0 mg/ml, were used for all subsequent assays.

Fig. 5. Investigation of the cytotoxicity, antioxidant, and anti-inflammatory properties of the MAM aqueous extract. (A) Percentage survival of RAW 264.7 cells upon MAM treatment, measured by MTT assay. Control represents untreated RAW 264.7 cells; LPS represents RAW 264.7 cells treated with LPS. RAW 264.7 cells treated with 0.04, 0.4, and 4 mg/ml of MAM and 2 μg/ml of LPS showed no significant cytotoxicity. (B) Free radical scavenging capacity of MAM assessed via ABTS assay in RAW 264.7 cells. (C) Intracellular ROS levels induced by LPS and upon MAM treatment in RAW 264.7 cells. (D) Dose-dependent increase in intracellular SOD activity after MAM treatment in RAW 264.7 cells. Untreated cells served as the negative control and LPS-treated cells as the positive control. (E, F) Effect of MAM treatment on the production of the inflammatory mediators NO (measured as nitrite) and PGE2, respectively, in RAW 264.7 cells. Untreated cells served as the negative control and LPS-treated cells as the positive control. (G, H) Capability of the MAM aqueous extract to counter the release of the pro-inflammatory cytokines TNF-alpha and IL-1 in RAW 264.7 cells. LPS-treated and untreated cells served as the positive and negative controls, respectively. Bars represent mean ± SD. One-way ANOVA was used to calculate p-values. *P < 0.05, **P < 0.001 vs the LPS-treated group.

With respect to scavenging activity, the MAM extract demonstrated a dose-dependent decrease in the production of the radical cation (ABTS•+), indicating significant radical scavenging capacity: approximately 19%, 28%, and 35% activity at 0.04, 0.4, and 4 mg/ml, respectively. Afterwards, the intracellular level of ROS generated after stimulation of RAW cells with lipopolysaccharide (LPS), which is well known to induce an inflammatory response, was measured; cells treated with LPS alone were used as the positive control. We observed a similar dose-dependent reduction in the generation of intracellular ROS in cells treated with the MAM extract (16%, 20%, and 40% reduction at 0.04, 0.4, and 4 mg/ml, respectively), suggesting potent antioxidant activity. Since SOD has been reported to be the main enzyme participating in the first-line antioxidant defence against ROS, we further assessed intracellular SOD activity during MAM treatment. Monolayers of RAW264.7 cells were treated with the three concentrations of MAM extract; untreated cells were used as the negative control, LPS treatment was used to stimulate the generation of free radicals in these cells, and cells treated with LPS alone served as the positive control. When we measured SOD activity in MAM-treated cells after LPS stimulation, the MAM-treated RAW264.7 cells showed a dose-dependent increase in SOD activity (70%, 75%, and 82% at 0.04, 0.4, and 4 mg/ml, respectively) compared with the LPS-treated positive control. Taken together, the observed reduction in intracellular ROS and the significant increase in the potent intracellular antioxidant SOD after in vitro treatment with the MAM extract clearly suggest that the extract provides potent antioxidant activity. After confirming the potent antioxidant activity of the MAM extract, we subsequently investigated its anti-inflammatory properties using a similar approach.
RAW 264.7 cells were either treated with LPS followed by MAM extract in a dose-dependent manner or left untreated (negative control). Cells treated with LPS only were used as the positive control. Nitric oxide (NO) and PGE2, two important signalling molecules mediating the inflammatory response, are reported to increase many-fold during viral infection and oxidative stress; we therefore assessed NO release and PGE2 production after MAM treatment and LPS stimulation. In our experiments, we indeed observed a significant, dose-dependent reduction in the production of these molecules in cells treated with MAM extract. Further, to investigate the possible mechanism of modulation of the inflammatory response mediated by NO and PGE2, we analysed the effect of MAM extract on the production of two key pro-inflammatory cytokines, i.e., IL-1β and TNF-α. LPS-stimulated cells showed significantly increased production of these cytokines compared with their respective untreated controls. However, treatment with MAM extract suppressed the production of these cytokines in a dose-dependent manner. At the highest extract concentration (4 mg/ml), the inhibition of pro-inflammatory cytokine activity, including the reduction of IL-1β, was comparable to that of the dexamethasone group. Collectively, our results demonstrate that MAM extract provides potent antioxidant and anti-inflammatory capability in RAW 264.7 cells. The potent antioxidant and anti-inflammatory properties displayed by the MAM extract prompted us to examine the aqueous extracts of its ingredients, namely Curcuma longa , Withania somnifera and Piper nigrum , in a similar manner to deduce the key ingredient contributing to these properties in the nutraceutical formulation. During our investigation of the antioxidant potential of each MAM ingredient using the above experimental setup, we observed that all three ingredients were capable of scavenging the generated free radicals and intracellular ROS in a dose-dependent manner, as shown by the ABTS scavenging assay and ROS assay. All three ingredients also demonstrated increased intracellular SOD activity. This suggested that the observed potent antioxidant activity of the MAM extract was due to the synergistic effect of these three ingredients. Among the three ingredients, Withania somnifera demonstrated the highest antioxidant activity in all these assays. Moreover, in our assessment of NO release for all three ingredients, Withania somnifera showed the highest inhibition of NO release, indicating that the major immune-modulating property of the MAM extract is attributable to Withania somnifera . The investigation of the cytotoxicity of these ingredients showed no significant cytotoxicity, with the exception of Withania somnifera , which showed ∼40% cytotoxicity at a concentration of 4 mg/ml. Fig. 6 Investigation of anti-oxidant activity and cytotoxicity of MAM ingredients in RAW 264.7 cells. (A&B) Demonstrate the free radical and intracellular ROS scavenging capacity of each ingredient of MAM in a dose-dependent manner, assessed via ABTS and ROS assays, respectively. Cl indicates Curcuma longa , Pn indicates Piper nigrum and Ws indicates Withania somnifera . (C) Shows the increase in intracellular SOD activity after MAM treatment in RAW 264.7 cells. Untreated cells served as the negative control and LPS-treated cells as the positive control.
(D–G) Show the cytotoxicity of the three ingredients, Curcuma longa (Cl), Piper nigrum (Pn) and Withania somnifera (Ws), respectively. The bars represent mean ± SD. Fig. 6 In the present study, we explored the Siddha medicinal system for developing a nutraceutical-based therapeutic for COVID-19 and other viral infections. Several medicinal plants have been studied in this respect, and key phytochemicals have been reported as active ingredients. In this study, we investigated one such Siddha nutraceutical, viz. MAM granules, which was developed as per the basic principles of Siddha Gunapadam. The ingredients were selected after studying the Siddha literature, and a detailed analysis of the nutrient values of each ingredient revealed that they are rich in vitamins A, C and E, zinc and manganese. Additionally, all these ingredients have been reported to have potent antioxidant and anti-inflammatory properties when taken as dietary supplements [ , , ]. For instance, Piper longum L. (Piperaceae), a fruit commonly known as the Indian spice kali mirch, has been reported to show antiviral activity against Coxsackievirus type 3 (CVB3) owing to the presence of α-pinene, β-pinene, limonene, myrcene, sabinene, camphene, α-thujone, piperitone, caryophyllene, p-cymene, α-terpinene, and piperamide . Similarly, Curcuma longa L. (Zingiberaceae) is enriched with curcumenone, bisacumol, bisacurone, curcumenol, curcumadiol, and demethoxycurcumin. Curcumin inhibits SARS-CoV-2 replication in human cells, as previously reported for HIV, herpes simplex virus (HSV) , chikungunya virus and Zika virus . Likewise, Ashwagandha contains sterols, alkaloids, saponins, amino acids and polysaccharides. Alkaloids such as ashwagandhine, cuscohygrine, tropine, isopelletierine and anaferine have been isolated from Ashwagandha, along with multiple sterols, including withaferins, withasomidienone, withasomniferin A, withanolides, withanone and sitoindosides (VII, VIII, IX and X). Withaferin A, a constituent of Ashwagandha, has a high binding affinity towards neuraminidase and potently inhibits the neuraminidase of H1N1 influenza virus . Various natural products and phytochemicals have been studied for their antiviral activity against SARS-CoV-2 [ , , , , , ] and CHIKV [ , , , ]. We have also studied Siddha medicines and reported them to act as effective antivirals against RNA viruses, in addition to possessing immune-regulatory properties . Therefore, we likewise assessed the antiviral activity of the MAM aqueous extract against two RNA viruses, i.e., SARS-CoV-2 and CHIKV. Similarly, many phytochemicals have been reported to inhibit viral replication ; therefore, we investigated the ability of MAM to inhibit viral replication. We found a significant reduction in the replication of both RNA viruses, with IC50 values of ∼50 μg/ml for SARS-CoV-2 and ∼20.34 μg/ml for CHIKV. This inhibition of viral replication indicates a possible interaction between MAM's phytochemicals and viral core proteins, resulting in direct inhibition of the function of the SARS-CoV-2 and CHIKV core proteins. When a virus infects a cell, it initiates a cascade of cellular events that eventually leads to propagation of the virus and death of the infected cell. The infected cell embarks on a series of cellular responses, first among them being oxidative stress. Increased production of reactive oxygen species and free radicals is a hallmark of oxidative stress .
These free radicals tend to damage cellular components and are responsible for tissue injury . Owing to the compromised state of the cellular homeostasis machinery during viral infection, cells are unable to scavenge these increased ROS and free radicals. One way to counter this oxidative damage is to provide molecules such as vitamins A, C and E, manganese, zinc, polyphenols and carotenoids, as well as antioxidant enzymes (e.g., SOD), that are capable of scavenging the generated free radicals and reactive oxygen species, thereby preventing cellular and tissue damage. These antioxidants interact directly with free radicals, accepting or donating an electron/hydrogen atom to neutralize the unpaired state or destroy them, converting them into less reactive, longer-lived and less dangerous molecules than the original free radicals . This antioxidant activity can be supplemented in two ways, i.e., extracellularly (dietary antioxidants) and intracellularly (antioxidant enzymes). During our exploration of the antioxidant efficacy of MAM, we observed that the aqueous extract of MAM exhibited a significant ability to scavenge free radicals in a commonly used radical cation-based antioxidant assay. The assay follows the principle of generation of cation free radicals (ABTS•+) via oxidation by metmyoglobin and H2O2; antioxidants act as reductants of the ABTS•+ radical, reducing the number of generated cation radicals . This observed scavenging potential suggests that MAM could supply dietary antioxidants and thus significantly counter the sudden rise in the generation of ROS and free radicals upon viral contact. As free radicals and ROS rise in virus-infected cells, the cellular homeostasis machinery also employs intracellular antioxidants, in addition to dietary antioxidants, to reduce the generation of these reactive species and prevent cellular damage. Among them, SOD acts as the first line of defence against these reactive species. The observed reduction in intracellular ROS generation and increase in intracellular SOD activity in cells treated with MAM extract clearly point to SOD as the main intracellular defence mechanism against ROS and free radicals, making MAM a suitable nutraceutical for managing oxidative stress and tissue damage during viral infection. We further assessed the contribution of the three ingredients to the antioxidant activity of the MAM extract. In our assays, all three ingredients demonstrated radical scavenging capacity, indicating that the observed activity of the MAM extract arises synergistically. However, treatment with Withania somnifera produced a more potent decrease in intracellular ROS production and a greater increase in SOD activity than the other two ingredients. This indicates that the other two ingredients combine their activity synergistically with Withania somnifera, giving the MAM extract its potent antioxidant potential. Diseases mediated by viral infection often manifest as inflammation in target tissues, leading to the development of disease symptoms and, in some cases, septic shock. For instance, SARS-CoV-2-mediated COVID-19 initiates an inflammatory response in the lungs in response to viral infection; however, dysregulation of this response and the subsequent cytokine production leads to the development of ARDS . Management of this dysregulated inflammatory response has been shown to be an effective strategy against COVID-19, preventing significant morbidity and mortality.
Similarly, CHIKV fever is characterized by severe joint inflammation and febrile illness, and reduction of inflammation provides significant relief and effective management of disease symptoms . These inflammatory responses can be activated by the excess production of free radicals and ROS during oxidative stress . Anti-inflammatory molecules suppress the uncontrolled inflammatory response and prevent such damage. In view of these aspects, we evaluated the anti-inflammatory properties of MAM extract by assessing the production of two major inflammatory regulators, i.e., NO and PGE2, in addition to measuring the production of two prominent pro-inflammatory cytokines, TNF-α and IL-1β, since these molecules serve as mediators of the pro-inflammatory response against viral infections. We hypothesized that treatment of cells with MAM extract would reduce the production of these molecules in cells stimulated with LPS. Our data reveal that treatment with MAM extract effectively reduces the production of NO and PGE2. The decrease in inflammatory mediators was more prominent for PGE2 production than for NO release . Excessive NO production under LPS stimulation can be achieved via activation of the promoter of the NOS2 gene by a variety of transcription factors . The observed reduction in excessive NO production by MAM treatment might be accomplished by one or a combination of these activation mechanisms, which requires further detailed investigation. Similarly, PGE2 is synthesized primarily by cyclooxygenase-2 (COX-2) and can be induced by LPS treatment . The more prominent reduction in PGE2 production upon MAM treatment could be attributed to the difference in how these inflammatory mediators are produced: NO production proceeds through several pathways, so inhibition of only one or two pathways by MAM may not yield potent inhibition of NO production, whereas PGE2 is produced by COX-2 alone, so its inhibition by MAM results in a drastic decrease in PGE2 production. Moreover, the potent reduction in pro-inflammatory cytokine production by MAM extract clearly suggests that MAM extract could be used effectively as an anti-inflammatory agent in viral infections. 6 Conclusion: With these data, we propose that MAM could be safely used as an antioxidant, anti-inflammatory and antiviral agent for the management of diseases mediated by RNA viruses, not limited to SARS-CoV-2 and CHIKV. Moreover, the animal toxicity results support the safety and therapeutic potential of the designed formulation and its use as an alternative therapeutic supplement for COVID-19. Nevertheless, further studies are needed to verify the underlying mechanism of action of MAM. The present study, along with further preclinical studies and clinical trials, can serve as a proof of concept for MAM as a beneficial therapeutic supplement to combat viral infections. Central Council for Research in Siddha, Ministry of AYUSH, Govt. of India. This study was approved by the Institutional Ethical Committee of Dabur Research Foundation, Ghaziabad, Uttar Pradesh (IAEC Approval No. IAEC/611/185). MR, VN – conceived the ideology of the granules and were involved in designing the protocols. DKV, MR, AH, GS, and SD – involved in the conception and design of experiments, acquisition of data, or analysis and interpretation of data. TKD – involved in computational analysis and contributed to drafting the manuscript. SS supervised the overall experiments.
DKV drafted the manuscript with inputs from MR, SD, SP and TKD; SS finalized the manuscript. SP, KK, SS – involved in the final approval of the version to be published. None. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
PMC11693496
Membrane proteins play essential roles in various cellular processes, including ligand-receptor recognition and activation, intercellular communication, and ion transport. These proteins often undergo subtle conformational changes to perform their functions. Therefore, it is crucial to dissect the nanoscale architecture and dynamics of membrane proteins when they integrate into the membrane. Over the years, many structural techniques, for instance X-ray crystallography and cryo-EM, have offered valuable insights into the structures of membrane-interacting macromolecules. However, capturing the real-time conformational changes of membrane proteins remains a significant challenge. Herein, we introduce two single-molecule methods for directly observing the nanoscale dynamics of membrane proteins interacting with bio-membranes: surface-induced fluorescence attenuation (SIFA) and FRET with quenchers in liposomes (LipoFRET). SIFA is based on fluorescence energy transfer between a fluorophore and monolayer graphene oxide (GO), making it a sensitive point-to-plane distance indicator. For two single-point coupled dipoles, as in FRET (Förster resonance energy transfer), the emission rate follows a d–6 dependence. Replacing one dipole with a line of dipoles results in a d–5 scaling upon integration. Similarly, for a two-dimensional array of dipoles on a graphene oxide sheet, a d–4 scaling is obtained. Therefore, the distance d between a fluorophore and monolayer GO can be calculated according to the d–4 scaling formula: $d = d_0 \left( \frac{F/F_0}{1 - F/F_0} \right)^{1/4}$, where F and F0 are the fluorescence intensities of the fluorophore in the presence and absence of GO, respectively, and d0 is the characteristic distance at which the energy transfer efficiency reaches 0.5. SIFA is therefore a powerful tool for measuring the orientation and insertion depth of membrane proteins in supported lipid bilayers (SLBs) produced by direct vesicle fusion on top of a GO layer modified with PEG. LipoFRET was developed based on liposomes, a bio-mimetic system with surface curvature and unrestricted membrane fluidity. It is based on the principle of FRET from one donor to multiple acceptors (quenchers) encapsulated in unilamellar liposomes. Unlike SIFA, LipoFRET does not have a closed-form formula relating intensity to distance. This is because the distance between quenchers is comparable to d0 in LipoFRET experiments, so the quencher solution cannot be treated as a continuous medium. Instead, Monte Carlo simulations can be utilized to calculate the relative intensity–distance relationships. Fluorophores attached to membrane proteins at different penetration depths (or distances) in the liposome lipid bilayer show different intensities based on their energy transfer efficiency. SIFA and LipoFRET can be applied, progressively, to investigate membrane proteins in solid-supported lipid bilayer systems and liposome systems. Both techniques utilize the energy transfer between fluorophores and quenchers and are able to track the axial movement of a single fluorophore-labeled protein in the membrane. • E.
coli BL21 strain • 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-(1'-rac-glycerol) (POPG; Avanti Polar Lipids) • 1-palmitoyl-2-oleoyl-glycero-3-phosphocholine (POPC; Avanti Polar Lipids) • 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (POPE; Avanti Polar Lipids) • 1,2-dioleoyl-sn-glycero-3-phospho-(1'-myo-inositol-4',5'-bisphosphate) (PI(4,5)P2; Avanti Polar Lipids) • 1,2-dioleoyl-sn-glycero-3-phospho-L-serine (DOPS; Avanti Polar Lipids) • Cardiolipin (CL; Avanti Polar Lipids) • 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC; Avanti Polar Lipids) • 1,2-dioleoyl-sn-glycero-3-phosphate (sodium salt) (DOPA; Avanti Polar Lipids) • 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine-N-(cap biotinyl) (Biotinyl Cap PE; Avanti Polar Lipids) • Trypan Blue (Merck) or Blue dextran, MW 10000 (Merck) • Chloroform (Fisher Chemical) • Methanol (Fisher Chemical) • Acetic acid (Merck) • 3-(Triethoxysilyl)propylamine (APTES; Merck) • mPEG-SVA, MW 5000 (Laysan Bio) • Biotin-PEG-SVA, MW 5000 (Laysan Bio) • Alexa Fluor 555 C2 maleimide (Thermo Fisher) • PD-10 Desalting Column (Cytiva) • Superdex™ 75 Increase 10/300 GL (Cytiva) • Isopropyl β-D-1-thiogalactopyranoside (IPTG; Sigma Aldrich) • MATLAB R2021b (MathWorks) • Origin 2021 (OriginLab) • ImageJ 1.52v (National Institutes of Health, USA) • Ultrasonic cleaner • Total internal reflection fluorescence microscope (TIRFM; Nikon Ti2) • EMCCD IX897 (Andor) • Oil immersion objective (100x, N.A. 1.49; Nikon) • QBIS LS 532 nm (CW Solid State Lasers; Coherent) • Filter setup • Langmuir-Blodgett trough (KSV NIMA) • Extruder kits (Avanti Polar Lipids) (A) Clone the cDNA encoding the N-terminal fragment MLKL1–154 fused with a GST tag into the pGEX-4T-2 vector. [CAUTION!] The Alexa Fluor maleimide dyes couple specifically to the thiol groups of cysteine residues of target proteins. Therefore, the site of the target protein chosen for fluorescent labeling must have a single cysteine residue; to ensure specific labeling, all other cysteine residues should be mutated. In this manuscript, S55C, S92C and S125C of MLKL1–154 are the sites that were specifically labelled. (B) Express the recombinant MLKL in E. coli BL21 cells. Grow the cells in Super Broth with 100 µg/mL ampicillin and shake at 37 °C until the OD600 reaches 0.8. (C) Induce protein expression with 0.5 mmol/L IPTG at 18 °C and shake for a further 16 h. (D) Harvest cells by centrifugation and lyse them by sonication in a purification buffer containing 10% glycerol. [RECIPE] Purification buffer: 25 mmol/L HEPES, 150 mmol/L NaCl, 0.5 mmol/L TCEP, pH 7.4. (E) Purify the proteins using glutathione-sepharose beads and then cleave the GST tag with thrombin. (F) Purify the proteins by size exclusion chromatography with a Superdex-75 10/300 GL column. (A) Mix 10 μmol/L MLKL1–154 with 100 μmol/L Alexa Fluor 555-MAL (Alexa Fluor 555 C2 maleimide) dye at a molar ratio of 1:10 for site-specific fluorescence labeling. Incubate the mixture at 4 °C for 12 h in the dark. [CAUTION!] Sonicate the dissolved dye in an ultrasonic bath for 15 min to completely dissociate dye oligomers before adding it to the target protein; this avoids cross-linking of dyes . (B) Remove the unreacted Alexa Fluor 555-MAL dye on a PD-10 desalting column; flash-freeze the labelled proteins in liquid nitrogen and store them at –80 °C until required. (A) Mix the phospholipids POPC, POPE, DOPS, PI(4,5)P2, CL, and POPG in chloroform at a molar ratio of 35/20/20/10/10/5. [CAUTION!] Chloroform is highly volatile and toxic.
Chloroform can dissolve plastics, so phospholipids should be mixed in a sealed glass bottle. [TIP] Note that the lipid composition depends on the protein under study. (B) Dry off the chloroform with nitrogen and place the lipid mixture under vacuum overnight to remove residual chloroform. (C) Suspend the dried lipid films in imaging buffer to achieve a final lipid concentration of 1–2 mg/mL. Freeze the suspension in liquid nitrogen, then thaw it in a 37 °C water bath; perform ten such freeze–thaw cycles. [RECIPE] Imaging buffer: 25 mmol/L HEPES, 150 mmol/L NaCl, pH 7.4. (D) Force the lipid suspension through a polycarbonate filter with 100 nm pore size 21 times using a mini-extruder kit to form large unilamellar vesicle (LUV) solutions. (A) Produce ultra-large graphene oxide (GO) flakes using the modified Hummer's method and collect them by centrifugation . (B) Clean the coverslips by ultrasonicating them in acetone and methanol for 30 min each, then treat them with piranha solution at 95 °C for 2 h. After each cleaning step, rinse the coverslips with deionized water and dry them with nitrogen gas. [RECIPE] Piranha solution: H2SO4 (98%) and H2O2 (30%) at a 7:3 volume ratio. [CAUTION!] Piranha solution is highly corrosive; slowly add the H2O2 to the concentrated H2SO4 while stirring gently with a glass rod to prevent excessive temperatures and splashing. (C) Use the Langmuir-Blodgett technique to deposit a GO monolayer on the surface of the cleaned coverslips, then heat the coverslips under vacuum at 85 °C for 2 h to remove residual solution. (D) Use double-sided tape to assemble the GO-covered coverslip and a clean slide into a flow chamber; also assemble chambers from coverslips without deposited GO and clean slides for measuring the intrinsic intensity of the fluorophore, F0. (E) Produce a hydrophilic monolayer on the GO-covered coverslips using the cross-linked compound of 1-aminopyrene and hydroxyl-PEG-NHS ester (AP-PEG). Incubate the coverslips with AP-PEG for 30 min, then wash away the unreacted AP-PEG with imaging buffer. (F) Inject the LUV solution slowly into the GO-PEG chamber and incubate at 37 °C for 12 h to form GO-PEG-supported lipid bilayers. Wash away excess vesicles with imaging buffer. Repeat these steps for the GO-free chamber. [CAUTION!] The chamber should be used within two days of bilayer preparation. (A) Mount the GO-PEG chamber onto the stage of the TIRFM. (B) Seek large, non-overlapping monolayer GO flakes in the field of view. [TIP] GO shows obvious autofluorescence, and the autofluorescence intensity can be used to determine whether a flake is monolayer GO. (C) Inject ~1 nmol/L labeled MLKL into the imaging GO-PEG chamber and record the fluorescence signal with an EMCCD camera at a frame interval of 30 ms. Output the data as 16-bit TIFF movies for further analysis. [TIP] Alexa Fluor 555-MAL fluorophores were excited with a 532-nm laser, as were alternative fluorophores like Cy3, ATTO 555, and TAMRA. The 49907 ET filter setup is also suitable for these fluorophores. The frame interval usually varies from 30 to 100 ms as required. (D) Repeat the same imaging steps with the GO-free chamber to measure the intrinsic intensity of the fluorophore. (E) Extract the trajectories of fluorophores using the Particle Tracker plugin in ImageJ and obtain the fluorescence intensity from the TIFF movies based on the trajectory coordinates using a customized MATLAB script.
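As a minimal illustration of this analysis step, the snippet below converts the measured relative intensity F/F0 into a fluorophore–GO distance using the d–4 scaling formula from the introduction. The authors' analysis used a customized MATLAB script; this Python sketch, with placeholder values for F0 and d0, is only meant to show the conversion.

```python
import numpy as np

def sifa_distance(F, F0, d0):
    """Convert SIFA relative intensity F/F0 into fluorophore-GO distance
    using the d^-4 scaling: d = d0 * ((F/F0) / (1 - F/F0))^(1/4)."""
    ratio = np.asarray(F, dtype=float) / F0
    return d0 * (ratio / (1.0 - ratio)) ** 0.25

# Hypothetical trace: intensities from one single-molecule trajectory
F0 = 1200.0          # intrinsic intensity from the GO-free control chamber
d0 = 7.0             # characteristic distance in nm (placeholder value)
trace = [300.0, 450.0, 600.0, 540.0]

depths = sifa_distance(trace, F0, d0)
print(np.round(depths, 2))  # axial positions (nm) above the GO plane
```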
The intrinsic intensity F0 is determined through this control measurement, and the relative intensity ratio F/F0 reports the axial movement of the target protein on the lipid bilayer . [TIP] To use the Particle Tracker plugin, open your movie and select Particle Tracker from Plugins -> Particle Detector & Tracker. Key parameters include Radius (particle size in pixels; 3 in this manuscript), Cutoff (score threshold; default 0), Percentile (intensity cutoff for candidate particles; usually 0.2–2), Displacement (maximum pixel movement; default 2) and Link Range (frames for matching; default 10). Adjust and preview as needed. (A) Mix DOPC, DOPA and Biotinyl Cap PE at a 7:3:0.002 molar ratio and dry the mixture into a lipid film as described above. (B) Resuspend the lipid film in imaging buffer containing 5 mmol/L quencher (Trypan Blue or Blue dextran). Also resuspend a lipid film in imaging buffer without quencher for the control experiment. (C) Subject the lipid mixture to ten freeze–thaw cycles. (D) Extrude the lipid mixture as described above. For the liposomes containing quenchers, remove the quencher outside the liposomes with PD-10 desalting columns. [TIP] In previous studies, liposome size was shown not to affect the intensity–distance curve of LipoFRET . Additionally, the extrusion procedure produces uniform liposomes, which allows the size of the prepared liposomes to be adjusted so that their curvature is suitable for the target membrane proteins to perform their functions. (A) Clean coverslips as described above, up to the completion of the piranha solution procedure. (B) Put the coverslips in the vacuum dryer and let them cool down to room temperature. (C) Mix the APTES at a volume ratio of 1% in a mixture of methanol and acetic acid (95:5, volume ratio). Incubate the coverslips with the liquid mixture for 10 min. (D) Discard the liquid, rinse the coverslips with deionized water, and dry them with nitrogen gas. (E) Dissolve mPEG-SVA and Biotin-PEG-SVA in PEG buffer; 1 mg of PEG powder requires 10 μL of PEG buffer to dissolve. Mix the mPEG-SVA and Biotin-PEG-SVA at a volume ratio of 99:1. [RECIPE] PEG buffer: 0.6 mol/L K2SO4 and 0.1 mol/L NaHCO3. (F) Add 50–100 μL of PEG solution onto one coverslip, then stack another coverslip onto it. (G) Incubate the coverslips in a humid, dark environment for 2.5 h. Then separate the coverslips, rinse them and dry them. The coverslips may be stored in a vacuum at –20 °C. (A) Mount the PEG-coverslip-assembled flow chamber onto the stage of the TIRFM. (B) Add streptavidin at 0.01 mg/mL into the chamber and incubate for 10 min. Wash the chamber with buffer to remove excess streptavidin. (C) Dilute the quencher-encapsulated liposomes to ~0.01 mg/mL and inject them into the chamber. Incubate for 5 min. Wash the chamber again with buffer to remove unbound liposomes. [TIP] Because of the wide absorption spectra and autofluorescence of the quencher, the liposomes can be observed in an emission channel suited to Cy5 dye to determine whether a sufficient density of liposomes has been reached. (D) Add the target protein to the channel to interact with the liposomes, and record images or films. In this example, α-synuclein labeled with Alexa Fluor 555-MAL was obtained by a purification and labelling procedure similar to that described above.
The labeled α-synuclein (α-synuclein K10C-Alexa 555, α-synuclein T72C-Alexa 555, or α-synuclein S129C-Alexa 555) was added at ~1 nmol/L. (E) Repeat the above imaging steps with liposomes that do not contain a quencher to measure the intrinsic intensity of the fluorophore. (F) Extract the intensity of the fluorophores with ImageJ and MATLAB. Gaussian fitting can be applied to derive the intrinsic intensity of the fluorophore-labeled proteins (F0) without a quencher. The fluorescence intensity (F) of the proteins on quencher-containing liposomes is then analyzed. The normalized intensity F/F0 and its changes are compared with the intensity–distance curves to derive the position changes of the labeled site during the conformational motion of the protein. In this protocol, we describe the procedures of SIFA and LipoFRET in detail. As shown in Fig. 2 , SIFA enables us to track the three-dimensional movement of MLKL on supported lipid bilayers with an experimental accuracy of ~0.6 nm for axial movement and ~25 nm for lateral movement . Applying SIFA to different sites of interest on MLKL , two states, "Anchored" and "Embedded", were captured despite only nanoscale differences in architecture between them, and the nanoscale dynamics of MLKL undergoing conformational change were detected. LipoFRET, in turn, distinguishes the penetration depths of proteins in the liposome membrane. Application of LipoFRET to α-synuclein detected spontaneous intensity alterations of α-synuclein K10C-Alexa 555, representing shifts among three penetration depths . The intensity of α-synuclein S129C-Alexa 555 was also seen to be higher than that of α-synuclein T72C-Alexa 555. In addition, when the method was applied to α-synuclein S129C-Alexa 555, the C-terminus of the protein was observed to move closer to the membrane surface upon Ca2+ addition (at 0.1 to 1 μmol/L) . Single-molecule fluorescence imaging has long been applied in membrane protein studies, including techniques like FRET and FIONA (fluorescence imaging with one nanometer accuracy). FRET has been widely used to probe the folding of membrane proteins, intra-molecular movements of domains, and inter-molecular movements of interacting proteins . FRET is suitable for a detection range of approximately 3 to 8 nm . FIONA excels in single-molecule localization, providing axial accuracy of 0.5 nm and lateral accuracy of 1–2 nm, offering unparalleled precision in localizing individual molecules and tracking their movements over time . SIFA and LipoFRET are capable of detecting the insertion of membrane proteins into biological membranes with sub-nanometer experimental accuracy. SIFA is suitable for tracking the three-dimensional movement of target proteins , while LipoFRET is ideal for locating curvature-sensitive membrane proteins inserted in bio-membranes . Each technique has unique strengths, making them suitable for different experimental needs.
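Step (F) above compares the normalized intensity F/F0 against simulated intensity–distance curves. As a minimal illustration of how such a curve can be generated by Monte Carlo simulation (the approach noted in the introduction, since the encapsulated quenchers cannot be treated as a continuous medium), the sketch below sums FRET-like d–6 quenching over randomly placed quenchers inside a sphere. The geometry is deliberately simplified, and all parameter values (sphere radius, R0, quencher number) are hypothetical placeholders rather than calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)

def lipofret_intensity(depth_nm, R=50.0, R0=5.0, n_quenchers=2000, n_trials=200):
    """Monte Carlo estimate of the relative donor intensity F/F0 for a dye
    sitting depth_nm outside a quencher-filled sphere of radius R (nm).
    Each quencher contributes a FRET-like transfer rate (R0/d)^6, so
    F/F0 = 1 / (1 + sum_i (R0/d_i)^6)."""
    donor = np.array([0.0, 0.0, R + depth_nm])
    ratios = []
    for _ in range(n_trials):
        # Uniform random points inside the sphere: random direction x radius
        direc = rng.normal(size=(n_quenchers, 3))
        direc /= np.linalg.norm(direc, axis=1, keepdims=True)
        radii = R * rng.random((n_quenchers, 1)) ** (1.0 / 3.0)
        quenchers = direc * radii
        d = np.linalg.norm(quenchers - donor, axis=1)
        k_total = np.sum((R0 / d) ** 6)
        ratios.append(1.0 / (1.0 + k_total))
    return float(np.mean(ratios))

# Relative intensity at a few depths of the labeled site above the membrane
for depth in [0.5, 1.0, 2.0, 4.0]:
    print(f"depth {depth:.1f} nm -> F/F0 ~ {lipofret_intensity(depth):.3f}")
```

Chenguang Yang, Dongfei Ma, Shuxin Hu, Ming Li and Ying Lu declare that they have no conflict of interest.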
PMC11693498
Zong Jie Cui declares no conflict of interest.
PMC11693499
Fibroblast activation protein-α (FAP) is the cell-surface antigen expressed by cancer-associated fibroblasts (CAFs) in the tumor microenvironment (TME). Initially, this antigen was detected in most human astrocytomas, sarcomas, and some melanomas using the monoclonal antibody F19 . Subsequently, FAP was also identified in the reactive stroma of epithelial carcinomas, granulation tissue during wound healing, and malignant cells of bone tissue . In normal adult tissues, FAP is generally absent . FAP has therefore shown promise as a prognostic biomarker in a variety of malignancies. For instance, high FAP expression has been associated with poor outcomes in pancreatic, gastric, and oral squamous cell carcinomas . In inflammatory diseases, reduced circulating FAP within the first 5 days is associated with increased mortality in acute ST-elevation myocardial infarction . FAP is highly expressed in arthritic joints, and its deficiency can mitigate cartilage destruction in inflammatory destructive arthritis . In radioimmunoimaging, the severity of joint inflammation can be visualized using anti-FAP antibodies; for instance, SPECT and PET imaging have been used to quantify the uptake of the 111In- and 89Zr-labeled anti-FAP antibody 28H1 in joints . In tumors, FAP expression is detected using near-infrared (NIR) fluorescent probes and quinoline-based FAP-targeted radiotracers (68Ga/18F-labeled FAP inhibitors, FAPIs), which have shown intense tumor uptake and favorable image contrast in various tumors . Because cancerous stroma contributes to cancer recurrence and treatment resistance, many initial therapeutic approaches directly targeting cancer cells were inadequate for eliminating tumors. Targeting stromal cells in the TME may effectively treat solid tumors of epithelial origin. This involves depleting or destroying all FAP-expressing cells, including stromal and cancer cells, which can indirectly inhibit tumor cell proliferation, increase collagen accumulation, decrease tumor vascular growth, and ultimately lead to rapid tumor death . Hence, tumor growth and metastasis can be effectively inhibited using FAP antibodies or FAP-targeted radiopharmaceuticals, or by silencing FAP expression . Combining FAP-targeted vaccines or antibodies with chemotherapeutic agents or radiotherapy can enhance antitumor efficacy by modulating the TME . These findings suggest that FAP may serve as a target for disrupting FAP-driven tumor progression and eradicating malignant tumors. A substantial volume of clinical and basic research has been published in the field of FAP-directed theranostics. However, very few studies have characterized the trends in FAP-related research and how they relate to the advancement of FAPI-based radiotracers. To bridge this gap, we conducted a comprehensive bibliometric analysis of scientific publications on FAP and FAPI-based radiotracers. Through this analysis, we quantified and visually described important and reliable scientific publications using various bibliometric indicators, elucidating current research hotspots, trends, and relevant research directions. Between 1992 and 2024, a total of 2,664 publications were retrieved, generating 72,817 references and involving 12,611 authors (supplementary Table S1). Single-author publications numbered 56, while those with multiple authors had an average of eight coauthors per paper. International co-authorship constituted 22.2% of all publications.
From 1992 to 2018, there was a gradual increase in the number of publications; however, starting in 2019, there was a rapid surge . China led with 883 articles, of which 10.1% (89/883) featured multinational co-authorship . The USA followed with 303 articles, with 21.3% (82/385) involving multinational collaboration. Germany stood out with the highest percentage of multinational co-authorship (32.9%, 85/258). In 2021, China and Germany surpassed the USA in the number of annual publications, with a rising trend . This shift reflected early US research on FAP-targeted immunotherapy and FAP expression in tumors, while German studies from 2018 focused on radiolabeled FAPIs for PET imaging, experiencing explosive growth. US articles received the most global citations, followed by Chinese articles . However, due to the explosive growth of radiolabeled FAPIs, Germany had the highest number of global citations since 2018. Strong international links were noted among Germany, the USA, China, Japan, and Switzerland . The earliest FAP-related studies originated in the USA , followed by Germany and Switzerland , and China . The top five institutions by publication count were Southwest Medical University, Xiamen University (The First Affiliated Hospital of Xiamen University), Heidelberg University Hospital, Fudan University, and the University of Pennsylvania . Since 2018, the number of articles from Heidelberg University Hospital has grown rapidly, followed by rapid growth in publications from Xiamen University starting in 2020, and a surge from Southwest Medical University since 2021. FAP-related studies are concentrated in core journals, particularly the European Journal of Nuclear Medicine and Molecular Imaging (EJNMMI), the Clinical Nuclear Medicine (CNM), and the Journal of Nuclear Medicine (JNM), classified as Zone 1 by Bradford’s Law . EJNMMI and CNM had more articles but fewer total citations than JNM (supplementary Table S2). EJNMMI’s total citations have surged since 2021, while JNM saw a notable increase since 2019 . Haberkorn Uwe led with 86 publications, holding the highest H-index (89) and the highest number of total citations (4,684). Chen Yue and Chen Haojun followed with 80 and 78 publications and H-indexes of 26 and 24, respectively . Among the top ten influential articles, five were FAPI-based tracer studies authored by German and Chinese researchers, published in JNM and EJNMMI, respectively. The remaining articles focused on the expression of FAP in cancer stroma and FAP-targeted immunotherapy . The co-citation network of FAP research shows a dichotomy: one group is centered on FAPI tracers, while the other focuses on non-radioactive FAP-targeted therapy and the expression of FAP in the tumor microenvironment . Researchers at Heidelberg University Hospital collaborated with the University of Duisburg-Essen via Fendler WP and Kessler L, and with Johannes Gutenberg University Mainz and the University of Antwerp via Roesch Frank. Another collaborative network of multicenter researchers has been established among Huazhong University of Science and Technology, Xiamen University, the National University of Singapore, and Fujian Medical University through the efforts of Lan Xiaoli, Chen Haojun, Zhang Jingjing, Chen Xiaoyuan, and Miao Weibing . From 2003 to 2019, keywords shifted from epithelial cancer, molecular cloning, gene expression, and serine-protease to stroma, vaccine, rheumatoid, chemotherapy, and antitumor immunity, showcasing the evolution of FAP-related research . 
This evolution began with the discovery of the F19 monoclonal antibody recognizing FAP in various epithelial carcinomas, leading to immunotherapy and immune-combination chemotherapy. Post-2019, "FAP", "68Ga-FAPI", and "PET/CT" became predominant, reflecting extensive research on tumor diagnosis using PET/CT since the development of FAPI-based radiotracers in 2018. Besides various cancers, inflammation and cardiovascular disease were common keywords. PET/MRI gained predominance from 2022, mirroring its increased use in disease diagnosis. Starting in 2022, terms like radiation dosimetry, radionuclide therapy, volume delineation, tumor retention, and tumor-to-background ratio became popular, highlighting the growing interest in FAP-targeted radioligand therapy. After the search and inclusion process for radiolabeled FAPI, 395 original studies were included, primarily focusing on diagnostic PET imaging (81.8%, 323/395). The number of studies has been increasing annually. Studies of common FAPI-based tracers (55.2%, 218/395) have been gradually increasing, but studies of new FAPI-based tracers (26.6%, 105/395) have been increasing by an even greater margin. Additionally, research on FAP-targeted radioligand therapy has been gradually increasing since 2021 ( Table 1 ). Research methodologies encompassed comparisons between two PET tracers (50.6%, 200/395) (68Ga/18F-FAPI vs. 18F-FDG; 68Ga/18F-FAPI vs. other tracers or conventional imaging (CT, MRI, and ultrasonography); new FAPI variants vs. conventional FAPI-04/46), PET/CT with dual-tracer applications (1.5%, 6/395) (68Ga-FAPI + 18F-FDG), and single-tracer applications (46.8%, 185/395). Notably, publications involving head-to-head comparisons of radiolabeled FAPI and 18F-FDG PET imaging have surged since 2021 ( Table 2 ). Radionuclides used to label FAPIs for imaging included 11C, 18F, 68Ga, 111In, 99mTc, 64Cu, 89Zr, and 86Y; 68Ga (72.7%) and 18F (17.5%) were the most extensively employed. Studies on 68Ga-labeled FAPI-based radiotracers increased dramatically from 2021 to 2023, and those on 18F-labeled FAPI-related compounds rose in 2022 and 2023. The therapeutic radionuclide 177Lu has seen increased usage in FAP-targeted radioligand therapy since 2021, while short-range α-emitting therapeutic radionuclides like 225Ac were developed in 2020 ( Table 3 ). PET/CT served as the primary imaging modality in both animal experiments and clinical evaluations ( Table 4 ). The primary application of radiolabeled FAPIs was in cancer diagnosis (particularly for gastrointestinal malignancies), including pancreatic cancer (18.2%, 72/395), colorectal cancer (13.9%, 55/395), gastric cancer (11.4%, 45/395), liver and bile duct cancer (12.9%, 51/395), breast cancer (13.2%, 52/395), and lung cancer (12.2%, 48/395). Research pertaining to these malignancies was published primarily between 2021 and 2023. From 2021 onwards, the number of studies on inflammatory and autoimmune diseases gradually increased ( Table 5 ). Notably, 68Ga-FAPI showed no advantages over 18F-FDG in diagnosing multiple myeloma (MM), lymphoma, certain head and neck malignancies, and IgG4-related lymphadenopathy (supplementary Table S4). Since 1990, researchers have proposed using FAP antibodies as radioisotope-labeled ligands or as therapeutic agents in combination with toxic chemotherapeutic substances for cancer therapy . However, the performance of modern antitumor drugs utilizing FAP vaccines, FAP antibodies (e.
g., Sibrotuzumab) and FAP inhibitors remains unsatisfactory . This is the first study to perform a thorough and comprehensive visual analysis of FAP-related research using a bibliometric approach. Unlike the previous study by van den Hoven et al., which focused solely on the use of radionuclide-labeled FAPI in oncological and non-oncological diseases , our study encompasses research on the molecular mechanisms of FAP, FAP-related drugs and biomarkers, as well as the development and clinical evaluation of FAPI-based radiotracers. Our study highlights the most relevant and important findings in FAP development and underscores the need for an in-depth review of emergent FAP studies in the field of medicine. Of the 53 countries publishing FAP research, China led in the number of publications, yet its global citation count and international collaboration efforts were limited, indicating conservative research practices. Germany, with the highest citations, showcased its authority and influence in FAPI-related radiopharmaceuticals. Despite global interest in FAP-related research, enhanced collaborations are needed, especially among national centers other than the German center, to advance research in this field. Half of the top ten most-cited articles (5/10) in the field of FAP-related studies investigated radiolabeled FAPIs (preclinical experiments and clinical investigations), highlighting the broad recognition of radiolabeled FAPIs. The remaining (5/10) highly cited articles primarily focused on FAP-positive immunotherapies and the use of FAP as a biomarker (supplementary Table S2). Analyzing journal impact in the FAP-targeted theranostics field aids researchers in choosing suitable journals. EJNMMI had the most publications, predominantly featuring FAPI-related clinical investigations (73.1%), which is vital for exploring clinical indications in FAPI PET imaging. Both EJNMMI (36.5%) and JNM (36.2%) emphasized developing novel FAPI-based radiotracers and therapeutic radiopharmaceuticals and conducting preliminary clinical trials (supplementary Table S4). Clinical Nuclear Medicine , a Zone-1 journal, presented numerous case reports detailing the manifestations of 68Ga/18F-FAPIs in rare diseases, a valuable reference for future disease diagnosis studies. Cancers , another Zone-1 journal, emphasized FAP expression in malignant tumors and non-radioactive drug treatment; over half of these articles were reviews, including some on radiolabeled FAPIs. The surge in articles since 2018 is primarily due to radiolabeled FAPI development and clinical application. Most studies concentrated on disease diagnosis, but attention has gradually shifted to tumor stroma-targeted radioligand therapy since 2022. Recent research has delved into novel FAPI variants, assessing their uptake, retention, and dosimetry in tumors and normal organs, which is crucial for therapeutic effectiveness in FAP-targeted radioligand therapy. Novel radiotracers like 68Ga-DOTA-2P(FAPI)2, 68Ga-DOTAGA.(SA.FAPi)2, and 68Ga-DOTAGA.Glu.(FAPi)2 exhibited superior tumor uptake and retention compared with the common 68Ga-FAPI-04/46 . Researchers have developed bispecific heterodimer radiotracers, like 68Ga-FAPI-RGD and 68Ga/18F-labeled FAPI-PSMA, demonstrating high tumor uptake and favorable in vivo pharmacokinetics . Due to 18F-FDG's limitations in certain cancers, most studies compared the diagnostic accuracies of FAPI-based radiotracers and 18F-FDG.
Clinical studies have confirmed that 68Ga-labeled FAPI exhibits higher tumor uptake and diagnostic accuracy than 18F-FDG, especially in gastrointestinal cancers, notably gastric cancer . FAPI-based radiotracers prove more sensitive than 18F-FDG in identifying recurrent disease, metastatic lymph nodes, brain metastases, liver metastases, and peritoneal metastases. In inflammatory disorders like rheumatoid arthritis, 18F-FAPI uptake was significantly greater in the early phase of inflammation than 18F-FDG uptake . However, in certain diseases (e.g., MM, lymphoma, and IgG4-related lymphadenopathy), FAPI-based radiotracers may be comparable or even inferior to 18F-FDG. A few studies reported that 18F-FDG performed better than 68Ga-FAPI in detecting metastatic lymph nodes in head and neck cancer; however, no histopathological analysis has been performed on FDG+/FAPI– lymph nodes. Owing to the high specificity of FAPI PET in diagnosing lymph node metastases, lymph nodes with increased 18F-FDG uptake may be reactive instead of metastatic . Novel FAPI-based radiotracers, like 68Ga-DOTA.SA.FAPI and 18F-FAPI-04/42/74, showed superior diagnostic performance compared with 18F-FDG in recent clinical trials, detecting primary and metastatic lesions in breast cancer, metastatic/recurrent gastrointestinal stromal tumors, renal cancer, and lung cancer . The combined use of 18F-FDG and 68Ga-FAPI (dual-tracer PET/CT) improved diagnostic accuracy in esophageal, gastric, cervical, and appendiceal cancer . 68Ga was the most frequently used radionuclide in diagnostic imaging, followed by 18F. However, 68Ga production is limited because it generally requires a 68Ge/68Ga generator, which yields only enough for 2–4 patients per session, making it unsuitable for large clinical centers. Its short half-life (68 min) adds impracticality for transport to remote medical facilities. Conversely, cyclotrons can generate substantial amounts of 18F, which has a longer half-life (110 min). Consequently, some research centers favored the 18F-AlF-labeled chelator ligand FAPI-74 . Owing to the high cost of PET scanning, 99mTc-labeled FAPI with SPECT imaging may become an equally popular low-cost alternative . Regarding therapeutic radionuclides, most research centers favored 177Lu, followed by 90Y. When labeled to FAPIs, both nuclides emit β-rays that damage the double-stranded DNA of tumor cells, inducing apoptosis. 90Y, with a shorter half-life (64.1 hours vs. 6.7 days) and higher average energy (0.9 MeV vs. 0.14 MeV) than 177Lu, may be better suited for FAPI ligands with shorter tumor retention (e.g., FAPI-46 and FAPI dimers) . Promising outcomes in tumor treatment were also observed with the long-lived 131I-labeled FAPI-02 and FAPI-04 . Besides β-emitting therapeutic agents, α-emitting agents like 225Ac-FAPI-04 effectively inhibited tumor growth in pancreatic cancer xenografts . Animal experiments and clinical trials often require PET/CT, PET/MR, or SPECT/CT. CT scans are ineffective for soft-tissue lesions due to low soft-tissue resolution, while MRI provides superior soft-tissue resolution. PET/MRI holds promise in compensating for PET/CT limitations, especially in detecting small pancreatic carcinomas, distinguishing between potential tumors, and identifying brain and small liver metastases . Multi-sequence MRI aids in interpreting inflammatory and physiologic uptake of 68Ga-FAPI in the uterus .
SPECT, a cost-effective alternative to PET, is crucial for assessing dosimetry with 177Lu-labeled FAPI variants . FAPIs, acting as non-specific tracers, might be taken up by both malignant tumors and inflammatory tissues, complicating lesion image interpretation. Additionally, FAP expression does not significantly correlate with tumor metastasis or staging. Owing to the dynamic changes in the composition of the extracellular matrix and CAFs during tumor growth and metastasis, FAPI-based radiotracer uptake may be altered throughout the course of tumor metastasis . Furthermore, FAPI-based radiotracers may be less accurate than 18F-FDG for specific cancers, like lymphomas and multiple myeloma . Thus, FAPI-based radiotracers may serve better as complementary diagnostic and therapeutic assessment tools rather than as a comprehensive alternative to traditional 18F-FDG-based approaches. This study has some limitations. Relevant literature might have been excluded if the word of interest did not appear in the search field. Additionally, owing to the adaptability of the visualization software, only the Web of Science database was used, overlooking certain publications. Ongoing research, publication delays, and fluctuating attributes in citations, publication numbers, and keyword frequency mean we only considered FAP-based research trends in oncology from January 1, 1990, to May 1, 2024. Future studies should incorporate the most recent information to further elucidate advances in this area. In summary, after more than 30 years of continuous research and development, FAP has been characterized not only as an effective biomarker in disease diagnosis and prognosis but also as a promising target for tumor therapy. A particularly important milestone was the invention of radiolabeled FAPIs as novel radiotracers for PET imaging of various cancer types. Radiolabeled FAPI is the foremost discovery in FAP research and has shown promising results, although research is in its preliminary stages. Thus, future research should focus on identifying the current limitations of FAPIs to inform their development for clinical applications. The field of oncology is increasingly dedicated to the advancement of FAPI, including assessment of the specific indications for FAPI PET imaging, development of novel FAPI-based radiotracers, and FAP-targeted radioligand therapy in refractory cancers. Finally, our findings highlight the scope for improved collaboration between different countries and research institutions to broaden the application of FAP in disease diagnosis and treatment. To ensure the accuracy and quality of the study, data were exclusively sourced from the Web of Science Core Collection. We used the search formula Topic = "fibroblast activation protein" to retrieve all FAP-related literature. The publication date was limited to January 1, 1990, through May 1, 2024. In addition, we analyzed the content and trends in FAPI-based radiotracer research from 2018 to 2024. The inclusion criteria were original studies of FAP in the field of nuclear medicine and molecular imaging. Case reports, reviews, comments, and non-full-text records were excluded. Two authors (R.D. and C.H.) performed the literature search and screening process. We used the bibliometrix package in R (version 4.3.1) to extract the data required for the bibliometric analysis, including publication year, country, author, author keywords, affiliation, reference citations, and journal data. R was used to plot the data for statistical analysis.
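For readers who wish to reproduce parts of this analysis outside R, the sketch below computes two of the indicators reported here, annual publication counts and an author H-index, from a hypothetical Web of Science export. The file and column names are placeholders, and the actual analysis in this study used the bibliometrix package in R.

```python
import pandas as pd

# Hypothetical tab-delimited Web of Science export; the column names
# ("PubYear", "Authors", "TimesCited") are placeholders, not the real schema
records = pd.read_csv("wos_fap_export.txt", sep="\t")

# Annual publication counts, as used for the growth-trend analysis
per_year = records.groupby("PubYear").size().sort_index()
print(per_year.tail())

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# H-index for one author, from per-paper citation counts
author_mask = records["Authors"].str.contains("Haberkorn", na=False)
print("H-index:", h_index(records.loc[author_mask, "TimesCited"]))
```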
In addition, we used VOSviewer (version 1.6.18) to plot a collaboration network between countries. We extracted key information from original research articles on radiolabeled FAPIs for data analysis, including study content, methodology, type of radionuclide, imaging modality, and disease type. In addition, for the comparative studies between 18F-FDG PET and FAPI PET, we paid special attention to the limitations observed in some FAPI applications. Finally, we analyzed the research hotspots and trends based on the distribution of study characteristics throughout the years. Abbreviations: FAP, fibroblast activation protein; FAPI, fibroblast activation protein inhibitor; FDG, fluorodeoxyglucose; PET/CT, positron emission tomography/computed tomography; CAF, cancer-associated fibroblast; TME, tumor microenvironment; SPECT, single-photon emission computed tomography; NIR, near-infrared; PET/MRI, positron emission tomography/magnetic resonance imaging; MM, multiple myeloma. Dan Ruan, Simin Wu, Xuehua Lin, Liang Zhao, Jiayu Cai, Weizhi Xu, Yizhen Pang, Qiang Xie, Xiaobo Qu and Haojun Chen declare that they have no conflict of interest.
PMC11693543
The incidence and mortality of gastric cancer are tending to decline, but the incidence of proximal gastric cancer (PGC) continues to increase worldwide, especially in Western Europe and East Asia. 1, 2, 3 PGC is usually defined as a tumor located in the cardia and upper third of the stomach, and total gastrectomy (TG) plus D2 lymphadenectomy is the standard treatment strategy, with confirmed efficacy in terms of tumor radicality. 4, 5 However, there is a complete loss of gastric function when TG is performed, which further leads to postoperative malnutrition. 6 As a function‐preserving surgical procedure, proximal gastrectomy (PG) has been confirmed to be comparable to TG in terms of oncologic safety and feasibility, and PG can be effective at maintaining postoperative weight and improving quality of life (QOL). 7, 8, 9 Because PG destroys the normal antireflux barrier of the cardia, conventional esophagogastrostomy is no longer popular, and many modified procedures, including gastric tube reconstruction, 10 jejunal interposition, 11 double tract reconstruction (DTR), 12 and the double flap technique (DFT), 13 have been developed to prevent reflux esophagitis after surgery. However, the choice of an appropriate reconstruction method that satisfactorily reduces regurgitation remains controversial. Among these methods, DTR reduces reflux symptoms by constructing two digestive pathways for food, although its surgical procedure is relatively complicated. In contrast to DTR, in DFT, H‐shaped double seromuscular flaps are created to cover the lower esophagus and the anastomosis as a one‐way valve that prevents reflux esophagitis. However, the risk of postoperative anastomotic stenosis after DFT cannot be ignored. Proximal gastrectomy with DTR or DFT has already shown excellent outcomes in terms of postoperative survival and complications, as well as improved postoperative nutritional status compared with that after TG. 14, 15 However, the superiority of DTR over DFT remains debatable. Some studies have reported the short‐term outcomes of these two reconstruction methods and concluded that DFT is superior to DTR 16, 17 ; however, the relevant clinical evidence is still insufficient. Therefore, we aimed to clarify the efficacy of these reconstruction methods for treating proximal gastric cancer by assessing surgical outcomes, postoperative complications, nutritional status, and patient QOL. A total of 286 patients with proximal gastric cancer treated at the First Affiliated Hospital of Xi'an Jiaotong University between January 2020 and March 2023 were initially enrolled in this study. The detailed screening process is shown in Figure 1 . Inclusion criteria: (1) proximal gastric cancer evaluated as cT1‐2N0M0; (2) complete clinicopathological data; (3) no neoadjuvant therapy; (4) laparoscopic proximal gastrectomy (LPG); and (5) R0 resection. Exclusion criteria: (1) reconstruction other than DTR or DFT; (2) primary tumor at two or more sites; (3) follow‐up time less than 1 year. The included patients were divided into the DTR group ( n = 80) and the DFT group ( n = 24). Subsequently, propensity score matching (PSM) analysis was conducted to balance background characteristics according to the following factors: age, sex, body mass index (BMI), American Society of Anesthesiologists physical status score (ASA‐PS), tumor size, and pathological stage. Finally, 48 patients who underwent LPG with DTR and 24 patients who underwent LPG with DFT were included in this study.
This retrospective cohort study was approved by the Ethics Committee of the First Affiliated Hospital of Xi'an Jiaotong University. Due to the retrospective design of this study, the requirement for informed consent was waived; however, patients were allowed to opt out of the use of their data at any time. All patients were evaluated preoperatively as having cT1‐2N0M0 proximal gastric cancer, and LPG plus D1+ lymphadenectomy was performed according to the Japanese Gastric Cancer Treatment Guidelines (5th edition), ensuring that at least half of the stomach volume was preserved. 18 The team's chief surgeons, Lin Fan and Xiangming Che, possessed over 20 years of clinical expertise and had accumulated experience in performing over 1000 laparoscopic procedures. They also served as quality controllers and supervised the key stages of this study. We completely exposed the lower abdominal esophagus through the esophageal hiatus with a harmonic scalpel, and a linear stapler was used to transect the esophagus 3 cm from the tumor. DFT was preferred when the volume of the remnant stomach was close to 2/3 of that of the whole stomach. However, when the tumor invaded the esophagus by more than 2 cm, necessitating resection of an extended abdominal segment of the esophagus to achieve negative resection margins, and considering the complexity of intra‐mediastinal anastomosis between the esophagus and the residual stomach, DTR became the preferred option. When DFT was performed, four extra points were marked on the posterior wall of the esophagus 5 cm from the resected end because contraction of the esophagus is usually inevitable after resection. After that, the stomach was extracted through an approximately 5 cm long median epigastric incision and transected at the appropriate location depending on the tumor size. Moreover, intraoperative frozen sections were examined to ensure R0 resection when the distal margin of resection was uncertain. The main steps of DTR were as follows: the jejunum was transected approximately 20 cm below the Treitz ligament, and a side‐to‐side anastomosis between the esophagus and the distal jejunum was performed intracorporeally with a linear stapler. Gastrojejunostomy (GJ) was performed 12–15 cm below the esophagojejunostomy (EJ) site with linear staplers. An overlap jejunojejunostomy (JJ) was performed between the proximal jejunum and the distal jejunum 40 cm below the EJ with a linear stapler. The common openings of the above three anastomoses were closed with knotless barbed absorbable sutures (V‐Loc™ 180) under laparoscopy. In the DFT procedure, we made an H‐shaped (2.5–3.5 cm wide × 3.5 cm high) mark 3–4 cm below the margin of the anterior remnant stomach wall with methylene blue after transecting the proximal stomach extracorporeally. The width of this H‐shaped marking (2.5–3.5 cm) depended on the diameter of the esophagus, evaluated by preoperative computed tomography or intraoperative exploration, so as to establish normal tension at the anastomosis. Double seromuscular flaps were created using electric cautery to carefully dissect the serosa and muscular layer along the H‐shaped marking, after which the submucosal layer was exposed. Before the remnant stomach was returned to the abdominal cavity, the inferior edge of the muco‐submucosal window was opened in preparation for the subsequent anastomosis.
The remaining steps were performed laparoscopically, and all sutures were hand‐sewn intracorporeally. Four sutures were used to secure the esophagus on the superior edge of the muco‐submucosal window at the previously marked points. Next, the esophageal stump was opened for esophagogastrostomy (EG). The posterior esophageal wall and the inferior edge of the muco‐submucosal layer were sutured continuously with V‐Loc™ 180, and anastomosis between the anterior esophageal wall and the whole layer of the anterior stomach wall was performed using the same strategy at the lower end of the window. Finally, interrupted sutures were used to suture both side ends of the flaps together as a preliminary fix, and continuous suturing was performed between the flaps and the anastomosis using V‐Loc™. This procedure led to the anastomosis being completely wrapped and reinforced by the flaps. It should be emphasized that DFT was not always performed intra‐abdominally. For a few patients with an insufficient length of the abdominal segment of the esophagus, we attempted intra‐mediastinal DFT to explore its feasibility and efficacy; extreme caution was exercised during the operation to avoid damaging structures such as the pleura. The background characteristics and postoperative results of all patients were obtained from the electronic medical records system. The clinicopathologic features included age, sex, BMI, ASA‐PS, Lauren classification, tumor size, pT stage, pN stage, pathological stage, preoperative comorbidities, and adjuvant chemotherapy. The surgical outcomes, including operation time, anastomosis time, estimated blood loss, and the number of retrieved lymph nodes (LNs), were also recorded. The postoperative outcomes were the days of gas‐passing, days of starting diet, postoperative length of hospital stay, and early (within 30 days of surgery) and late (after 30 days) complications of surgery. The Clavien–Dindo classification of surgical complications was used to classify the severity of postoperative complications. 19 The follow‐up was conducted through the outpatient department, telephone visits, and internet communication, and the details included the following: (1) Body weight and hematologic parameters: total protein, albumin, hemoglobin, total cholesterol, vitamin B12, and lymphocyte counts. Moreover, the Controlling Nutritional Status (CONUT) score was calculated as a comprehensive indicator of nutritional status. 20 (2) The results of endoscopy 1 year after surgery, with reflux esophagitis graded according to the Los Angeles Classification System. 21 (3) The Postgastrectomy Syndrome Assessment Scale (PGSAS)‐45, used to evaluate QOL. 22 All statistical analyses were performed with SPSS 24.0 (SPSS/IBM). Continuous variables are presented as the mean ± standard deviation (SD) or median (range), and Student's t test or the Mann–Whitney U test was used for intergroup comparisons. Categorical variables are expressed as percentages (%), and Fisher's exact test was applied to compare differences between the two groups. We used a logistic regression model with a caliper of 0.2 standard deviations for the PSM analysis. All p values cited were two‐sided, and p < 0.05 was considered to indicate statistical significance. Table 1 details the clinicopathological characteristics of patients who underwent LPG‐DTR and LPG‐DFT before and after matching. Before matching, the two groups differed significantly in terms of age ( p = 0.001), BMI ( p = 0.040), and tumor size ( p = 0.019).
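To make the matching step described above concrete, the following is a minimal, illustrative sketch of propensity score matching with a 0.2‑SD caliper in Python. This is not the authors' code (the original analysis was performed in SPSS 24.0): the column names (`dft`, `age`, `sex`, `bmi`, `asa_ps`, `tumor_size`, `p_stage`) are hypothetical, and the sketch performs greedy nearest‑neighbor matching on the logit of the propensity score; running the matching loop twice per treated patient would approximate the study's roughly 2:1 DTR‑to‑DFT ratio.

```python
# Illustrative sketch only: greedy nearest-neighbor propensity score
# matching with a caliper of 0.2 SD of the propensity-score logit.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def psm_match(df: pd.DataFrame, treat: str, covars: list,
              caliper_sd: float = 0.2) -> list:
    """Return (treated_index, control_index) pairs matched 1:1 on the
    logit of the propensity score, accepting only within-caliper pairs."""
    df = df.reset_index(drop=True)
    X = pd.get_dummies(df[covars], drop_first=True).astype(float)
    # Propensity score: probability of being in the (smaller) DFT group.
    ps = LogisticRegression(max_iter=1000).fit(X, df[treat]).predict_proba(X)[:, 1]
    logit = np.log(ps / (1.0 - ps))
    caliper = caliper_sd * logit.std()

    treated = list(df.index[df[treat] == 1])   # e.g., DFT patients
    controls = list(df.index[df[treat] == 0])  # e.g., DTR patients
    pairs = []
    for t in treated:
        if not controls:
            break
        dists = np.abs(logit[controls] - logit[t])
        j = int(np.argmin(dists))
        if dists[j] <= caliper:                # reject out-of-caliper matches
            pairs.append((t, controls.pop(j)))
    return pairs

# Hypothetical usage with the six matching factors named in the text:
# pairs = psm_match(data, "dft",
#                   ["age", "sex", "bmi", "asa_ps", "tumor_size", "p_stage"])
```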
Regarding adjuvant chemotherapy, S‐1 and SOX were frequently adopted in patients with pathological stage II disease. After matching, there were no significant differences in the clinicopathological characteristics between the DTR and DFT groups. A comparison of the surgical outcomes and postoperative complications between the two groups is shown in Table 2. The operative time, estimated blood loss, and number of retrieved LNs were comparable between the two groups. Nevertheless, the anastomosis time was significantly longer in the DFT group than in the DTR group (70.1 vs. 52.7 min, p < 0.001). In addition, the days of gas‐passing and the days of starting diet were shorter in the DFT group than in the DTR group (3.0 vs. 4.0 days, p < 0.001, and 4.0 vs. 5.0 days, p < 0.001, respectively). The postoperative length of hospital stay was also significantly shorter in the DFT group (8.5 vs. 10.0 days, p < 0.001). Although the rates of early and late complications were lower in the DFT group (both 8.3%) than in the DTR group (12.5% and 10.4%), there were no significant differences between them ( p = 0.710 and p = 1.000, respectively). No patients died in either group. Three patients in the DTR group and two patients in the DFT group experienced pneumonia, and all of these patients were cured after active treatment with antibiotics. One patient in the DTR group underwent conservative treatment for small intestinal obstruction 3 months after surgery, and three patients developed reflux esophagitis (Grade B) 1 year after surgery and received antireflux drug therapy. In addition, only one patient in the DFT group developed reflux esophagitis (Grade B), without significant reflux‐related symptoms. To clarify the potential impact of the anastomotic site in DFT on clinical outcomes, the 24 patients in the DFT group were divided into two subgroups, intra‐abdominal DFT (IA‐DFT, n = 18) and intra‐mediastinal DFT (IM‐DFT, n = 6), and postoperative complications were further compared. However, as shown in Table S2, there were no significant differences in the incidence of postoperative complications between the two subgroups. Figure 4 shows the changes in body weight and hematologic parameters in the DTR and DFT groups. The rate of body weight loss in the DFT group was lower than that in the DTR group at 6 and 12 months after surgery but was significantly different only at 12 months after surgery (−5.6% vs. −11.6%, p = 0.012). Similarly, the postoperative total protein and albumin levels tended to decrease in both groups but were significantly higher in the DFT group than in the DTR group at 12 months after surgery. There were no significant differences in the rates of change in total cholesterol, lymphocyte count, hemoglobin, or vitamin B12 between the DTR and DFT groups at any point after surgery. Furthermore, we calculated the CONUT score for both groups (Table S3). The preoperative scores of the two groups were comparable, and no patients experienced severe malnutrition (score 9–12) within 1 year after surgery. However, the CONUT score did not differ between the two groups at 6 or 12 months after surgery. All questionnaires were returned in both the DTR and DFT groups, and the QOL survey was administered at a median of 16 months after surgery. As summarized in Table 3, DTR showed better results on the meal‐related distress subscale (DTR 1.6 vs. DFT 2.1, p < 0.001). DFT was superior to DTR in terms of the diarrhea, constipation, and dumping subscales (DFT 1.6 vs. DTR 1.8, p = 0.005; DFT 1.8 vs. DTR 2.0, p = 0.022; DFT 1.4 vs.
DTR 1.6, p = 0.007, respectively). However, the differences in the seven subscales did not affect the total symptom score (DTR 1.8 vs. DFT 1.7, p = 0.520), which indicates acceptable postoperative symptoms in both groups. Many studies have demonstrated the oncologic safety and improved postoperative nutritional status after PG. 7 , 8 , 11 , 14 , 15 Nevertheless, due to the variety of reconstruction methods, deciding which method can provide the most benefits remains a matter of controversy. The results of our study showed the advantages of DFT in several aspects. Double tract reconstruction was first introduced by Aikou et al. 23 As a modification of esophagogastrostomy (EG), DFT was first reported by Kamikawa et al., 24 and both techniques have shown excellent results in reducing postoperative reflux esophagitis. 25 A previous meta‐analysis comparing different reconstruction techniques after PG reported that the pooled incidence of reflux esophagitis for DFT was 8.9%, which was not significantly different from that for DTR (8.6%), but both incidences were significantly lower than that for EG (19.3%). 26 The length of the interposed jejunum between the EJ and GJ in DTR may affect the prevention of reflux esophagitis; this length was 10–15 cm in most studies. 12 , 16 Ma et al. 27 reported that properly extending the length of the interposed jejunum to 15–20 cm according to the size of the remnant stomach could be more effective at preventing reflux esophagitis. However, it should be considered that excessive length can cause digestive tract torsion and make postoperative endoscopy difficult. 28 Compared with DTR, DFT constructs a more physiological digestive pathway, and its ingenious anastomotic design can compensate, to a certain extent, for the functions of the excised cardia and lower esophageal sphincter. Upper gastrointestinal radiography was routinely performed on postoperative day 7. In the DFT group, the contrast medium did not regurgitate into the esophagus, even with the patients positioned 30° head‐down. The DFT appears to be the most promising antireflux procedure, yet the relatively high incidence of anastomotic stenosis after DFT is still problematic. The incidence of anastomotic stenosis after DTR in previous studies ranged from 0 to 13.3%, and one patient (2.1%) from the DTR group in our cohort had anastomotic stenosis, which is consistent with the results of a previous study. 29 Saze et al. 30 reported that DFT was superior to other reconstruction methods for reducing reflux symptoms. In the DFT procedure, the tension at the anastomosis site is theoretically higher than that in other methods, which may cause postoperative anastomotic stenosis because the two seromuscular flaps are tightly wrapped around the lower esophagus, as reported in previous studies. 13 , 14 , 31 The width of the H‐shaped seromuscular flaps is usually 2.5 cm, whereas we created the H‐shaped seromuscular flaps with a width of 2.5–3.5 cm based on the diameter of the esophagus so that the compression from the seromuscular flaps could be reduced during the subsequent embedding of the esophagus. Saeki et al. 32 reported the safety and feasibility of V‐Loc in DFT. Hosoda et al. 33 also suggested that V‐Loc was able to reduce the incidence of anastomotic stenosis in DFT. The present study similarly demonstrated the advantages of DFT, as only one patient (4.2%) in the DFT group exhibited anastomotic stenosis.
In addition, V‐Loc can greatly reduce the difficulty of hand‐sewing because no ligation is needed, and the anastomosis time can be relatively shortened. Therefore, we suggest that this modified DFT procedure can be performed by surgeons with extensive experience in minimally invasive surgery and that surgeons should pay close attention to completely aligning the tissues and controlling the stitch length to maintain stable tension in the anastomosis. Regarding the postoperative nutritional status, both groups showed varying degrees of reduction in body weight and nutritional parameters compared with those observed during the preoperative period. However, the body weight and albumin levels at 1 year after surgery were higher in the DFT group than in the DTR group, similar to the results of a previous meta‐analysis, 17 and these differences were related to the different digestive pathways of food. In DFT, all food enters the remnant stomach and passes through the duodenum, where chyme can be mixed with pancreatic juice and gastrointestinal hormones. In contrast, in DTR, only part of the food enters the remnant stomach directly. Ahn et al. 12 reported that the proportion of food entering the remnant stomach in DTR patients was approximately 60%. Wang et al. 34 concluded that an appropriately enlarged gastrojejunal anastomosis in DTR patients would have advantages in terms of postoperative nutritional status compared with that in TG‐RY patients due to more food entering the duodenal pathway. These studies indicated that appropriately expanding the size of the gastrojejunostomy in DTR may contribute to improving the postoperative nutritional status. Postoperative QOL after PG is also a great concern. Generally, heartburn and acid regurgitation caused by reflux esophagitis after PG severely impair QOL. In our study, the esophageal reflux subscale score of the DFT group was as good as that of the DTR group. In addition, we found that the results of the meal‐related distress subscale, which includes the sense of food sticking, postprandial fullness, and early satiation, were worse in the DFT group. We speculate that the jejunal pathway in DTR may help to facilitate the emptying of the remnant stomach, thereby reducing fullness. However, inadequate food digestion may be the main reason why patients in the DTR group experienced more intense diarrhea, constipation, and dumping. Kunisaki et al. 35 completed a PGSAS‐45 NEXT study, which found that although PG is beneficial for improving QOL compared with TG, the main outcome measures of PGSAS‐45 could easily be affected by various background factors, such as age, gender, and adjuvant chemotherapy. Our study was limited by the number of patients, so we could not conduct a relevant multivariate analysis to further clarify the advantages of DFT over DTR. However, the results of the univariate analyses preliminarily suggest that DFT is a digestive tract reconstruction method worthy of promotion after proximal gastrectomy. This study has several limitations. First, the retrospective nature of the study cannot be ignored. Although we used PSM to balance the baseline data, selection bias remained, and the patients ultimately included were limited in number and possibly unrepresentative. Second, our center adopted these two techniques relatively recently, so we obtained only short‐term outcomes; however, follow‐up will continue in order to evaluate their long‐term efficacy.
Third, for the assessment of postoperative nutritional status, we only compared preoperative, 6‐month, and 1‐year postoperative data, which may have resulted in unobserved differences between the two reconstruction methods at a certain postoperative stage. In conclusion, this study revealed that despite the complexity of the procedure and the longer anastomosis time, DFT emerged as a superior alternative to DTR in terms of facilitating early postoperative recovery, sustaining nutritional status, and improving QOL. DFT may be a promising procedure after PG. However, large‐sample, prospective, randomized trials should be conducted to validate these results. Lindi Cai: Writing – original draft. Guanglin Qiu: Writing – original draft. Mengke Zhu: Investigation; methodology; software. Shangning Han: Investigation; methodology; validation. Pengwei Zhao: Visualization. Panxing Wang: Investigation; visualization. Xiaowen Li: Investigation; validation. Xinhua Liao: Data curation. Xiangming Che: Data curation; project administration. Lin Fan: Project administration; writing – review and editing. This work was supported by grants from the Key Research and Development Projects of Shaanxi Province. The authors have no conflicts of interest to declare. Approval of the research protocol: This retrospective cohort study conformed to the provisions of the Declaration of Helsinki and was approved by the Ethics Committee of the First Affiliated Hospital of Xi'an Jiaotong University. Informed Consent: Due to the retrospective design of this study, the requirement for informed consent was waived. Registry and the registration no. of the study/trial: N/A. Animal studies: N/A.
|
Review
|
biomedical
|
en
| 0.999998 |
PMC11693547
|
Today, we are honored to have Professor Jeff Drebin, M.D., Ph.D., from Memorial Sloan Kettering Cancer Center, and the 2024 President Elect of the American Surgical Association, as our guest. We extend our sincere gratitude to Professor Drebin for taking time out of his busy schedule to join us at the 79th Annual Meeting of The Japanese Society of Gastroenterological Surgery. In addition, we deeply appreciate your participation and your contribution to the journal, Annals of Gastroenterological Surgery, an official journal of JSGS. Thank you very much, Professor Drebin. Recently, you published a paper in Nature about the mRNA vaccine for pancreatic cancer, and I wanted to ask you a few questions regarding this topic. It's a very hot topic (a brand‐new technology, mRNA vaccination for pancreatic cancer) and an exciting paper, but I have several questions. The first one is about the personalized, rapidly biosynthesized vaccine, which boasts high efficiency with highly immunogenic neoantigens for the prevention of recurrence, particularly in the case of minimal residual disease (MRD) in pancreatic cancer. How do you determine the neoantigens? For the mRNA vaccine work, I should give credit to my junior partner, Vinod Balachandran, who's a surgeon‐scientist at Memorial Sloan Kettering and who really has done a lot of the important laboratory work. Vinod not only has a very big and successful laboratory, he's also a pretty good surgeon. But Vinod's work had suggested that some patients who are long‐term survivors have evidence of a T‐cell response, and that therefore raised the question of whether we could stimulate a T‐cell response with a vaccine, and in this case an mRNA vaccine. 1 There may be other approaches that would work well. The first patient to get a vaccine in our trial was in December of 2019. Although we all know of mRNA vaccines from COVID and COVID vaccines, the first patient in New York to get one was in 2019. It happened to be my patient. But all of us within the group had patients in the trial. In December of 2019, he got his first dose of a vaccine made by BioNTech specific to his tumor. The choice of mRNA had to do with the desire to be able to do this quickly, to have a broad antigenic representation from the tumor, and some thoughts about safety. Although those were all things we evaluated in the phase one trial. Certainly, there are ongoing studies using a similar approach with peptide vaccines as well as DNA vaccines. And so, I wouldn't claim this is the only approach that will work. But it was an approach BioNTech was interested in working on. Genentech participated in providing atezolizumab, the PD‐L1 inhibitor. And so, we had corporate support from both. We had our group, which was a relatively busy pancreas cancer surgery group. And so, we did the trial. You identified individual personalized neoantigens from the mutations. However, there are four major genes that dominate pancreatic cancer. We have come to understand that these big four genes may overshadow the potential for other neoantigens to emerge. How do you decide on favorite neoantigens? In terms of antigenicity, pancreas cancer has a relatively low number of antigens. 2 For most of our patients, up to 20 antigenic domains could fit in the mRNA vaccine. Most of our patients didn't have 20 expressed neoantigens. In fact, I think one patient who signed up for the trial couldn't participate because he had none. But most patients had anywhere from one to five or six.
Interestingly, the responsiveness to the vaccine didn't really correlate with how many antigens they had. The strategy that Genentech used was to look at both sequencing to identify mutations that could give rise to neoantigens, and then RNA sequencing to determine that they were expressed. The idea being that if the RNA was present, protein would be present, and therefore, there would be antigenic protein present in the cancer cell. Because the numbers that fulfilled both criteria were relatively small, it wasn't a question of having to narrow it down. We could put every mutation that was present in the patient onto their vaccine. I want to confirm that your work on the mRNA vaccine predates the COVID‐19 pandemic. While mRNA vaccines are now widely recognized due to their use in combating COVID‐19, was your system developed before the pandemic? Our mRNA vaccine was developed before the COVID‐19 pandemic. Everyone knows that mRNA vaccines are famous because of COVID‐19, but this system got started before COVID‐19. Actually, the COVID vaccine became a perfect control. Because after we started our trial and we saw some evidence that those who responded to the vaccine had a longer survival than those who didn't, one interpretation may be that those who responded have a good immune system. Those who don't respond have a bad immune system. It's not the vaccine. The vaccine is just a selection for those who have a good immune system. But all of them got the COVID vaccine. The COVID vaccine response in the pancreas tumor non‐responders was as good as it was in the pancreas tumor responders. Their immune systems were equivalent. There was no difference in their ability to react to the COVID vaccine. It was simply that some patients didn't react to their own tumor vaccine. Pancreatic cancer is a very challenging target because it is often surrounded by non‐inflamed tissue and cancer‐associated fibroblasts (CAFs). Your system seems highly effective since it targets the cancer after a pancreatectomy, which should enhance its effectiveness. Is my understanding correct? This is an adjuvant vaccine. We are talking about trying to put together a trial of a neoadjuvant vaccine. As you point out, one of the concerns in pancreas cancer is that the microenvironment of the tumor may suppress the immune infiltration of T cells or the function of T cells. And so, by doing this in an adjuvant setting, we have removed the tumor, and it is really a minimal residual disease that we're hoping to treat. That may not have as many problems in terms of the microenvironment. Can your system be implemented on a global scale? Could you elaborate on how we can move forward with this? The next step—this was a phase one trial. It was really to say, could we do this? Our goal was to get the vaccine into patients within 10 weeks, and we succeeded. The goal was to make sure that it didn't make people sick. We succeeded in this before the COVID vaccine, so we didn't yet know that an mRNA vaccine would be relatively safe. Then, the sign of efficacy was very encouraging. But again, responder versus non‐responder is a poor indication of whether a treatment ultimately will work. And so, to validate this, we're now conducting a prospective randomized trial in patients who have pancreaticoduodenectomy and will either get standard adjuvant chemotherapy or chemotherapy plus the vaccine.
Now, it's only in pancreaticoduodenectomy patients, because one thing that came out in the phase one study was that pancreaticoduodenectomy patients were more likely to respond to the vaccine and patients who had a distal pancreatectomy with splenectomy were less likely to respond to it. We now know from animal studies that splenectomy may decrease responsiveness to the vaccine. 1 We hope that pancreatic cancer patients worldwide will be able to access your system. Are there any plans to expand this treatment globally? The trial is a worldwide trial. I don't know that any sites are open in Japan yet, but if there are places that are interested, I suspect BioNTech would be happy to support them. I was in Korea 3 or 4 months ago, and they actually have one site open now, and it's open at a bunch of places in the United States. We hope to accrue several hundred patients for the phase two study if the results are good. One of the things that makes pancreas cancer research quicker is that if patients don't respond, they tend to not do well, and you get an answer quickly. If there is good evidence of efficacy at that point, I think it'll become a standard. I'm very interested in the current status because, in the Nature paper, the patients who had a good response showed 100% survival. What is the situation now? We are now at the 4‐ to 5‐year mark, with two relapses among the eight cases, both of which had shown a good response. I believe they're both still alive, but I think two have had a relapse. Six out of eight are still relapse‐free at 3, 4, or 5 years. Whereas 100% of the non‐responders relapsed. Recently, some papers have shown that the microbiota can influence immunotherapy. Do you have any data on whether the mRNA vaccination is affected by dysbiosis or the microbiome? I don't have data, but I think it's an excellent question. I suspect so; as we know, checkpoint inhibitor responses are affected by the microbiota. I suspect there will be something, but I don't have an answer. Next, I'd like to ask a more clinically relevant question. Do you believe that neoadjuvant chemotherapy is necessary for all resectable pancreatic cancers, not just borderline resectable cases? I don't (believe it). Within our group of seven HPB surgeons, there's a range of opinions. At Memorial, I think there are still some patients who go straight to surgery, some who get neoadjuvant. My personal practice is that if there's borderline disease, if there's even venous involvement that would require a major vein resection with interposition grafting, and there's a chance that it'll respond in a way that minimizes that or allows a side bite of the vein rather than a long segmental resection, we'll do neoadjuvant. But a patient with disease well away from arteries and veins and no other problem, we'll take them straight to surgery. Well, I personally believe that neoadjuvant chemotherapy might be beneficial even for stage one pancreatic cancer because, compared to breast or colon cancer, stage one pancreatic cancer still has a less favorable outcome. After surgery, even adjuvant chemotherapy may not be sufficient for stage one pancreatic cancer. How would you improve outcomes for stage one pancreatic cancer patients? I don't disagree with you at all. I think pancreas cancer—there is no cancer, maybe other than melanoma, in which a 1‐centimeter tumor is so bad. One‐centimeter lung cancer, 1‐centimeter colon cancer, gastric cancer, breast cancer, these are cured 80%, 90% of the time. Pancreas cancer, less than 50%.
Clearly, it's an advanced disease in many cases. For me, the distinction is that if you look carefully at the neoadjuvant data, there are patients who don't have tumor progression and yet never have surgery. You have to look carefully. It's not something people advertise, because it's hard to explain. But about 10% or 15% of people are made sick enough by the neoadjuvant therapy that they never come to surgery. And so, you have to trade off those patients against the fact that you avoid surgery in patients who are going to have early recurrence after surgery, which we've all seen and is very disappointing, of course. I take your point that it's a systemic disease. I think it's critical that people get chemotherapy. I'm not so sure that getting it upfront versus getting it at the end will make a difference. As you know, there are studies going on to try to address that in a randomized fashion, 3 but I don't think we have the answers. Although there have been studies, PREOPANC from the Netherlands 4 was one that suggests that neoadjuvant is better. There was also a recent study, the NorPACT Study, 5 , 6 which suggested neoadjuvant actually had a higher rate of failure to get to surgery and did not improve overall survival on an intention‐to‐treat basis. I suspect if you ask five surgeons, you'll get five different answers. Many institutions in the United States have taken the approach that everyone should get it, including some very prestigious and high‐volume centers. I don't disagree that would be one way to interpret the data. I understand in many centers in Japan, that's standard. I don't have a reason to say it shouldn't be. We tend to take a little bit more of an individualized view and say that if people are clearly resectable, with no evidence of vascular involvement, we go to surgery. Except for one of my partners who would do neoadjuvant. And in general, our patients get FOLFIRINOX for eight cycles. When I do it, I tell people, we'll reassess at the end of 2 months, so four cycles. If they're having a response by CA19‐9, and by imaging the tumor has responded dramatically, we may do surgery at 2 months. In my experience, it's not that common to see dramatic tumor shrinkage at 2 months. Markers may go down better. It takes longer, I think, for the fibrosis to resolve. Even if the cancer cells are dead, the fibrous scar is still there. It's more common that, if I'm doing it for a reason, to get the tumor away from an artery or to make a vein resection a little simpler, we would do 4 months. At 4 months, people are often getting a little worn out by FOLFIRINOX, and so we give them a break by doing a Whipple operation. In your or your colleagues' practice, what is the indication of neoadjuvant therapy for resectable pancreatic cancer? I don't do neoadjuvant for resectable disease. If a patient says to me, could I have it? My doctor says I should have it. I won't tell them that it's wrong. But in general, in my experience—and we did a neoadjuvant trial when I was at the University of Pennsylvania—I had patients who refused to participate in the trial because they said, I just want my tumor out. I don't want (to take any extra steps or add any risks)—there's a—it's not logical. I don't think it's sensible, but I think there's a visceral (need to eliminate it immediately)—I have cancer. Get it out of me as soon as possible. In general, most patients don't argue if you say you can take it out. It's the opposite, really.
When you say, I would like to do neoadjuvant, the patient says, can't you take it out first, doctor? You mentioned anatomically resectable pancreatic cancer, but recently, some researchers have referred to biologically resectable or biologically borderline cancer. Certainly, if the CA 19‐9 is elevated, even if there appears to be resectable disease—and somewhere between 500 and 1000 is my personal benchmark. If it's elevated at that level or higher, even if the tumor looks resectable, I would send them for neoadjuvant. That's a good point, thank you. I didn't point that out. I think if the markers are very high, as was noted earlier, pancreas cancer is almost always a systemic illness at diagnosis, and markers that high are going to predict early relapse. And so, we would do neoadjuvant in that setting. Since Memorial is one of the top cancer centers in the United States, do you use liquid biopsy or something similar as an indication of the biological malignancy of pancreatic cancer? We do have a liquid biopsy program. We were one of the first places to do routine sequencing of all of our tumors. We have over 100 000 sequences of all types at Memorial now. But the liquid biopsy, we tend not to use for decision making. It has been used for minimal residual disease status after surgery. In fact, there was a trial that was led by Eileen O'Reilly, who is our Chief of Medical Oncology, focusing on the pancreas. Eileen did a study looking at a vaccine approach for MRD, minimal residual disease defined either by persistent elevation of CA 19‐9 or persistent positive liquid biopsy, even with negative imaging. 7 This was a KRAS vaccine trial that was also very encouraging. We have good data that adjuvant therapy requires 6 months. 8 Less than 6 months has a higher relapse rate than a full 6 months, mostly from the British, from Neoptolemos and the European group. We tend to think of 6 months total as an important number. We don't have a lot of data for this; but if someone receives 2 months before, we tend to do 4 months after. If someone receives 4 months before, they will get 2 months after. If someone is doing well and wants to complete all 6 months before having surgery, we certainly do some total neoadjuvant. Most patients sort of get worn out before 6 months. We think there's something important in 6 months, but I can't tell you that there have been really good studies of carrying neoadjuvant into adjuvant therapy. The other thing is to look at the pathology when you resect the patient. I think for patients who have nodal disease persistent after neoadjuvant, my bias is that's a bad finding. Patients who have very little evidence of tumor response histologically when the tumor is resected, again, maybe should even have a different adjuvant therapy if the neoadjuvant didn't show much effect. I think you have to tailor this to the patients. I should say at the outset that Dan Von Hoff, who led the global trial of gem and nab‐paclitaxel, was my co‐leader of the Stand Up to Cancer group, and that was one of the projects that we supported. 10 I've been a gem and nab‐paclitaxel supporter from early on, and in fact, at the University of Pennsylvania, where I was until 8 years ago when I came to Memorial, I would say more patients got gem and nab‐paclitaxel than FOLFIRINOX. For one thing, if you're going to do trials of chemotherapy plus some other agent, because gem and nab‐paclitaxel may be a little better tolerated, it's easier to combine with another form of treatment.
That being said, our oncologists tend to go with FOLFIRINOX as their first line, whether in the neoadjuvant or the adjuvant setting. That's maybe an institutional preference as much as anything. As you know, recently, the number of elderly patients undergoing surgery has been increasing. Even octogenarians or patients over 90 are receiving pancreaticoduodenectomy. We believe that the postoperative complication rate and mortality are similar to those of younger patients. However, there must be careful patient selection. Some European papers have also mentioned the lower efficacy of adjuvant chemotherapy in elderly patients. 11 I think this difference contributes to the disparity in survival between younger and older patient groups. I think this is an increasing problem. We know that pancreas cancer is a disease of aging. At least in the United States, it doubles every decade of life between the 40s and the 90s. We see it rarely in people in their 40s, but not uncommonly in the 80s and 90s. You raised the question about chemotherapy, whether neoadjuvant or adjuvant. I think it's often the case that oncologists are concerned about giving either FOLFIRINOX or gem and nab‐paclitaxel to elderly patients. And so, it becomes a challenge. I quote my elderly patients an increased mortality, but our overall mortality rate is pretty low. And so, the number is 1% to 2%. And so, I tell them it's 2% to 4%, so a bit higher. But in general, they're unlikely to get full‐dose chemotherapy. This is really where radiation therapy may play a role in palliating people and buying a bit of time. But I think sometimes surgery is really the only alternative. And so, if you can do it safely, you have to be careful. You want to pick the patient who will tolerate a big operation. I think we would be much more hesitant to do a major vascular reconstruction as opposed to a relatively simple, straightforward, quick operation. Again, what I usually ask patients is what they're (capable of)—I don't routinely do stress testing or cardiac workup unless they have a history. I ask them about how mobile they are. I had a lady who was 89 years old. I said, can you walk a flight of stairs? She said, "I live in a four‐story walkup, but our laundry is in the basement, so I walk five floors when I carry the laundry basket up and down." She went home in 8 days. I knew she was going to do well after pancreaticoduodenectomy. I think you have to pick—if she had told me, I can't walk a flight of stairs without getting out of breath, and I have home oxygen, or I'm on four different heart medicines, then I think you need to be more careful and pick the patients carefully. Some Japanese researchers advocate the usefulness of both preoperative and postoperative rehabilitation—early rehabilitation, including ERAS (Enhanced Recovery After Surgery) and early recovery. However, in the United States, the postoperative hospital stay is often very short. Are there any measures to improve the postoperative period? Yes. I think prehab has not caught on very much. One of the challenges: if you're going to do neoadjuvant, I think that's a good opportunity to also do prehabilitation. If you're not going to do it, there's not too much evidence. It's like nutritional supplementation. You need weeks to really impact the patient, not a few days. And so, I'm not sure about prehabilitation for a week and then doing their surgery. If you're going to spend 2 months doing neoadjuvant, I think it's a good idea.
But in general, if you're going to do surgery (pancreaticoduodenectomy) first, probably you just need to do surgery. Again, the length of stay, on average, is 7–10 days in our group. I wouldn't say it necessarily correlates that strongly with age. Again, I always say if I have an 80‐year‐old woman and a 50‐year‐old man, the 80‐year‐old woman will go home sooner every time. It's not just age—some of that's selection, because there are very few 50‐year‐olds I won't operate on. There are 80‐year‐olds who I would say no to. Some of it's selection. Yes. It's important. Since we know that surgery by itself does not do very well, multimodality therapy is really critical for everyone to have the best long‐term survival. I think it's often the case that our oncologists don't want to treat with neoadjuvant, but if patients come through a big operation okay, it gives them a little more confidence to then give adjuvant. That's, again, our institution. I won't say that's a national standard. But certainly, there are places that do routine neoadjuvant, even in older patients. Actually, we previously published a paper comparing chemotherapy alone versus surgery in patients over the age of 80. In that study, we found that the completion of adjuvant chemotherapy is critical for long‐term survival. As you mentioned, adjuvant chemotherapy may be necessary for long‐term survival in very elderly patients. But in that case, what would you recommend for adjuvant chemotherapy—still FOLFIRINOX? One of the things—I don't get any support from any pharmaceutical company, so this is not (about promoting any specific treatment)—and I'm not a medical oncologist, so I don't have to give any chemotherapy. But I think the studies of gemcitabine‐capecitabine, gem‐cape, had results markedly better than single‐agent gem and not that much worse than FOLFIRINOX or gem and nab‐paclitaxel in the adjuvant setting. In older patients, that's actually where I try to steer the oncologist. What they'll often do is do a cycle of gem alone, and if the patient tolerates it, then add the capecitabine. I had a 90‐year‐old man. Again, it shows the selection. He lived about six blocks from the oncologist's office, and he would walk over to get his chemotherapy and walk back to his apartment after his Whipple operation, and he got 6 months of adjuvant gem‐cape. I think, again, if you select the patients and recognize that it's not only FOLFIRINOX or gem and nab‐paclitaxel, there is another option. Even single‐agent gemcitabine has some benefits. I have a 93‐year‐old who I did a Whipple on in the last few months who's now getting single‐agent gem. Doing fine with it. The average survival of a 90‐year‐old is only 4 years, even if they don't die of pancreas cancer. Hopefully, we'll at least get some of that. I should say that I've been taught by our radiation oncologists; it's not something that I practice, but I have seen a lot of patients now. I think one of the things we know is that radiation therapy can cure cancer, but the doses required generally will kill the patient. And so, the trick is to find a dose of radiation that will inhibit the cancer but not hurt the patient. Ablative techniques are really an approach to delivering a higher dose to the tumor without doing too much damage to surrounding tissues.
In radiation therapy, there's been a great deal of focus on the type of particle (traditional gamma, proton therapy, carbon ion) and on various ways of focusing the beam (intensity‐modulated radiation therapy, stereotactic body radiation therapy). Probably, these are not as important as the dose. Regardless of the particle type, it's critical to get a dose of approximately 100 gray (Gy), or 10 000 centigray, whereas the dose that we traditionally gave for pancreas cancer was around 5000 to 5400 centigray. That dose is really a palliative dose, and there's pretty good evidence that it will slow tumors down but rarely result in good long‐term control. The other thing is that we know that pancreas cancers in the majority of patients have perineural invasion and lymphatic invasion, and so the tumor is not a very small, focused area. It's somewhat diffuse. Stereotactic approaches have the advantage of being tremendously focused. But for a disease that's going to be at the edges, that may not be a good thing. In fact, there's data that stereotactic body radiation therapy may be inferior to more traditional approaches because of the inability to cover the border. 12 And so, the ablative approaches that our radiation therapists use are designed to deliver up to about 100 Gy to the tumor and to cover the surrounding area, and yet to do it with a dose that doesn't cause patients to get very sick; we are doing this more and more for local control, either preoperatively or in patients who can't have surgery. In Japan, chemoradiotherapy is usually indicated only for locally advanced pancreatic cancer. What are your thoughts on its indication for resectable pancreatic cancer? We actually have a similar belief. I wouldn't say it's universal. There are several very good high‐volume centers led by very experienced surgeons in which radiation therapy, particularly in the neoadjuvant setting, is routine. MD Anderson, of course, pioneered that for many, many years, and a number of their disciples are at other institutions where that would be standard. That's not our approach. For patients, even if they have a neoadjuvant approach, neoadjuvant chemotherapy, we don't do radiation therapy unless we're in a situation of locally advanced disease or borderline disease that's not responding to chemotherapy. But ablative radiation therapy does not prevent subsequent surgery. On occasion, that becomes a step. Patients routinely get 3–4 months of chemotherapy. If they haven't responded well enough, they may get ablative radiation therapy and then still potentially have surgery or not. Now, for patients with locally advanced disease, as I think all surgeons know, there are patients in whom no response to chemotherapy is going to make a tumor that's completely encasing the celiac or the SMA go away. In those patients, ablative radiation can result in fairly impressive long‐term local control. Of course, it does nothing for systemic disease. In fact, I just saw a patient with this problem this week.
Increasingly, we're being asked to do palliative hepaticojejunostomies 2 or 3 years after a patient with locally advanced disease had a wall stent placed, got chemo, got ablative radiation, and now their wall stent has clogged up; they've had it cleaned out and had a new stent put inside it, and now that's gotten clogged up, and they keep coming back with cholangitis, and we're asked to do a hepaticojejunostomy 2 or 3 years after their diagnosis, knowing that the cancer is still there, but it's not growing or progressing. I've done a few of them, and several of my partners have done some. Again, operating on patients with locally advanced disease is not something I'm encouraging. But I think it's an indication of how, in some patients, ablative radiation really buys people time. We published a paper in which we looked at about 100 patients who had ablative radiation for locally advanced or borderline disease versus 100 who had surgery. The groups weren't really comparable because, of course, the surgical group tended to be borderline and the radiation group tended to be locally advanced, with arterial involvement in the vast majority of the radiation group and in only a small minority of the surgical group. But we then looked at the pattern of recurrence, and actually, the local control rate with radiation was better than with surgery. Now, metastasis was more common if you left the tumor there and didn't take it out; if you leave live cancer for 2, 3, or 4 years, it's not surprising. Furthermore, we know that patients who have locally advanced disease are more likely to have metastatic disease than borderline patients. These groups weren't totally comparable. But it's interesting that the local control rate is remarkably good. The other thing we sometimes do: we'll have a patient who, a year or two after a pancreaticoduodenectomy, has local recurrence. Frequently, it's at the artery. The margin that's most commonly positive is the uncinate margin, and they've now got local recurrence. CA 19‐9 may be going up. No metastasis. Of course, they always ask, can you operate? The answer is, no, we can't. But they can often do ablative treatment for that. That appears to, again, at least hold it in check for years. That may be a good approach: not curative, but palliative. Our standard is 4 months of chemotherapy. In part, I think it gets back to the question of neoadjuvant. One of the advantages of neoadjuvant chemotherapy is that there's a group of patients who are going to have early systemic relapse after surgery if you do a surgery‐first approach. I think the same is true of radiation. You'd hate to give someone a course of radiation therapy first; during that time, they sometimes use capecitabine as a potentiator, but they don't get full‐dose chemotherapy during the radiation. If you assume that they have locally advanced disease, they certainly have microscopic systemic disease. They should get chemotherapy first, and then if they are able to get radiation after that, that becomes a way to get the majority of patients out to around 2–3 years, which is not that different from how they do with surgery. One of the questions that we really haven't answered: in animals, there's very good data on a so‐called abscopal effect. You radiate the tumor. It releases antigens. The checkpoint inhibitor then can stimulate and help the immune system. I don't think we have great evidence of that in pancreas cancer in humans.
But I don't think it's proven. I think it's well worth studying, and I know there are trials going on. We're going to wrap up this conversation. Thank you very much for your insightful elaboration and explanation about pancreatic cancer. We face significant challenges in curing pancreatic cancer, but today's discussion with Professor Drebin provides a glimmer of hope. In the future, we may be able to eradicate pancreatic cancer. Thank you very much for your active discussion. Koshi Mimori: Writing manuscript; Tsutomu Fujii: Interviewer; Masayuki Sho: Interviewer; Itaru Endo: Interviewer; Ken Shirabe: Study supervision; Yuko Kitagawa: Principal investigator.
|
Other
|
other
|
en
| 0.999998 |
PMC11693556
|
Colorectal cancer (CRC) is the third most common malignancy and the second leading cause of cancer‐related deaths worldwide. An estimated 1.8 million patients were newly diagnosed worldwide in 2018, and approximately 880 000 died due to CRC. The 5‐year survival rate for patients with stage I CRC is over 90%, while that for patients with stage IV CRC is only 11%. 1 Therefore, the early detection and diagnosis of CRC are directly related to its prognosis, and an accurate evaluation and prediction of the prognosis, including malignancy and the risk of recurrence, is important for selecting the appropriate treatment for CRC. However, few factors are more effective and useful for evaluating the malignancy of CRC than clinicopathological factors, such as tumor differentiation, tumor depth, lymph node metastasis, distant metastasis, and vascular invasion. It has been reported that most cancers have structural heterogeneity, which is recognized as an important issue underlying cancer aggressiveness and resistance to chemotherapy. 2 Thus, the quantification of the structural abnormality of a tumor has the potential to serve as a biomarker for cancer treatment. Recently, studies have focused on using texture analyses of medical images to quantify the structural abnormality of tumors and have reported that textural features in medical images can be a biomarker for cancer treatment and prognosis. 3 Computed tomographic colonography (CTC) is widely used for the preoperative examination of CRC, as it is minimally invasive, does not require the skilled hands needed for colonoscopy or barium enema, and is highly reproducible. 4 To our knowledge, no study has reported that the structural abnormality of tumors measured on CTC images is associated with the pathologic features and prognosis of CRC. The present study investigated whether texture analyses of CRC using CTC images are useful for diagnosing CRC malignancy. This retrospective study was performed with the approval of the Institutional Review Board of Chiba University Graduate School of Medicine. All patients provided their written informed consent to undergo a contrast‐enhanced CTC (CE‐CTC) examination, but specific consent for participation was not required because of the retrospective nature of this study. We retrospectively identified 465 patients with CRC who underwent CTC before curative surgery for CRC between January 2014 and December 2017. The following patients were excluded: (1) those who underwent only non‐CE‐CTC; (2) those for whom imaging analysis was not available in the medical records; and (3) those who received neoadjuvant chemotherapy (NAC) or neoadjuvant chemoradiotherapy (NACRT) before surgery. In total, 263 patients were eligible for this study. All patients in this study underwent preoperative colonoscopy. All tumor locations and sizes were confirmed by colonoscopy and CTC images. Height was not measured in this study. The pre‐treatment for the CE‐CTC examination consisted of a change to a low‐residue diet the day before the examination, 100 mL of iodinated oral contrast agent after each meal, and 250 mL of magnesium citrate at 9 p.m. Before the CTC examination, a rectal tube was inserted, and carbon dioxide gas was pumped into the rectum and colon at a constant pressure of 20 mmHg using a dedicated device (PROTOCO2L; BRACCO Imaging S.p.A., Milan, Italy). CTC was performed using a 64‐section multidetector row CT scanner (Revolution EVO; GE Healthcare, Milwaukee, WI, USA).
Contrast medium (300 mgI/mL, 600 mgI/kg) was injected at 6.0 mL/s, and images were taken after 80 s with a rotation time of 0.5 s and a pitch factor of 0.984. All images were reconstructed using a standard reconstruction algorithm with a slice thickness of 0.625 mm and a reconstruction interval of 0.625 mm, and the FOV was sized to fit the patient. Imaging analyses were performed using the Attractive imaging analysis software program (PixSpace Ltd., Fukuoka, Japan). When the DICOM images of the portal‐phase CTC were imported into the imaging analysis software program, a virtual endoscopic image was automatically constructed. The tumor site could then be identified from the constructed virtual endoscopic image, and an arbitrary cross‐sectional view could be selected. In this study, we manually drew a region of interest (ROI) on a multiplanar reconstruction (MPR) image of the same area according to the maximum diameter of the tumor. Image analyses were performed by an independent observer (H.M., with 8 years of experience in CT interpretation). Eight texture parameters were calculated in the ROI area: the fractal dimension (FD); the histogram parameters skewness, kurtosis, and entropy; and the gray‐level co‐occurrence matrix (GLCM) parameters GLCM‐correlation, GLCM‐autocorrelation, GLCM‐entropy, and GLCM‐homogeneity. We examined inter‐operator reproducibility using the data of 20 patients randomly selected from the 263 patients. Two observers (H.M. and T.T.) measured the parameters of the tumors using the aforementioned techniques, and the intraclass correlation coefficient (ICC) was calculated. All of the parameters (FD, skewness, kurtosis, entropy, and the GLCM parameters) had an ICC of 0.7 or higher. A fractal analysis (FA) is used to quantify the complexity of clinical images, and the numerical value obtained with an FA is called the fractal dimension (FD). The FD was measured using the box‐counting method 5 and is defined by the following equation: $N_L = K L^{-\mathrm{FD}}$, where $L$ is the box size and $N_L$ is the number of boxes of size $L$ required to cover the object. $\log K$ is the y‐intercept obtained by linear regression of a log–log plot of $N_L$ versus $L$. This study used three histogram parameters: skewness, kurtosis, and entropy. Skewness expresses whether a distribution is symmetric: it is 0 for a symmetric distribution, while positive and negative values indicate right‐skewed and left‐skewed (asymmetric) distributions, respectively. Kurtosis indicates whether a distribution is taller or shorter than a normal distribution: a distribution with high kurtosis has a higher peak, whereas one with low kurtosis is flatter. Entropy quantifies heterogeneity and is an indicator of the amount of information in an image; it becomes larger as the pixel density values become more varied. 6 The GLCM is an efficient texture analysis method proposed by Haralick that uses second‐order statistics to characterize the properties of two or more pixel values occurring at particular locations. 7 It is widely used as a powerful tool because of its ability to identify second‐order spatial relationships between pixels or voxels in input image data. 8 The GLCM is a matrix in which P ( i , j ) describes the probability of a pair of gray levels ( i and j ) occurring in an image. All gray‐level pairs are separated by a certain distance in a certain direction.
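Because the study's imaging software is proprietary, the following is a minimal, illustrative Python sketch of the box‑counting estimate described above. It assumes a 2‑D binary ROI mask (e.g., a thresholded lesion segmentation), whereas the study's software may operate directly on gray‑level data, and the box sizes are arbitrary example values.

```python
import numpy as np

def box_counting_fd(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the fractal dimension FD of a 2-D binary mask via the
    box-counting method: N_L = K * L**(-FD), so FD is the negative
    slope of log(N_L) against log(L)."""
    counts = []
    for L in sizes:
        # Trim so the mask tiles evenly into L x L boxes, then count
        # the boxes containing at least one foreground pixel.
        h, w = (mask.shape[0] // L) * L, (mask.shape[1] // L) * L
        boxes = mask[:h, :w].reshape(h // L, L, w // L, L)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, intercept = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope  # the intercept corresponds to log K

# Quick sanity check on a filled disc, whose box-counting dimension is ~2:
yy, xx = np.mgrid[:256, :256]
disc = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
print(round(box_counting_fd(disc), 2))  # approximately 2.0
```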
The GLCM is an efficient texture analysis method proposed by Haralick that uses second-order statistics to characterize the co-occurrence of pairs of pixel values at particular relative locations. 7 It is widely used as a powerful tool because of its ability to identify second-order spatial relationships between pixels or voxels in input image data. 8 The GLCM is a matrix in which P(i, j) describes the probability of a pair of gray levels (i and j) occurring in an image, with all gray-level pairs separated by a certain distance in a certain direction. In this study, the GLCM was computed at a distance of 1 voxel with direction angles of 0°, 45°, 90°, and 135°, and each texture feature value was taken as the average of the GLCM feature over the four directions. The formulae for the texture feature values used in this study are as follows 9 :

$$\mathrm{Correlation} = \sum_{i=1}^{G}\sum_{j=1}^{G}\frac{(i-\mu)(j-\mu)\,P(i,j)}{\sigma^{2}}$$

$$\mathrm{Autocorrelation} = \sum_{i=1}^{G}\sum_{j=1}^{G} ij\,P(i,j)$$

$$\mathrm{Entropy}\ H = -\sum_{i=1}^{G}\sum_{j=1}^{G} P(i,j)\,\log_{2}P(i,j)$$

$$\mathrm{Homogeneity} = \sum_{i=1}^{G}\sum_{j=1}^{G}\frac{P(i,j)}{1+(i-j)^{2}}$$

where G is the number of gray levels, and μ and σ² are the mean and variance of the GLCM. The histopathological evaluation of the surgical specimen was performed after surgery by board-certified pathologists at our institute. The tumor stage (T), nodal stage (N), and lymphatic and vascular invasion status of each specimen were evaluated, and these histopathological features were assessed using the Union for International Cancer Control (UICC) TNM classification of colorectal cancer. 10 All statistical analyses were performed using the EZR software program (Jichi Medical University Saitama Medical Center, Saitama, Japan). 11 Patients were divided into two groups according to the histological findings, and the Mann–Whitney U test was applied for the comparisons. For the comparison with the overall survival (OS), the median value of each texture parameter was used as the cutoff value. A Kaplan–Meier analysis was performed for the OS, and a Cox regression model was used; hazard ratios (HRs) are expressed with 95% confidence intervals (CIs). p < 0.05 was considered statistically significant.
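As an illustration of the four GLCM features defined by the formulas above, the sketch below builds a symmetric GLCM at a distance of 1 pixel for each of the four direction angles and averages the resulting feature values. It is written in Python/NumPy for clarity; the 16-level quantization and the symmetrization step (which makes a single mean μ and variance σ² applicable) are our own illustrative assumptions, not the settings of the software used in the study.

```python
import numpy as np

def glcm_features(img, levels=16, distance=1):
    """Average GLCM correlation, autocorrelation, entropy, and
    homogeneity over the 0/45/90/135-degree directions."""
    # quantize the image to a small number of gray levels
    q = np.floor(img.astype(float) / max(img.max(), 1) * (levels - 1)).astype(int)
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]  # 0, 45, 90, 135 deg
    feats = np.zeros(4)
    rows, cols = q.shape
    for dr, dc in offsets:
        P = np.zeros((levels, levels))
        for r in range(rows):
            for c in range(cols):
                r2, c2 = r + dr * distance, c + dc * distance
                if 0 <= r2 < rows and 0 <= c2 < cols:
                    P[q[r, c], q[r2, c2]] += 1
        P = P + P.T          # symmetrize so one mean/variance applies
        P /= P.sum()         # normalize to joint probabilities P(i, j)
        i, j = np.indices(P.shape)
        mu = (i * P).sum()
        var = (((i - mu) ** 2) * P).sum()
        nz = P > 0
        feats += np.array([
            ((i - mu) * (j - mu) * P).sum() / var,   # correlation
            (i * j * P).sum(),                       # autocorrelation
            -(P[nz] * np.log2(P[nz])).sum(),         # entropy
            (P / (1.0 + (i - j) ** 2)).sum(),        # homogeneity
        ])
    return feats / len(offsets)

# toy usage on a random 8-bit "ROI"
roi = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(glcm_features(roi))
```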
This study included 156 men and 107 women with a median age of 70 (range: 22–101) years. The median tumor size was 35 (range: 3–130) mm. The patients' characteristics are listed in Table 1. Associations between the texture parameters and pathological features are shown in Table 2. All texture parameters except skewness showed significant differences between the pT1–2 and pT3–4 groups. GLCM-homogeneity showed a significant difference between the N-negative and N-positive groups (p = 0.004). GLCM-correlation and GLCM-homogeneity showed significant differences between the Ly-negative and Ly-positive groups (p = 0.001 and 0.012, respectively). FD, GLCM-entropy, GLCM-correlation, and GLCM-autocorrelation showed significant differences between the V-negative and V-positive groups (p = 0.001, 0.033, 0.021, and 0.046, respectively). For the survival analyses, the population was divided into two groups using the median of each texture parameter as the cutoff value. In the comparison of overall survival, GLCM-correlation and GLCM-homogeneity showed significant differences (Table 3). In the Kaplan–Meier analysis, patients with high GLCM-correlation (≥0.708) tumors or high GLCM-homogeneity (≥0.098) tumors showed a significantly worse OS than the others (p = 0.001 and 0.04, respectively). Table 4 shows the results of the univariate and multivariate analyses of tumor parameters for the OS. A univariate analysis using Cox's regression model showed that tumor depth, lymph node metastasis, venous invasion, GLCM-correlation, and GLCM-homogeneity were significantly correlated with the OS. A multivariate analysis demonstrated that lymph node metastasis and GLCM-correlation were independent prognostic factors for the OS (p = 0.001 and 0.021, respectively). Both GLCM-correlation and GLCM-homogeneity could be included in the multivariate analysis because they were only weakly correlated with each other (correlation coefficient = 0.309). In this study, we used the term "malignancy" to refer to the prognosis of the tumor. Malignant tumors generally have structural heterogeneity, which is recognized as an important factor underlying cancer aggressiveness and chemotherapy resistance. Hayano et al. 12 reported that structural heterogeneity leads to a heterogeneous blood supply within the tumor, which may result in a hypoxic tumor environment, and further found that hypoxia and necrosis contribute to intratumoral heterogeneity by increasing the number of low-density areas within the tumor. Intratumoral heterogeneity may therefore be associated with the treatment response, the identification of drug targets, and survival. Thus, quantification of the structural abnormality of a tumor has the potential to serve as a biomarker for cancer treatment. With the wide availability of CT examinations in clinical practice, the analysis of tumor structural heterogeneity on CT images is expected to become a useful and practical biomarker for cancer. Several previous reports have shown that the heterogeneity of tumor structures on CT correlates with the treatment response and prognosis in head and neck cancer, esophageal cancer, lung cancer, and renal cell carcinoma. 13, 14, 15, 16, 17, 18, 19 In the present study, we measured the heterogeneity of CE-CTC images using a texture analysis, which is useful for quantifying the complexity, direction, and contrast variation in digital images. A higher GLCM value measured on CE-CTC images was significantly associated with the malignancy of CRC and a worse prognosis, and GLCM-correlation was shown to be an independent prognostic factor for the OS in a multivariate analysis. This suggests that GLCM may reflect intratumoral heterogeneity, which is multifactorial and has been reported to be related to several factors, including hypoxia, necrosis, angiogenesis, and genetic variation. 20, 21 Bum et al. 22 reported that GLCM is a strong predictor of the OS in pancreatic cancer, and Chen et al. 23 reported that GLCM may be a predictor of the OS in small-cell lung cancer. However, there are currently no reports showing an association between GLCM and the long-term outcomes of patients with CRC. As mentioned above, in the current practice of CRC, the evaluation of malignancy is determined by pathological factors, such as tumor differentiation, tumor depth, lymph node metastasis, and distant metastasis, and there are few useful biomarkers that can assess the malignancy and prognosis of cancer before surgery. We therefore believe that if GLCM makes it possible to predict the malignancy of cancer before surgery, more appropriate treatment may be selected. The present study used preoperative CE-CTC images, which are relatively easy to obtain and differ little between centers. CTC is useful in the preoperative examination of CRC because it is less strongly affected by patient factors, such as body size and bowel shape and length, than colonoscopy, and it can evaluate the proximal bowel even in cases of severe stenosis where the endoscope cannot pass. In the present study, GLCM, a texture parameter obtained from CE-CTC images, was suggested as a new biomarker for detecting and stratifying the malignancy of CRC. However, our study had several limitations. First, it used single-center, retrospective data; the findings should therefore be confirmed through multicenter prospective investigations. Second, the method of tumor ROI delineation was subjective and was performed by a single observer.
Therefore, a new, reproducible, and reliable tumor segmentation method is required. Third, our analysis used two-dimensional CE-CTC images; a volumetric analysis may be more representative of the tumor structure, so a three-dimensional analysis approach should be developed. In this study, GLCM was the only preoperatively measurable biomarker that correlated with the prognosis of colorectal cancer. GLCM derived from texture analyses of CE-CTC images may therefore become a viable biomarker for assessing the malignancy of CRC before surgery. Hisashi Mamiya: Writing – original draft preparation; formal analysis and investigation; conceptualization; visualization; data curation. Toru Tochigi: Writing – review and editing; conceptualization; methodology; funding acquisition; resources; project administration. Koichi Hayano: Writing – review and editing; conceptualization; methodology; supervision; project administration. Gaku Ohira: Supervision; conceptualization; validation. Shunsuke Imanishi: Supervision; conceptualization. Tetsuro Maruyama: Supervision; conceptualization; data curation. Yoshihiro Kurata: Supervision; conceptualization. Yumiko Takahashi: Supervision; conceptualization. Atsushi Hirata: Supervision; conceptualization. Hisahiro Matsubara: Supervision; conceptualization. There are no relevant financial or nonfinancial relationships to disclose. Prof. Matsubara is a member of the Editorial Board for AGSurg. Approval of the research protocol: N/A. Informed Consent: All patients provided written informed consent to undergo a contrast-enhanced CTC examination, but consent for study participation was not required because of the retrospective nature of this study. Registry and the Registration No. of the study/trial: N/A. Animal Studies: N/A.
PMC11693581
Anastomotic recurrence at the site of surgery for colorectal cancer (CRC) is speculated to occur due to the implantation of detached malignant cells, although the exact mechanism remains unclear. This type of recurrence is considered a local recurrence and can significantly affect the quality of life and lead to serious outcomes. 1 The local recurrence rate after rectal cancer surgery is approximately 5%–10%. 2 Local recurrence remains a relatively common form of recurrence, along with liver metastasis, but is now surpassed by lung metastasis. 3 Previous studies have reported the presence of intraluminal exfoliated cancer cells (ECCs) in patients with CRC. 4 To prevent the spread of malignant cells within the bowel lumen, surgeons have, for roughly a century, avoided touching or manipulating tumors directly. 5, 6 The use of a standard circular stapler for anastomosis has been suggested to increase the accumulation of intraluminal ECCs at the anastomotic site, thereby increasing the risk of recurrence. 7 Intraluminal irrigation is widely used worldwide to prevent ECC implantation in patients with CRC. Several prospective clinical trials 8 and meta-analyses 9 have evaluated the effectiveness of intraluminal lavage in preventing local recurrence, but its effectiveness remains controversial. A number of these studies have described the use of varying irrigation fluid volumes to efficiently remove ECCs; however, owing to the small sample sizes, no standard has yet been established for the appropriate volume or type of irrigation to use. 9, 10, 11 Our institution, like many others, performs intraluminal washout for sigmoid colon resection with anastomosis using a double-stapling technique (DST), a technique similar to that used in rectal cancer surgery. This study aimed to determine the necessary and optimal methods (irrigation volume and solution type) for intraluminal washout based on clinicopathological factors, including tumor location. A total of 140 consecutive patients, comprising 91 men and 49 women with an average age of 68.9 years, who underwent sigmoidectomy or anterior resection for sigmoid colon cancer or rectal cancer at the University of Yamanashi (Yamanashi, Japan) between July 2018 and December 2022, were included in the study. Of these patients, 56 had sigmoid colon cancer and 84 had rectal cancer. Clinicopathological findings were obtained from the hospital clinical records. The location of the tumor and the position of its lower edge were determined using enema examination. According to the Japanese Classification of Colorectal Carcinomas, the rectum was divided into three sites: the rectosigmoid (RS), upper rectum (above the peritoneal reflection, Ra), and lower rectum (below the peritoneal reflection, Rb). 12 Undifferentiated histological types included poorly differentiated mucinous adenocarcinomas and signet ring cell carcinomas. Local recurrence was defined as anastomotic, pelvic (including peritoneal dissemination confined to the pelvis), and lateral lymph node recurrence. Our institute followed a standard preoperative bowel preparation procedure for patients without bowel obstruction, administering a combination of magnesium citrate (250 mL) and sodium picosulfate solution (0.75%, 10 mL) the day before surgery. The length of the distal free margin (DM) was determined based on the tumor location as follows: 10 cm at the sigmoid colon, 3 cm at the RS and Ra, and 2 cm at the Rb. 12
During the surgery, the resected specimen was gently stretched and fixed with pins to measure the length of the DM. Before dissection, the distal rectum was clamped to occlude the rectal stump from the tumor. Intraluminal washout was performed using a transanally inserted Nelaton catheter (8.5 mm in diameter) (Izumo Health Co., Ltd., Japan), with either physiological saline or distilled water. During washout, samples were collected at four time points: before washout and after irrigation with 1000, 1500, and 2000 mL of physiological saline or distilled water. Physiological saline was used as the irrigation solution from July 2018 to December 2020, and distilled water was used from January 2021 to December 2022. Samples were collected by injecting 20 mL of physiological saline solution into the lumen of the intestine. The collected samples were centrifuged at 3000 rpm for 5 min, and the resulting clots were examined using Papanicolaou staining. Two experienced cytotechnologists analyzed the stained samples to confirm the diagnosis, which was subsequently reviewed by a pathologist. The samples were classified according to the Papanicolaou classification system, with classes I, II, and III categorized as non-malignant and classes IV and V as malignant. Statistical analyses were conducted using Prism (v.9, GraphPad Software, San Diego, CA). Comparisons were made using the chi-square test or Fisher's exact test and the Mann–Whitney U test, as appropriate. Logistic regression analysis was performed to determine risk factors for positive ECCs at each washout point. Statistical significance was set at p ≤ 0.05.
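For illustration only, the logistic regression just described could be set up as in the following Python sketch; every variable name and value here is a simulated stand-in (the study data are not reproduced), and the model formula simply mirrors two of the candidate risk factors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 140
df = pd.DataFrame({
    "dm_cm": rng.uniform(1, 12, n),       # distal margin length (cm)
    "tumor_mm": rng.uniform(10, 90, n),   # tumor size (mm)
})
# simulate the hypothesis: shorter DM and larger tumors raise the
# odds of positive ECCs at a given washout point
lin = -1.0 - 0.35 * df["dm_cm"] + 0.04 * df["tumor_mm"]
df["ecc_positive"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

model = smf.logit("ecc_positive ~ dm_cm + tumor_mm", data=df).fit(disp=0)
print(model.summary2().tables[1])  # coefficients, p values, 95% CIs
print(np.exp(model.params))        # odds ratios
```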
Table 1 shows the clinicopathological characteristics of the study participants. Among the 84 patients diagnosed with rectal cancer, 75% (n = 63) received some form of preoperative treatment, whereas only 20% (n = 11) of the 56 patients diagnosed with sigmoid colon cancer did. With regard to tumor location in the rectal cancer group, 35 cases were located at the RS, 19 at the Ra, and 30 at the Rb. No significant differences were observed in preoperative bowel preparation, type of irrigation solution used, surgical approach, tumor size, histological type, or depth between the rectal and sigmoid colon cancer groups. Notably, the DMs were significantly shorter in patients with rectal cancer than in those with sigmoid colon cancer (p < 0.001). Figure 2 illustrates the positive rates of ECCs detected in the perfusate samples collected at the four time points, presented both as an overall percentage and by tumor-occupied site. Overall, 46.4% of all patients showed positive ECCs before the washout procedure. This rate progressively decreased with increasing perfusate volume: it was 18.5% after 1000 mL, 10.0% after 1500 mL, and finally 7.1% after a 2000 mL washout. When analyzing the data by tumor site, we observed differing patterns. For patients with sigmoid colon cancer, the positive rate of ECCs was 21% before washout, which decreased to 1.8% (a single case) after a 2000 mL washout. By contrast, 63% of patients with rectal cancer had ECCs before washout, and even after a 2000 mL washout, 10.7% of these patients still exhibited ECCs. The relationship between the distance from the anal verge to the tumor and the DM is shown in Figure 3; both variables were significantly and positively correlated (p < 0.001, R² = 0.01). Figure 3B shows the correlation between the tumor distance from the anal verge and the detection of ECCs. A significant increase in positive ECCs was found in patients whose tumors were closer to the anal verge, both before and after the 1000 mL washout (p < 0.05); however, no significant difference was observed for the 1500 and 2000 mL washouts. The DM lengths in patients with ECCs before the washout procedure are shown in Figure 3C. Notably, one patient with sigmoid colon cancer who tested positive for ECCs following a 2000 mL washout had a DM length of 11.5 cm. An analysis of risk factors for the presence of ECCs before washout was then performed. In the univariate analysis, no significant differences were found in age, sex, type of irrigation solution, surgical approach, histological type, or the presence of lymph node metastasis, whereas differences were observed in the presence of preoperative treatment, preoperative bowel pretreatment status, tumor location, DM, tumor size, and tumor depth at the four time points mentioned above (Table 2). Multivariate analysis showed that a shorter DM (p < 0.03) and a larger tumor size (p < 0.03) were independent risk factors for positive ECCs after a 2000 mL irrigation (Table 3). The reported local recurrence rate after rectal cancer surgery ranges from 5% to 10%, 2, 3 with the majority of cases occurring within the first 2 years after surgery; specifically, 60%–80% recur in the first year and 90%–93% in the second year. 13 In this study, local recurrence was observed in 1.4% of all cases (two of the 140 cases) despite the relatively short observation period. Both cases involved patients with rectal cancer; that is, local recurrence occurred in 2.4% (two of the 84) of rectal cancer cases. These findings suggest that intraluminal washout may effectively reduce local recurrence rates. Of the two cases of local recurrence, one involved an anastomotic site that remained positive for ECCs after a 2000 mL intraluminal washout; in the other, peritoneal dissemination confined to the pelvis occurred in a patient who had a preoperative abscess due to microperforation at the tumor site (data not shown). In a previous study, the effectiveness of intraluminal washout with physiological saline in eliminating ECCs was evaluated in a limited number of patients with sigmoid colon and rectal cancers requiring a DST. 14 That study reported that while patients with rectal cancer required an intraluminal washout of 1000 mL or more, such a volume may not be necessary for patients with sigmoid colon cancer. Building on these findings, the present research expanded the sample size, increased the irrigation volume, included distilled water as an irrigation solution, and examined risk factors for positive ECCs. As a result, in patients with sigmoid colon cancer with adequate preoperative bowel preparation, a long DM, and a small tumor size, a 1000 mL intraluminal washout was considered sufficient. By contrast, in patients with rectal cancer with a short DM and a large tumor size, an intraluminal washout of 2000 mL or more was necessary. This study is the first to investigate the volume of bowel irrigation and two types of irrigation solution in patients with sigmoid colon and rectal cancers who require a DST. Intraluminal washout during rectal cancer surgery has been a longstanding surgical practice. 6 Viable cancer cells, detached from the tumor, are present within the intestinal lumen adjacent to the tumor site. 4, 7, 11, 15
Many surgeons have explored the advantages of intraluminal washout in preventing local recurrence. 16 Examination of tissue collected with the DST circular stapler has demonstrated that intraluminal washout eliminates disseminated malignant cells. 10, 17 However, the efficacy of intraluminal washout in eliminating ECCs depends on the irrigation volume: previous reports have not consistently confirmed the disappearance of malignant cells in all patients, even with an irrigation volume of 500 mL of physiological saline, 17, 18 and the irrigation volume required to completely eradicate disseminated malignant cells in CRC has not yet been determined. Previous studies have addressed appropriate irrigation solution volumes for patients with rectal cancer. Maeda et al. reported that 2000 mL of irrigation solution is necessary for ECC elimination, 10 and other researchers have recommended 1500 mL of physiological saline to reduce the risk of local recurrence. 19 Our results indicate that after a 2000 mL intraluminal washout, positive ECCs were still observed in 1.8% of patients with sigmoid colon cancer and 10.7% of patients with rectal cancer. This suggests that a minimum intraluminal washout of 1000 mL is necessary for sigmoid colon cancer and 2000 mL for rectal cancer to eliminate detached malignant cells. Free malignant cells spread from the tumor into the colonic lumen, so their intraluminal spread is a matter of concern, and tumor cells were expected to be detected more frequently in patients with larger tumors. Indeed, the multivariate analysis in this study identified tumor size as an independent risk factor for positive ECCs at the 2000 mL intraluminal washout (Tables 2, 3). This finding suggests that mechanical stimulation of the tumor during surgery, such as contact and compression, may promote cell shedding in patients with larger tumors; such manipulation is difficult to avoid, particularly in patients with lower rectal cancer, in whom dissection must proceed in the pelvic cavity beyond the tumor. Regarding the surgical approach, Hasegawa et al. reported that open surgery significantly increases the presence of ECCs in the bowel compared with laparoscopic surgery. 20 Open surgery, considered more traumatic, may facilitate the spread of malignant cells into the bowel. In this study, laparoscopic and robotic techniques accounted for the majority of surgeries, with only a small percentage (4.5%) performed as open surgery, and no statistically significant differences were observed in the prevalence of ECCs between these methods. To determine whether laparoscopic and robotic surgeries reduce ECCs, a randomized study with a sufficient sample size would be necessary; however, considering the extensive international standardization and distinct advantages of laparoscopic and robotic surgeries, conducting such investigations may not be feasible. Additionally, positive ECCs did not show statistically significant differences in relation to other clinical and pathological factors, such as tumor invasion depth. Various irrigation solutions have been used for bowel cleansing, including physiological saline, distilled water, cetrimide, chlorhexidine, povidone-iodine solution, and ethanol. 21, 22 In a study on bowel cleansing before colorectal anastomosis, the use of sodium hypochlorite and povidone-iodine was shown to reduce bacterial counts in the rectal stump. 23
However, the precise relationship between the intestinal bacterial count and factors such as the ECC count and local recurrence remains unclear. In this study, we investigated the effects of physiological saline and distilled water; considering concerns regarding tissue damage associated with the concentrations of agents in other irrigation solutions, further exploration of those solutions is warranted. Physiological saline and distilled water are considered highly convenient because of their minimal tissue damage, cost-effectiveness, and lack of need for concentration adjustment. Additionally, our study demonstrated that the choice between these two irrigation solutions did not affect the positive rate of ECCs. This study has several limitations. First, there is still insufficient evidence to demonstrate that exfoliated cancer cells in the lumen actually cause local recurrence. Second, it focused solely on the volume and type of irrigation solution, and the results may not be representative of other facilities using different irrigation devices. Third, this was a single-arm trial in which the irrigation solution was changed during the study, and randomization was not performed. Finally, this was a small-scale, single-center prospective study. To validate these findings, larger prospective multicenter studies on a national scale, such as multicenter randomized controlled trials, are necessary. In conclusion, in patients with sigmoid colon cancer with adequate preoperative bowel preparation, a long DM, and a small tumor size, a 1000 mL intraluminal washout may be sufficient. On the other hand, in patients with rectal cancer with a short DM and a large tumor size, a minimum intraluminal washout of 2000 mL is indicated. Shinji Furuya drafted the manuscript. Koichi Takiguchi, Hiroki Shimizu, Makoto Sudo, Shuguru Maruyama, Yuuki Nakata, and Yoshihiko Kawaguchi collected the intraluminal washout samples. Kunio Mochizuki and Tetsuo Kondo performed the cytological assessment of the washout fluid samples. Yuuki Nakata created the figures. Shinji Furuya and Daisuke Ichikawa conceived and designed the study and edited the manuscript. The final version of this manuscript has been approved by all authors. This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. Daisuke Ichikawa is Associate Editor of Annals of Gastroenterological Surgery. The remaining authors declare no conflicts of interest for this article. Approval of the research protocol: Ethical approval was obtained from the University of Yamanashi Faculty of Medicine Ethics Committee for this prospective study using medical records. The protocol for this research project conforms to the provisions of the Declaration of Helsinki. All data were stored on a secure hospital server with access granted only to the authors of this study. Subsequent analyses were performed on the de-identified datasets, and the database was accessed from July 2018 to December 2023. Patient privacy and confidentiality were maintained throughout all phases of the study. Informed consent: Participants provided paper-based informed consent at the time of admission, and their consent forms were digitized and stored in a database. Minors were excluded from the study. Registry and the registration No. of the study/trial: N/A. Animal studies: N/A.
PMC11693608
Colorectal cancer is the third most common cancer worldwide. 1 Over 700 000 people are newly diagnosed with rectal cancer each year, and more than 300 000 die from the disease. 2 Minimally invasive surgery for rectal cancer has evolved over time and has been widely adopted; however, laparoscopic surgery for rectal cancer (Lap) has inherent difficulties, including its reliance on straight, inflexible instruments. Robotic surgery (Ro) for rectal cancer has several advantages over Lap, including articulating instruments, enhanced dexterity with tremor filtration, and motion scaling. Weber et al 3 reported the first case of robot-assisted colorectal resection in 2002; since then, Ro has been widely adopted worldwide. The latest randomized controlled trial (RCT) of Ro versus Lap for middle and low rectal cancer (REAL) revealed better short-term outcomes for Ro than for Lap. 4 Postoperative complications of Clavien-Dindo grade II or higher within 30 days after surgery were less frequent in the Ro group than in the Lap group (16.2% vs. 23.1%, p = 0.003), and patients in the Ro group recovered faster after surgery, with a shorter time to first flatus, time to first defecation, and postoperative hospital stay. However, RCTs are performed mainly in specialized centers with extensive surgical experience, in eligible patients without complications, so caution is needed when extrapolating the results to the general population. Moreover, real-world data on robotic surgery for rectal cancer are insufficient for evaluating the effectiveness of Ro in routine clinical settings. Several studies 5, 6 using real-world data have reported that the total cost of Ro is higher than that of Lap. On the other hand, Mizuguchi et al 7 reported that total costs for low anterior resection were significantly lower with Ro than with Lap (1 955 216 vs. 2 031 511 JPY, p < 0.001), using the Diagnosis Procedure Combination (DPC) database, a large-scale medical database that records information on hospitalized patients at acute care hospitals in Japan. That study was the first to report that Ro had lower total costs than Lap; however, the authors did not provide details on operative medical costs. Thus, this study aimed to clarify the short-term outcomes and medical costs, including operative costs, of Ro compared with Lap using a nationwide large-sample dataset. We identified patients who underwent Lap or Ro for rectal cancer between January 2018 and January 2021 from the nationwide Japanese inpatient database provided by Medical Data Vision Co., Ltd. We defined the main disease using the International Classification of Diseases (ICD)-10 codes (C19 and C20), based on the recorded main disease that triggered hospitalization and consumed the most medical resources. Patients with missing information (clinical cancer stage, body mass index, or smoking index) were excluded. The DPC system is a major bundled payment system for medical services applied to inpatients in acute care in Japan. 8 DPC data include basic patient information such as age, sex, height, and weight; the primary illness at admission; comorbidities at admission; comorbidities that developed during hospitalization; surgeries and procedures performed; all medical resources used; anesthesia time; and the discharge outcome. It is a nationwide administrative claims database that covers over 1700 hospitals and 7 million inpatients. 7 The database is linked to hospitalized patients' insurance information and records total medical costs.
Body mass index (BMI) was classified into four categories (<18.5, 18.5–24.9, 25.0–29.9, and ≥30 kg/m²), and the smoking index was categorized into three categories (0, 1–49, and ≥50 pack-years). The Charlson comorbidity index was calculated according to Quan's protocol: each International Classification of Diseases, tenth revision, code for the 17 comorbidities was converted into a score, and the scores were summed for each patient. The hospital scale was defined by the number of beds, in three categories (≤199, 200–499, and ≥500). The clinical stages were graded as 0–I, II, III, or IV. The tumor location was classified as rectosigmoid cancer (C19) or rectal cancer (C20). Intraoperative data included the type of surgical approach (laparoscopic or robotic), creation of a stoma, duration of anesthesia, and blood transfusion. Because data on operative time and intraoperative blood loss were not available from the DPC data, the duration of anesthesia and the blood transfusion volume were used as surrogate information. Postadmission complications were distinguished from comorbidities present at admission. The patient outcomes included in-hospital mortality, morbidity, length of postoperative stay, reoperation during the same admission period, 30-day readmission, and medical costs. Morbidities included anastomotic leakage (T813), surgical-site infection [(SSI), T793, T813, T814, T941], peritoneal abscess (K65), bleeding (K661, R58, T810, T811), ileus and bowel obstruction (K560, K562, K565-567, K913), sepsis (A021, A227, A241, A267, A282, A327, A394, A40, A41, A548, B007, B349, B377), respiratory failure (J12-18, J690, J691, J958, J959, J96), pulmonary embolism (I26), acute coronary syndrome (I21-25), stroke (I60-66), acute renal failure (N17), and urinary tract infection (N10, N30, N390). ICD-10 codes belonging to multiple classifications were assigned to the appropriate complications by reviewing the text. Urinary dysfunction was determined based on whether the patient underwent urethral catheterization, intermittent voiding procedures, or medical therapy (alpha1-receptor blockers, cholinergic drugs, or cholinesterase inhibitors) on or after the first postoperative day. Medical costs were calculated as total hospitalization and operative medical costs. Informed consent was not required because of data anonymity. This study was approved by the Institutional Review Board of Osaka Medical and Pharmaceutical University. Clinical data and postoperative outcomes were compared between the two groups. Propensity score matching (PSM) was used to match patients who underwent Ro with those who underwent Lap. We used a logistic regression model to calculate the propensity scores, based on the following potential confounding variables: sex, age, BMI, smoking index, Charlson comorbidity index, hospital scale (number of beds), hospital type (designated cancer hospital or not), tumor location, cancer stage, year of operation, and surgical methods. Standardized differences were calculated to compare patient confounders between the Lap and Ro groups. We used nearest-neighbor matching with a caliper width equal to 0.2 of the standard deviation of the logit of the propensity scores. Categorical variables were compared using the chi-square test, and continuous variables were compared using the Mann–Whitney U test. The significance level was set at p < 0.05 for all statistical tests, and all p values were two-sided. All analyses were performed using STATA v. 17 (StataCorp, College Station, TX, USA).
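A minimal sketch of the matching procedure described above (propensity scores from a logistic regression, then 1:1 nearest-neighbor matching without replacement, with a caliper of 0.2 standard deviations of the logit) is given below. The study itself used STATA; this Python re-implementation, including its function and variable names, is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def caliper_match(X, treated, caliper_factor=0.2, seed=0):
    """Greedy 1:1 nearest-neighbor propensity-score matching
    without replacement; caliper = 0.2 x SD of the logit."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps))
    caliper = caliper_factor * logit.std()
    rng = np.random.default_rng(seed)
    t_idx = np.where(treated == 1)[0]
    rng.shuffle(t_idx)                      # random matching order
    c_idx = list(np.where(treated == 0)[0])
    pairs = []
    for t in t_idx:
        if not c_idx:
            break
        dist = np.abs(logit[c_idx] - logit[t])
        j = int(np.argmin(dist))
        if dist[j] <= caliper:              # accept only within the caliper
            pairs.append((t, c_idx.pop(j)))
    return pairs

# toy usage with random confounders and treatment assignment
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
treated = (rng.random(500) < 0.25).astype(int)
print(len(caliper_match(X, treated)), "matched pairs")
```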
A total of 18 952 patients were analyzed in this study; a flow chart is depicted in Figure 1. We identified significant differences in age, smoking index, Charlson comorbidity index, number of beds, designated cancer hospital status, tumor location, and clinical stage, and PSM was then performed. The baseline characteristics of the two groups were closely balanced by PSM, resulting in 1396 matched pairs (Table 1). The short-term outcomes and medical costs before and after PSM are presented in Table 2. Following PSM, we identified significant differences in operative medical costs (Lap vs. Ro: 1 291 371 vs. 1 312 462 JPY, p = 0.013), surgical site infection rates (2.9% vs. 1.5%, p = 0.010), and respiratory failure rates (1.3% vs. 0.6%, p = 0.049). Comparing Lap versus Ro, the readmission rate (2.4% vs. 2.9%, p = 0.35), postoperative length of stay (12 vs. 13 days, p = 0.20), and total medical costs (1 862 439 vs. 1 895 822 JPY, p = 0.051) did not differ significantly. One significant concern regarding Ro is the overall cost of hospitalization, despite evidence that it yields better short-term outcomes and is more beneficial to patients than Lap. Using a real-world cohort, our study demonstrated that while the operative medical costs of Ro for rectal cancer were significantly higher than those of Lap, the total medical costs did not differ significantly. The higher costs associated with Ro are largely due to disposable consumables that can be used only a limited number of times. Despite these higher operative costs, the improved short-term outcomes associated with Ro ultimately led to overall costs similar to those of Lap. Patel et al 9 reported similar findings in their retrospective study conducted at a single center in Canada: implementing a robotic colorectal surgery program at a Canadian tertiary care center resulted in improved clinical outcomes without a significant increase in the cost of care. Specifically, Ro was associated with higher mean operative medical costs than Lap (mean difference [MD]: −$2549; 95% confidence interval [CI]: −$3374 to −$1723; p < 0.0001), whereas the mean total costs of care were not significantly different. In our study, the difference in total medical costs was not statistically significant (p = 0.051); however, there was a trend suggesting higher costs for robotic surgery. Furthermore, the actual expenses for Ro are likely higher than those estimated under the DPC system because of equipment purchase and maintenance costs, indicating that robotic surgery places a greater financial burden on hospitals than laparoscopic surgery. On the other hand, Morelli et al 10 reported that increased surgeon experience leads to a significant reduction in costs, indicating that the financial burden associated with Ro may decrease as surgeons gain familiarity with the technique. Our study relied on data from the early stages of insurance coverage for robotic surgery in Japan; similar to the findings of Morelli et al, 10 we expect that costs related to Ro will decline as Japanese surgeons continue to accumulate experience with robotic procedures. Additionally, the emergence of new robotic platforms in the 2020s, such as Hinotori (Medicaroid, Kobe, Japan) and Hugo RAS (Medtronic, Galway, Ireland), is fostering increased price competition, which is expected to reduce medical costs in the future. In Japan, Ro for rectal cancer became covered by the National Health Insurance in 2018.
Under this Japan-specific insurance arrangement, national insurance did not cover the first 10 cases at each hospital, so every hospital had to bear the medical costs of those cases. The first 10 cases were therefore not registered in the Japanese nationwide inpatient database, and only the 11th and subsequent cases at each hospital were registered. 7 Consequently, the first phase of the learning curve was not captured in this study, and many hospitals were presumably in the middle of their learning curves, which might have affected the favorable short-term results. The present study showed that Ro was associated with a lower SSI rate than Lap (1.5% vs. 2.9%, p = 0.010). SSIs constitute a financial burden owing to prolonged hospitalization, treatment expenses, and medical staff costs. 11, 12, 13, 14 This study found no significant difference in the length of hospital stay (Lap vs. Ro: 12 vs. 13 days, p = 0.20), but the incidence of SSIs with Lap was approximately twice that with Ro, which probably contributed to the increased costs of Lap. According to NCI-CTC v.2.0, SSIs are classified as superficial, deep, and organ/space, with pelvic abscess and anastomotic leakage termed organ/space SSIs. 15 In this study, we defined SSIs using the following ICD-10 codes, in accordance with our previously reported cohort studies using the DPC: T793 (posttraumatic wound infection), T813 (disruption of operation wound), T814 (infection following a procedure; abscess of intra-abdominal, stitch, subphrenic, and wound), and T941 (sequelae of injury of intrathoracic organs). 16 This SSI definition also captured part of the postoperative leakage cases; although the difference in leakage was not significant (Lap vs. Ro: 3.2% vs. 2.4%, p = 0.25), it tended to be lower with Ro, which may partly explain the difference in SSI occurrence. This Japanese nationwide inpatient database clearly showed a lower incidence of SSIs with Ro, below the rates reported in two well-known RCTs on Ro [the ROLARR trial 17: 21/236 (8.9%); the REAL trial 4: 18/536 (3.1%)]; however, the grading of SSIs was unclear in this cohort. In the present study, Ro was also associated with a lower rate of respiratory failure than Lap (1.3% vs. 0.6%, p = 0.049). After PSM, baseline characteristics such as BMI, smoking index, and Charlson comorbidity index were well matched without significant differences, and respiratory complications nevertheless remained less frequent with Ro. Because this study period was the introduction phase of robotic surgery, however, surgeons may have hesitated to perform Ro on patients with respiratory comorbidities, and residual selection bias cannot be excluded. To the best of our knowledge, three studies have compared the total costs of robotic and laparoscopic surgeries using real-world data (summarized in Table 3). Halabi et al 5 and Chen et al 6 reported that the total cost of Ro is higher than that of Lap. Mizuguchi et al 7 reported that total costs for low anterior resection were significantly lower in the Ro group than in the Lap group; however, for high anterior and abdominoperineal resections, the difference was not significant. None of these studies clarified surgical costs, and our study is the first to report surgical costs using real-world data. Our study had several limitations. First, the DPC dataset did not include important information, such as intraoperative and histological outcomes; we could not evaluate the operation time, blood loss, or distance from the anal verge, all of which potentially affect short-term outcomes.
Second, the main diseases according to the ICD-10 codes were defined by the attending surgeons and medical office staff; therefore, the coding may be open to interpretation. Finally, the learning curves of the surgeons and institutions may not have been fully accounted for. Several institutions in Japan performed Ro before 2018, when this surgery became covered by national health insurance, and short-term results may differ between these institutions and those that started after 2018. Despite these limitations, this study is the first to report a comparison of the surgical costs of laparoscopic and robotic surgeries using real-world big data. The accumulation of evidence worldwide is essential to demonstrate the benefits of Ro for rectal cancer. In conclusion, the PSM analysis revealed that Ro was associated with better outcomes than Lap in terms of surgical site infection and respiratory failure rates. The operative medical costs of Ro were significantly higher than those of Lap; however, there was no significant difference in the total medical costs between the two surgeries for rectal cancer. Hiroki Hamamoto: Conceptualization; project administration; writing – original draft. Masato Ota: Formal analysis. Toru Kuramoto: Investigation. Kazuya Kitada: Investigation. Kohei Taniguchi: Investigation. Mitsuhiro Asakuma: Investigation. Yasuhiro Oura: Investigation. Yuri Ito: Supervision. The authors received no specific funding for this work. The authors declare no conflicts of interest for this article. Approval of the research protocol by an Institutional Review Board: This study was approved by the Institutional Review Board of Osaka Medical and Pharmaceutical University. Informed Consent: N/A. Registry and the Registration No. of the study/trial: N/A. Animal Studies: N/A.
PMC11694141
In view of climate change and geopolitical challenges, Europe is turning to renewable energy sources like the sun and wind to reduce its dependence on fossil fuels. However, aligning renewable electricity supply with demand is challenging. A viable solution is converting surplus electricity into so-called 'green' hydrogen via electrolysis, which can then be transformed into methanol (MeOH) or dimethyl ether (DME), effectively storing the hydrogen. 1, 2 DME offers a higher volumetric energy density (21 MJ L−1) than hydrogen (8.5 MJ L−1), 3 is environmentally benign, and liquefies easily under slightly elevated pressure for use with existing liquid-gas infrastructure. It already has several applications, from propellant to diesel substitute, highlighting its potential as a green energy solution. 4–6 Typically, DME is produced in a two-step process: first, syngas (CO/H2) is converted to methanol using a Cu/ZnO/Al2O3 catalyst, and then, in a second step, MeOH is dehydrated into DME with a solid acid catalyst. 7, 8 A more efficient approach is the direct synthesis, converting CO or CO2 with H2 into DME in one step. This method has several advantages, such as simplified operational procedures, increased reaction rates, and enhanced equilibrium conversion, achieved through the continuous removal of MeOH as an intermediate from the reaction mixture. Although this process is not yet ready for commercial application, it has gained significant interest from major players in the DME production industry, such as Topsoe and Air Products & Chemicals, for its efficiency and potential. 9, 10 The conversion of CO2 to DME via catalytic hydrogenation is favored from a thermodynamic perspective (eqn (1)). This process requires two different catalytic functionalities: a metallic catalyst for the conversion of CO2 to methanol, and a solid acid catalyst for the subsequent dehydration of methanol to DME. 8, 11

(1) 2 CO2 + 6 H2 ↔ CH3OCH3 + 3 H2O, ΔH(298 K) = −123 kJ mol−1

Within the scientific literature, various catalysts with Brønsted or Lewis acidic functionalities have been shown to be effective for dehydrating MeOH to DME, with performance depending on the density and strength of the acidic sites. Weak and medium acid centers favor DME production, whereas very strong acid centers may cause the formation of other hydrocarbons and coke. 12–14 Notable catalysts include γ-Al2O3, H-ZSM-5, mesoporous silicates such as MCM-41, 15 and aluminophosphates, 16 whereby Al2O3 and H-ZSM-5 are most commonly used. 8, 17 Al2O3 faces challenges due to the adsorption of the water produced during the reaction, which inhibits the active sites. 18 Conversely, zeolites like H-ZSM-5 tend to generate methane or other hydrocarbons as undesirable by-products because of their excessively strong acidic sites. 19 To overcome the drawbacks of using alumina or zeolites for methanol dehydration, an alternative emerges in the form of Keggin-type heteropolyacids (HPAs) immobilized on supports with high surface areas. 20, 21 These anionic metal-oxide clusters, with the general formula [XM12O40]n−, feature a central heteroatom X (typically P or Si) and a metal atom M (usually Mo or W). Their properties can be customized by modifying the counterions or metal atoms, tailoring the charge, acidity, and pH stability for optimal catalytic performance. 22–24 Due to their low surface area (approximately 5–10 m² g−1), HPAs benefit significantly from being supported on high-surface-area supports (such as TiO2, SiO2, ZrO2).
This approach enhances access to the active centers, boosting activity in methanol dehydration. 6, 25–27 Owing to their high Brønsted acidity, while lacking the excessively strong acidic sites of zeolites, HPAs exhibit remarkable catalytic activity in the dehydration of methanol and have been the subject of various studies. 9, 12, 20, 25, 28–31 These studies highlight the strong catalytic performance of HPAs, especially supported H3PW12O40 (HPW) and H4SiW12O40 (HSiW), owing to their high acidity. 30, 32 In some instances, these have even outperformed the catalytic activity of H-ZSM-5. 33 Notably, HPW supported on MCM-41 exhibited 100% selectivity towards DME from MeOH at equilibrium conversion. 34 The inherent advantages of HPAs, such as operation under mild conditions, minimal byproduct formation, thermal stability, and resistance to deactivation by water, make them especially promising for converting methanol to DME. 9 To the best of our knowledge, only a limited range of unsubstituted, commercially available HPAs have been utilized in DME synthesis. In this study, the research scope is extended to include transition-metal-substituted HPAs to examine the effects of incorporating different heteroatoms such as vanadium and indium, whose incorporation allows the acid sites within the HPAs to be modified. 35 This study aims to explore how varying the acidity through different heteroatoms influences their performance as catalysts in the conversion of methanol to DME. Additionally, this research marks the first instance in which both commercial and specially designed catalysts have been evaluated under uniform experimental conditions, enabling a detailed comparative and comprehensive analysis of their catalytic performance. Moreover, diverse supports were employed to further investigate the HPA–support interactions. The following HPAs were supported on Montmorillonite K10 (K10) via wet impregnation: H4SiW12O40 (HSiW), H3PMo12O40 (HPMo), H3PW12O40 (HPW), H8PV5Mo7O40 (HPVMo), H6PInMo11O40 (HPInMo), and H4SiMo12O40 (HSiMo). Furthermore, HSiW was supported on different carriers (Al2O3, ZrO2, TiO2, Celite® 545) using the same method. The supports and catalysts were characterized via inductively coupled plasma optical emission spectroscopy (ICP-OES), N2-physisorption, X-ray diffraction (XRD), NH3-temperature programmed desorption (NH3-TPD), scanning electron microscopy (SEM), and infrared spectroscopy (IR). All catalysts were tested in combination with the commercially available Cu/ZnO/Al2O3 methanol synthesis catalyst in a fixed-bed reactor, whereby the two catalyst materials were arranged in two layers separated by a layer of glass wool. The reaction conditions were set at 250 °C and 50 bar, with a gas hourly space velocity (GHSV) of 10 000 h−1 and a feed gas composition of H2/CO2 at a ratio of 3 : 1. The gas phase was analyzed using online gas chromatography. An in-depth description of the catalyst synthesis and characterization, 35–38 including all chemicals used (Table S1†), the catalytic experiments, 39 and the catalytic evaluation can be found in the ESI.†
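For orientation, the feed flows implied by the stated GHSV can be back-calculated once a catalyst bed volume is fixed. The short Python sketch below assumes a nominal bed volume of 1 mL, which is not reported in this excerpt, and splits the total flow according to the 3 : 1 H2/CO2 ratio.

```python
def feed_flows(ghsv_per_h=10_000, bed_volume_ml=1.0, h2_to_co2=3.0):
    """Feed flows (mL/min at reference conditions) implied by the
    GHSV for a given bed volume and H2/CO2 ratio."""
    total = ghsv_per_h * bed_volume_ml / 60.0
    co2 = total / (1.0 + h2_to_co2)
    return {"total": total, "H2": total - co2, "CO2": co2}

print(feed_flows())  # ~167 mL/min total: 125 mL/min H2, 42 mL/min CO2
```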
Initially, monolayers of various HPAs, including both commercially available and custom-synthesized variants, were deposited on K10, and their performance was evaluated as part of a bifunctional catalyst system together with the commercial Cu/ZnO/Al2O3 catalyst for DME synthesis. Subsequently, the most promising HPA from the initial screening was combined with different support materials, and the catalytic performance of these systems in DME synthesis was systematically evaluated. For the first screening, the various HPAs were immobilized on montmorillonite K10 (K10) as the carrier. K10 was chosen as the support material based on its previously reported performance, which results from its thermal stability, high surface area, excellent adsorption capacity, and excellent mechanical properties. 12, 40 The acidic properties of K10 can be enhanced through impregnation with HPAs. 41 The range of HPAs included commercially available HPAs (H4SiW12O40 – HSiW, H3PMo12O40 – HPMo, and H3PW12O40 – HPW) as well as specially synthesized HPAs (H8PV5Mo7O40 – HPVMo, H6PInMo11O40 – HPInMo, and H4SiMo12O40 – HSiMo). This selection covers different framework elements (Mo, W), different heteroelements (P, Si), and different charges, resulting in differences in the number of protons and their acidic strength. N2 physisorption data reveal that K10, as expected, is a mesoporous layered silicate with an average pore radius just below 2 nm (Table 1). A single Keggin molecule has a diameter of approximately 1 nm, indicating that HPA molecules can infiltrate the pores and potentially cover the entire surface area. 35 The application of HPAs on K10 reduces the BET surface area by about half in all samples, and a significant decrease in pore volume is also observed. This finding aligns with previous studies, which additionally demonstrated an increase in micropore volume upon impregnation of K10 with HPMo and HPW. 12 The impregnation of K10 with HPAs aimed at achieving a monolayer of HPA on the entire surface of the support material. The results of elemental analysis (Table 1) were used for the calculation of the effective loading (Loading eff), which is compared to the maximum theoretical loading (Loading theor) to evaluate the impregnation efficiency. Elemental analysis indicates that the impregnation of all HPAs was successful, achieving the target Loading theor. For HPMo, HPInMo, and HSiMo, a higher Loading eff is observed, which may be attributed to measurement inaccuracies in the elemental analysis. SEM-EDX mapping indicates a macroscopically homogeneous distribution of the HPA on the support. Combined with the Loading eff values, which align with the predicted Loading theor, this supports the assumption that monolayer coverage has been achieved.
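The theoretical monolayer loading can be estimated from the BET surface area of the support and the footprint of a single Keggin unit. The sketch below assumes a footprint of about 1 nm² per molecule, consistent with the ~1 nm Keggin diameter mentioned above; it is an illustration of the idea, not the exact calculation documented in the ESI.

```python
N_A = 6.022e23  # Avogadro's number (1/mol)

def monolayer_loading_wt(sbet_m2_per_g, molar_mass_g_per_mol,
                         footprint_nm2=1.0):
    """Theoretical HPA loading (wt% of the finished catalyst) for a
    monolayer of Keggin units on a support of given BET area."""
    molecules_per_g = sbet_m2_per_g * 1e18 / footprint_nm2  # 1 m^2 = 1e18 nm^2
    g_hpa_per_g_support = molecules_per_g / N_A * molar_mass_g_per_mol
    return 100 * g_hpa_per_g_support / (1 + g_hpa_per_g_support)

# example: HSiW (H4SiW12O40, M ~ 2878 g/mol) on ZrO2 (~91 m^2/g)
print(f"{monolayer_loading_wt(91, 2878):.1f} wt%")  # ~30 wt%
```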
SEM indicates no change in the morphology of the catalyst due to the synthesis procedure. The preservation of the HPA structure upon supporting on K10 is evident in the IR spectra from the characteristic Keggin vibration bands: 1049–1060 cm−1 for the P–O vibration, 945–962 cm−1 for M=O (terminal), 866–877 cm−1 for M–O–M (vertex), and 643–767 cm−1 for M–O–M (edge). 35 K10 itself displays a very broad vibration band at 1027 cm−1 from the stretching vibration of Si–O groups, 42 which overlaps with the P–O vibration of the HPAs. Additionally, the samples were characterized by X-ray diffraction. The characteristic peaks of the support material were preserved after the synthesis, indicating that the structure remained intact; however, a reduction in the intensity of the diffraction peaks of pure K10 is observed following impregnation, indicative of a partial loss of crystallinity due to the impregnation process. 41, 43 Furthermore, no peaks corresponding to the HPAs are detected, which is attributed to the small quantity of HPA on the support, so that the background noise predominates. NH3-TPD data indicate varying acidities among the different supported HPAs. Supporting the HPAs on K10 results in increased acidity compared to pure K10 in all cases, and the supported catalysts themselves exhibit distinct acid strengths (Table 1). For instance, HPInMo demonstrates a five-fold higher normalized adsorption capacity of 2.48, normalized to the mass of the catalyst, compared to the commercially available HSiW (1.00) and HPW (1.02). The supported, unsubstituted HPMo exhibits a relatively high adsorption capacity of 1.91; in contrast, the incorporation of vanadium (HPVMo) reduces this capacity to 1.44, while HSiMo exhibits an even lower adsorption capacity of 1.36. Thus, the incorporation of different heteroatoms allows for targeted adjustment of the acidity of supported HPAs, allowing this study to specifically investigate the impact of acidity on catalytic activity in DME synthesis. All supported HPAs were tested in combination with the commercial Cu/ZnO/Al2O3 methanol synthesis catalyst for single-stage DME synthesis from a 3/1 H2/CO2 mixture. Pure K10 already shows a DME yield of 4.76%, resulting from its own acidic sites. Impregnation with HPInMo and HPVMo results in a decrease in catalytic activity (Y DME = 4.69% and 3.95%) compared to pure K10. This reduction in activity could be attributed to the decreased surface area of these catalysts, leading to fewer accessible active sites on the K10 surface, a limitation that could not be compensated by the catalytic efficiency of the HPAs despite their elevated acidity, as determined by NH3-TPD. Conversely, after impregnation of K10 with HPW and HSiMo, slight increases in catalytic activity were observed, with DME yields of 5.73% and 5.24%, respectively, marginally surpassing the performance of pure K10. The highest yields, exceeding 7%, were achieved using HSiW and HPMo impregnated on K10. Under the chosen operating conditions, the thermodynamic DME equilibrium yield of 13%, calculated using the Soave–Redlich–Kwong property method in ASPEN Plus, was not attained using the bifunctional catalyst system, owing to the low residence time applied in our setup; the maximum reached was 54% of the equilibrium yield, with HPMo/K10 and HSiW/K10. NH3-TPD data (Table 1) reveal no direct correlation between the measured acidity and catalytic activity. For instance, impregnation of K10 with HPInMo increases the acidity fivefold, yet the DME yield decreases post-impregnation compared to pure K10. Conversely, K10 impregnated with HSiW and HPMo, which exhibit the highest catalytic activity, shows an acidity increase of just two and four times, respectively, compared to pure K10. This discrepancy can be attributed to the reactions being conducted under optimal conditions for methanol synthesis, 44 where especially the Brønsted acidic sites of the heteropolyacids have a negligible impact on DME formation. 41
These conditions were chosen to maximize the methanol yield for its subsequent conversion to DME, but they lead to the absence of an acidity–activity correlation. The DME selectivities S DME for the supported HPA catalysts follow the same trend as Y DME. The combined selectivities of DME and MeOH make up approximately 50%, with the remaining 50% attributed to the by-product CO (Table S2†), which results from the competing reverse water-gas-shift (RWGS) reaction. This indicates that in each experiment conducted, the Cu/ZnO/Al2O3 catalyst produced almost equal amounts of MeOH and CO, as no further reaction of CO occurs on the DME catalyst. 45 Consequently, the comparison of the DME synthesis activities of the catalysts for the second reaction step is based on consistent conditions. The mass-based productivity P mass follows the same trend as the DME yield (Y DME), as a consistent mass of catalyst was used across all experiments. However, due to the varying molar masses of the individual HPAs, the molar-based productivity P mol shows significant differences. Here too, HSiW and HPMo on K10 exhibit the highest productivities, with 77.84 and 59.40 mol DME mol HPA−1 h−1, respectively, with HSiW/K10 surpassing HPMo/K10 due to its lower molar mass. HPVMo/K10 and HPInMo/K10 continue to show the lowest P mol (both around 30 mol DME mol HPA−1 h−1). The comparison of the data for HSiW, HPW, HSiMo, and HPMo on K10 is interesting: among the tungstates, the Si-containing HPA achieves better results, while HPMo catalyzes the reaction more efficiently than both HSiMo and HPW. Thus, it cannot be stated that either of the metals (W or Mo) offers an advantage, nor is there a trend favoring one central heteroatom (Si or P). The IR spectra indicate that the Keggin structure is preserved after the reaction across all catalysts, with the Keggin bands most distinct for the HSiW/K10 and HPW/K10 catalysts. For all molybdenum-containing HPAs, the vibrational bands are identifiable but exhibit weaker intensity. Additionally, all of the molybdates show a dark blue coloration after the reaction, suggesting that a reduction has occurred during the reaction to form molybdenum blue (eqn (2)). 46, 47 The darker coloration and weakening of the IR bands indicate that this reduction is incomplete, and point to the presence of reduced species of the catalyst as well as poorer catalyst stability.

(2) [PMo VI 12 O 40 ] 3− + 4 e− ⇌ [PMo V 4 Mo VI 8 O 40 ] 7−

As an interim conclusion, it is notable that impregnation of K10 with HSiW and HPMo in particular leads to increased DME yields compared to pure K10, and, considering the molar-based productivity P mol, HSiW/K10 is identified as the most efficient catalyst. To validate these findings, the reproducibility of the experimental procedure was investigated using HSiW/K10 in multiple repetitions. These experiments resulted in consistent yields and selectivities for the by-products MeOH and CO, as well as stable catalyst productivity across the experiments, thereby confirming the initial results.
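For clarity, the yield, selectivity, and productivity figures quoted above can be expressed with carbon-based definitions, as in the hedged sketch below; this uses one common convention (two carbon atoms per DME molecule) with hypothetical argument names, while the exact evaluation formulas are given in the ESI.

```python
def dme_metrics(n_co2_in, n_dme, n_meoh, n_co, mol_hpa, hours=1.0):
    """Carbon-based yield/selectivity and molar productivity for
    single-stage CO2-to-DME synthesis (one common convention)."""
    y_dme = 2 * n_dme / n_co2_in              # 2 CO2 -> 1 DME
    converted = 2 * n_dme + n_meoh + n_co     # moles of carbon converted
    return {
        "Y_DME": y_dme,
        "S_DME": 2 * n_dme / converted,
        "S_MeOH": n_meoh / converted,
        "S_CO": n_co / converted,
        "P_mol": n_dme / (mol_hpa * hours),   # mol DME / (mol HPA * h)
    }

# toy numbers only, chosen to resemble the reported ~7% yield
print(dme_metrics(n_co2_in=1.0, n_dme=0.035, n_meoh=0.07,
                  n_co=0.13, mol_hpa=5e-4))
```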
Following the identification of HSiW as the optimal HPA for DME synthesis, its performance was further evaluated on various support materials. To this end, HSiW was immobilized on ZrO2, Al2O3, TiO2, and Celite® 545 (hereafter simply referred to as Celite). Celite, primarily composed of SiO2, possesses a unique internal structure with vacuoles surrounded by interconnected pores within its silica walls, providing an ideal surface for physical adsorption. Due to its adsorptive and insulating properties, Celite is widely used in applications such as filtration, chromatography, and mild abrasives. 48 ZrO2, Al2O3, and TiO2, on the other hand, are established support materials for supported catalysts, valued for their stability and compatibility with a variety of catalytic processes. 49–52 The influence of the support material on the catalytic activity of HPAs in DME synthesis is pivotal, as demonstrated in previous studies, which have highlighted the beneficial effects of supports such as SiO2 or TiO2 for HPAs. 25,53 However, detailed analyses of the support's influence on HPAs remain insufficiently explored in the existing literature. The amount of HSiW used for each synthesis was adjusted to the surface area of the respective support so as to create a monolayer, and the impregnation was carried out as described above. Table 2 lists the elemental analysis, the effective loading (Loading_eff), the maximum theoretical loading (Loading_theor), and the point of zero charge of the supports. For all supports, the actual and theoretical loadings closely match, indicating complete impregnation of HSiW on each support. IR spectra confirm the preservation of the Keggin structure in all supported catalysts. Celite, like K10, is a silicate support for HSiW. It exhibits a notably low surface area of just 1 m² g⁻¹ and no measurable pore volume (Table 2). This minimal measured surface area can be attributed to Celite's very large pores of ≥ 200 nm, visible in SEM; these pores are too large to be quantified with the available BET measurement equipment. Post-impregnation, SEM images indicate pore blockage, and the resulting clustering increases the measured surface area to 4.35 m² g⁻¹. For the three oxide materials (ZrO2, Al2O3, and TiO2), SEM images combined with SEM-EDX maps indicate that the particles remain approximately the same size, and thus undamaged after synthesis, and reveal a homogeneous distribution of the HPA across the entire surface. Among these materials, ZrO2 has the smallest surface area at 91 m² g⁻¹, while Al2O3 possesses the largest at 277 m² g⁻¹. Post-impregnation, the surface areas of Al2O3 and TiO2 decrease by approximately 40%, with a significant reduction in pore volumes as well. Conversely, ZrO2 shows only an 11% reduction in surface area, with smaller decreases in pore radius and volume, suggesting a particularly uniform distribution of HPA molecules across the entire surface of the support (Table 2). The supported catalysts as well as the bare supports were employed in the synthesis of DME. Among the tested supports, only pure K10 demonstrates significant inherent catalytic activity. The incorporation of HPAs onto the supports invariably leads to enhanced catalytic performance compared to the unmodified supports. The DME yield across all HPA-modified catalysts is around 7%, with a P_mass of 0.5 g_DME g_cat⁻¹ h⁻¹. Given the limited precision of the measurements, the productivity data do not decisively distinguish the most effective HPA-support combination. Remarkably, the mass-normalized productivity of unsupported HSiW matches that of the supported catalyst materials. When productivity is normalized to the molar amount of catalyst, however, unsupported HSiW exhibits the lowest productivity of 35.77 mol DME mol HPA⁻¹ h⁻¹; for every support tested, the support material consistently enhances the catalytic activity.
This enhancement is attributed to the generally increased surface area, which improves accessibility to the active sites crucial for converting MeOH to DME. Interestingly, catalytic activity does not correlate solely with higher surface area, and therefore not simply with a higher loading of the HSiW monolayer. Impregnation on Celite slightly increases P_mol to 47.68 mol DME mol HPA⁻¹ h⁻¹, followed by HSiW on Al2O3, TiO2 and K10, with the HSiW/ZrO2 combination achieving the highest P_mol of 125.44 mol DME mol HPA⁻¹ h⁻¹. This suggests a cooperative effect between the support and the HPA that enhances the catalytic activity. As previously demonstrated and confirmed in this section, the combined selectivities of DME and MeOH consistently make up about 50%, with the remaining 50% attributed to the by-product CO. This steady result indicates that MeOH production by the Cu/ZnO/Al2O3 catalyst remains consistent across all experiments, with no further CO conversion by the supported HPA catalyst, allowing a fair comparison of DME formation by the supported HPAs in the second reaction step under uniform conditions. The pure supports used for the HPA catalysts showed no catalytic activity for DME synthesis, except for K10, which partially converts MeOH to DME even without any supported HPA. NH3-TPD analysis indicates that catalytic activity also does not correlate directly with the measured Brønsted acidity: HSiW/ZrO2 exhibits only the second highest acidity, after HSiW/Al2O3. These findings suggest that factors beyond surface area and Brønsted acidity influence catalytic activity. Previous studies indicate that ZrO2 provides additional sites for methanol adsorption, enhancing methanol conversion and leading to higher DME production. 25,54 SEM-EDX analysis and N2-physisorption also confirm that, despite ZrO2's smaller surface area, it is fully and uniformly covered by HPA after impregnation, ensuring optimal catalytic activity through enhanced accessibility of acid sites and highlighting ZrO2 as an exceptional support material. The most effective catalyst identified in this study, hereafter referred to as HSiW/ZrO2-W, was compared with the leading literature-reported catalyst for DME synthesis from CO2, HSiW/ZrO2-K, as reported by Kubas et al. 21 To enable a direct comparison of catalytic performance, the reference catalyst was synthesized following the method outlined by Kubas, 21 with an equivalent loading of one HPA unit per nm² of support surface (the loading arithmetic behind such monolayer targets is sketched below), and subsequently tested under identical reaction conditions. The catalytic performance (Table 3) of HSiW/ZrO2-K shows generally good agreement with HSiW/ZrO2-W, with a slightly higher DME yield (Y_DME = 7.08% vs. 6.88%) but a marginally lower selectivity (S_DME = 30.91% vs. 31.09%). The mass-specific productivities of the two catalysts are equivalent, with P_mass = 0.48 g_DME g_cat⁻¹ h⁻¹ (HSiW/ZrO2-W) and 0.47 g_DME g_cat⁻¹ h⁻¹ (HSiW/ZrO2-K). However, due to its lower HPA loading, the molar productivity of our HSiW/ZrO2-W is higher than that of the HSiW/ZrO2-K catalyst reported by Kubas et al., 21 indicating a possible improvement in HPA dispersion resulting from the synthesis method used in this study. Overall, the comparison underscores the enhanced catalytic activity of HSiW supported on ZrO2 as a robust support material, irrespective of specific synthesis or reaction conditions.
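The following minimal Python sketch illustrates the "one Keggin unit per nm²" loading arithmetic referenced above. The surface areas are the values quoted for the bare supports; the assumption of exactly one HPA unit per nm² follows the Kubas-style loading, and the resulting weight fractions are illustrative estimates, not the tabulated Loading_eff values.

```python
# Theoretical HSiW loading needed to place one Keggin unit per nm^2 of support.
N_A = 6.022e23    # Avogadro's number, units per mol
M_HSIW = 2878.2   # g/mol, H4SiW12O40

def monolayer_loading(sa_m2_per_g: float, units_per_nm2: float = 1.0) -> float:
    """Return grams of HSiW per gram of support for the given areal density.

    sa_m2_per_g   : BET surface area of the bare support (m^2/g)
    units_per_nm2 : assumed HPA areal density (Keggin units per nm^2)
    """
    units_per_g = sa_m2_per_g * 1e18 * units_per_nm2  # 1 m^2 = 1e18 nm^2
    return units_per_g / N_A * M_HSIW

for support, sa in [("ZrO2", 91.0), ("Al2O3", 277.0)]:
    g_hpa = monolayer_loading(sa)
    wt_pct = 100 * g_hpa / (1 + g_hpa)  # weight percent of the final catalyst
    print(f"{support}: {g_hpa:.2f} g_HSiW per g_support (~{wt_pct:.0f} wt%)")
```

With these assumptions the low-area ZrO2 ends up near 30 wt% HSiW, while the high-area Al2O3 would require well over half the catalyst mass to be HPA, which is consistent with the qualitative point that a higher surface area implies a higher monolayer loading.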
This study further demonstrates, through the use of tailored heteropoly acid catalysts and a range of supports, that parameters such as support surface area, pore size, and the tuned acidity of the heteropoly acids do not, on their own, have a decisive impact on catalytic activity. Notably, HSiW/ZrO2 consistently outperforms the other polyoxometalates, although the exact underlying mechanisms remain unclear and warrant further investigation. In this study, various HPA catalysts were employed for the single-step synthesis of DME. To this end, bifunctional catalyst systems combining a commercial Cu/ZnO/Al2O3 catalyst with supported HPAs were prepared. Both commercial HPAs (HPW, HPMo, HSiW) and specially synthesized HPAs (HPVMo, HPInMo, HSiMo) were used. The successful impregnation of K10 montmorillonite with monolayers of the various HPAs was confirmed by a range of analytical techniques including ICP-OES, SEM-EDX, and N2-physisorption. Subsequently, these catalysts were evaluated, in combination with a methanol synthesis catalyst, for their DME synthesis activity in a fixed-bed reactor. HSiW emerged as the most effective catalyst in this screening, achieving a DME yield of 7.06% (54% of the equilibrium yield) and a molar productivity of 77.84 mol DME mol HPA⁻¹ h⁻¹. Upon impregnation onto different supports, HSiW supported on ZrO2 proved to be the optimal catalyst, raising the molar productivity to 125.44 mol DME mol HPA⁻¹ h⁻¹. Overall, we evaluated an unprecedented range of heteropoly acids and support materials for this reaction. The results highlight that, beyond the strength and number of acidic centres, the uniform dispersion of HSiW on ZrO2 enhances accessibility to the catalytically active sites. The data supporting our article with the title "Study of supported heteropolyacid catalysts for one step DME synthesis from CO2 and H2" have been included as part of the ESI.† Further information is available on request. Anne Wesner was responsible for synthesis and characterization of the catalysts, interpreting data, conceptualizing the experimental workflow, and drafting the manuscript. Nick Herrmann performed supervision and design of catalytic experiments. Lasse Prawitt and Angela Ortmann carried out the catalyst synthesis as well as characterization and conducted all catalytic experiments. Prof. Jakob Albert provided infrastructure and equipment. As principal investigator, Dr Maximilian J. Poller was responsible for the conceptualization of this project, acquired financial support, and coordinated and supervised the project. All authors contributed to the discussion of the work and the scientific writing. There are no conflicts to declare.
PMC11694195
Football is the world's most popular team sport, played by over 250 million people in more than 200 countries. FIFA reported that more than half of the global population aged four and over (more than 3 billion people worldwide) watched the 2018 World Cup on TV. Meanwhile, women's football is also becoming very popular, with the number of participants estimated to reach 40 million worldwide. This popularity has been reflected in the growing importance of international women's competitions since the first Women's World Cup, held in 1991. An estimated 1.12 billion viewers tuned into official broadcast coverage of the FIFA Women's World Cup 2019 on TV at home, on digital platforms, or out-of-home. The 2019 final was the most watched FIFA Women's World Cup match ever, with over 260 million viewers and an average live match TV audience of 82 million, more than double that of the 2015 Women's World Cup. Football injuries are common, and the incidence of time-loss injuries in football matches differs between the sexes. It is known that most injuries in professional football occur during competitive games rather than training. Major sporting events such as the World Cup bring together numerous elite athletes from around the globe, and it is imperative to consider the opportunities they present for professionals. From a medical point of view, the injuries and incidents that occur in World Cup matches are among the most informative events for sports medicine physicians interested in football medicine. Numerous studies have been conducted on football match injuries in national leagues and international tournaments. These studies generally examine the epidemiology of injuries and the duration of absence (i.e. burden) resulting from them [7–9]. However, knowledge about field incidents leading to injury time-outs is limited, as is information on the types and characteristics of the related injuries during the match. The incidence of stoppage time due to field injuries during professional football matches has been reported at between 1.6 and 3.3 incidents per game in men's professional football [10–12]. As interest in women's football develops, it is important to understand the possible increased effect of football on injuries in elite female players; however, there is a lack of knowledge about stoppage-time injury incidents in women's matches. A precise understanding of the injury characteristics and circumstances during a game would also allow the development of effective prevention and treatment strategies in football medicine. Therefore, the aim of the present study was to reveal how often elite-level football players need medical care in high-level international tournaments and whether there is any difference between men's and women's football. Most studies of football injuries use the medical records of tournament or club medics [8, 13–15]. Although these records are not readily accessible, match video records are. Furthermore, medical records cannot provide information about injury time-outs during the match. For these reasons, video-based analysis is a frequently preferred method in the current literature. Data were collected retrospectively through video analysis of the 2018 FIFA Men's World Cup (MWC) matches in Russia and the 2019 FIFA Women's World Cup (WWC) matches in France. Data on incident rates during World Cup games were documented through videotape analysis via Wyscout.com®.
As all data originated solely from publicly available sources and we only report anonymous data, Research Ethics Board approval was not required. Recordings of the television broadcasts of all 116 matches at the MWC 2018 and WWC 2019 were obtained and used for the analysis. Demographic data regarding the World Cup matches are presented in Table 1. Two experienced authors reviewed the recordings of all matches at normal speed, including stoppage times. For accurate analysis, each injury time-out was reviewed several times at different slow-motion speeds and in freeze-frames in a standardized manner, and the two authors discussed uncertain situations with each other. The authors documented all injury time-outs using a dedicated form and subsequently analysed the videotape recordings utilising Football Incident Analysis (FIA). The FIA method is known to be a reliable tool with good intra- and inter-observer reproducibility. The injury report contains details of the type, location and mechanism of the injury, the game time when the injury occurred, and the stoppage duration for the incident. It also contains detailed information about the player (age, position, nationality), the player's action during the incident, the score and the teams' drawing/losing/winning situation at the time of the injury, the game level (group vs. knockout stage), and the medical staff's decision (substitution or continuation of play). An injury time-out was defined as any health complaint that occurred during a match and resulted in the cessation of the game. This included any medical attention received from the medical staff physician and/or physiotherapist, whether on or off the pitch, regardless of the nature/severity of the applied treatment. Injury definitions and recording procedures complied with the International Olympic Committee consensus statement for recording and reporting epidemiological data on injury in sport. The incidence of injuries (injury rate, IR) was expressed as injuries per 1000 player-hours of exposure together with the 95% confidence interval (CI). Sixty-four games were played by 32 teams at the 2018 FIFA Men's World Cup, whereas 24 teams played a total of 52 matches at the 2019 FIFA Women's World Cup. The ages of the players and referees were obtained and recorded from official data of the Fédération Internationale de Football Association (FIFA). The referee's decision for each incident was recorded from the video analysis as no foul or a free kick for or against the exposed player, and whether the situation resulted in a yellow or red card was also noted. Previous studies have shown that the frequency and type of injury vary with match time; therefore, in line with the literature, the time of injury was assigned to one of the matches' 15-minute segments. The Statistical Package for the Social Sciences (SPSS) software was used for statistical analysis (IBM SPSS Statistics for Mac, Armonk, New York, USA). The variables were analyzed using both visual methods (such as histograms and probability plots) and analytical techniques (such as the Kolmogorov–Smirnov test) to identify normal or non-normal distributions. Descriptive analyses are presented as mean ± standard deviation (SD) and median, minimum, and maximum values for continuous variables, and as frequency counts and percentages for categorical variables. Due to the non-normal distribution of the continuous variables, the Mann–Whitney test was used to compare two independent groups.
The Chi-squared or Fisher's exact test was used to compare categorical variables between two independent groups. Effect sizes were measured by the Phi coefficient for 2 × 2 Chi-square tests, by Cramér's V for 2 × 3 Chi-square tests, and by r for the Mann–Whitney U test, and were interpreted as follows: < 0.2 null effects, < 0.5 small effects, < 0.8 medium effects, and > 0.8 large effects. The incidence of injuries and injury time-outs was calculated using the formula: incidence = (number of injuries × 1000) / ((minutes of exposure / 60) × number of players exposed), and was given as the number of injuries or injury time-outs per 1000 match-hours with 95% confidence intervals (CIs). The injury incidence and injury time-out rates per 1000 match-hours with 95% CI were calculated assuming a Poisson distribution. To determine the effect size, incidence rate ratios (IRRs) with 95% CI, together with the p-values from testing the null hypothesis that the IRR equals 1, were calculated across the different groups using Poisson regression models or negative binomial regression models, as appropriate. A 5% type-I error level was used to infer statistical significance. A total of 265 stoppage times due to field incidents were recorded across the two World Cups, 123 in the MWC and 142 in the WWC (Table 2). A total of 282 players were treated on the field throughout the study period, including 17 injury time-outs requiring the medical care of two players simultaneously. In 11.2% of the matches (13 out of 116), the referee did not interrupt the game for any injury incident. The women's semi-final recorded the highest number of injury time-outs in a single game (n = 8). The WWC had an average of 2.7 ± 1.8 injury time-outs/match and the MWC an average of 1.9 ± 1.3 injury time-outs/match. The incidence of stoppage time per 1000 player-hours was significantly higher in women compared to men. The duration of a stoppage time due to an injury was determined through videotape analysis and is defined as the interval between two whistles of the referee, since the referee stops the game to invite the medical staff onto the pitch and restarts it after any necessary medical check and treatment. The mean stoppage time was 105.4 ± 38.5 seconds (40–332 sec), with no difference between men's and women's football (Table 2). The average time from the incident to the referee's whistle stopping the game for the injury event was 9.4 ± 15.5 seconds (min–max: 1.0–128.0 sec). The frequency of time-outs was slightly higher during the second half of the games (Table 2). Upon further analysis of the data based on six quarter-hour intervals, we observed that the incidence of injury time-outs was higher during the final 15 minutes of both halves. Half of the total game duration was played with a tied score (draw), equating to 50.3% (5,375 minutes) of the entire match duration. Players received medical care 132 times, approximately half of the total events (46.8%), during periods when the score was tied. The team leading at the time of the incident (n = 84, 29.8%) had more injury time-outs than the losing team (n = 66, 23.4%). A review of the incidents that occurred in the final 15 minutes of the match (n = 52) revealed that 50% involved the leading team (n = 26), 26.9% the losing team (n = 14) and 23.1% teams at a tied score (n = 12). Almost one sixth of all incidents caused the injured women players to be substituted (n = 23, 15.3%) (Table 2).
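Before turning to the substitution findings, the incidence calculation described in the methods can be illustrated with a minimal Python sketch. It makes two simplifying assumptions that are not the study's exact exposure accounting: 22 players exposed per match and a flat 90 minutes per match (added time is ignored), and it uses the standard large-sample Poisson approximation for the 95% CI. The event counts are the stoppage totals reported above.

```python
import math

def rate_per_1000h(events: int, matches: int, players: int = 22,
                   minutes: float = 90.0) -> tuple[float, float, float]:
    """Incidence per 1000 match-hours with a large-sample Poisson 95% CI."""
    hours = (minutes / 60.0) * players * matches     # total exposure hours
    rate = events / hours * 1000.0
    factor = math.exp(1.96 / math.sqrt(events))      # multiplicative CI factor
    return rate, rate / factor, rate * factor

mwc = rate_per_1000h(events=123, matches=64)   # 2018 men's World Cup
wwc = rate_per_1000h(events=142, matches=52)   # 2019 women's World Cup

print(f"MWC: {mwc[0]:.1f} (95% CI {mwc[1]:.1f}-{mwc[2]:.1f}) per 1000 h")
print(f"WWC: {wwc[0]:.1f} (95% CI {wwc[1]:.1f}-{wwc[2]:.1f}) per 1000 h")
print(f"IRR (women vs men): {wwc[0] / mwc[0]:.2f}")
# With these simplified exposures the IRR comes out near 1.4, in line
# with the women-vs-men difference reported for the two tournaments.
```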
Although women needed more medical care during games, the substitution rate was higher in men (n = 36, 27.3%, p = 0.02). Injured women were younger than injured men (27.5 ± 3.4 years, min–max 19.0–39.0 years, vs. 28.6 ± 4.1 years, min–max 20–45 years; p = 0.007). The most common incident mechanism leading to the need for medical care in both women and men was sudden-onset contact injury, as opposed to sudden-onset non-contact injury (n = 119, 79.3% and n = 100, 75.8%, respectively) (Table 3). The most commonly injured site was the lower extremity in men (47.7%) and women (46.0%), followed by the head and neck, trunk, and upper extremity, respectively. The ankle and lower leg were the most commonly injured body parts within the lower extremity in both sexes. The types of injuries are presented in Figure 1. Women had a higher total injury incidence compared to men (IRR = 1.4 [95%CI = 1.1–1.8], p = 0.005) (Table 4). When the incidence of injuries by body region was compared, this difference was attributable to injuries of unspecified body regions rather than to any specific body region (IRR = 2.7 [95%CI = 1.1–6.5], p = 0.03). A total of 34 players (12.1%) received medical care for muscle injuries across both tournaments (men: n = 17, 12.9%; women: n = 17, 11.3%) (15). The incidence of structural muscle injuries was slightly higher in women; however, there was no significant difference between the two groups (IRR = 1.2 [95%CI = 0.6–2.4], p = 0.5). According to the analysis of total injury incidences by the teams' continents, the incidence of receiving medical care during play was significantly higher in African players than in players from other continents for both genders (incidence [95%CI]: Africa = 126.0 [92.8 to 159.3], Europe = 68.7 [56.7 to 83.7], South America = 51.8 [27.4 to 75.3], North America = 30.3 [2.5 to 58.1], p < 0.001) (Table 5). We investigated the stoppage times due to injury incidents resulting in players receiving primary medical care on the field during professional football matches in elite-level tournaments; this is the first such study comparing men and women. One of the main findings of our study is that women had a higher incidence of injury time-outs than men. Previous studies have found that the overall injury incidence is significantly higher in men than in women, whereas the rate of serious injuries (causing more than 28 days of absence) appears to be significantly higher in women [4, 21–24]. One would expect a higher incidence in men's football, since the men's game may involve more frequent and more forceful physical contact, with larger, faster players on a field of the same size. However, our findings clearly show that women's professional football produced more incidents than men's in the recent World Cup tournaments. This may be due to gender differences in the experience of pain, which are multifactorial and depend on complex factors such as psychosocial factors and gonadal hormone levels. In this context, previous studies have shown that women stay away from the pitch for longer periods due to on-field injuries [4, 21–24]. A further finding of our study that supports this is that, although women needed more frequent medical care during games, their rate of substitution due to field injuries was lower than that of men.
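The substitution-rate contrast just described can be checked with a quick contingency-table sketch. The cell counts below are reconstructed from the reported percentages (36 of about 132 treated men and 23 of about 150 treated women were substituted), so they are approximations, and the test choice (Chi-squared with Yates' correction, the scipy default for 2 × 2 tables) mirrors the methods section rather than the authors' exact SPSS output.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: men, women; columns: substituted, continued playing.
# Counts reconstructed from the reported proportions (approximate).
table = np.array([[36, 132 - 36],
                  [23, 150 - 23]])

chi2, p, dof, expected = chi2_contingency(table)  # Yates' correction by default
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# Yields p close to the reported p = 0.02 for the men-vs-women
# difference in substitution rate (27.3% vs 15.3%).
```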
We recognize that the factors underlying sex differences in pain experience are multifactorial and complex; therefore, further investigation of the reasons for the higher incidence of injuries in women's football and a better understanding of risk factors (such as psychosocial factors, gonadal hormone levels, and menstrual phase) are needed in order to develop effective strategies for injury prevention. Although football injuries are of equal concern for men and women, most of the data reported to date relate to men. To the best of our knowledge, only one study has been published comparing data between men and women in top-level international tournaments. In contrast to our study, Walden et al. compared time-loss injuries, defined as incidents causing a player to miss the next training session or match, by analysing the exposure and injury characteristics of men's, women's and youth European Football Championships. Their findings revealed that the injury incidences at the 2004 Men's European Championship and the 2005 Women's European Championship were similar between the sexes, leading them to conclude that the risk of injury in international football is not higher in women than in men. In our study, by contrast, the total match injury incidence was significantly higher in women's football. Herein, we agree with the conclusion of Giza et al., who emphasized that age, skill level or improved training and fitness may influence injury incidence, and we would add a clear sex dimension to this multifactorial proposal. The results of a previous study showed that injury time-outs appear to be more frequent in the Turkish First Division than in the Turkish Super League (3.98/game vs. 3.14/game, respectively). In accordance with the previous literature, skill level clearly has an effect on match injury incidence, but the role of sex should not be ignored and should be taken into account. Previously, the average effective playing time during the 2018 FIFA Men's World Cup in Russia was reported to range between 52 and 58 minutes per match, indicating that a significant portion of the playing time is lost in various ways. We showed that field injuries resulted in an average of 3 minutes and 15 seconds of stoppage time per match at the 2018 FIFA World Cup, whereas the total stoppage time was higher at the 2019 Women's World Cup (on average, 4 minutes and 56 seconds). Given the competitive nature of football, particularly at the highest levels, it is plausible that changes in the score could influence a number of factors related to the conduct of the game, including the strategies employed by the teams, the attitudes and precautions of the players, the intensity of the match, and/or the incidence of injury. A main conclusion of this study that supports this idea is the association between injury breaks in World Cup matches and the score at the time. A comparison of the medical care required by players revealed that those on the currently winning team required significantly more medical care than those on the losing team, aligning with the study of Ryynänen et al. Moreover, as might be expected, teams leading at the time of the incident had a significantly higher incidence of injury time-outs than losing teams during the last 15 minutes of the game (50.0% vs. 26.9%, respectively). We contend that feigning injury may be a more common tactic for cooling down a match than previously assumed.
In general, when their team is leading, players are tempted to burn time off the clock while taking a brief respite. We may therefore conclude that almost 5 minutes of stoppage time due to injuries may affect the attractiveness of a football match, especially considering that most of these incidents seem questionable or unnecessary. Another interesting finding of our study concerns the lower substitution rates in women's football following these incidents. The literature indicates that the overall injury incidence is similar in men's and women's football, although the proportion of severe injuries has been shown to be higher in women. However, we found that the rate of substitutions after incidents was significantly higher in men than in women (27.3% vs. 15.3%, respectively). This result aligns with our previous study of male footballers, in which 17.4% of incidents left the player unable to complete the game in the Turkish Super League. Given the short-term nature of the tournaments in question, it was impossible to ascertain whether players who could not complete a game were able to participate in subsequent ones; therefore, information on time to return to play was not obtained. Although the latter information would be important to capture, our study setting (an observational retrospective study based on publicly available data) does not allow it. The higher rate of substitutions after men's injuries may be attributed to (i) potentially greater damage or more serious injuries occurring during the events, and/or (ii) a lower frequency of questionable incidents in the men's World Cup tournament, the latter referring to cases in which a player simulates an injury for a particular reason. In line with point (i) above, the higher rate of sudden-onset non-contact injuries in men's football (14.4% vs. 12.7%, respectively) might be one of the factors contributing to the higher substitution frequency compared with women. The risk of sustaining an injury without contact with another player (e.g. muscle injuries) is thus high in international football and may also argue for an increase in the permitted number of substitutions by the governing bodies (e.g., FIFA). In that regard, FIFA has recently considered an additional substitution when a player is taken off the field by the medical staff for a suspected concussion. We suggest extending this measure and considering additional 'injury-related substitutions' in the interests of the fairness of the game, even though we recognise that this point is delicate, as injury simulation by unfair players or teams is not easy to assess objectively from a referee's perspective. As is well established, most injuries in football are sustained during contact with another player, regardless of sex; however, why match incidents caused higher substitution rates in men's football remains to be answered. Regarding the location of the injuries, the literature reports that the most common incident location in football is the lower limb, with lower limb injuries accounting for 74% to 86.8% of total injuries. Accordingly, we found that lower limb events were predominant (approximately half of all events) in both sexes. Women have a different injury risk profile than men, and studies have reported that the risk of serious knee injury (such as anterior cruciate ligament (ACL) rupture) is at least twice as high in women, regardless of level of exposure or participation.
However, as mentioned above, since we evaluated short-term tournaments and it was not possible to determine the exact diagnoses of the players' injuries by video analysis, our study could not provide any information about the incidence of specific diagnoses such as ACL injury. The unpublished results of our group's previous study revealed that the player's nationality is another factor in injury occurrence: football players with European citizenship were found to receive significantly less medical care than their Turkish counterparts relative to their time on the pitch, while for African and South American players these rates were consistent with the time spent on the pitch. In the present study, it is noteworthy that the incidence of receiving medical care during World Cup matches was significantly higher in African players (with Middle America ranked second here) for both genders. Rosenbaum et al. stated that assumptions regarding the likelihood of players from certain cultures or regions faking injuries should be made with caution in such tournaments, since the number of countries representing each confederation is small compared to the sample size of games evaluated. This point fully applies here, with both of the above-mentioned regions (Africa and Middle America) constituting only a small portion of the tournaments' teams. We believe that these data are important for football governing bodies, not only in deciding whether efforts to prevent injury simulation are necessary, but also in developing prevention strategies. There are limitations to our study. First, we recorded injuries as events in which a player required medical treatment from a team physician and/or physiotherapist, but our only source of information was the videotape recordings; therefore, we were unable to establish the definitive diagnosis of an injury or the exact cause of a substitution. However, we identified all match incidents in the video recordings of top-level tournaments, so the completeness of the data and the level of competition chosen can be considered strengths of the present study. Secondly, as we evaluated short-term tournaments, it was not possible to identify whether all of the players who could not complete a game were able to play the next game or train the day after. Third, comparisons of our findings with the previous literature should be made with caution, as risk factors and inter-player competition may differ between domestic league games and elite-level international tournaments. This is the first study to compare the incidence and features of stoppage time due to field injuries between men's and women's professional football in the highest-level tournaments (FIFA Men's World Cup and FIFA Women's World Cup). Both the injury time-out incidence and the overall injury incidence were higher in women's games than in men's. Expanding the knowledge of team physicians, coaches, referees, and football governing bodies regarding the medical requirements of players during a game would potentially facilitate the identification of player behaviour patterns and the promotion of fair play. Not only do we provide information for football medicine physicians about field injuries, but we also hope that our study will raise awareness regarding referees' attitudes towards foul play.
PMC11694196
Soccer is an intermittent sport characterised by short bursts of high-intensity actions such as sprinting, changing direction, accelerating, decelerating, jumping and tackling, alternated with long periods of low-intensity activity. These high-intensity actions have been shown to lead to decisive moments of the match, such as goals, assists and defensive situations, highlighting their importance to soccer match-play and match outcome. Furthermore, research based on multi-season comparisons of key soccer parameters is very important for the development of soccer knowledge [3–8]. Knowledge of the direction in which soccer players' match activities are evolving allows coaches to take actions to optimise the training process. Studies have shown that playing intensity in soccer has increased significantly over the years, and this trend is expected to continue, according to recent research. Barnes et al. reported that high-intensity running distance and sprinting distance increased by approximately 30–35% across a 7-season period in the English Premier League (EPL). The evolution of match play has also demonstrated the importance of short high-intensity actions, such as accelerations and decelerations, both in and out of possession. A recent meta-analysis reported a greater frequency of high (> 2.5 m/s²) and very high (> 3.5 m/s²) intensity decelerations compared to accelerations. These actions have been shown to significantly influence the match outcome. According to Longo et al., sprint activity was one of the parameters most significantly associated with the likelihood of finishing in the top positions of the final ranking in the Italian Serie A during the 2016/17 season. Furthermore, sprint activity was also associated with an increase in shots, goal attempts, assists and steals, suggesting that it is a key component affecting match success. These findings were recently supported by a study analysing goals scored in the EPL during the 2018/19 season, in which the most common pattern reported was a linear forward movement prior to the scored goal, followed by a deceleration and a turn. Running performance and match outcome have been shown to be influenced by contextual factors (e.g., match location, opposition quality, match status, etc.). It has been suggested that match location (i.e. playing at home or away) influences many aspects of the game, with evidence supporting the existence of a home-advantage phenomenon in soccer. Additionally, Fernandez-Navarro et al. found that home teams tend to have a faster playing tempo, higher pressure strategies, and more attacking phases of play. These findings are similar to those reported by Gollan et al., who investigated the influence of contextual factors on soccer playing styles in the EPL. The study showed that home teams are more likely to adopt longer possession strategies while reducing transition play. The authors also investigated the influence of opposition quality, highlighting that lower-ranked teams were more likely to play defensively against a higher-ranked team. Additionally, when match location and opposition quality were combined, the quality of the opposition exerted a greater influence than match location on the playing style adopted.
Therefore, the aim of this study was to compare external match load, specifically running at certain speed thresholds and explosive actions (accelerations and decelerations), according to match outcome (win, draw, loss), match location (home, away) and quality of opponent across five competitive seasons. This research employed a five-year longitudinal study design to examine a single male professional team, which competed in the EPL and the second-tier English Championship League (ECL) during the study period. The EPL comprises 38 matches, 19 home and 19 away, across a 10-month season commencing in August and finishing in May, while the ECL consists of 46 matches, 23 home and 23 away, across the same duration and calendar period. The study team was promoted at the end of the 2020/21 season; thus, the data examined consisted of three ECL seasons and two EPL seasons. The examined team predominantly utilised a 4-3-3 system during match-play. Forty-six professional outfield soccer players (age 23.2 ± 5.9 years, weight 80.3 ± 7.0 kg, height 1.81 ± 0.07 m) from the same English professional club were involved in the study. Data from the complete 2018/19 to 2022/23 seasons were included. The inclusion criteria for the study were: (i) having been at the club for at least one full season (mean ± SD = 2.6 ± 1.3 seasons), (ii) having participated in at least 40% of matches during the study seasons at the club (mean ± SD = 74% ± 26%), (iii) individual players' data were only included when at least 60 minutes of a match were completed, and (iv) not having participated in another training programme during the study. The exclusion criteria were: (i) long-term (three months) injury, (ii) joining the team during the in-season period of any study season, and (iii) an insufficient number of satellite connection signals. Players were assigned to one of five positions, as match demands differ significantly between them. The methodology for differentiating specialised positions was adapted from previous research. As various situational factors influence the style of play and can be modulated by different tactical roles, context was considered while using a player's average position in an attempt to determine the player's relevant tactical role in the team. All participants were classified based on the regular playing position adopted at the start of each season, which remained consistent throughout the study period: centre backs (n = 13), full backs (n = 6), centre midfielders (n = 15), attacking midfielders (n = 8), and centre forwards (n = 4). Based on the study team's 4-3-3 formation, the three midfield players were structured as two deeper, holding positions with defensive and offensive responsibilities, while one played just behind the centre forward with very limited defensive duties and was thus classified as an attacking midfielder. Goalkeepers were excluded from the investigation due to the specific nature of their match activity and low running demands. All data collected resulted from normal analytical procedures regarding player monitoring over the competitive season; nevertheless, written informed consent was obtained from all participants. The study was conducted according to the requirements of the Declaration of Helsinki and was approved by the local Ethics Committee of the University of Central Lancashire and the English professional club from which the participants volunteered. To ensure confidentiality, all data were anonymised prior to analysis.
For each match, the outcome (win, draw, loss), match location (home, away) and quality of opponent (top six, bottom six, or the remaining mid-table teams: 12 in the ECL and eight in the EPL) across the five competitive seasons were recorded by the lead researcher. The definition of opponent standard was based on the final league ranking position of the previous season. External match load was consistently monitored across the study seasons during all matches using an 18 Hz Global Positioning System (GPS) tracking system (Apex Pod, version 4.03, 50 g, 88 × 33 mm; Statsports; Northern Ireland, UK) that has previously been validated for tracking distance covered and peak velocity during simulated team sports, linear sprinting and accelerometry-based variables. All devices were activated 30 minutes before data collection to allow the acquisition of satellite signals and to synchronise the GPS clock with the satellite's atomic clock. Quantification of the devices' accuracy indicated a 2.5% estimation error in distance covered, with accuracy improving as the distance covered increased and the speed of movement decreased. To avoid inter-unit error, each player consistently wore the same device during the study period, replaced only if damaged, although the present GPS system has previously shown excellent inter-unit reliability. Specially designed vests were used to hold the devices, located on the player's upper torso and anatomically adjusted to each player, as previously described. The units connected to a mean of 21 ± 3 satellites (range 18–23), while the horizontal dilution of precision (HDOP) for all seasons ranged between 0.9 and 1.3. On completion of each match, external match load was extracted using proprietary software (Apex, 10 Hz version 4.3.8, Statsports Software; Northern Ireland, UK), as software-derived data are a simpler and more efficient way for practitioners to obtain data in an applied environment, with no differences reported between processing methods (software-derived versus raw processed). The dwell time (minimum effort duration) was set at 0.5 s to detect high-intensity running efforts and 1 s to detect sprint-distance efforts, in line with the manufacturer's recommendations and default settings to maintain consistent data processing. Furthermore, the internal processing of the GPS units utilised the Doppler-shift method to calculate both distance and velocity data, which has been shown to display a higher level of precision and less error than data calculated via positional differentiation. Variables were based on previous publications and are commonly utilised by analysts in elite soccer in practical settings. The absolute total distance covered (m); high-speed running distance (m; total distance covered at 5.5–7 m/s); sprint distance (m; total distance covered at > 7 m/s); and the number of accelerations (> 3 m/s² with a minimum duration of 0.5 s) and decelerations (< −3 m/s² with a minimum duration of 0.5 s) were examined. The per-minute mean of each metric during match-play was obtained and analysed across all study seasons.
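To illustrate how threshold-plus-dwell-time definitions of this kind translate into code, the sketch below counts qualifying efforts in a velocity trace. It is a simplified stand-in, not the Statsports Apex implementation; the 10 Hz sampling rate, the 5.5 m/s high-speed threshold and the 0.5 s dwell time are taken from the settings described above, while the toy velocity trace is invented for illustration.

```python
import numpy as np

def count_efforts(speed: np.ndarray, threshold: float,
                  dwell_s: float, hz: int = 10) -> int:
    """Count efforts where `speed` (m/s) stays above `threshold`
    for at least `dwell_s` seconds, at a sampling rate of `hz`."""
    above = np.concatenate(([0], (speed > threshold).astype(int), [0]))
    starts = np.flatnonzero(np.diff(above) == 1)   # rising edges
    ends = np.flatnonzero(np.diff(above) == -1)    # falling edges
    durations = (ends - starts) / hz               # run lengths in seconds
    return int(np.sum(durations >= dwell_s))

# Toy 10 Hz trace: a 0.8 s surge above 5.5 m/s and a too-short 0.3 s spike.
speed = np.array([3.0] * 10 + [6.0] * 8 + [3.0] * 10 + [6.2] * 3 + [3.0] * 6)
print(count_efforts(speed, threshold=5.5, dwell_s=0.5))   # -> 1 HSR effort
```

The same function can be reused for sprint efforts (threshold = 7 m/s, dwell_s = 1.0) or, on an acceleration trace, for acceleration and deceleration counts with the ±3 m/s² thresholds.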
Descriptive data (mean ± SD) were determined for all external match load variables of interest for position, match outcome, opponent, and match location. Homogeneity of variance was assessed via Levene's statistic and, where violated, Welch's adjustment was used to correct the F-ratio. Multiple one-way analyses of variance (ANOVA) were conducted to identify positional differences (centre back vs. full back vs. centre midfield vs. attacking midfield vs. centre forward), outcome differences (win vs. draw vs. loss), and opponent differences (top six vs. mid-table vs. bottom six) for all external match load variables. Post-hoc analysis was used to identify the positions, outcomes, and opponents that differed significantly from one another using either Bonferroni or Games-Howell post-hoc tests, where equal variances were and were not assumed, respectively. An independent t-test was used to determine any match-location differences in external match load measures (home vs. away). Three-factor ANOVAs (3 × 3 × 2) were conducted for each position across all external match load measures to determine the interaction effects between opponent, outcome, and match location; as above, Bonferroni or Games-Howell post-hoc analysis was used to identify specific differences. Effect size (η²) values indicated the magnitude of the main and interaction effects from the ANOVAs, and Cohen's d values (d) were reported to show the magnitude of significant results following post-hoc analysis. η² values in the range 0–0.009 were considered insignificant effect sizes, 0.01–0.0588 small, 0.0589–0.1379 medium, and values greater than 0.1379 large. Cohen's d effect size magnitudes were interpreted using the following classifications: trivial < 0.19; small 0.2–0.59; moderate 0.6–1.19; large 1.2–1.9; very large 2.0–3.9; extremely large > 4.0. All significance values were accepted at p < 0.05 and all statistical procedures were conducted using JASP (version 0.18) for Macintosh. The results of the one-way ANOVA comparing GPS metrics across the different positions can be seen in Table 1A. Findings revealed a significant difference in distance covered across positions (p < 0.001; η² = 0.06). More specifically, full backs, centre midfielders and centre forwards covered more total distance than attacking midfielders (p < 0.001; d = 0.347–0.660), while full backs and centre midfielders also covered more total distance than centre backs (p < 0.001; d = 0.394–0.494). Centre midfielders also covered more total distance than centre forwards (p < 0.001; d = 0.313). In terms of m/min, there was a significant effect of position (p < 0.001; η² = 0.454), as centre midfielders covered more, and centre backs fewer, m/min than all other positions (p < 0.001; d = 0.643–2.380). Attacking midfielders also recorded greater m/min than full backs and centre forwards (p < 0.001; d = 0.615 and 0.986), while full backs recorded greater m/min than centre forwards (p < 0.001; d = 0.643–2.380). In terms of high-speed running distance and sprint distance, there were significant differences across positions (p < 0.001; η² = 0.285 and 0.056, respectively). All positions completed more high-speed running distance than centre backs (p < 0.001; d = 0.884–1.843), while full backs, attacking midfielders, and centre forwards completed more sprint distance than centre backs (p = 0.001–0.027; d = 0.234–0.709). Attacking midfielders completed more high-speed running and sprint distance than any other position (p < 0.001; d = 0.460–1.843), and full backs and centre forwards completed more high-speed running distance than centre midfielders (p < 0.001; d = 0.460–1.843).
Finally, there were significant differences in the number of accelerations and decelerations across positions (p < 0.001; η² = 0.069 and 0.162, respectively). Full backs and centre forwards completed more accelerations and decelerations than centre backs and centre midfielders (p = 0.001–0.027; d = 0.414–1.226), and centre forwards also completed more accelerations and decelerations than attacking midfielders (p = 0.001–0.027; d = 0.394 and 0.374, respectively). Attacking midfielders completed more accelerations than centre backs and centre midfielders (p = 0.001–0.003; d = 0.254 and 0.415, respectively), as well as more decelerations than centre backs (p < 0.001; d = 0.656). Full backs also completed more decelerations than attacking midfielders (p < 0.001; d = 0.611), and centre midfielders more decelerations than centre backs (p < 0.001; d = 0.697). Table 1B highlights the various GPS metrics observed across match outcomes; there were no significant differences in any variable across the different outcomes of a match. When comparing GPS metrics across different opponents (see Table 1C), there were significant differences in the metres covered per minute and the number of decelerations across the different levels of opponent (p < 0.001; η² = 0.012 and 0.014, respectively), with more m/min and decelerations when playing against the top six compared to mid-table and bottom-six opposition (p < 0.001; d = 0.213–0.322). Finally, the only metric that differed between playing at home or away was the number of accelerations, which were significantly more numerous at home than away (p < 0.001; d = 0.145). Total distance for each position with respect to outcome, opposition, and match location is presented in Table 2. There was a significant main effect of outcome (p = 0.009; η² = 0.016–0.026), whereby centre midfielders covered more distance during wins than losses (p = 0.008; d = 0.340), while full backs covered more distance in wins and draws compared to losses (p = 0.013–0.028; d = 0.420 and 0.413, respectively). Post-hoc analysis also revealed that centre forwards covered more distance in losses to mid-table opponents than when the team won against the bottom six (p = 0.034; d = 0.830). Centre midfielders covered more total distance when winning away compared to losing away (p = 0.049; d = 0.453). Finally, full backs covered more distance when winning against mid-table teams at home than when losing against mid-table teams away (p = 0.019; d = 0.922). Table 3 displays the distance per minute (m/min) in relation to position, outcome, location and opposition. For m/min, there was a significant main effect of opponent for full backs, centre midfielders, attacking midfielders, and centre forwards (p = 0.001–0.045; η² = 0.017–0.057). Full backs (p = 0.040; d = 0.400), centre midfielders (p = 0.001; d = 0.437), attacking midfielders (p < 0.001; d = 0.681), and centre forwards (p = 0.003; d = 0.655) covered more m/min against top-six teams compared to the bottom six. Attacking midfielders also covered more m/min against the top six compared to mid-table teams (p < 0.001; d = 0.579). There was also a significant main effect of location (p = 0.041; η² = 0.017), as centre forwards covered more m/min in away matches than in matches at home (p = 0.041; d = 0.300).
There was also a significant outcome × opposition × location interaction for centre backs (p = 0.025; η² = 0.021), as centre backs covered more m/min in away losses against a top-six team compared to away draws against the top six (p = 0.027; d = 1.039) and away wins against mid-table teams (p = 0.007; d = 0.908). Attacking midfielders covered more m/min in wins and draws against the top six compared to losses against the bottom six (p = 0.007 and 0.042; d = 1.220 and 1.093). Attacking midfielders also covered more m/min in away matches against the top six compared to matches against mid-table and bottom-six teams, both home and away (p = 0.011 and 0.047; d = 0.627 and 0.841). Centre forwards covered more m/min against top-six teams away compared to the bottom six at home (p = 0.004; d = 1.077), and centre midfielders covered more m/min away to top-six teams than in away matches against bottom-six teams (p = 0.043; d = 0.496). Table 4 displays the high-speed running distances. There was a significant main effect of outcome (p < 0.001; η² = 0.033), whereby centre backs covered more high-speed running in losses compared to wins (p < 0.001; d = 0.493) and draws (p = 0.005; d = 0.409). In contrast, centre midfielders covered more high-speed running in wins compared to losses (p = 0.048; d = 0.272). A significant main effect of opponent was found (p < 0.001; η² = 0.026), as centre midfielders covered more high-speed running against top-six teams compared to mid-table and bottom-six teams (p = 0.004 and 0.001; d = 0.340 and 0.444, respectively). Finally, there was a significant outcome × opponent × match location interaction for high-speed running (p = 0.040; η² = 0.017), as centre midfielders completed more high-speed running distance in wins against the top six at home compared to wins against mid-table teams at home and losses against mid-table teams away (p = 0.035 and 0.002; d = 0.785–0.959). Additionally, centre backs covered more high-speed running in losses to top-six teams in away matches compared to draws with the top six away, wins against the bottom six at home, wins against mid-table teams home and away, and wins against a top-six team at home (p = 0.001–0.025; d = 0.850–1.260). Additional post-hoc analysis revealed that centre backs covered more high-speed running in losses to top-six and mid-table teams compared to wins against mid-table teams (p < 0.001; d = 0.840 and 0.712, respectively). Centre backs covered more high-speed running distance in losses away compared to draws away (p = 0.005; d = 0.581) and wins at home (p < 0.001; d = 0.617). Centre backs also completed more high-speed running distance in losses at home compared to wins at home (p = 0.036; d = 0.517) (see Table 4). In terms of sprint distance (see Table 5), there was a main effect of outcome for centre forwards and centre midfielders (p = 0.001 and 0.002; η² = 0.066 and 0.021, respectively), as these positions both covered more sprint distance in wins than in losses (p = 0.036 and 0.004; d = 0.446 and 0.366, respectively) and draws (p < 0.001 and p = 0.030; d = 0.681 and 0.283, respectively). There was also a significant outcome × match location interaction (p < 0.001; η² = 0.059), as centre forwards covered more sprint distance in away wins compared to home and away draws, away losses and home wins (p = 0.027; d = 0.680 and 1.199). Centre midfielders also covered more sprint distance in home and away wins compared to away losses (p = 0.048 and 0.002; d = 0.400 and 0.597, respectively).
There was also a significant opponent × match location interaction for centre backs (p = 0.006; η² = 0.020), as this position covered more sprint distance in away matches against the top six compared to away matches against mid-table teams and home matches against top-six teams (p = 0.009 and 0.043; d = 0.561 and 0.540, respectively). Additional post-hoc findings highlighted that centre midfielders covered more sprint distance in wins against the top six compared to losses against mid-table teams and draws against the bottom six (p = 0.013 and 0.007; d = 0.687 and 0.822, respectively), while centre forwards covered more sprint distance in wins against the top six compared to draws against the top six and mid-table teams (p = 0.016 and 0.037; d = 1.160 and 0.971, respectively). Additionally, centre backs covered more sprint distance in away wins and losses against the top six compared to home wins against mid-table teams (p = 0.049 and 0.007; d = 1.124 and 0.870, respectively). Finally, centre forwards covered more sprint distance in away wins against top-six teams compared to away draws against mid-table teams (p = 0.033; d = 1.741). In terms of the number of accelerations (see Table 6), there was a significant main effect of outcome for centre midfielders (p = 0.019; η² = 0.014), as this position completed more accelerations in wins than in losses (p = 0.023; d = 0.302). There was also a significant opponent × match location interaction (p < 0.001; η² = 0.042), with full backs completing more accelerations at home against top-six and mid-table teams compared to bottom-six teams at home (p = 0.014 and 0.012; d = 0.818 and 0.775, respectively). For decelerations (see Table 7), there was a significant main effect of outcome (p = 0.024; η² = 0.013), with centre midfielders completing more decelerations in wins than in losses (p = 0.048; d = 0.273). There was also a significant main effect of opponent (p < 0.001–0.011; η² = 0.025–0.035), as centre backs completed more decelerations against top-six (p < 0.001; d = 0.512) and mid-table teams (p = 0.037; d = 0.303) compared to bottom-six teams. Centre midfielders completed more decelerations against top-six teams than against the others (p < 0.001; d = 0.370–0.541), and full backs completed more decelerations against top-six teams than against the bottom six (p = 0.008; d = 0.485). There was also a significant main effect of match location (p = 0.019; η² = 0.010), with centre backs completing more decelerations at home than away (p = 0.019; d = 0.236). Additional post-hoc analysis revealed that centre backs completed more decelerations in losses against the top six compared to wins against mid-table and bottom-six teams (p = 0.006 and 0.003; d = 0.638 and 0.733) and draws (p < 0.001; d = 0.963) and losses (p = 0.015; d = 0.861) against bottom-six teams. Centre midfielders completed more decelerations in wins and draws against the top six compared to draws against bottom-six teams (p = 0.001 and 0.003; d = 0.924 and 0.877, respectively). Centre backs also completed more decelerations at home against top-six teams compared to mid-table and bottom-six teams away (p = 0.013 and p < 0.001; d = 0.527 and 0.761) and bottom-six teams at home (p = 0.007; d = 0.688). Centre midfielders completed more decelerations in matches against top-six teams at home compared to mid-table teams at home and bottom-six teams both home and away (p = 0.006–0.016; d = 0.494–0.666).
Finally, centre midfielders completed more decelerations in wins and draws against a top six team at home compared to draws against the bottom six at home (p = 0.006 and 0.040; d = 1.310 and 1.281, respectively). This study compared total distance, high-speed running distance, sprint distance and explosive actions according to playing position, match outcome, match location and quality of opponent across five competitive seasons. The main findings showed that attacking midfielders covered the lowest total distance, while centre midfielders covered the highest. For high-speed running and sprint distance, centre backs covered the lowest distances while attacking midfielders covered the greatest. Full backs performed the highest number of accelerations, while similar values were observed for the remaining positions. In addition, centre forwards performed the highest number of decelerations, while centre backs and full backs performed the lowest. When playing positions were not considered (i.e., when analysed as team values), no differences were observed between match outcomes (win, draw, loss). Similar findings have been reported in previous research from the Iranian Premier League, which found no significant differences in match running or accelerometry-based measures between match outcomes. However, other research examining Portuguese soccer players found higher total distance values when the team outcome was a win or draw compared to a loss. As those findings were reported for players competing in the second league, this may suggest that higher-level teams from premier leagues are not influenced by match outcome. Even so, caution is warranted when generalising these results to other contexts. Additionally, a higher number of accelerations was performed when playing at home compared to away matches. Although unsubstantiated in this study, this result may be partly explained by the motivational factor of home advantage that has previously been researched. In contrast, no differences in external load metrics between home and away matches were found in female soccer players. Furthermore, match location was also not considered a major factor in Portuguese amateur soccer, while research examining professional Portuguese (second league) players showed that total distance was significantly higher in home matches than in away matches and, in contrast to the present findings, that more accelerations were performed away than at home. This information highlights the contextual importance of competitive level, where higher-level teams (premier and second-tier leagues) appear to be influenced by match location. Regarding positional differences, attacking midfielders covered the lowest total distance while centre midfielders covered the highest, which could be related to the specific role of the position, the game plan and the coach's strategy. Indeed, centre midfielders have been reported to cover greater distances in professional, semi-professional and amateur teams. Centre backs covered the lowest high-speed running and sprint distances, which is also in line with some earlier studies. This may be associated with the technical and tactical role of this position (e.g., aerial duels, tackles, positioning, and interception of balls passed to the attackers).
In contrast, attacking midfielders covered the greatest high-speed running and sprint distances, which again may be associated with the specific positional demands of the role. For example, this position is responsible for joining attacking phases of play, potentially running from deep midfield positions beyond the line of forward players and behind the opponents' defensive line, thereby covering large spaces at high-speed running and sprinting intensities and contributing significantly to decisive moments of play. Full backs performed the highest number of accelerations, while similar values were observed for the other positions. It may therefore be suggested that the team was tactically very compact, limiting spaces within and between the team units, so that the production of these types of actions was similar across positions, which may partly explain the comparable number of accelerations. Still, the higher number of accelerations for full backs may reflect their deep defensive positioning when out of possession, while in attacking transition moments they fulfil a key attacking role by accelerating quickly to join the attacking phase of play with or without the ball. In addition, centre forwards performed the highest number of decelerations. Research on other professional soccer players has shown that centre forwards cover higher sprint distances [16, 35, 37–39]. That scenario was not evident in the present study, although the types of actions typical of this position may contribute to more decelerations (e.g., pressing actions, constant changes of direction, stopping movements to avoid offsides). Moreover, centre backs and full backs performed the lowest number of decelerations, whereas previous research commonly reports centre backs and full backs performing a greater number of accelerations and decelerations. These contrasting findings may be explained by the different competition contexts (countries) and tactical models of team play. Regarding the analysis of all contextual factors by playing position, there were several relevant findings that confirm the hypothesis of this study that all variables can influence running and accelerometry-based measures in differing positions. The hypothesis related to match running measures was also confirmed in previous research conducted on professional Portuguese soccer players. Notably, no research with a similar design is available, thus appropriate comparisons to support or contrast the findings of this study are difficult. Central defenders also covered more high-speed running in losses than in winning and drawing matches and, more specifically, the same occurred in losses versus top six and mid-table teams compared to wins versus mid-table teams. These results are consistent with the study of Lago et al., who found that for each minute the team was losing, an additional metre of distance was covered at speeds higher than 5.5 m/s compared to winning. This can also be supported by the tendency of defenders to cover greater high-speed running distance when out of possession compared to in possession, justifiable by the need to recover the ball faster. Similarly, more high-speed running occurred in losses away compared to draws away and wins at home, as well as in losses at home compared to wins at home. Greater sprint distance occurred in away matches against the top six teams compared to away matches versus mid-table teams.
Furthermore, greater sprint distances were evident in home matches against the top six, as well as in away wins and losses versus top six teams compared to home wins against mid-table teams. Cumulatively, these findings may reflect game situations in which running demands increase for central defenders when the team is losing or playing a higher-quality team. However, the fact that centre backs completed more decelerations when winning compared to losing, and at home compared to away, may also be reflective of the demands placed on this position. During wins, or when playing at home, the study team may demonstrate more aggressive actions when out of possession and press the opposition more frequently, resulting in more decelerations for these players. Finally, more decelerations were performed at home versus top six teams compared to mid-table and bottom six teams away and bottom six teams at home. Similar to the high-speed running data, this may reflect the requirements placed on these players when facing higher-quality opposition (top six teams), where there was a greater need to close the opposition down in their own third of the pitch. These results are partially supported by previous studies that found higher-intensity activities for defenders when matches were lost. Full backs covered more total distance in wins and draws compared to losses, where contributing factors such as greater team possession, and thus more frequent attacking phases, were possibly evident. The same scenario was evident in home wins versus mid-table teams compared to away losses against mid-table teams. This position also performed more accelerations at home against top six and mid-table teams compared to bottom six teams at home. Additionally, this position performed more decelerations against top six teams compared to bottom six teams. Speculatively, this may relate to individual player characteristics, where motivation to produce a high physical output and perform optimally against better opposition was observed. Previous research highlighted that top-level teams cover more distance at walking and jogging speeds and less total and high-speed running distance compared to bottom-level teams, with higher total distance performed at home and against high-ranked teams. Earlier studies seem to support the current result and potentially justify the varying physical outputs and different tactical playing patterns adopted by the analysed team. Centre midfielders covered more total distance and high-speed running in winning results compared to losses. Such findings are supported by a study that analysed the influence of time winning and time losing on playing positions, with and without ball possession, in a professional Spanish premier league team; that study found that midfielders increased their distance covered above 5.8 m/s when winning. There was more high-speed running against top six teams compared to mid-table and bottom six teams, evident in home wins against the top six compared to home wins against mid-table teams and away losses against mid-table teams. This position also performed more sprint distance, accelerations and decelerations in winning outcomes compared to losing. Sprint distance was also higher in draws compared to losses. More sprint distance occurred in home and away wins compared to away losses, which again is in line with previous research.
Regarding decelerations, these were more evident versus top six teams compared to mid-table and bottom six teams, as well as in wins against top six teams compared to losses versus mid-table teams and draws against bottom six teams. More decelerations occurred in wins and draws against top six teams compared with draws versus bottom six teams, and similar data were evident in home wins and draws versus top six teams compared to home draws against bottom six teams. Previous research has indicated that playing at home may contribute to more wins, a finding that can reinforce the covering of greater distances and more explosive actions. However, some studies showed lower high-intensity activity when winning than when losing or drawing, suggesting that organised teams present a higher tactical capacity that consequently requires lower running demands. Still, this seems to contrast with the current study findings. Centre forwards covered more total distance in losses to mid-table teams compared with wins against bottom six teams, possibly due to greater defensive requirements in these matches that consequently increased running demands. This contrasts with older research in the EPL, which found a higher percentage of time spent above 4 m/s by attacking players when winning a match (1.3%), while defenders achieved a lower percentage (−0.7%). Sprint distance was greater in winning results compared with draws and losses. Additionally, more sprint distance occurred when winning away than in home and away draws, away losses, and home wins. The same pattern was evident in wins against top six teams compared to draws versus top six and mid-table teams, as well as in away wins versus top six teams compared to away draws against mid-table teams. These findings are partially supported by previous research that found higher-intensity activities in matches won. Moreover, considering the earlier study of Redwood-Brown, it appears that higher match intensities have since been reached. Despite the novel approach of the present study, match outcome could be further analysed with consideration of the seven phases of match status. Recently, it has been shown that, in general, the first half of a match produces more changes in match status, while the second half is more related to maintenance of the match outcome. Additionally, match halves also seem to influence running and accelerometry measures. Moreover, this type of analysis should include pacing strategies, collective tactical behaviour and the game model, all of which may influence data interpretation. Furthermore, time winning and time losing, as well as ball possession, also appear to be relevant contextual variables that can influence match outcome. For example, it was found that for each minute teams were winning, distance covered above 5.8 m/s in possession increased, while for each minute teams were losing, distance covered above 5.8 m/s without possession decreased. Additionally, total distance without ball possession increased when teams were winning and decreased when teams were losing; these dynamics should be considered in future research. Finally, extending the present findings to other contexts, such as possession characteristics, team formation, competition levels, age groups, and differing leagues and countries, would be beneficial, and future research should therefore consider examining these variables.
In conclusion, external match load variables were influenced by playing position and by the contextual factors of match outcome, match location and quality of the opponent. Playing position, match outcome, match location and opponent quality each had a significant impact on total distance, high-speed running and sprinting when playing home or away against top six, mid-table or bottom six teams. Coaches and performance staff may use these contextual findings to optimally prepare and recover players while considering match outcome, match location and the quality of the opponent. However, distinct results emerged when these factors were analysed separately. For this reason, future research should aim to extend the present findings to other contexts, competition levels, age groups, and differing leagues and countries.
In soccer, various training methods are implemented that produce differing physical stimuli, and monitoring these external load demands is widely adopted at different levels due to its significant role in providing neuromuscular stimulation and facilitating physical adaptations. This strategy enables coaches and practitioners to better plan, adjust, and assess a team's training to enhance performance, operating under the belief that a blend of training stimuli and ample recuperation will enhance adaptation to training and improve physical fitness and performance. On the contrary, inadequate training or insufficient recovery can potentially lead to an increased risk of injury/illness and a decline in physical readiness. To mitigate these negative consequences, it is crucial to monitor players' training/match load to inform the programming and adaptation of training and recovery processes. Regarding the comparison of different divisions, previous research analyzed match running performance among the FA Premier League, now the English Premier League (EPL), the English Championship (second division, ECL), and League One. This research found that players in the first division (Premier League) covered less total distance and had lower high-intensity running distances than those in the lower leagues (ECL and League One). Another similar study compared the EPL and ECL and found that players of the second division covered more total distance, high-intensity running distance and sprint-intensity actions than players of the first division. However, contrasting results were found when examining data from the first and second divisions of Spain, where first division teams covered more total, high-intensity and very high-intensity distances than second division teams. Similarly, another study examining Norwegian football showed higher total and high-intensity distances in first league teams when compared with lower divisions. Thus, recent studies support higher values for first division teams of Spain and Norway, while older research showed the opposite. Therefore, considering data from elite English teams, an update is warranted. Given the distinct physiological demands of each playing position, external load measures also exhibit variations across different playing positions over a competitive season. In the existing literature, the influence of mediating factors on load, such as playing position, has been thoroughly assessed in the context of professional soccer in the EPL and Spanish First League (LaLiga). For instance, recent studies have observed that central midfielders, in contrast to attackers and defenders, tend to cover greater total distances at low and medium intensities, as well as moderate-intensity acceleration distances, among elite EPL and Spanish First Division soccer players. Additionally, the available evidence specifically highlighted that wide attackers and wide defenders have shown the highest performance in terms of very high-speed running, high-intensity acceleration, and sprinting distances due to their perpetual attacking and defensive functions in the EPL. Furthermore, a study assessing the position-specific development of physical performance parameters over a seven-season period in the EPL found that wide and forward positions increased the distance covered at high intensity and in sprinting more than central defenders and central midfielders.
Moreover, it is relevant to highlight that competitive match-play is a dominant component of the physical load completed by soccer players in a training microcycle and constitutes the most important weekly session. Thus, when planning training sessions, performance improvements and lowering injury risk should be major factors, while reference values of match data from various leagues can support this process. In addition, training prescription must consider the competitive level and the different playing positions. Given the diverse coaching philosophies ingrained in contemporary elite soccer, it becomes evident that additional research is necessary to enrich our understanding of how training workloads in soccer are structured throughout seasonal cycles or consecutive seasons. Thus, the aims of this study were to compare training sessions between the EPL and ECL and to examine differences between playing positions. The study hypothesis was that the EPL would present higher values than the ECL during training. This research employed a four-year longitudinal study design to investigate a male professional team. The study team competed in the EPL and ECL during the study period. The EPL comprises 38 matches (19 home, 19 away) across a 10-month season commencing in August and concluding in May, while the ECL consists of 46 matches (23 home, 23 away) across the same duration and calendar period. The study team was promoted from the ECL at the end of the 2020–21 season, thus the data examined consisted of two ECL seasons and two EPL seasons. A non-probabilistic sampling strategy was adopted to select the participants. The focus of the study was on meticulously monitoring the training load of players during all training sessions. Throughout the entire observation period, spanning from 2019–20 to 2022–23, consistent player monitoring strategies were implemented without any intervention or interference from the researchers in the team's training processes. Data from 46 first-team outfield soccer players (age 24.6 ± 5.9 years, weight 74.6 ± 7.8 kg, height 1.79 ± 0.09 m) from an English professional club during the complete 2019–20 to 2022–23 seasons were included. The inclusion criteria for the study have been previously applied and included: (i) listed on the roster of the first-team squad of the English club at the start of each study season, (ii) trained regularly, (iii) participated in at least 80% of training sessions and matches, (iv) did not use dietary supplements during the study, and (v) did not participate in another training program alongside this study. Additionally, the exclusion criteria for the study have also been previously employed and included: (i) long-term (three months or longer) injury, (ii) joining the team late in any of the study seasons, and (iii) goalkeepers, due to their differing physical demands compared with outfield players. Players were assigned to a specific position as running demands differ significantly between positions. The methodology to differentiate specialized positions was adapted from previous research. As various situational factors influence the style of play and can be modulated by different tactical roles, context was considered whilst using a player's average position in an attempt to determine the player's relevant tactical role in the team.
All participants examined were classified based on their regular playing position at the start of each season, which remained consistent throughout each study season: centre-backs (n = 13), full-backs (n = 6), centre midfielders (n = 15), attacking midfielders (n = 8), and centre forwards (n = 4). All data collected resulted from normal player monitoring procedures; nevertheless, written informed consent was obtained from all participants. The study was conducted according to the requirements of the Declaration of Helsinki and was approved by the local Ethics Committee of Cardiff Metropolitan University and the English professional club from which the participants volunteered. To ensure confidentiality, all data were anonymized prior to analysis. Training data were collected over a four-year consecutive period from the 2019–20 to the 2022–23 competitive seasons. Only team pitch-based training sessions were included for analysis. All other sessions, individual training sessions, recovery sessions, and rehabilitation training sessions were excluded. The planning of all soccer content was cyclical in nature and reflective of modern methods of periodization in elite soccer, and thus the external physical load experienced by players undulated across a micro-cycle leading to match-play. The number of days between matches differed, and training sessions in elite soccer micro-cycles have recently been classified based on days prior to a match (MD minus (-)) or post-match (MD plus (+)). All training sessions were integrated to include technical, tactical, physical and mental components. All players completed one or two strength and power gym-based sessions per micro-cycle incorporating upper body, lower body and core exercises, although these sessions were not included in the analyses as mentioned earlier. All running training data were collected at the club's official training facility. Players only participated in official competitive league matches during a micro-cycle, and thus the structure of the training days was standardized across all seasons. The first day post-match (MD+1) was generally a day off and therefore no GPS data were available. Additional fitness sessions for non-starters were limited to the immediate post-match period, and these GPS data were collected but not included in the study analysis. The start of the next micro-cycle was MD-5, five days prior to competition, which targeted compensation training for the non-starters from the previous match and on-field recovery for the starting players. Four days pre-match (MD-4) focused on drills designed to develop players' strength, power and ability to repeatedly produce explosive actions. This session was devised to improve technical and tactical understanding when 'out of possession' whilst developing the necessary physical qualities to produce high accelerations and decelerations without decrement. Individual and unit (defence, midfield, attack) practices followed by positional games and small-sided games with goalkeepers in restricted pitch dimensions were delivered. When delivered, three days pre-match (MD-3) aimed to tactically prepare players when 'in possession' whilst developing position-specific high-intensity and sprint running capabilities. Practices entailed full-pitch attacking tactical patterns (10 v 0, 10 v 4) and large-numbered games regularly concluding in 11 v 11 format (> 8 v 8 plus goalkeepers).
The structure of MD-2, two days prior to the match, concentrated on repeating technical-tactical information at low intensity in various functional pitch areas and dimensions, and thus was regarded as an 'underloaded' session considering all key GPS metrics. This session included position-specific passing patterns and then divided players into unit-specific drills for defending or attacking. The final session of the weekly micro-cycle, MD-1, was standardized with little variety, with drills intended to provide neural stimulation to players whilst also finalizing tactical situations and set-plays. In micro-cycles where two matches were played (i.e., Saturday and Tuesday), the micro-cycle structure altered to the following: MD+1 consisted of off-feet recovery for starting players and compensation training for the non-starters; MD+2 replicated a standardized MD-1 without any explosive actions (shooting, short sprints). For the purposes of this study, the tactical periodization approach and subsequent training load from all MD-5, MD-4, MD-3, MD-2, and MD-1 training sessions performed across the 2019–20 to 2022–23 seasons were examined. For study reliability and validity, only data from players who performed the full session were included, withdrawing data from players whose training load was manipulated due to fatigue management or injury. A total of 840 team training days and 65,219 training data points (i.e., drills performed by players within training sessions) were examined. Physical data were consistently monitored across the four study seasons during all training sessions using an 18 Hz Global Positioning System (GPS) tracking system (Apex Pod, version 4.03, 50 g, 88 × 33 mm; Statsports; Northern Ireland, UK) that has been previously validated in a student population for tracking distance covered and peak velocity during simulated team sports and linear sprinting. All devices were activated 30 minutes before data collection to allow the acquisition of satellite signals and to synchronize the GPS clock with the satellite's atomic clock. Quantifying the devices' accuracy indicated a 2.5% estimation error in distance covered, with accuracy improving as the distance covered increased and the speed of movement decreased. To avoid inter-unit error, each player wore the same device during the study period, although the present GPS system has previously reported excellent inter-unit reliability. Specifically designed vests were used to hold the devices, located on the player's upper torso and anatomically adjusted to each player, as previously described. The devices connected to a mean of 21 ± 3 satellites (range 18–23), while the horizontal dilution of precision (HDOP) across all seasons was 1.3. On completion of each session, GPS data were extracted using proprietary software (Apex, 10 Hz version 4.3.8, Statsports Software; Northern Ireland, UK), as software-derived data is a simpler and more efficient way for practitioners to obtain data in an applied environment, with no differences reported between processing methods (software-derived versus raw processed). The dwell time (minimum effort duration) was set at 0.5 s to detect high-intensity running and 1 s to detect sprint distance efforts, in line with the manufacturer's recommendations and default settings to maintain consistent data processing.
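To make the dwell-time logic concrete, the following is a minimal R sketch of how distance within a speed zone can be accumulated only for efforts that exceed a minimum duration. The function name, the 10 Hz sampling rate, and the synthetic data are illustrative assumptions only; this does not reproduce the proprietary Statsports processing, and the thresholds correspond to the HSR (5.5–7 m/s) and sprint (> 7 m/s) zones defined in the next paragraph.

```r
# Minimal sketch (not the proprietary Statsports algorithm): accumulate distance
# in a speed zone only for efforts lasting at least the dwell time.
zone_distance <- function(speed, hz = 10, lo = 5.5, hi = 7.0, dwell_s = 0.5) {
  in_zone <- speed >= lo & speed < hi
  runs <- rle(in_zone)
  # Keep only in-zone runs lasting at least dwell_s seconds
  runs$values <- runs$values & (runs$lengths >= dwell_s * hz)
  # Each retained sample contributes speed (m/s) / sampling rate (Hz) metres
  sum(speed[inverse.rle(runs)] / hz)
}

set.seed(1)
speed <- pmax(0, rnorm(600, mean = 4, sd = 2))       # 60 s of synthetic 10 Hz data
zone_distance(speed)                                 # HSR zone: 5.5-7 m/s, 0.5 s dwell
zone_distance(speed, lo = 7, hi = Inf, dwell_s = 1)  # sprint zone: > 7 m/s, 1 s dwell
```

The run-length encoding step is what implements the dwell-time filter: brief threshold crossings shorter than the minimum effort duration are discarded before distance is summed.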
Furthermore, the internal processing of the GPS units utilized the Doppler shift method to calculate both distance and velocity data, which has been shown to display a higher level of precision and less error compared with data calculated via positional differentiation. Relative distances covered per minute (m/min) were examined in the following categories, reported based on previous studies: total distance (m); high-speed running (HSR) distance (m; total distance covered at 5.5–7 m/s); sprint distance (m; total distance covered at > 7 m/s); and high metabolic load distance (HMLD) (m; the total amount of HSR, coupled with the total distance of accelerations (> 3 m/s²) and decelerations (< -3 m/s²)). The HMLD variable refers to the distance covered with a power consumption above 25.5 W/kg; this value corresponds to running at a constant velocity of 5.5 m/s (19.8 km/h) on grass. The number of HML efforts (efforts performed above 25.5 W/kg), sprint efforts (total number of sprints performed at > 7 m/s), accelerations (> 3 m/s² with a minimum duration of 0.5 s) and decelerations (< -3 m/s² with a minimum duration of 0.5 s) were also examined. Descriptive data (mean ± SD) were determined for all GPS variables of interest for position (centre-backs, full-backs, centre midfielders, attacking midfielders, and centre forwards) and league (EPL, ECL). Homogeneity of variance was assessed via Levene's statistic and, where violated, Welch's adjustment was used to correct the F-ratio. Multiple two-way (5 × 2) analyses of variance (ANOVAs) were conducted across all GPS variables to determine the interaction effects between position and league. Post-hoc analysis, using Bonferroni or Games-Howell tests where equal variances were and were not assumed, respectively, was conducted to identify differences in training demands between leagues for each position. Effect size (η²) values and Cohen's d values (d) are also reported for significant results. η² values in the range 0–0.0099 were considered insignificant effect sizes, 0.0100–0.0588 small, 0.0589–0.1379 medium, and values greater than 0.1379 large. Cohen's d effect size magnitudes were interpreted using the following classifications: trivial < 0.19; small 0.2–0.59; moderate 0.6–1.19; large 1.2–1.9; very large 2.0–3.9; extremely large > 4.0. All significance values were accepted at p < 0.05 and all statistical procedures were conducted using JASP (version 0.18) for Macintosh. Results of the two-way ANOVA for each GPS metric are reported in Table 1 (mean ± SD) and Figure 1. There was a significant interaction effect between position and league for all GPS metrics (p < 0.001; η² = 0.001–0.003), except for relative HSR distance, sprint distance, and sprint efforts (p > 0.05). Centre-backs and attacking midfielders covered more total distance, HSR distance, and HMLD per minute, and completed more HML efforts, accelerations, and decelerations per minute in training sessions during the EPL compared to the ECL (p < 0.001–0.002; d = 0.086–0.340). Centre midfielders and full-backs completed more total distance and HMLD per minute, and completed more HML efforts, accelerations, and decelerations per minute in EPL training compared to the ECL (p < 0.001; d = 0.164–0.325). Finally, centre forwards covered more total distance per minute in training sessions in the EPL compared to the ECL (p = 0.017; d = 0.089).
There was also a significant main effect for league for all GPS metrics (p < 0.001; η² = 0.001–0.009), with EPL training sessions resulting in greater total distance per minute, HSR distance per minute, HMLD per minute, and number of HML efforts, accelerations, and decelerations per minute compared to training in the ECL (p < 0.001; d = 0.061–0.224). Sprint distance per minute and the number of sprints per minute were higher in ECL training sessions compared to the EPL (p < 0.001, d = 0.043; p = 0.003, d = 0.031, respectively). Findings revealed a significant main effect for position for all GPS metrics (p < 0.001; η² = 0.001–0.005). Post-hoc analysis confirmed that centre midfielders covered more distance per minute than all other positions (p < 0.001, d = 0.040–0.167), while full-backs and attacking midfielders also covered more than centre-backs and centre forwards (p < 0.001–0.018, d = 0.041–0.127). For HSR per minute, full-backs covered more than centre-backs, centre midfielders and centre forwards (p < 0.001, d = 0.067–0.103), while attacking midfielders also covered more than centre-backs (p < 0.001, d = 0.061). Attacking midfielders, centre midfielders, and full-backs covered more HMLD per minute than centre-backs and centre forwards (p < 0.001, d = 0.113–0.153). Attacking midfielders, centre midfielders, and full-backs completed more HML efforts per minute than centre-backs and centre forwards (p < 0.001–0.013, d = 0.061–0.229). Full-backs covered more sprint distance per minute and completed more sprints per minute than all other positions (p < 0.001, d = 0.054–0.098). Attacking midfielders, centre midfielders and full-backs completed more accelerations and decelerations per minute than centre forwards (p < 0.001–0.003, d = 0.053–0.134). The main findings from the present study showed higher training values in the EPL compared to the ECL, with the exception of sprinting (both distance and efforts), which showed higher training values in the ECL (significant differences, although effect sizes were trivial). When comparing player positions, loading patterns varied between metrics. The results of the present study showed higher values during the EPL seasons compared with the ECL seasons, which contradicts older research on an English team but is in line with previous studies that examined first and second division Spanish and Norwegian teams and found higher load demands in first division teams. This may partly be attributed to the first division requiring a higher physical capacity and, consequently, a higher match running capacity. Another explanation may be related to the playing formations implemented by first division teams, which may require higher external loads, although this variable was not addressed in the present study. However, previous studies [7–11] analyzed match data, which was not examined in the present study that included only training data. Nonetheless, considering that match-play is regarded as the most important session of the training week with the highest load, training session design should understand and utilize match data values as a reference. Thus, higher training loads would be expected in the EPL when compared with the ECL. In addition, recent research compared senior (first team) and U-18 soccer players from the same EPL team and reported higher high-intensity (5.5–7 m/s) and sprint (> 7 m/s) values for first team players compared with U-18 players.
Furthermore, U-18 players covered higher total distance than first team players, which may be associated with the lower competitive level of the U-18 players. However, contrasting results were found in a recent study that compared first team and U-18 soccer players from the same Scottish Premier team and found no differences in external load measures between groups. These studies had different designs to the present research and examined U-18 soccer players, while the current study investigated the same senior (first team) players (> 18 years) competing in the EPL and ECL. As previously mentioned, a minor exception was found in sprinting, both in distance and efforts. However, it should be acknowledged that differences in sprint distance, efforts and relative distances were trivial, although statistically significant. This may partly be explained by the very large number of drills examined. Moreover, in both leagues there was a very small amount of sprint distance (0.3–0.4 m/min) and few sprint efforts (0.02 efforts/min) during the examined training sessions. This may be linked to sprint distance equalling zero in many drills, which can also be highlighted as a limitation. Furthermore, style of play and team formation were not considered in the present study, and a recent study that examined EPL players showed that formation and possession can have a significant impact on total distance, HSR, and HMLD. Although these contextual factors may partly explain the current results, more research is warranted to confirm this notion. Considering playing position, the usual trend of higher total distance values for centre midfielders was confirmed. This position was followed by full-backs and attacking midfielders. The same scenario occurred for HML distance and efforts. Moreover, full-backs showed the highest values for HSR, accelerations and decelerations, followed by attacking and central midfielders. Full-backs also showed the highest sprinting values, both distance and efforts, followed by centre-backs and centre midfielders. The present results were similar to those reported in a recent systematic review, although some differences were evident. For example, wide midfielders (although not examined in the present study) and centre forwards covered greater running distances (> 14 km/h), while central midfielders performed a higher number of accelerations and decelerations. However, it is relevant to highlight that the playing position findings are associated with the differing tactical roles within the team, particularly when defending and attacking. Specifically, the general trend of this study showed higher values for centre midfielders, full-backs and attacking midfielders, which can be associated with covering a larger action zone in both training and matches. Therefore, a fundamental attribute for these positions is a higher aerobic capacity than for other positions such as centre-backs and centre forwards. Considering practical applications, it may be suggested that EPL training was more demanding than ECL training with the exception of sprint measures. This information is relevant for ECL coaches and performance staff to obtain knowledge of the training load values performed in the EPL. Similarly, for EPL coaches and staff, these findings may support training design to maintain EPL status and avoid relegation to the ECL.
Furthermore, sports scientists may utilize the findings of the current study to design position-specific physical conditioning training and individualized recovery sessions, whilst considering league standard and position. Finally, to aid practitioners in designing more effective training, contextualizing key physical demands with tactical structure may be of great benefit. Despite the findings of the current study, there are some limitations that should be acknowledged. As mentioned, style of play and playing formation were not considered and might explain some of the current study findings. Moreover, analysis of the evolution of the team across the four seasons would provide additional knowledge for coaches and performance staff; for instance, it could reveal changes in external load according to possession classification, playing style and formation. Therefore, the aforementioned variables should be considered in future research. Finally, all data should be cautiously interpreted, as only one team from the EPL and ECL was examined, and therefore generalization to different leagues/countries must be made with caution. The main conclusion was that training values were higher in the EPL, with the exception of sprinting (both distance and efforts), which was higher in the ECL. Nonetheless, this study highlighted the greater high-speed running demands of EPL training when compared with ECL sessions. The values presented in this study constitute possible reference values that may be used by coaches, performance staff, or practitioners to achieve the competitive levels required to cope with EPL and ECL demands. Furthermore, these findings may allow coaches of ECL teams to replicate such values, or even increase them during specific training sessions, in order to prepare players for the EPL. In addition, the present data provide some guidance on the differing physical demands placed on various positions and may support coaches and practitioners in designing position-specific drills incorporating physical and technical/tactical strategies. Nevertheless, all presented values should be interpreted with caution since only data from one team were utilized.
Rugby union, henceforth referred to as rugby, is a collision sport characterized by intermittent, high-intensity activities (e.g., sprints, collisions) interspersed with periods of low-intensity activity and rest. Training for rugby imposes a stress or training load (TL) affecting well-being in a dose-dependent way. For example, a high TL reportedly promotes a deterioration in next-day sleep quality, motivation, fatigue, stress, and appetite versus a low TL. Furthermore, a typical training week in professional rugby players, which can include multiple daily sessions, revealed significant fatigue and soreness 1–2 days after baseline (day 1) testing, with small to moderate effect-size shifts in fatigue, soreness, and sleep quality 1–6 days later. Despite these findings, temporal knowledge of athlete recovery is often limited by next-day comparisons. Consequently, at the elite playing level, a more detailed longitudinal examination of TL and well-being changes over time is needed. Rugby matches, which account for ~5–11% of all rugby-related activities, present another strong adaptive stimulus with time-lagged outcomes (i.e., delayed effects over several days after a given stimulus). This includes post-match changes in mood, sleep, muscle damage, and fatigue metrics, alongside performance and hormonal state. Reviews of the rugby literature indicate that psychological well-being (e.g., stress, fatigue) is highly responsive to competition and its outcome, but recovers within 2–3 days. In contrast, post-match perceptions of physical recovery (e.g., soreness) might take four or more days before restoration to baseline values. This difference in psychological and physical recovery is likely due to exercise- and impact-related muscle damage. Surprisingly, few studies have quantified individual perceptions of competitive stress across repeated matches. This, we believe, is a prerequisite to discern the time-course of psychological and physical recovery, relative to diverse match loads or dose-response equivalency under ecological conditions. Tournament play presents another unique stressor characterized by training and match loads of varying intensity, duration, and complexity, along with planned and weekly TL variation. These demands are frequently coupled with short recovery periods between training days and competition, dramatically affecting athlete well-being and recovery. Rugby studies have linked different TL indicators to well-being, stress, and recovery in such settings, but findings are generally restricted by the reporting of pooled (e.g., seasonal or weekly) effects. To date, no research has sought to illuminate the TL and well-being associations (i.e., dose-dependencies, time-lagged interplay) at the daily level during elite rugby tournament play. Since elite athlete monitoring and exercise prescription often reside in a narrow assessment-feedback window, understanding these intricacies could provide a stronger basis to guide decisions to optimize recovery, performance, and team success. To address gaps in the literature, we investigated the dose-response and time-lagged effects of daily TL on athlete well-being (i.e., mood, stress, soreness, fatigue, sleep quality) during a 3-week international rugby series. Our primary goals were to: (1) profile daily TL and wellness fluctuations between and within study weeks, and (2) model the daily TL and well-being associations at varying dosages and time lags.
To capture significant or meaningful effects in an elite rugby context, we tested the impact of three daily TLs (i.e., the within-person mean, and +1SD and +2SD above the within-person mean) on well-being across 0–5 lag days. No firm hypotheses were made in relation to these goals, as the rugby series profiling and lagged analyses were exploratory in nature and contingent upon the data collected herein. Twenty-two elite male rugby players, who formed part of a national (Scotland) training squad preparing for an international series in 2010, were assessed in this research. To ensure robust model estimates, we set a minimum number of TL (8 or more) and well-being (12 or more) observations for study inclusion, which also excluded injured players from our final analyses. Our sample comprised 10 backs and 12 forwards with a mean (± SD) age, height, and body mass of 27.6 ± 3.4 years, 1.88 ± 0.09 m, and 102.2 ± 14.1 kg, respectively. The participants entered a training-camp environment upon selection, and this ensured some control in terms of physical training and other environmental factors (e.g., nutritional intake, meal timing, sleep, travel). Each participant received a full health and medical screening before study commencement. Written informed consent was given prior to data collection, after a full briefing of the study aims, procedures, and potential benefits. Ethical approval was granted by the Swansea University Human Research Ethics Committee, Swansea. A longitudinal, single-group, observational design was employed to achieve the study goals. The participants were monitored across a 3-week international rugby series played in the northern hemisphere autumn test window, involving three matches against southern hemisphere teams. Daily TL was assessed 4–5 days per week across all training and match activities. Psychological (i.e., mood, stress) and physical (i.e., soreness, fatigue, sleep quality) well-being were assessed at a similar weekly frequency. In the first instance, each variable was described in terms of changes within, and differences between, study weeks. Next, we applied a distributed lag non-linear model (DLNM) to estimate the bi-dimensional lag-response associations on the pooled dataset. Specifically, we predicted the well-being responses at 0–5 lag days following three incremental daily TLs. Training days were scheduled for the Monday, Tuesday, Wednesday, and Friday of each week (i.e., 1–3 sessions a day, 30–90 min a session), with Thursday and Sunday allocated as rest days. Some training adjustments were made depending on the match outcome and team performance; for example, an extra rest day was prescribed on the Monday of the last week. Training load was monitored across all planned sessions (i.e., team and skills-based workouts, conditioning sessions, exercise stress-testing, gym workouts) using the session rating of perceived exertion (sRPE) method, anchored on a 0–10 Likert scale. This metric is widely used in rugby research and practice to track individual perception of physical and physiological stressors. Briefly, the participants provided a sRPE within 30 min of each training session, which was multiplied by activity duration (in minutes) to determine TL in arbitrary units (A.U). Where two or more sessions were completed per day, the TLs were summed to derive a single daily TL representing physical stress on that day. The three international matches were played on consecutive Saturdays at the home venue, or a nearby venue, for this team.
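As a concrete illustration of the sRPE calculation just described, the minimal R sketch below computes a daily TL from session ratings and durations. The data frame, player labels, and values are hypothetical and serve only to show the arithmetic.

```r
# Minimal sketch of the sRPE method: session load = sRPE (0-10) x duration (min);
# multiple sessions on the same day are summed into a single daily TL (A.U).
# The data frame and values are hypothetical.
sessions <- data.frame(
  player   = c("A", "A", "B"),
  date     = as.Date(rep("2010-11-01", 3)),
  srpe     = c(6, 4, 7),
  duration = c(60, 45, 75)   # minutes
)
sessions$load <- sessions$srpe * sessions$duration
daily_tl <- aggregate(load ~ player + date, data = sessions, FUN = sum)
daily_tl   # player A: 540 A.U (two sessions summed), player B: 525 A.U
```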
For those participants selected to play, a daily TL was computed by multiplying the sRPE by actual time played (in minutes). Playing time was recorded by coaching staff on game day and later verified using an online resource ( http://en.espn.co.uk/rugby/ ). Because of restricted post-match access to the studied athletes, the sRPE data were collected the next day prior to team breakfast. The total number of TL observations, including all training and match days, was 281 (participant range = 8–14). Athlete well-being was self-assessed before breakfast (served around 8–9 am) on the Tuesday, Wednesday, Friday, Saturday, and/or Sunday of each week. Wake time was not strictly assessed, but the athletes arose before breakfast. Likewise, bedtime was not prescribed, but rather self-selected to ensure sufficient sleep was obtained for training and matches. Participants completed a simple inventory rating their current psychological (i.e., mood, general stress) and physical (i.e., muscle soreness, general fatigue, sleep quality) state. Each subscale was scored on a 10-point Likert scale, anchored from one (extremely low / poor) up to 10 (maximal / excellent). Single-item perceived measures are widely used in rugby, and often exhibit greater sensitivity than objective markers to detect small individual changes in recovery and fatigue. The total number of mood, stress, soreness, fatigue, and sleep quality observations was 323–324 (participant range = 12–16). Study data were analyzed using R software. First, we described the daily TL and well-being trajectories, after plotting each time series over the 21-day period. To better represent well-being dynamics, each time series was smoothed using a generalized additive model. Second, descriptive statistics were calculated for each variable, including within-person means and SDs, along with an intraclass correlation coefficient (ICC) to assess measurement reliability. The ICCs were interpreted as poor (< 0.50), moderate (0.50 to 0.75), good (0.75 to 0.90), and excellent (> 0.90). Finally, within-person Pearson correlations were computed to assess bivariate relationships between study variables, which we defined as weak (0.20 to < 0.40), moderate (0.40 to < 0.60), strong (0.60 to < 0.80) or very strong (0.80+) effect sizes. To examine the dose-response and time-lagged effect of daily TL on athlete well-being, we ran a series of DLNMs in the dlnm package. In a two-step process, we first constructed a cross-basis for each comparison: a bi-dimensional space of functions describing the association along the spaces of predictor and lags. The cross-basis was fitted with a natural cubic spline for the lag-response, a quadratic spline for the exposure-response, and a maximum lag of five days. Participant was added as a group factor. One requirement (for a predictor) is an equally spaced, complete, and ordered time series. To achieve this, missing daily TLs were allocated a value of 1 and actual daily TLs corrected by the same amount. Once constructed, the cross-basis function was entered into a random-intercept, linear mixed-effects model to predict the daily TL and well-being associations at each nominated lag length. The DLNM results are plotted as lag-response curves over 0–5 days (at 0.5-day or 12-hourly intervals). Estimates were derived for three daily TLs (358, 576, 794 A.U) determined from the within-person descriptive results: see Table 1.
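A minimal, self-contained R sketch of this two-step DLNM procedure is shown below on synthetic data, using the dlnm and lme4 packages. The spline settings (degrees, knot placement) and the synthetic data frame are illustrative assumptions; the exact specification of the original analysis may have differed.

```r
# Minimal sketch of the two-step DLNM workflow: (1) build a bi-dimensional
# cross-basis over exposure (daily TL) and lag (0-5 days); (2) enter it into a
# random-intercept linear mixed-effects model. Data below are synthetic.
library(dlnm)
library(lme4)

set.seed(2)
d <- expand.grid(day = 1:21, player = factor(1:22))   # ordered series per player
d$tl <- pmax(0, rnorm(nrow(d), 358, 218))             # synthetic daily TL (A.U)
d$wellbeing <- 7 - 0.002 * d$tl + rnorm(nrow(d))      # synthetic 1-10 subscale

# Step 1: quadratic spline for the exposure-response, natural cubic spline for
# the lag-response, maximum lag of 5 days, participant as the group factor
cb <- crossbasis(d$tl, lag = 5,
                 argvar = list(fun = "bs", degree = 2),
                 arglag = list(fun = "ns"),
                 group  = d$player)

# Step 2: random-intercept linear mixed-effects model with the cross-basis
m <- lmer(wellbeing ~ cb + (1 | player), data = d)

# Lag-response predictions at the three TL doses, centred on the 140 A.U
# reference, in 0.5-day increments
pred <- crosspred(cb, m, at = c(358, 576, 794), cen = 140, bylag = 0.5)
plot(pred, var = 576, ylab = "Change in well-being")  # response to the middle dose
```

Note that crossbasis() requires a complete, equally spaced series within each group, which is why the text above describes imputing missing daily TLs before model fitting.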
We chose a reference point of 140 A.U (-1SD below the within-person mean), as each relationship was modelled with a non-linear function with no obvious reference value. The well-being estimates at each lag were mean-centered before plotting, per the software's default procedure. As such, y-axis values above and below zero respectively indicate higher and lower well-being scores relative to study-averaged values. All predictions are presented with a 95% confidence interval (CI); a 95% CI band that excludes zero can be interpreted as representing a significant window. The time-series plots are illustrated in Figure 1. A higher average daily TL emerged in week one (489 ± 244 A.U) than in weeks two (301 ± 186 A.U) and three (267 ± 155 A.U), due to a decline in training intensity and/or frequency (see methods above). Daily (match) TLs were similar in weeks one (519 ± 232 A.U) and two (510 ± 223 A.U), but lower in week three (364 ± 231 A.U). These outcomes followed a substantial defeat (by 46 points) before two close victories (by 4 and 3 points). On average, daily TL tended to rise and fall within a training week (Monday = 367 ± 296 A.U, Tuesday = 465 ± 228 A.U, Wednesday = 376 ± 107 A.U, Friday = 154 ± 40.5 A.U), before increasing on match day (Saturday = 459 ± 237 A.U). Mood and sleep quality tended to rise (up to 1SD) a day before each match, before falling below (down to -1.7SD) the study average or baseline scores 1–2 days after, and returning to baseline 3–4 days before the next game. Perceptions of stress, soreness, and fatigue showed a reversal of these patterns, with lower scores (down to -0.9SD) a day prior to competition, rising values (up to 1.5SD) 1–2 days after, and a mid-week return to baseline. Table 1 summarizes the descriptive and reliability statistics for each study variable. The ICC for daily TL was trivial (0.02), meaning that trait reliability for this outcome was extremely poor. Interpreted another way, 98% of the variance in daily TL is explainable by state factors; a result coinciding with the considerable TL range (18–979 A.U). The well-being ICCs were stronger (0.19 to 0.31), but still poor overall, and again indicate more variance (69–81%) at the state level. In summary, we found that the predominant source of measurement variation in the current context was day-to-day (within-person) shifts, whilst trait-like (between-person) differences were a relatively minor source. Within-person changes in daily TL were not significantly related to any well-being measure (see Table 2). We did find significant (p < 0.001) interrelationships between all well-being subscales, varying between weak and strong effects. Within-person perceptions of sleep quality and mood state tended to rise and fall together (i.e., positive relationships), as did stress, soreness, and fatigue. A rise in sleep quality and mood was accompanied by a decline in stress, soreness, and fatigue (i.e., negative relationships). These linkages are consistent with the plotted time series, whereby covarying patterns of change between two variables yielded positive relationships and opposing patterns produced negative relationships. The lag-response associations are displayed in Figure 2. A significant decline in mood (-0.6 to -2.0 units; -9 to -30%) was seen at a daily TL of 358 A.U (0.5–1.0 days, 2A), 576 A.U (0.5–1.0 days, 2B), and 794 A.U (4.5–5.0 days, 2C) from mean-centered values.
Stress increased at all daily TLs from 0.5 (11%) to 1.6 units (36%), with a biphasic response noted at 576 A.U (1.0–1.5 and 5.0 days, 2E) and 794 A.U (1.0–2.0 and 5.0 days, 2F). Soreness did not deviate significantly at 358 A.U (2G) versus mean-centered values, whereas fatigue (2J) showed some change (±0.5 units; ±10%) at this load. Both subscales responded similarly at the two highest loads. At 576 A.U, we saw a significant increase in soreness (0.6 to 1.7 units; 12 to 34%, 2H) and fatigue (0.8 to 1.6 units; 17 to 33%, 2K) after 0.5–2.0 days, with 794 A.U promoting a more dramatic rise in soreness (up to 3.7 units; 76%) and fatigue (up to 3.5 units; 72%) over lag periods from 0.5–3.0 days (2I and 2L, respectively). A small biphasic rise in fatigue also emerged at 358 A.U (0.6 units; 12%), 576 A.U (0.9 units; 19%) and 794 A.U (1.1 units; 22%), all at a time lag of 5.0 days. Changes in sleep quality mirrored those of mood, declining significantly (-0.8 to -1.3 units; -13 to -21%) from mean-centered scores at a daily TL of 358 A.U (0.5–2.0 days, 2M), 576 A.U (1.0–2.0 and 5.0 days, 2N), and 794 A.U (5.0 days, 2O). This study explored daily TL and well-being associations across an international rugby series, including detailed characterization of delayed well-being effects at three different TLs. Descriptive profiling revealed substantial fluctuations in daily TL within and between study weeks, whereas the well-being subscales oscillated around a stable weekly baseline. The DLNMs identified delayed, but highly nuanced, effects of daily TL exposure on each well-being subscale. The well-being responses were distinguishable by differences in lag interval, duration, and magnitude of change at each TL dosage. Daily TL was highly variable across the international rugby series, in line with other tournament play or competition data. The large weekly variation in daily TL reflects, in part, a team-management strategy. Following the defeat in week one, a coaching decision was made to reduce training volume, leading to a -38% and -45% drop in daily TL over subsequent weeks. The quality of each match opponent is another consideration. Weeks one and two yielded similar match TLs against the 1st- and 2nd-ranked teams in the world (the participant team was ranked 7th and 8th), but a -30% drop in match TL arose in the last week against the 11th-ranked team. The daily TL shifts within a weekly macrocycle (i.e., +27% to -58% vs. Monday training) are typical of tapering strategies used in rugby to ensure peak performance on game day. Whilst the day-to-day variation in TL was considerable across the rugby series, the well-being subscales exhibited smaller weekly fluctuations, before returning to a stable baseline. This consistency probably reflects the psychological processing of information, where internal drivers operate within tighter boundaries than external workload or TL measurements, coupled with narrower scales of measurement (e.g., 1–10 Likert) and stronger stability over time, as we demonstrated herein. Cursory inspection of the DLNMs confirmed that physical stress can adversely affect mood state and quality of sleep, whilst promoting greater stress, muscle soreness, and general fatigue. More intricate patterns transpired at different TL dosages. For mood and sleep quality, a lag delay emerged at increasing daily TLs that speculatively reflects greater use of recovery strategies (e.g., naps, massage, contrast showers) following harder training days and matches.
A rising daily TL also promoted a larger stress, soreness, and fatigue response, both in magnitude and duration. Soreness and fatigue were the most reactive measures in this work, likely due to a combination of physical demands, bodily contacts and collisions. The thresholding of the soreness subscale (i.e., no change at 358 A.U) can potentially be explained by stress habituation across the training camp, such that a stronger stimulus is needed to induce a perceptible change. The biphasic stress and fatigue responses are also novel, but not widely reported in the rugby literature owing to study limitations (e.g., next-day comparisons). One possible reason is a cumulative training effect that is intrinsic to our dataset. These nuanced patterns add to our understanding of well-being as a multifaceted and dynamic process that adapts, transiently, to rugby stressors in intricate ways. Also noteworthy is that the well-being changes are plausible (± 3.7 units) and predicted by TLs typical of an elite rugby environment, suggesting real-world interpretations and applications. Knowledge of the time-course of well-being recovery, with an incremental rise in daily TL, provides a stronger basis to guide decisions on team planning and management. For instance, a weekly TL can be better distributed to ensure that soreness and fatigue scores fall within an acceptable match-day range to optimize performance. This can be achieved by prescribing the heaviest training 3+ days earlier, with implications for AM and PM load distribution based on the 0.5-day lag intervals that, if needed, can be condensed further (e.g., hourly) for more refined exercise prescription. The targeted recovery of soreness and fatigue might also prove expedient when a daily TL exceeds a nominal threshold (e.g., 1–2SD above the mean) for a given athlete and their anticipated recovery. Further possibilities exist to inform psychologically based strategies. As an example, a pre-match or post-match psychological intervention (e.g., player video footage, coach feedback) could help counter mood-related disturbances that manifest across a training week. Our findings also highlight the utility of the distributed lag (linear or non-linear) model as a flexible approach to aggregate and explore complex bivariate associations in longitudinal sports data. This includes simple, yet informative, plots to better communicate results to target audiences, and data-driven estimates with real-world implications. Several drawbacks of the current study are recognized. The selective recruitment of elite rugby players may limit knowledge transfer to lesser-trained cohorts and to non-elite settings. Moreover, the TL and well-being data were collected on different time scales across the day, but for simplicity were modeled as time-matching variables. The next-day collection of match sRPE is another limitation. We also assumed that our well-being predictions are constant at the start and end of the series, and equivalent across all rugby activities. We do feel, however, that reasonable estimates were obtained once aggregating data across the rugby series. Further bias might arise from positional (i.e., forwards, backs) differences in rugby match demands and weekly workloads. Sensitivity analyses (data not shown) indicated that the addition of positional group did not improve our models. Sensor-derived measures (e.g., global positioning, accelerometry) of player load were not available during this study, but would advance future work.
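Returning briefly to the individual TL threshold idea raised in the practical applications above, a small, hypothetical illustration (flagging days above, say, 1.5SD of an athlete's own mean so that targeted recovery can be scheduled; file and column names are assumed):

import pandas as pd

df = pd.read_csv("daily_monitoring.csv")                        # hypothetical: player, day, tl
stats = df.groupby("player")["tl"].agg(["mean", "std"])
df = df.join(stats, on="player")
df["flag_recovery"] = df["tl"] > df["mean"] + 1.5 * df["std"]   # nominal 1.5SD threshold
print(df.loc[df["flag_recovery"], ["player", "day", "tl"]])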
We envisage further benefits from study replication across longer rugby tournaments and different seasonal phases, such as the partitioning of stress loads (and the ensuing recovery) into periods of training only and training plus matches, as well as co-validation and refinement of our predictions for broader use in sport. This study offers new insight regarding physical stress and temporal dependencies in athlete well-being during an international rugby series. Daily TL exposure predicted adverse responses (i.e., declining mood and sleep quality; rising stress, soreness, and fatigue) that differed in time lag, lag duration, and magnitude relative to dosage. A more precise understanding of these associations can guide training prescription and psychological strategies to optimize recovery, performance, and team success. Examples include better distribution of a weekly TL and the targeted recovery of soreness and fatigue above an individual TL threshold.
|
Study
|
other
|
en
| 0.999996 |
PMC11694200
|
Although many hypoxic training methods exist, the majority of endurance athletes still utilize altitude training with prolonged exposure to moderate hypoxia (living high, training high). However, more and more elite athletes who need to repeat high-intensity efforts during competition include intermittent hypoxic training methods in their training and, more specifically, high-intensity interval training in hypoxia (HIIT). Such intermittent hypoxic training has been shown to augment oxidative stress compared to similar training done in normoxia. Indeed, aggravation of oxidative damage through unbalanced DNA strand breakage, increases in lipid peroxidation and protein oxidation have been reported for exercise in hypoxic conditions. Although the mechanisms underlying this excess oxidative stress are not entirely clear, they appear to involve a reduced redox potential within the mitochondria as well as increased catecholamine production and activation of the xanthine oxidase pathway (see for more details). This could also be the consequence not only of increased ROS generation, but also of decreased activity of antioxidant systems. Antioxidant supplementation can have beneficial effects in attenuating and/or preventing the oxidative damage associated with exercise in hypoxia. Nitric oxide (NO), an important antioxidant agent that suppresses the formation of free radicals through the NO uncoupling pathway, would be a worthy intervention. However, its bioavailability can also be enhanced through an alternative pathway involving the sequential reduction of nitrate (NO3−) to nitrite (NO2−) and further to NO. The latter pathway is independent of oxygen, and therefore augmented in hypoxic conditions. Reducing ROS formation is expected to be beneficial, especially when mediated by the oxygen-independent NO3− – NO2− – NO pathway. Therefore, increasing NO bioavailability via this pathway may provide potential ergogenic effects for exercise performed in hypoxia, as the availability of NO has been suggested to influence human acclimatisation to altitude. Notwithstanding this hypothesis, there is little evidence on the influence of NO3− supplementation on oxidative stress induced by hypoxic high-intensity exercise. Ashmore et al. studied rats exposed to normobaric hypoxia, reporting that dietary NO3− supplementation reduced the levels of oxidative stress markers at rest, suggesting that this strategy may be of benefit to individuals exposed to altitude. Carriker et al. investigated changes in oxidative stress and arterial oxygen saturation (SaO2) during exercise in hypobaric hypoxia following acute NO3− supplementation in well-trained males and concluded that acute NO3− supplementation yielded no beneficial changes in oxidative stress. However, to our knowledge, the influence of NO3− supplementation on oxidative stress during a long-term period of high-intensity exercise in hypoxia remains unknown. Whereas the acute combination of hypoxia and exercise (of low, moderate and high intensities) clearly augments oxidative stress, the long-term responses to the combined stimuli remain debated. In particular, high-intensity training (work performed above the lactate threshold interspersed by periods of low-intensity exercise or complete rest) has been shown to significantly augment oxidative stress, mostly via reduced antioxidant capacity. In contrast with these findings, moderate-intensity exercise does not seem to modify antioxidant status or significantly alter redox balance.
It must be noted that during these interventions the "living high, training low" model was implemented, and therefore exercise sessions were performed in normoxia. Collectively, this suggests that long-term high-intensity training under hypoxic conditions (training high) may modulate the systemic redox balance in humans even further. Therefore, increasing NO bioavailability, via the NO3− – NO2− – NO pathway, may provide a key complement in reducing the oxidative stress induced by exercise (especially at high intensity) in hypoxia. The aim of the present study was therefore to analyse the effects of dietary NO3− supplementation combined with prolonged high-intensity training performed under normobaric hypoxic conditions on the antioxidant/pro-oxidant balance. It was hypothesized that enhancing NO production via the NO3− – NO2− – NO pathway, by dietary NO3− supplementation, would mitigate oxidative stress under hypoxic conditions. Thirty trained male subjects (mean ± SD: 54.4 ± 8.2 ml·kg−1·min−1, 36.2 ± 6.3 yrs, 71.5 ± 8.1 kg and 174.8 ± 6.8 cm for relative maximal oxygen uptake (V̇O2max), age, weight, and height, respectively), training 4–5 sessions per week (> 10 years of experience), without differences between groups, volunteered to take part in this study. All subjects signed an informed consent form after they had been informed of all experimental procedures and possible risks associated with the experiments. Ethical approval was obtained from the Ethics Committee of the University of Trás-os-Montes and Alto Douro and all procedures complied with the ethics code of the Declaration of Helsinki. This randomized, single-blind, placebo-controlled, independent-group study was conducted in a normobaric hypoxic facility (Porto's Exercise Medical Center, Portugal: b-Cat). Participants performed 12 high-intensity interval training (HIIT) sessions on a cycle ergometer during a 4-week period (3 sessions/week), while randomly assigned to one of three experimental groups: (i) HNO: high-intensity exercise training sessions in normobaric hypoxia with NO3− supplement; (ii) HPL: high-intensity exercise training sessions in normobaric hypoxia with placebo supplement; and (iii) CON: high-intensity exercise training sessions in normoxia (FiO2 = 20.9%) with placebo supplement. Subjects were instructed to maintain their habitual physical activity level and normal diet but were asked to abstain from the use of any chewing gum and antibacterial mouthwash products. In the week before (baseline) and after the 4-week period (post-intervention; between the second and third day after the last training session), venous blood samples were collected (in the late afternoon/early evening period) from the median cubital vein and participants performed an exercise transition from rest to severe intensity until exhaustion (Tlim) on a cycle ergometer (Lode Excalibur Sport, Groningen, The Netherlands). Each week, participants performed two sessions of short aerobic intervals (HIT: 2 sets of 6 × 1 min at 90%Δ with 1 min active recovery between repetitions and 3 min between sets) and one session of repeated sprint training (RST: 4 sets of 6 × 10 s "all-out", with 20 s active recovery and 3 min between sets) on a cycle ergometer (Lode Excalibur Sport, Groningen, The Netherlands). All training intensities used were relative to the specific pV̇O2max (assessed in hypoxia for HNO and HPL, and in normoxia for the CON group).
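For readers unfamiliar with the %Δ notation, a work rate at a given %Δ is conventionally anchored between the gas-exchange threshold and the power at V̇O2max; the sketch below assumes that convention, and the wattages shown are purely hypothetical:

def delta_power(p_threshold: float, p_vo2max: float, pct: float) -> float:
    # work rate at pct of Δ, where Δ = p_vo2max - p_threshold (conventional definition, assumed here)
    return p_threshold + pct * (p_vo2max - p_threshold)

print(delta_power(200.0, 350.0, 0.90))   # 90%Δ for the HIT intervals -> 335.0 W
print(delta_power(200.0, 350.0, 0.80))   # 80%Δ for the Tlim test -> 320.0 W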
The number of repetitions was increased from 6 (1st and 2nd weeks) to 7 (3rd and 4th weeks) in both HIT and RST sessions (for more details see ). Supplements were ingested 2.5–3 h prior to each HIIT session. NO3− was administered in the form of beetroot juice containing 400 mg of a powdered standardized beetroot extract (containing 2% NO3−, ~8.4 mmol) dissolved in 150 ml of water (Sabeet, Sabinsa Corporation). An equivalent volume of currant juice served as the control drink. Time sustained was determined through the Tlim test, performed at 80%Δ, as previously reported. The test ended when the cadence could no longer be maintained within 10 rpm of the preferred cadence for > 5 s. Before the test, a standard 5 min warm-up exercise (50% of pV̇O2max), followed by 5 min of passive rest, was performed. All tests were performed at the same time of day (± 2 h). During the Tlim test, changes in muscle O2 saturation (SmO2) and in total haemoglobin (THb) were assessed in the capillaries (vastus lateralis muscle) through a near-infrared spectroscopy (NIRS) monitor (Moxy monitor, Fortiori Design, Minnesota, USA), positioned as previously suggested, and demonstrated to be valid and reliable for measuring SmO2. After venous blood samples were withdrawn, plasma was immediately separated by centrifugation. Samples were then divided into aliquots and immediately stored at −80°C until analysed for: i) oxidative stress markers (advanced oxidation protein products: AOPP, malondialdehyde: MDA, nitrotyrosine, ferric-reducing antioxidant power: FRAP, and uric acid: UA); ii) antioxidant enzymes (superoxide dismutase: SOD, catalase, glutathione peroxidase: GPX, and myeloperoxidase: MPO); and iii) nitric oxide metabolites (NO3−, NO2− and NOx). All assays on plasma samples were conducted by spectrophotometry. Raw SmO2 and THb data were treated with a smoothing spline filter to reduce the noise created by movement, and data are presented every 2 s. Baseline SmO2 (SmO2base) and baseline THb were computed as a 30-s average while subjects performed 3 min of unloaded baseline pedalling (8 W) at their preferred cadence before the beginning of each test. Minimum SmO2 (SmO2min) was the lowest 6-s average obtained during each test. Maximum SmO2 (SmO2max) and maximum THb were the highest 6-s averages obtained during each test, with the recovery phase included. The average SmO2 from 30 to 120 s after the end of each test was used to assess recovery of SmO2 (SmO2recovery). For each test, SmO2base and SmO2min are expressed as % of SmO2max (relative SmO2base and relative SmO2min, respectively). The change in SmO2 (ΔSmO2) and the change in THb (ΔTHb) were calculated as the difference between relative SmO2min and relative SmO2base, and the difference between maximal and baseline THb, respectively. Catalase activity in the plasma was determined using H2O2 as a substrate and formaldehyde as a standard. Catalase activity was determined from the formation rate of formaldehyde induced by the reaction of methanol and H2O2 using catalase as the enzyme (intra-assay coefficient of variation: CV = 3.1%). GPX activity was determined as the rate of oxidation of NADPH to NADP+ after addition of glutathione reductase (GR), reduced glutathione (GSH) and NADPH, using H2O2 as a substrate (intra-assay CV = 4.6%).
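Stepping back to the NIRS reduction described earlier in this section, a minimal sketch is given below; the file name is hypothetical and the trace is assumed to be sampled every 2 s, beginning with the unloaded baseline phase, so a 6-s window spans 3 samples and the 30-s baseline spans 15 samples:

import pandas as pd

smo2 = pd.read_csv("tlim_nirs.csv")["smo2"]        # smoothed SmO2 trace, one value per 2 s
base = smo2.iloc[:15].mean()                       # 30-s baseline during unloaded pedalling
roll6 = smo2.rolling(3).mean()                     # 6-s moving averages
smo2_min, smo2_max = roll6.min(), roll6.max()      # lowest and highest 6-s averages
rel_base = 100 * base / smo2_max                   # relative SmO2base (% of SmO2max)
rel_min = 100 * smo2_min / smo2_max                # relative SmO2min (% of SmO2max)
print(rel_base, rel_min, rel_min - rel_base)       # the last value is ΔSmO2 as defined in the text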
SOD activity was determined from the degree of inhibition of the reaction between superoxide radicals, produced by the hypoxanthine-xanthine oxidase system, and nitroblue tetrazolium (intra-assay CV = 5.6%). MPO activity was measured in plasma by determination of the kinetic absorbance at 653 nm after addition of H2O2 and 3,3′,5,5′-tetramethylbenzidine (TMB; intra-assay CV = 5.1%). FRAP concentration was calculated using an aqueous solution of known Fe2+ concentration (FeSO4·7H2O) as the standard, at a wavelength of 593 nm (intra-assay CV = 2.9%). The concentration of plasma UA was determined using a commercially available kit; its principle is that uricase acts on uric acid to produce allantoin, CO2 and H2O2, and the absorbance is proportional to the uric acid quantity in the sample (intra-assay CV = 0.9%). The AOPP assay was calibrated with a chloramine-T solution, which absorbs at 340 nm in the presence of potassium iodide. AOPP concentrations were expressed as μmol·L−1 of chloramine-T equivalents (intra-assay CV = 5.4%). MDA concentration was determined by extracting the pink chromogen with n-butanol and measuring its absorbance at 532 nm by spectrophotometry, using 1,1,3,3-tetraethoxypropane as the standard (intra-assay CV = 2.2%). The metabolites of NO, NO2− and NO3−, were measured using the Griess reagent, a mixture of sulfanilamide, naphthylethylenediamine dihydrochloride and phosphoric acid. This reagent binds nitrite to form a dye which absorbs at 550 nm. In a second measurement, NO3− reductase was added to the plasma sample in order to convert NO3− into nitrite and thereby measure the total amount of nitrites and nitrates (NOx) (intra-assay CV = 3.9, 5.2 and 4.8% for NO2−, NO3− and NOx, respectively). The plasma nitrotyrosine concentration was measured by ELISA (enzyme-linked immunosorbent assay; intra-assay CV = 6.8%). A sample size of 10 participants per group was determined for a type I error of 5%, a power of 80% and an average population effect size of 0.5 (G*Power software, version 3.1.9.2), considering the time-to-task-failure variable in the severe-intensity exercise domain. The Shapiro-Wilk test was used to confirm data normality and homogeneity. Data are presented as mean ± SD. Repeated-measures analysis of variance (ANOVA) with two factors (group × time) was used to test main and interaction effects for the studied variables. A contrast analysis was used for post-hoc comparisons when an interaction effect was observed (Bonferroni test). Magnitudes of standardized effects (partial eta squared, ηp²) were determined as follows: small, 0.2–0.5; moderate, 0.5–0.8; and large, > 0.8. All statistical procedures were conducted with SPSS 24.0 and the significance level was set at 5%. NOx increased (+22%) between the pre- and post-intervention periods, though not significantly (time effect: p = 0.06). Plasma nitrates (+21%) and nitrites (+33%) also increased non-significantly (time effect: p = 0.07 and p = 0.09 for nitrates and nitrites, respectively). Only nitrotyrosine significantly decreased (time effect: p = 0.04) from pre- to post-intervention, regardless of the group (-22% in the HNO group, -41% in the HPL group and -45% in the control group). There was a time effect for GPX (+12%, p = 0.025), which increased only in CON (+20%, p = 0.017) and not in the HNO and HPL groups.
In addition, at post-intervention, GPX activity in plasma was lower for HPL compared to the HNO (p = 0.04) and CON (p = 0.01) groups. In contrast, SOD, catalase and FRAP concentrations were not modified between pre- and post-intervention in any of the three groups (Table 1). There was a time effect for MDA: MDA increased in HNO (+60%; p = 0.001) and in CON (+30%; p = 0.023) but not in the HPL group. Conversely, AOPP, uric acid, and myeloperoxidase activity were not modified by the protocol in any group (Table 1). SmO2recovery decreased (20%) from pre- to post-training (time effect: p = 0.01). However, the decrease was significant only in CON, not in HNO or HPL. In addition, at post-intervention, SmO2recovery was higher in HNO (p = 0.001) and HPL than in the CON group (group effect: p = 0.02; interaction effect: p = 0.01) (Table 2). At post-intervention, SmO2base (25%; p = 0.007) and relative SmO2base (22%; p = 0.002) were lower in HPL compared to the CON group. The latter parameter was also lower in HPL (8%; p = 0.03) compared to the HNO group. SmO2min (61%; p = 0.003) and relative SmO2min (60%; p = 0.04) were lower in HNO compared to the CON group. In addition, ΔSmO2 was also lower in HNO compared to the HPL group (15%; p = 0.04) (Table 2). The aim of this study was to analyse the effects of a prolonged combination of dietary NO3− supplementation with HIIT performed in normobaric hypoxia on the antioxidant/pro-oxidant balance in male endurance subjects. It was hypothesized that dietary NO3− supplementation would mitigate the detrimental effect of hypoxia on oxidative stress, mainly by enhancing NO production via the NO3− – NO2− – NO pathway. Our main results showed that hypoxia inhibits the increase in GPX activity, as only the CON group showed differences between the pre- and post-intervention periods. However, our initial hypothesis that dietary NO3− supplementation would mitigate the detrimental effect of hypoxia on oxidative stress was not confirmed. Although hypoxia limited the increase in MDA in response to HIIT, NO3− supplementation (in the HNO group) appeared to compensate for this inhibitory effect of hypoxia (observed in the HPL group). Our results confirmed that normoxic HIIT increased GPX activity in plasma, as already observed in untrained subjects, whereas in our study the subjects were endurance trained. Hypoxia (with and without NO3− supplementation) attenuated this increase in GPX following 4 weeks of HIIT exposure. One may hypothesize that the total metabolic stimulus (exercise demands) of the HIIT sessions performed in hypoxia was lower and induced lower mitochondrial ROS production. However, the total energy produced and the distance achieved in the HIIT sessions (averaged over the 12 training sessions) were not different between the HNO, HPL and CON groups, suggesting that the relative exercise intensity performed was similar. Our GPX results also contradict those showing that acute hypoxia increases oxidative stress and decreases the activity of antioxidant enzymes. Only one study has measured the activity of antioxidant enzymes following a 3-week training program of HIIT in hypoxia. These authors observed an increase in GPX activity from pre- to post-intervention. However, several differences may explain the discrepancy between this study and ours: the athletes were professional, with V̇O2max values on average 30% higher than in the present study.
On the other hand, although the intensity of the exercise sessions was lower (95% of the lactate threshold), the duration of the intervals was longer than ours. Consequently, the impact on mitochondrial activity and the resulting radical production was likely stronger (30–40 min of intensive intervals). Finally, Michalczyk et al. measured GPX activity in red blood cells, while we measured it in plasma, which better reflects the systemic antioxidant/pro-oxidant balance. Although several previous studies have reported that endurance training in normoxia or in hypoxia improves SOD and catalase activities, we did not observe any significant change for these two antioxidant enzymes. Nevertheless, it is important to emphasize that the work of Miyazaki et al. was conducted with an untrained population, whereas the participants in the present study were endurance trained. Therefore, one may speculate that the activities of these antioxidant enzymes at the beginning of the intervention were already sufficiently high in our subjects compared to untrained subjects, thus contributing to limit their increase. Antioxidant enzyme activities are higher in endurance athletes than in sedentary individuals or non-endurance/sprint athletes. In support of this, our results corroborate those of Robertson et al., who did not report a significant improvement in SOD and catalase following HIIT in either normoxia or hypoxia in highly endurance-trained cyclists. Hypoxia blunted the MDA increase induced by the training intervention (i.e., an increase in the CON group vs no change in the HPL group). This could be a result of lower mitochondrial ROS production in hypoxia during the HIIT sessions. Unlike for GPX, it appears that NO3− supplementation blunted the inhibitory effect of hypoxia on the MDA increase after 4 weeks of HIIT training. A first hypothesis would be that NO3− supplementation could have enabled a higher intensity (i.e., power output) during the HIIT sessions in hypoxia, as proposed by Cocksedge et al. This would in turn increase ROS production, mainly via the upregulation of NADPH oxidase 2 and activation of the phospholipase A2 pathway. However, since there were no differences in energy (419.91 ± 52.76 vs. 442.04 ± 66.84 kJ) or distances covered during the 12 training sessions between HNO and HPL, respectively, this mechanism should not be considered. It is also unlikely that the lipid content and the amount of plasma lipid peroxidation substrate were increased by NO3− supplementation. Indeed, such supplementation seems rather to lower plasma triglycerides and cholesterol. Another hypothesis to explain the MDA increase in plasma might be related to the effects of NO3− on oxygen availability in muscles. In the HNO group, post-training SmO2min was lower than in the other groups (Table 2), suggesting that NO3− intake may induce greater vasodilation and therefore greater oxygen delivery to active muscles in hypoxia. This latter mechanism during hypoxic exercise under NO3− supplementation would therefore induce greater mitochondrial ROS production and lead to an increase in the production of MDA. The small effect of NO3− supplementation on the oxidative stress and antioxidant markers could be explained by the very modest increase in circulating nitrates. It should also be acknowledged that the blood samples were collected a few days after the last supplementation.
It was previously reported that there was no additional improvement in exercise tolerance after ingesting beetroot juice containing 16.8 compared with 8.4 mmol NO3− over 24 h. In this context, the nitrate dose in our study (8.4 mmol before each session) may therefore have been too low to stimulate NO metabolism. The dose of ingested nitrates necessary to significantly increase plasma nitrate concentration, especially chronically, should be higher in endurance-trained athletes (0.07 mmol NO3−/kg body weight per day). This limited increase in plasma nitrate concentrations could be explained by the characteristics of our trained subjects, who usually present greater endothelial NOS (nitric oxide synthase) activity and therefore high endogenous NO production. In addition, trained subjects have higher plasma nitrite concentrations than sedentary or active subjects, and their response to a standard dose of nitrates may be blunted. Finally, recent evidence showed that nitrate supplementation preferentially modified contractile function in type II fibres (vs. type I), which are present in a lower proportion in endurance athletes, likely explaining the limited physiological response to nitrate supplementation. Regardless of the condition (hypoxia and/or NO3− supplementation), the 4 weeks of HIIT training had only minor effects on plasma nitrite levels. Our results confirm those obtained by Dreißigacker et al. showing that high-intensity exercise in normoxia did not induce a significant increase in nitrites in trained cyclists. These authors suggested that nitrite was probably reduced to NO in erythrocytes to a greater extent during high-intensity than during low-intensity exercise. Through its ability to produce peroxynitrite by reacting with the superoxide anion, NO could also cause nitrosative stress on biomolecules and increase its end products, such as nitrotyrosine. Nevertheless, we did not observe any significant group-specific change in plasma nitrotyrosine, either in the CON and HPL groups or in the HNO group. It has been observed in highly trained endurance athletes that NO3− supplementation containing 8 and 16 mmol of nitrates did not induce a significant increase in plasma peroxynitrite following an incremental and maximal effort on a treadmill. On the other hand, when the nitrate supplementation contained 24 mmol of nitrates, these authors observed a significant increase in plasma peroxynitrite. Therefore, the relatively low dose of nitrate (8.4 mmol) ingested before each training session by the participants of the HNO group may partly explain the lack of a significant effect of nitrate supplementation on plasma nitrotyrosine. We did not detect any significant change in plasma AOPP or uric acid. To our knowledge, no study has measured the effects of such a hypoxic training intervention (i.e., living low, training high) on plasma AOPP. In the context of the present study, one may hypothesize that the ROS generated during the training sessions were not sufficient to induce significant protein oxidation, given the enzymatic antioxidant capacities of the endurance-trained participants. Finally, in agreement with our results, two recent studies have shown, in endurance-trained athletes, that HIIT training carried out in normoxia or hypoxia did not induce a significant modification of plasma uric acid. One may question the relevance of using time-to-exhaustion exercise, since performance time is known to be less reliable than for time-trial exercise. First, as reported by Hopkins et al.
, the constant-power test is neither better nor worse than a constant-work or constant-duration test. When converted to mean power, constant-load tests are more reliable than the other tests. Secondly, by definition, time-to-exhaustion exercise does not require the intensity to be paced. Consequently, this minimizes potential changes in lactate production and in the oxidative-glycolytic balance of the exercise (known to be one of the most important determinants of the physiological responses, including muscle de- and re-oxygenation) during high-intensity exercise in hypoxia. In the present study we adopted an independent-group design (not a crossover one), and although we ensured that all groups were blinded to both the intervention (normoxia vs. hypoxia) and the supplementation (NO3− vs. placebo) conditions, the fact that some of the participants may have identified the group to which they were allocated may have slightly influenced the results obtained. In addition, it cannot be excluded that a higher daily dose of NO3− may have a more pronounced effect. Moreover, since heart rate, lactate and rating of perceived exertion responses are either directly influenced by hypoxia or are irrelevant for RST, the quantification of internal training loads is difficult in the present study. Also, it is important to bear in mind that although the NIRS device used in the present study is valid and reliable, there is still a need for further development of this equipment at higher intensities. The present study focused on the effects of high-intensity normobaric hypoxic training associated with NO3− supplementation on oxidative stress, antioxidant defence and NO metabolism in endurance subjects. Normobaric hypoxic exposure during the HIT and RST sessions blunted the increase in GPX and MDA at the end of the training period. In addition, since the NO3− supplementation used in the study had only a modest effect on plasma nitrate and nitrite contents, its effects only very slightly mitigated the detrimental effects of high-intensity training under normobaric hypoxic conditions on oxidative stress and antioxidant markers. Future studies should test a higher dose of NO3− to determine whether NO3− supplementation can provide beneficial effects on the oxidative stress-NO metabolism axis in response to high-intensity normobaric hypoxic training. Considering the above, NO3− supplementation cannot currently be recommended to mitigate the detrimental effects of high-intensity training under hypoxic conditions on oxidative stress and antioxidant markers.
|
Other
|
biomedical
|
en
| 0.999997 |
PMC11694201
|
Soccer is a sport that demonstrates its intermittent nature through short bursts of high-intensity activity interspersed with longer periods of low-intensity actions. Benchmarking and profiling these physical characteristics is important to physically prepare players for the demands of match-play through effective training programmes. Due to the adoption of modern technology in soccer, the use of tracking-based technologies such as the Global Positioning System (GPS) has grown significantly. This has improved the ability of applied practitioners to profile the physical demands of players from competitive matches and training, which has facilitated more precise training prescriptions, load modifications and, thus, better preparation of players for match-play [4–6]. There has been a noticeable increase in high-intensity activities during competition worldwide over the last few decades [7–9]. Early work suggested that sprint distance and the number of sprints increased by ~35 and ~85% respectively, while mean sprint distance was lower in 2012/13 compared to 2006/07, with the proportion of explosive sprints increasing. More recently, the trend of increased running demands has been observed following the 2022 World Cup tournament. Distances covered at higher intensities were 16–92% and 36–138% higher for wide midfielders and wide forwards compared to central defenders, defensive and central midfielders, as well as centre forwards. Defensive and central midfielders covered a greater proportion of distance at higher intensities out-of-possession (71–83%), while attacking midfielders, wide forwards and centre forwards covered more in-possession (55–68%). Moreover, high-intensity actions (e.g., sprinting, accelerating, and decelerating) significantly influence decisive moments of the match. Consequently, high-intensity movements in match-play have gained more attention. Even so, researchers have tended to examine total, high-speed running and sprint distance in isolation, without considering acceleration and deceleration movements. These high-intensity actions induce not only physiological but also mechanical demands, accounting for ~10% of the total workload of elite soccer players during match-play, irrespective of playing position. Additionally, the number of accelerations during match-play is up to ~8 times higher than that of sprint actions (~90–120 vs ~15–30, respectively), while deceleration actions occur as often as acceleration actions, leading to an even greater mechanical load. Thus, it appears crucial for practitioners to profile such high-intensity actions throughout a season so that effective preparation and recovery can be implemented to allow players to cope with the physiological and mechanical demands of match-play. Given that the physical characteristics of players vary across playing positions as well as different leagues, it would be insightful to compare differences between playing positions and various elite soccer leagues. In this regard, previous research examining data from the 2006/07 season showed that English Premier League (EPL) players covered a greater high-intensity running distance in matches than La Liga players, irrespective of playing position. More recently, another study compared the Portuguese and Dutch second leagues and found that Portuguese players produced higher total and sprinting training distances, although no comparisons during match-play or among playing positions were considered.
Numerous publications are available covering reference ranges for the basic running characteristics of soccer players and various contextual variables. Nonetheless, no previous studies have considered accelerations and decelerations, thus reinforcing the relevance of the present study, which analysed two different countries/leagues and positional differences; this is vital to improve coaches' knowledge of various training methods to enhance player preparation and recovery. These data can be utilised in the scouting assessment of soccer players, by coaches to design training, and by performance staff and physiotherapists to develop individual recovery protocols. Therefore, the aims of the present study were to: (i) quantify the accelerations and decelerations of soccer players during match-play across two consecutive seasons from the EPL and French Ligue 1 (L1); and (ii) compare any positional differences between the two elite European soccer leagues. Based on previous literature, the study hypothesis was that the EPL team would present a higher number of accelerations and decelerations during match-play. This longitudinal study over two consecutive seasons involved professional soccer players from two European teams, from the EPL and L1. Match acceleration and deceleration performance variables were collected using GPS (Apex Pod, Statsports; Northern Ireland, UK). Data from all competitive matches across both leagues during the two seasons were analysed. A non-probabilistic sampling protocol was employed to recruit participants. During the observation period, consistent player monitoring approaches were implemented without any interference from the researchers. Data from both seasons included 58 male players (EPL: age 23.2 ± 5.9 years, weight 75.2 ± 8.1 kg, height 1.83 ± 0.06 m; L1: age 24.3 ± 5.1 years, weight 76.6 ± 8.5 kg, height 1.83 ± 0.07 m). The data were obtained from all official matches played during both seasons (EPL n = 38, L1 n = 34). The EPL team adopted a 4-3-3 or 3-5-2 formation and implemented a hybrid model of possession that included possession-based and direct-play strategies, while the L1 team consistently implemented a 4-3-3 formation and also adopted a mixed approach of tactical strategies when in possession. Furthermore, when out of possession both study teams employed a mixture of high-press and mid-block (a narrow and compact team shape defending the middle third of the pitch) strategies. The research inclusion criteria have been previously applied and were: (i) named in the first-team squad at the start of both study seasons, (ii) played in at least 80% of matches, and (iii) only completed official team training during the study period. Additionally, the exclusion criteria for the study have also been previously employed and included: (i) long-term (three months or longer) injured player data, (ii) joining the team late in either of the study seasons, (iii) lack of full, complete match data, (iv) an insufficient number of satellite connection signals, and (v) goalkeepers, due to the specific nature of their match activity and low running demands. Only outfield players who completed the entire match (≥ 90 min) were included for analysis. Players were assigned to one of five playing positions, as match demands for these differ significantly. The methodology for differentiating specialised positions was adapted from previous research.
Participants were classified as: EPL full-backs (FB; n = 3), centre backs (CB; n = 5), centre midfielders (CM; n = 7), attacking midfielders (AM; n = 4), and centre forwards (CF; n = 4); L1 full-backs (FB; n = 7), centre backs (CB; n = 6), centre midfielders (CM; n = 10), attacking midfielders (AM; n = 5), and centre forwards (CF; n = 7). The small sample size is supported by previous studies in soccer. Even so, the power of the sample size was calculated through G*Power, with a post-hoc analysis conducted considering the study aims. For the comparison analysis, an F-test with a total of 58 participants, p = 0.05 and an effect size of 0.1 was performed; the actual power achieved was 86%. All data collected resulted from normal analytical procedures regarding player monitoring over the competitive season; nevertheless, written informed consent was obtained from all participants. All data were anonymised prior to analysis in accordance with the Declaration of Helsinki. Moreover, this study was approved by the local ethics committee of the University of Central Lancashire and the professional clubs from which the participants volunteered. Data were collected from all (n = 144) in-season matches played by the examined teams across the two study seasons. The examined EPL and L1 teams participated in a total of 76 and 68 matches respectively across the study seasons. Accelerometer match data were consistently monitored across the study seasons using an 18 Hz GPS technology tracking system (Apex Pod, version 4.03, 50 g, 88 × 33 mm; Statsports; Northern Ireland, UK) that has previously provided good to moderate reliability (coefficient of variation (CV) = 0.1 to 3.9%) for the majority of threshold-based accelerations and decelerations. The 18 Hz system has also shown good validity and reliability for determining the distances covered (typical error of estimate (TEE): 1.6–8.0%; CV: 1.1–5.1%) and sprint mechanical properties (TEE: 4.5–14.3%; CV: 3.1–7.5%). All data collection procedures and unit error and reliability have previously been reported. Following every match, accelerometry data were extracted using proprietary software (Apex, version 4.3.8, Statsports Software; Northern Ireland, UK) and exported to a secure database for analysis, as software-derived data are a simpler and more efficient way for practitioners to obtain data in an applied environment, with no differences reported between processing methods (software-derived vs raw processed). The variables analysed were selected based on previous publications and were analysed as absolute (total number) and relative data (divided by the actual playing time of each player). Thus, the total number of accelerations and decelerations and the number of accelerations (> +3 m·s−2 with a minimum duration of 0.5 s) and decelerations (< −3 m·s−2 with a minimum duration of 0.5 s) per minute were examined. Descriptive data (mean ± SD) were determined for the number of accelerations and decelerations per minute for the different positions (CB, FB, CM, AM, and CF) and leagues (EPL and L1). Homogeneity of variance was assessed via Levene's statistic and, where violated, Welch's adjustment was used to correct the F-ratio. Two-way (5 × 2) analyses of variance (ANOVAs) were conducted to identify differences in the number of accelerations and decelerations per minute across the different positions and leagues.
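A minimal sketch of such a two-way ANOVA, with an eta-squared effect size per term, is given below; the file and column names are hypothetical:

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("match_data.csv")                   # hypothetical: league, position, accels_per_min
fit = smf.ols("accels_per_min ~ C(league) * C(position)", data=df).fit()
aov = anova_lm(fit, typ=2)
aov["eta_sq"] = aov["sum_sq"] / aov["sum_sq"].sum()  # eta-squared: term SS over total SS
print(aov)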
Post-hoc analysis was used to identify the positions that were significantly different to one another, using either Bonferroni or Games-Howell post-hoc analyses where equal variances were and were not assumed, respectively. Effect size (η²) values were reported for the ANOVA results, while Cohen's d values (d) were reported for significant post-hoc results. η² values in the range 0–0.009 were considered insignificant effect sizes, 0.01–0.0588 small, 0.0589–0.1379 medium, and values greater than 0.1379 large. Cohen's d effect size magnitudes were interpreted using the following classifications: trivial < 0.19; small 0.2–0.59; moderate 0.6–1.19; large 1.2–1.9; very large 2.0–3.9; extremely large > 4.0. All significance values were accepted at p < 0.05 and all statistical procedures were conducted using JASP (version 0.18) for Macintosh. Descriptive statistics for the number of accelerations and decelerations for the whole team are presented in Figure 1. The two-way ANOVA identified a significant main effect for position (p < 0.001; η² = 0.034) for the number of accelerations per minute. Full-backs and AM completed more accelerations per minute than CB and CM (p < 0.001–0.019; d = 0.319–0.499), while CF also completed more than CB (p = 0.003; d = 0.440). For the number of decelerations per minute, there was also a significant main effect for position (p < 0.001; η² = 0.076), where AM, CM, and FB completed more decelerations per minute than CB (p < 0.001; d = 0.621–0.847). Full-backs and CM also completed more decelerations per minute than CF (p = 0.001–0.032; d = 0.350–0.513). There was also a significant main effect for league for the number of accelerations per minute (p < 0.001; η² = 0.094) and the number of decelerations per minute (p < 0.001; η² = 0.075). Players from the EPL performed significantly more accelerations and decelerations per minute than L1 players (d = 0.719 and 0.652, respectively). Descriptive statistics for the number of accelerations and decelerations per minute for each position across both leagues are presented in Table 1. There was no significant interaction effect between league and position for the number of accelerations per minute (p = 0.901; η² = 0.001) or the number of decelerations per minute (p = 0.104; η² = 0.007). However, when considering the number of accelerations per minute, AM from the EPL completed more than CB in the EPL (p = 0.028; d = 0.535) and all positions in L1 (p < 0.001; d = 0.726–1.250), while FB from the EPL also completed more accelerations per minute than all positions from L1 (p < 0.001; d = 0.724–1.228). Centre backs, CM, and CF from the EPL completed more accelerations per minute than CB and CM from L1 (p < 0.001; d = 0.569–1.071). Finally, CF and FB from L1 completed more accelerations per minute than CB from L1 (p < 0.001; d = 0.724 and 1.228, respectively). The number of decelerations per minute was greater for AM compared to CB in the EPL (p = 0.033; d = 0.527) and all positions in L1 (p < 0.001; d = 0.506–1.221). Centre midfielders and FB completed more decelerations per minute than CB from the EPL (p < 0.001; d = 0.798 and 0.893, respectively) and all positions from L1 (p < 0.001; d = 0.691–1.586). Finally, CF and CB from the EPL completed more decelerations per minute than CB from L1 (p < 0.001; d = 0.857 and 0.693, respectively).
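For context, the threshold-and-duration event definition used above can be approximated from a raw velocity trace as follows; this is a hypothetical reconstruction, not the proprietary software's algorithm:

import numpy as np

def count_accels(vel: np.ndarray, hz: int = 18, thr: float = 3.0, min_dur: float = 0.5) -> int:
    """Count accelerations above +thr m/s^2 sustained for at least min_dur s (pass -vel for decelerations)."""
    acc = np.gradient(vel) * hz            # finite-difference acceleration at hz samples per second
    run, count = 0, 0
    for above in acc > thr:
        run = run + 1 if above else 0
        if run == int(min_dur * hz):       # the run has just reached the minimum duration
            count += 1
    return count

# a per-minute (relative) value would then be count_accels(vel) / playing_minutes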
The aims of the present study were to: (i) quantify the accelerations and decelerations of soccer players during match-play across two consecutive seasons from the EPL and L1; and (ii) compare any positional differences between the two elite European soccer leagues. The main findings were that the relative total numbers of accelerations and decelerations were higher in the EPL when compared to L1 considering whole-team data (p < 0.001, with moderate effect sizes for both variables). Since soccer involves the interaction of physical, technical and tactical actions among players, the adoption of differing technical/tactical strategies will result in distinct physical demands, which was confirmed by the present findings. This aligns with the perception, and grounded research opinion, that the EPL is characterised by a more physically demanding and fast-paced style of play, in particular adopting a more 'direct' style with efficient attacks conducted within a short duration. This style may also be reinforced by a study examining differences in fouls and cards administered as indicators of aggressive play in the premier leagues of England, France, Germany, Italy, and Spain, which supported the notion that the EPL was the most aggressive league in Europe. Nonetheless, no research was found comparing different leagues and playing positions based on accelerometry-based variables; thus, more research is warranted to confirm such statements. The analysis of positional acceleration and deceleration demands has recently been documented, albeit not in elite European soccer players and not comparing league and positional differences. Consistent with this, other research has reported that CB perform the lowest number of accelerations during match-play, whilst wide midfielders (WM) execute the greatest compared to all other positions, and the highest number of acceleration efforts has been observed in FB. The present study showed that CB performed fewer accelerations in both leagues. However, CF and FB performed more accelerations in L1, while FB and AM performed more accelerations in the EPL. Moreover, previous research showed that CB perform the lowest number of decelerations during match-play, whilst CF executed the greatest compared to all other positions. The present study corroborated these previous findings in terms of a low number of decelerations for CB, although in contrast to earlier work that found FB performed the highest number of decelerations. It is relevant to mention that the previous research only utilised data from La Liga, which may cause some bias when interpreting results. Even so, another study suggested that wide midfielders usually perform a high number of decelerations. While the present study did not include wide midfielders, FB were highlighted as players who covered a large area of the field, similar to midfielders, thus justifying their higher results. The lack of identical results in both leagues challenges the prevailing notion of distinct physical demands. Furthermore, it may also reflect the tactical strategy (offensive and defensive) of the examined teams, which required AM (in the EPL) and CF (in L1) to produce explosive actions similar to FB during competitive match-play. Notwithstanding, when considering the total number of accelerations and decelerations, all positions showed higher values in the EPL team compared with the L1 team (although not all results were significant).
Such findings reinforce the value of analysing accelerations and decelerations as relative (per-minute) data. Additionally, although not examined in the present study, it is also possible that the tactical formation, which contributes to explosive actions, may influence the match result. For instance, a recent study compared the three best Spanish soccer teams, examining running measures with and without ball possession across different playing positions, where different formations and styles were employed: a 4-4-2 formation with a compact defence and direct attack strategy; a 4-3-3 formation with an indirect style of play; and a 4-3-3 formation with intricate attacks and effective counterattacks. While the authors found minimal differences between the three formations, when considering the various positions it was clear that team formation and the differing tactical demands have a significant influence on running performance. Thus, it is speculated that accelerations and decelerations may also differ on this basis, as different playing-position patterns were revealed in this study. While contextualised high-intensity running profiles of elite soccer players with reference to general and specialised tactical roles have gained interest recently, there is no research specifically examining the quantity or quality of accelerations and decelerations across two different elite European leagues considering tactical development; thus further research is warranted. Furthermore, if other contextual variables such as opposition standard, possession characteristics and match location were considered, it is possible that significant differences in relative accelerations and decelerations would be observed for the CF of both leagues, this being the only position that did not significantly differ between leagues. For instance, it was recently shown that CF performed a significantly higher number of acceleration efforts against top-level teams when playing at home compared to away matches. Additionally, the frequency of decelerations per minute was also position-specific. In particular, the numbers of decelerations per minute performed by FB, CM, AM and CF were higher than CB, which is consistent with previous studies, while CF were similar between the EPL and L1. These varying acceleration and deceleration results between studies may partly be explained by the effect of differing playing formations, as positional differences are significantly affected by playing formation. However, thoroughly examining the effect of tactical aspects such as in- and out-of-possession strategies and team formation on explosive acceleration and deceleration actions seems problematic, as the development of tactical nuances suggests that elite teams do not select and maintain the same strategies or formations throughout a whole match or season. An approach that merges the tactical elements of match-play with the physical outputs warrants further investigation to allow a greater understanding of these demands, which may be practically useful when designing position-specific drills and sessions to optimally prepare players. The present results have some practical implications for coaches and performance staff in tailoring position-specific training regimes, load management and individualised recovery strategies based on differing league and positional requirements.
The unique case of CF when analysing relative data suggests that, despite the overall differences in acceleration and deceleration actions between leagues, certain playing positions may share common physical demands irrespective of league context. However, this may be of greater interest when contextualising acceleration and deceleration behaviours with tactical variables, as this may help practitioners design more effective training programmes. Notably, it is relevant to highlight that this study utilised data from two seasons to compare two different leagues, and it appears to be the first research to consider this concept. Finally, the importance of analysing relative (per-minute) rather than absolute data should be emphasised. Several limitations should be noted when interpreting the findings of this study. The current data are reflective of the methods and practices of two elite soccer clubs from different European leagues; however, positional match running performance and the variations resulting from possession classifications and team formation were not considered for analysis. Also, this study did not examine the effects of opposition standard and match location. Consequently, the results should only be generalised to similar cohorts, levels of competition, and tactical approaches, as previously suggested. Thus, future studies should be conducted to compare the current findings utilising larger sample sizes with various team formations and possession times. Moreover, some studies have reported that acceleration and deceleration metrics may have a high measurement error and variation, although more recent research continues to report such metrics. In addition, this study did not consider the effects on running distances (e.g., total distance, HSR or sprinting) and effective playing time. Such limitations suggest that future research should include these contextual factors, as these variables can influence the match outcome. Furthermore, considering that match outcome (win, draw, loss) can affect the quantity of accelerations and decelerations performed, this contextual variable should be included in future research. Also, further studies should focus on incorporating acceleration and deceleration metrics into fatigue assessment and recovery protocols. Lastly, there was no analysis considering external factors such as the time of day of matches and weather conditions (e.g., rain, wind, or temperature), which can also affect the findings of this study and should be considered in future studies when comparing different leagues and contexts. In conclusion, this study provides valuable insights into the nuanced differences in explosive actions across playing positions in the EPL and L1. While confirming the general trend of higher acceleration and deceleration frequencies in the EPL, the unique case of CF challenges current evidence, emphasising the need for a more granular understanding of the positional demands of explosive actions incorporating accelerations and decelerations in elite soccer. Further research exploring the contextual and tactical factors influencing these patterns will contribute to a more comprehensive picture of the physical demands in elite European soccer.
|
Other
|
other
|
en
| 0.999997 |
PMC11694204
|
The number of volleyball players is increasing throughout the world, and the new national federations being founded provide proof of the growing popularity of this sport discipline. At present, the International Volleyball Federation (FIVB) includes 220 national sports associations, which are affiliated with five continental federations. We can expect this trend to continue, as the game keeps becoming more interesting, quicker-paced and better balanced. Both male and female players are displaying a growing level of skill. Changes in the game regulations have also affected the evolution of the players' techniques and tactics, as well as the structure of the game. The volleyball match has been influenced the most by the changes introduced by the FIVB in 1998. The revised rules included changes in the scoring system, the introduction of the libero player and an extended time allowed for a serve, from five to eight seconds, with a limit of one attempt. All the changes introduced into the discipline have directly affected the dynamics of the obtained results. To identify the trends in international volleyball over the past years in detail, with the possibility of simulating the results that may occur in subsequent years, it is necessary to conduct a multiple regression analysis using the function of time and significant explanatory variables. Performance factors and their relations with game outcomes have been a goal of scientific investigations since the 1990s. At that time Eom and Schutz analysed matches from three participants of the International Volleyball Cup. Later, some studies sought to identify the relationships between selected game factors and winning matches. Some researchers have tried to predict the competition outcome using certain mathematical and statistical formulas. Pesaturias et al. discovered that blocking was a significant factor in the game outcomes of the 2006 men's World Championship. Silva et al. found that service points, reception errors and blocking errors were significant discriminating factors for predicting game results, with service points strongly correlated with match success. However, the number of publications concerning the largest volleyball events in the world over a long period of time has remained insufficient. In this paper, six men's and six women's national volleyball teams competing in the most important international events were analysed. The study used game-related variables, including attack attempts (AA), successful attacks (AP), successful blocks (BP), successful serves (SP) and team errors (TE), to characterise the performance of the analysed teams. The results of the following events were analysed: the World Championships starting in 2006, the World Cups starting in 2007, the World Grand Champions Cups starting in 2005 and the Olympic Games starting in 2004, ending with the Olympics in Rio de Janeiro in 2016. An analysis of data obtained from the official www.fivb.org website allowed for a comparison of the best women's and men's volleyball teams between 2004 and 2016. All variables were expressed as mean or median ± standard deviation (SD). Before using a parametric test, the assumption of normality was verified using the Kolmogorov-Smirnov test. The distributions of all variables were normal or close to normal. Counts of qualitative data for the analysed variables were obtained using contingency table analysis. The results and the output data are presented as mean values (entries in the table matrix).
First, the values describing each variable were standardised, taking into account the number of matches played in each of the analysed years, to ensure an accurate comparison of the variability dynamics. The totals were divided by the number of played matches, which yielded the mean values for the variables in a given year. Because the aim during the first stage of the study was to carry out detailed and significant analyses concerning the teams’ performance and to determine the correlations between the analysed variables, a research pattern with an R–X1…Xn–Y structure of variables was used, i.e. a single polytomous dependent variable (Y), as well as n polytomous independent variables (X1, …, Xn), in accordance with the method of non-probability sampling. The variables describing the models of the volleyball teams that could serve as explanatory variables were established based on detailed samples and tests. The correlations between the compared test results were estimated using a univariate ridge regression analysis, for both women’s and men’s teams. A simplified biometric model of regression was assumed with the following general form: $Y = \sum_{j=1}^{k} \alpha_j x_j + \xi$. To determine the most important predictors of sports results, the study employed a multiple regression analysis and identified the correlations using the Pearson correlation coefficient. At the same time, specified variables describing the models of the volleyball teams were obtained. The results of the analysis of correlations between the variables in the groups of women’s and men’s teams and the dependent variable – the sports results – and of the forward ridge multiple regression analysis for the Y-SR (SR – sports result) dependent variable are presented below. In Table 1, the results of the correlation analysis in the women’s group are presented. The analysis of correlations between the variables and the dependent variable (sports results) in the women’s teams indicated that the AP, SP and TE variables had the strongest correlation with the SR. It can therefore be concluded that the numbers of successful attacks, successful serves and team errors will affect a team’s outcome. In Table 2, the results of the correlation analysis in the men’s group are presented. The correlation analysis between the variables and the dependent variable (sports results) in the men’s teams revealed that the BP, SP and AA variables had the strongest correlation with the SR. Therefore, the numbers of successful blocks, successful serves and attack attempts had the largest effect on a team’s outcome. In Table 3, the results of the analysis of correlations between all the variables describing the game structure and the sports results in the women’s group are presented, while in Table 4, the results of a stepwise ridge regression analysis in the women’s group are presented. The forward ridge multiple regression analysis, which was conducted for the Y-SR dependent variable in the women’s group based on the determined values of the significant independent variables, led to the following form of the regression function: Y-SR(Women) = 4.449 − 0.027 × AP + 0.008 × AA − 0.015 × BP. At the same time, the analysis revealed that the AP variable was the most important predictor of the Y-SR variable in the women’s group.
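To make the two steps above concrete (per-match standardisation followed by a penalised regression), here is a minimal Python sketch. It is not the authors’ code: the input figures are invented, the penalty value alpha = 1.0 is an arbitrary placeholder, and the forward-selection loop of the reported procedure is omitted.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Step 1 - standardise yearly totals by the number of matches played,
# so that every variable becomes a per-match mean for that year.
totals = np.array([
    [480.0, 1100.0, 95.0],   # hypothetical yearly totals of AP, AA, BP
    [510.0, 1040.0, 110.0],
    [450.0, 1180.0, 80.0],
    [500.0, 1080.0, 100.0],
    [550.0, 1000.0, 120.0],
    [470.0, 1150.0, 92.0],
])
matches_played = np.array([10, 10, 11, 10, 11, 10])
X = totals / matches_played[:, None]            # per-match means

# Hypothetical sports results (SR) for the same years.
y = np.array([3.0, 2.0, 4.0, 3.0, 1.0, 4.0])

# Step 2 - ridge regression: least squares with an L2 penalty, which
# stabilises coefficients when predictors are correlated (as AP/AA/BP are).
model = Ridge(alpha=1.0).fit(X, y)
for name, coef in zip(["AP", "AA", "BP"], model.coef_):
    print(f"coefficient for {name}: {coef:+.3f}")
print(f"intercept: {model.intercept_:.3f}")
```

The L2 penalty is the defining feature of ridge regression: it shrinks coefficients towards zero, which is useful here because game statistics such as attack attempts and attack points are strongly intercorrelated.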
The result of the regression equation means that if the AP variable increases by one unit, SR will decrease by 0.027, whereas if AA increases by one unit, SR will increase by 0.008, and if BP increases by one unit, SR will decrease by a further 0.015. Because no separate analysis of the attacks and counterattacks, which belonged to two different complexes (in the latest division of the technical elements), was conducted, it can be assumed that the analysed game-related statistics belonged to Complex II. In Table 5, the results of the analysis of correlations between all the variables describing the game structure and the sports results in the men’s group are presented, while in Table 6, the results of a stepwise ridge regression analysis in the men’s group are presented. Based on the forward ridge multiple regression analysis for the Y-SR dependent variable in the men’s group and on the determined values of the significant independent variables, the following form of the regression equation was defined: Y-SR(Men) = 3.960 − 0.019 × AA − 0.047 × BP − 0.017 × SP. The conducted study also showed that the AA variable was the most important predictor of the Y-SR variable in the men’s group. The result of the regression equation means that if the AA variable increases by one unit, SR will decrease by 0.019, whereas if BP increases by one unit, SR will decrease by 0.047, and similarly, if SP increases by one unit, SR will decrease by a further 0.017; the same reading, taking the signs into account, applies to each variable in the function. The analysis showed that, as in the women’s group, the most important technical elements affecting the sports results belonged to Complex II. In the first stage of the analyses conducted in this study, detailed and significant variables related to the performance and the correlations between the analysed game-related statistics were determined. The results of this research concerned the best teams in the world in the years 2004–2016, according to gender. The obtained data allowed us to identify significant determinants of sports results in elite volleyball. Among the women’s teams, the multiple regression procedure revealed that a decrease in the number of attack attempts (AA), together with increases in the numbers of successful attacks (AP) and successful blocks (BP), had a significant impact on the game outcome. On the other hand, the correlations between the analysed variables among the men revealed that a decrease in the number of attack attempts (AA), along with an increase in the numbers of successful serves (SP) and blocking points (BP), showed the strongest correlation with the obtained results. In both cases, the factors belong to Complex I and Complex II. The study did not consider a division into attacks belonging to Complex I and counterattacks belonging to Complex II. The initial analysis of variance revealed a normal distribution with slight deviations, but still within the norm. The results presented above do not reflect the purpose of this study in an entirely accurate manner; however, they definitely confirmed that the variables determining a victory belonged to Complex II. The top teams rely on an optimal mastery of the elements of this complex. Similar research results were obtained by Laios and Kountouris after the Olympic Games in Athens.
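As a quick worked check of this reading, the two fitted equations can be evaluated directly; the per-match input values below are illustrative only, not tournament data:

```python
def sr_women(ap: float, aa: float, bp: float) -> float:
    """Women's fitted equation: Y-SR = 4.449 - 0.027*AP + 0.008*AA - 0.015*BP."""
    return 4.449 - 0.027 * ap + 0.008 * aa - 0.015 * bp

def sr_men(aa: float, bp: float, sp: float) -> float:
    """Men's fitted equation: Y-SR = 3.960 - 0.019*AA - 0.047*BP - 0.017*SP."""
    return 3.960 - 0.019 * aa - 0.047 * bp - 0.017 * sp

# Hypothetical per-match values for AP, AA, BP and SP.
print(f"women: {sr_women(ap=50, aa=110, bp=10):.3f}")  # -> 3.829
print(f"men:   {sr_men(aa=110, bp=10, sp=8):.3f}")     # -> 1.264
```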
According to their study, insufficient technical training in the elements from Complex II is one of the reasons for failure. Similar research was conducted by Castro et al., who focused on the World Cup held in 2007. Their analysis encompassed game-related statistics ascribed to the appropriate complexes, and their results state explicitly that Complex II is composed of elements that, once mastered, allow the players to carry out combination plays during a match. Castro et al. also observed that in this complex, the effectiveness of an attack depends on its type, form, direction and pace, as well as on the number of blocking players. Other authors have revealed that effective defensive play allows a team to use different fast offensive options, which results in highly effective offensive play. The results of the studies published by other authors indicate a successful attack as a variable that correlates with the sports results, regardless of the stage of the match in which it is employed. Other reports have confirmed the hypothesis formulated by the authors of the present study that, even if an attack does not score, it may lead to an unfavourable situation in the opponent’s defence, which can help in setting up a counterattack. Different analyses have also indicated that to achieve the intended effect, a high level of effectiveness of the players in the first line of defence (block) is necessary. Others indicate that the block is the second variable that can affect success, which was confirmed by Afonso et al. and Palao et al. in their studies. The results of other research strongly indicate that offensive skills are more important in the context of the outcome of a volleyball game. The studies by Afonso et al. conducted after the Men’s World Cup in 2007 revealed that receiving a serve is a particularly important factor, as it gives the team an opportunity to use many different toss variants and a chance to score. Errors in receiving a serve will have a significant impact on the final result. However, a proper serve receive helps the team to organise an optimal attack strategy and increases the chances for a successful attack. The present study also involved a comparative analysis of the descriptive statistics. The results regarding the women’s teams over the period of 13 years revealed that, among all the teams, the most varied values concerned the number of attack attempts (AA), while the highest percentage variation concerned the number of serve points (SP) and the sports results (SR). Regarding the men’s teams, as with the women’s teams, the largest deviations were noted in the numbers of attack attempts (AA) and serve attempts (SA), while the greatest percentage variation concerned, first and foremost, the number of successful serves (SP), the obtained sports results (SR) and the number of team errors (TE). Among both the women and the men, the distributions of the variables showed slight negative or positive skews, which nonetheless fell within the normal range. The obtained results suggest that the training plans of young and senior players should predominantly emphasise the development of individual tactics in attack. Increasing the effectiveness of this skill, particularly in attacks from Zones Two and Four, as well as the number of attacks from the centre preceded by a good receive and an optimal, quick toss, is essential to winning a volleyball game.
Simultaneously decreasing the number of attack attempts and optimising the attacks will create opportunities to significantly increase the effectiveness of a team’s offensive plays. Furthermore, particular attention should be paid to individual tactics during serves, along with a number of guidelines for eliminating errors. The serve is the only element in the game of volleyball that relies on the efforts of an individual player, and it is crucial for obtaining a direct point. Practically every serve that scores a point calls for an immediate change in the subsequent serve in order to optimise this element, thereby increasing the chance of victory. The next stage of the study involved an analysis of the correlations between the variables and the achieved sports results. In this case, the results concerning the women’s teams showed a successful attack (AP) to be a definitive determinant of success. At the same time, the number of successful serves (SP) and avoiding team errors (TE) were both significant factors. The number of successful blocks (BP) was the least important factor. In terms of the men’s teams, the strongest correlations were observed for the serve points and blocking points (SP and BP). The correlations identified for these variables involved a simultaneous decrease in the numbers of serve attempts and attempted offensive actions (SA, AA). Conversely, the number of successful attacks (AP) showed a significant correlation with the sports results. Furthermore, the number of team errors (TE) was observed to be increasingly important. Consequently, the training practices of both men and women should include efforts to eliminate team errors. A player’s individual tactics, optimal choices and reactions, combined with perfect technical training, are highly important tools for reducing the occurrence of this variable. Executing a jump serve, particularly in the case of women, carries the risk of an error, which is why a lower number of jump serves may affect the outcome of a match. In the case of men, a successful block can also play a significant role in match outcomes. In a set, a team scores on average three blocking points; therefore, increasing this number to five is a great achievement, and while it does not guarantee success, it does increase the likelihood of winning. In order to optimise this element, it is essential to identify the individual tactics and preferences of the opposing team’s setter, as well as his or her typical behaviour towards the end of a match. All observations must be supported by working on team tactics. Similar results, which are partially confirmed by the results presented in this study, have been obtained by other authors, who concluded that attack points and serve points are the most important determinants of sports results. Conversely, in the interpretations by Marcelino et al., the attack, block and serve are considered the elements that bring in points directly through actions, and as a result, these are what affect the final outcome. Other studies have shown that during a match between two teams playing at a similar level, the team that has committed fewer serving errors will be the winner, which was also confirmed by Marelić. The subsequent analyses concerned the forward ridge multiple regression for the independent variables, which were conducted in order to establish the factors determining the sports results between 2004 and 2016 for each team separately, taking both the women and the men into account.
Within the women’s group, the results were divided equally between successful attacks (AP) and team errors (TE), whereas in the case of the men, a successful serve (SP) was most significant. It seems, however, that such a presentation of the factors determining the sports results in volleyball is far from precise. It is definitely better to compare the determinants of the sports results for each year. Analyses of the factors involved in the women’s teams clearly indicated that the numbers of successful attacks (AP) and serve points (SP) were the game-related statistics with the greatest influence on the outcome of volleyball matches at the highest level. Both a successful attack and a successful serve played a decisive role in the games played during the analysed period, and the values of these variables were the highest. Therefore, it can be concluded that increasing the numbers of successful attacks and serve points will increase the chances of achieving a victory. At the same time, a successful block (BP) was found to have decreased in importance. The results of the study concerning the men’s teams, as with the women, indicated a successful serve (SP) as the most significant determinant. A successful block (BP) was found to be the second determinant, followed by a successful attack (AP). Therefore, the values of these game-related statistics (SP, BP, AP) should be supported by a lower total number of attack attempts, block attempts and serve attempts (AA, BA, SA). It was also observed that the importance of committing team errors (TE) has increased considerably in recent years, having a strong effect on the outcome of a game. It can be concluded that increasing the effectiveness of the executed blocks and serves, with a simultaneous decrease in the number of team errors, will positively affect the outcome. The publications of other authors partially confirm the above-mentioned results, indicating that a good receive of a serve allows a team to initiate a successful attack. The conducted analysis also revealed that the teams at the highest international level present a very similar level of individual and team skills, so the outcome of a match between them is typically decided by the smallest details. In conclusion, the analysis of game-related statistics registered during the most prestigious international volleyball competitions in the years 2004–2016 indicates that, in the case of the women’s group, the numbers of successful attacks and blocking points were the fundamental factors determining success in the analysed period. Considering the men’s teams, the numbers of blocking points and serve points had the most significant impact on the final result. Identification of these variables should contribute to the appropriate planning and implementation of volleyball training.
PMC11694205
The first FIFA Women’s World Cup (FWWC) took place in China in 1991, and to date, only 9 editions of this championship have been contested. This, coupled with the growth period that women’s football is experiencing in terms of the number of players and economic investment, has led to improvements in technical, tactical, and physical aspects over short periods of time. For instance, the physical report published on the penultimate edition of the FWWC, France 2019, noted an increase in the maximum speeds of the fastest players by approximately 2 km/h compared to the previous edition, a change that has also been observed in the edition held in Australia and New Zealand. Similarly, the increase in resources from federations and clubs for women’s football has enhanced the professionalization of the sport, elevating the technical and tactical level of the players. In this regard, the European continent has seen on the field the commitment to youth women’s football, and the FWWC Australia & New Zealand 2023 has marked a turning point in the hegemony of the United States in global women’s football. In 2019, 40% of registered female players worldwide were from the United States, while in 2023, 72% of players under the age of 20 were registered with UEFA federations, clearly reflecting how the European continent leads in women’s academy football. In relation to these technical and tactical indicators, for example, the average passing accuracy in the 2011, 2015, and 2019 World Cups was 69%, 71%, and 74%, respectively. In the edition of Australia and New Zealand 2023, Spain, the winning team, averaged an 86.58% passing accuracy, completing an average of 572 successful passes per game. This surpassed the best passing accuracy observed four years earlier (Japan – 82%) by more than 4 percentage points. Undoubtedly, the increase in research on performance indicators in women’s football, from technical, tactical, and conditioning perspectives, has also contributed to the professionalization of players – Figure 1 presents the evolution in the number of publications on women’s football (“female football” OR “women’s soccer”) and men’s football (“male football” OR “men’s soccer”) from the year 2000 to the year 2023, as consulted on 17/03/2024 in Web of Science. Based on that, research focused on understanding the influence of contextual variables in women’s football, such as home advantage, match status, or the quality of opposition, has helped comprehend individual and collective behaviour based on different moments and contexts of the match. Similarly, the efforts of various researchers to expand the scientific knowledge base on aspects related to the internal and external load of players [13–16] have facilitated the adaptation of more specific training tasks and situations in women’s football. In direct relation to the technical and tactical performance indicators in women’s football, Scanlan et al. investigated the tactical criteria determining goal-scoring opportunities in the FWWC Canada 2015, similar to the studies by Iván-Baragaño et al. and Maneiro et al., which proposed a model of offensive success for ball possessions in women’s football, using the last World Cups played in 2015 and 2019 as a sample. On the other hand, Casal et al. analysed how the participation of goalkeepers influenced the development of ball possessions in the Spanish Women’s League, and similarly, Errekagorri et al.
conducted a case study on a team in the second women’s division, analysing collective performance through the integration of tactical and conditioning variables. In a similar vein, a recent study analysed technical and tactical differences using event data obtained from the U.S. women’s soccer team, observing statistically significant differences compared with other teams in the distance between the defensive line and the goal line under defensive pressure. Lastly, in direct relation to the development of ball possessions in women’s football, Dipple et al. analysed the influence of variables related to passes executed in the National Women’s Soccer League in the United States and the Football Association Women’s Super League in England, noting that winning teams exhibited, among other characteristics, a higher number of total passes and successful passes in the final third. For these reasons, and considering that research on technical and tactical indicators in women’s football dates back only a few years and the analysed samples are still limited, this study was conducted. Thus, with the aim of deepening our understanding of the evolution of elite women’s football over the last 4 years, the objective of this study was to analyse and compare, both individually and multivariately, the technical-tactical similarities and differences associated with the offensive phase between the FIFA Women’s World Cup France 2019 and the FIFA Women’s World Cup Australia & New Zealand 2023. To carry out this study, an observational methodology was employed, utilizing a nomothetic (various units of analysis corresponding to each of the teams and championships analysed), punctual (involving intrasessional follow-up throughout each of the matches), and multidimensional (various levels of response reflected in the observation instrument) design. This design corresponds to the third observational quadrant proposed by Anguera et al. All analysed matches were recorded from public television, stored on an external hard drive, and analysed after the event. According to the Belmont Report, the use of publicly available images for research purposes does not require informed consent from participants or approval from an ethics committee. A total of 4,669 ball possessions were analysed (n-FWWC19 = 2,323; n-FWWC23 = 2,346) in the 32 matches (16 matches per championship) corresponding to the knockout phase of the FIFA Women’s World Cup 2019 and 2023. Each of the teams was considered a unit of analysis, and thus their technical-tactical behaviour was analysed as a unit. The inclusion criteria for the analysis of possessions consisted of: i) two consecutive contacts by the same player with the ball, or ii) a completed pass, or iii) a shot taken, provided that the duration was equal to or greater than 4 s. To homogenize the analysed sample, ball possessions that took place during extra time were excluded. The observation instrument utilized for this study was proposed by Iván-Baragaño et al. and can be referred to in Table 1. The instrument was developed ad hoc by a committee of experts in football and consisted of a combination of field format and category systems. In total, 17 criteria related to the start, development, and outcome of ball possessions were analysed. Among the criteria related to the start of possessions, contextual criteria such as match outcome, match status or temporality were introduced, as well as spatial criteria related to the start zone of possessions.
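For illustration, the inclusion rules could be expressed as a small filter like the sketch below. The data structure and field names are ours, not the recording software’s, and the 4 s condition is read as applying to all three cases:

```python
from dataclasses import dataclass

@dataclass
class Possession:
    consecutive_contacts: int   # max consecutive contacts by one player
    completed_passes: int
    shots: int
    duration_s: float
    in_extra_time: bool

def is_valid_possession(p: Possession) -> bool:
    """Apply the study's inclusion criteria: at least 4 s long AND
    (two consecutive contacts OR a completed pass OR a shot taken);
    possessions in extra time are excluded to homogenise the sample."""
    if p.in_extra_time or p.duration_s < 4.0:
        return False
    return p.consecutive_contacts >= 2 or p.completed_passes >= 1 or p.shots >= 1

# Example: a 6.2 s possession containing one completed pass is retained.
sample = Possession(consecutive_contacts=1, completed_passes=1,
                    shots=0, duration_s=6.2, in_extra_time=False)
print(is_valid_possession(sample))  # True
```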
The development of possessions was based on the analysis of criteria such as offensive and defensive tactical intention or the duration of possessions, as well as the number of passes. Finally, the outcome of possessions was recorded as Goal, Shot, Pass into Area, or Unsuccessful. The recording tool used was the open-source software LINCE PLUS V 1.3.2. The criteria “start zone (length)” and “start zone (width)” were recorded as presented in Figure 2. Prior to the recording and coding of all actions, three observers were trained and familiarized with the observation instrument following the procedure proposed by Losada & Manolov. Two of them held doctoral degrees in Sports Science with over 30 years of combined experience in observational methodology, and the third was a Ph.D. student. All three possessed the UEFA PRO coaching license. Data quality control was conducted using Cohen’s kappa coefficient, obtaining an average value across the pairs of 0.869, considered excellent according to the scale of Landis & Koch. This average was calculated as both the inter-observer and intra-observer value after recording a total of 258 ball possessions from two randomly selected matches. The researcher responsible for recording the possessions analysed the initial sample (n = 258) twice to verify consistency in the recording. The results of the data quality control for each of the analysed criteria are presented in Table 2. Firstly, a descriptive and comparative analysis was conducted through frequency counts for the two categories of the FWWC criterion. The existence of statistically significant differences was tested using the chi-square statistic, and the effect size was quantified using the contingency coefficient. The effect size was categorized as small (ES = 0.10), medium (ES = 0.30), and large (ES = 0.50) (30). For the variables MD (seconds), MO (seconds), Possession Time, and Passes, the independent-samples t-test was applied, justified by the central limit theorem due to the large sample size. Prior to this, the distribution of each variable and group, as well as the presence of outliers, was assessed through graphical representation. The effect size for these four criteria was calculated as the difference between the standardized means of each, categorized in the same manner as before. To address the second part of the objective, a decision tree analysis was conducted using FWWC as the dependent variable. Before selecting the final hyperparameters, various partitions and sample possibilities were preliminarily tested to avoid overfitting and underfitting and to enhance the model’s accuracy. For the final model, 70% of the observations were randomly selected as the training sample, and the remaining 30% were used as the validation sample. The other analysed criteria were introduced into the model as predictors (independent variables). The tree growth method used was chi-square automatic interaction detection (CHAID). The statistical significance value for creating new nodes was set at p < .05, with a minimum of 100 observations for parent nodes and 50 for end nodes. The maximum depth of the decision tree was set at 4 levels. Lastly, the model’s validity was evaluated using the correct classification table (false positives/false negatives) and the area under the ROC curve (AUC), considered excellent (0.90 < AUC < 1.00), good (0.80 < AUC < 0.90), fair (0.70 < AUC < 0.80), poor (0.60 < AUC < 0.70), and fail (0.50 < AUC < 0.60). All analyses were conducted using SPSS 26.0 statistical software.
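A rough sketch of this train/validate pipeline is shown below. Note that scikit-learn grows CART trees rather than CHAID trees, so this is only an approximation of the reported model; the data and labels are randomised stand-ins, while the split ratio, node sizes and depth mirror the stated hyperparameters.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)

# Synthetic stand-in for the possession data set: 4,669 rows with a few
# numeric predictors (e.g., passes, possession time) and the championship label.
n = 4669
X = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)  # 0 = FWWC19, 1 = FWWC23

# 70/30 train/validation split, as in the study.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.30, random_state=1)

# CART approximation of the CHAID tree; node-size and depth settings
# mirror the paper's hyperparameters (>=100 parent, >=50 leaf, depth 4).
tree = DecisionTreeClassifier(max_depth=4, min_samples_split=100,
                              min_samples_leaf=50, random_state=1)
tree.fit(X_tr, y_tr)

# Validate with the confusion matrix and the area under the ROC curve.
probs = tree.predict_proba(X_va)[:, 1]
print(confusion_matrix(y_va, tree.predict(X_va)))
print(f"AUC = {roc_auc_score(y_va, probs):.3f}")  # ~0.5 here: random labels
```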
The descriptive and bivariate results are presented in Table 3. Statistically significant differences were found between the two analysed championships in the criteria Match Outcome (p < .001; ES = .09), Match Status (p < .001; ES = .135), Interaction Context (p < .001; ES = .09), and Defensive Intention (p < .001; ES = .07). Additionally, a significant increase was observed in the distribution of the 4 analysed quantitative variables: MD (seconds) (p < .001; ES = .11), MO (seconds) (p < .001; ES = .13), Possession Time (p < .001; ES = .18), and Passes (p < .001; ES = .16), as depicted in Figure 3. Regarding the decision tree model, the criteria introduced by the algorithm were: i) Match Status, ii) Time, iii) MO (seconds), iv) Start Zone (width), v) Passes, vi) Defensive Intention, and vii) Possession Zone. The probabilities assigned to each category of the FWWC criterion can be consulted in Figure 3. Among the most notable results were the following. Node 0 consisted of a total of 1,368 observations (30% of the sample), with 49% corresponding to FWWC23 and 51% to FWWC19. The first predictor introduced by the algorithm was Match Status (χ² = 60.258; df = 2, p < .001), resulting in the three main branches of the decision tree for the categories Drawing (Node 1: FWWC23 = 55.6%; FWWC19 = 44.4%), Winning (Node 2: FWWC23 = 47.4%; FWWC19 = 52.6%), and Losing (Node 3: FWWC23 = 39.7%; FWWC19 = 60.3%). Continuing the reading of the tree along the central branch, from Node 2 (Match Status = Winning), the next criterion introduced by the decision tree was MO (seconds) (χ² = 10.250; df = 1, p < .001). Thus, Node 6 (Match Status = Winning & MO (seconds) ≤ 5) yielded a probability in favour of the FWWC19 category of 60% (n = 78), while Node 7, with a value greater than 5 for the MO (seconds) criterion, decreased the probability in favour of FWWC19 to 47.2%. To conclude this central branch, the next criterion introduced was Passes (χ² = 60.258; df = 1, p < .001), with the highest probability observed in favour of the FWWC23 category for the interaction of categories Match Status = Winning, MO (seconds) ≥ 5, and Passes ≥ 3 (Node 15: FWWC23 = 58.0%; FWWC19 = 42.0%). Based on the branches derived from Node 3 (Match Status = Losing), it was observed that the probability of a possession made under that match status and with a temporality of 1Q or 2Q (i.e., from the start of the match until minute 30) corresponding to the FWWC19 category was 80%, while the observed probability of a possession made while losing from minute 30 onwards was lower (Node 9: FWWC23 = 43%; FWWC19 = 57%). The training decision tree showed a correct classification percentage of 58.1% (64.9% sensitivity (FWWC23) – 51.1% specificity (FWWC19)). Finally, the validation model presented a correct classification percentage of 57.9% (65.4% sensitivity (FWWC23) – 50.7% specificity (FWWC19)) and an area under the curve (AUC) equal to 0.581 (95% CI = 0.565–0.598), showing that, despite the differences found, the classification of ball possessions based on the FWWC criterion using a decision tree model did not achieve acceptable results. The objective of this study was to analyse and compare, individually and multivariately, the technical-tactical similarities and differences associated with the offensive phase between the FIFA Women’s World Cup France 2019 and the FIFA Women’s World Cup Australia & New Zealand 2023. For this purpose, 4,669 ball possessions were analysed between the two championships using observational methodology.
The average number of possessions per team and per game was 72.59 and 73.31 in the FWWC France 2019 and the FWWC 2023, respectively. This result might indicate a priori that the number of losses, steals, and/or ball transitions remained stable across both championships. However, the average number of possessions in both championships clearly contrasts with the results obtained in this study for the Possession Time criterion. Compared to the 2019 France edition, ball possessions were on average 13.5% longer (Possession Time: FWWC19 = 13.93 s; FWWC23 = 15.82 s), which can be explained by a longer total duration and effective playing time of the matches played. Although these data have not been provided in the official reports of each match, the fact that matches such as Spain – Netherlands in the quarterfinals had 20 minutes of added time (plus 30 minutes of subsequent extra time) seems to indicate that this difference reflects a trend present throughout the championship. Similarly, the 13.5% increase in the average duration of ball possessions aligns with the 12.5% increase observed in this study in the time of possession in the own half (from 7.24 s to 8.15 s on average between the two editions), the 14% increase in time in the opponent’s half, and the 15% increase in the average number of passes per analysed action (3.62 passes/possession vs 4.19 passes/possession). While the effect sizes were statistically small or moderate, from a football perspective these results are highly important, highlighting an improvement in the quality of ball possessions based on the increased technical and tactical efficiency of the participating teams. Moreover, the results obtained from the decision tree analysis seem to confirm this tendency in the FIFA Women’s World Cup Australia & New Zealand 2023. In nodes 7 and 15 of the decision tree, it was observed that possessions made while winning, with a duration longer than 5 s in the opponent’s half (node 7), and with more than two passes (node 15) were significantly more likely to occur in the 2023 edition compared to their sibling nodes (node 6 and node 14, respectively). Regarding the Interaction Context criterion, statistically significant differences were also observed between the two analysed championships. While the decision tree algorithm did not introduce this criterion into the model, possibly due to the high number of categories and the consequent reduction in the number of observations per category, the differences observed in the initial interaction context could imply that teams are modifying their offensive and defensive strategies, although this will need to be studied in future research. In both FWWC19 and FWWC23, the categories with the highest frequency were middle vs middle (MM) (FWWC19 = 40.6%; FWWC23 = 38.0%) and rear vs forward (RA) (FWWC19 = 31.8%; FWWC23 = 36.9%). This aligns with the decrease in the frequency of highly offensive interaction contexts such as A0 (forward vs goalkeeper) observed by Barreira et al. and with the frequencies observed by Maneiro et al. in the men’s European Championships of 2008 and 2016. In the latter study, it was observed that the category with the highest frequency in men’s football in both samples was RA. Thus, a higher frequency of the MM category compared to RA (characteristic of women’s football and more frequent in FWWC19) might indicate greater difficulty for teams in the offensive phase and in overcoming areas of higher player density.
Finally, a significant reduction in the appearance of the A0 category was observed in the 2023 edition. This interaction context, often caused by a technical error by the player, was 10 times less frequent, possibly indicating an improvement in the technical (and tactical) skills of the defensive line and goalkeepers. In line with this, Kirkendall stated, after interviewing women’s football coaches, that the defensive line typically showed lower technical performance in the attacking phase, and although these data could only be verified through individual performance analysis, the findings of this study also suggest that this situation is changing. Last but not least, statistically significant differences were observed in the two analysed criteria associated with the match outcome (Match Status and Match Outcome). In the 2023 edition held in Australia and New Zealand, there was a greater prevalence of the categories Draw and Drawing, along with a decrease in the frequency of the categories Lose and Losing, corresponding to the Match Outcome and Match Status criteria, respectively. This change may be significant for various reasons. First, previous studies in both men’s and women’s football have demonstrated that the match status is a criterion that can modify offensive and defensive strategies. Additionally, the 2023 edition was the first to feature 32 national teams, as part of FIFA’s commitment to increasing the number of teams. Thus, the increase in teams did not lead to a greater imbalance in the analysed matches, nor does it seem to have done so in the group stage based on the number of goals scored (2.56 goals/match), the lowest in the history of the Women’s World Cup. Based on this criterion (Match Status), nodes 8 and 9 of the decision tree show a clear difference between the two championships: 80% of possessions made while losing between the 0th and 30th minute of the match were recorded in FWWC19. This interaction is reflected, among other events, in the fact that the United States managed to take the lead before the 14th minute in all matches they played in that edition (except the final against the Netherlands). To conclude this discussion section, it should be noted that the decision tree model, designed to assess the classification ability (and thus the differentiation) between the two analysed championships, was not able to classify both categories (FWWC19 and FWWC23) adequately (AUC = .581). Firstly, we must acknowledge that, while we consider the differences observed in this study between the two championships significant from a technical and tactical perspective, more evident statistical patterns would be needed to enable accurate classification using a decision tree model. However, on the other hand, we must contextualize the reality that has been analysed. Thus, considering that the history of the FIFA Women’s World Cup began in 1991 with 12 participating teams and that progress has been continuous over the editions held, the differences found could be considered a significant change in the performance of the participating teams. This change should be confirmed in future elite women’s football championships. In this study, a descriptive and comparative analysis of ball possessions in the FIFA Women’s World Cup France 2019 and the FIFA Women’s World Cup Australia & New Zealand 2023 was conducted. A total of 4,669 ball possessions were analysed between the two championships.
While the number of possessions per team and per match was similar between the two championships, a statistically significant increase was observed in FWWC23 in the duration of possessions, the number of passes per possession, and the possession time in both the own and the opponent’s half. These differences may be associated with a better technical and tactical performance of the teams. On the other hand, significant differences were observed in the criteria of Interaction Context and Defensive Intention. For the former, a slight difference related to categories with high offensive value was observed. Regarding defensive intention, there was an increase in the category Pressure, possibly related to a greater tendency shown in FWWC23 to control the game through ball possession. Finally, the differences found between the two championships for the criteria Match Status and Match Outcome could be explained by greater parity in the matches. This aligns with the fact that the number of goals per match was the lowest in the history of the World Cup, even after the increase from 24 to 32 teams introduced by FIFA between the two championships. The authors declare no conflicts of interest.
PMC11694207
An examination of the match-play characteristics of women’s football demonstrates that the technical and tactical performances of teams have steadily progressed over time. However, one area of performance that has evolved markedly in the women’s game over the last decade is the speed and intensity of match-play. For instance, high-intensity running and sprinting distance at the team level increased by around 20–30% between the FIFA Women’s World Cup Canada 2015 and France 2019. Accordingly, given such accelerated rates of physical development, it is crucial that the match demands of contemporary competitions such as the FIFA Women’s World Cup Australia and New Zealand 2023 are analysed and documented. Findings from such analyses could be valuable for benchmarking the current intensity of international female match-play, while also providing a framework for the development of female-specific drills via the replication of such demands [3–5]. Moreover, there is an onus on practitioners to be cognisant of modern-day demands, as an intensification of match-play could be one of a multitude of risk factors associated with increased injury prevalence in players. It is customary for studies exploring the match demands of elite female players to find that performances are highly dependent upon the positional role in the team [2, 4, 7–10]. Some of these studies employed broad positional categories such as defenders, midfielders and forwards, while others assigned up to four or five different positions. Recent findings demonstrated that using eight to eleven outfield roles to quantify the match demands of elite male players resulted in highly distinguishable movement characteristics compared to broad categories. However, research to date has yet to partition international female players into highly distinctive outfield positions. Redressing this shortcoming could be highly relevant to the women’s football community, especially if the data were from a recent tournament such as the FIFA Women’s World Cup 2023. Thus, a study that breaks female international players down into specialised outfield positions and even individual player roles may enable practitioners to use match physical performance trends as a blueprint to create position- and individual-specific drills. Researchers have created position-specific drills that tax the relevant physical attributes of elite male players, while simultaneously mimicking some of the most commonly occurring technical and tactical actions. Thus, a detailed match analysis of the FIFA Women’s World Cup 2023 may facilitate this process for women’s football. In addition, it is unclear which positions in modern female international match-play contribute more physically in- or out-of-possession of the ball, or how their work-rates are distributed across halves. Gaining a deeper understanding of this latter point could be advantageous given that new directives have increased game time since the FIFA Women’s World Cup France 2019. To offer practitioners more nuanced and actionable positional insights, match physical analyses should ideally be layered with context [15–17]. As a reductionist approach has typically been applied to women’s match demands research [2, 4, 7–10, 18], further investigations are warranted that use an array of tactical and technical metrics alongside match physical performance measures. As FIFA tournaments now use Enhanced Football Intelligence metrics, this may progress our knowledge of such a complex sport.
Moreover, limited research exists that has revealed whole-tournament positional trends in conjunction with pertinent examples of individuals in selected positions. Using such a dual approach may further our understanding of the dynamic interplay between physical, technical and tactical factors. Thus, this study aimed to benchmark the match demands of specialised positions at the FIFA Women’s World Cup Australia and New Zealand 2023. With FIFA’s official approval, all sixty-four games at the FIFA Women’s World Cup Australia and New Zealand 2023 were analysed. The data provider assigned eight outfield roles to enable positional differences to be determined. Games were filtered so that only players who completed the entire match were evaluated. This fell in line with the physical analyses conducted for the previous two editions of the FIFA Women’s World Cup, in addition to the FIFA World Cup Qatar 2022. This enabled 806 player observations to be analysed across various positions (210 centre backs [CB], 207 wide defenders [WD], 60 defensive midfielders [DM], 118 central midfielders [CM], 38 attacking midfielders [AM], 63 wide midfielders [WM], 34 wide forwards [WF], and 76 centre forwards [CF]). As these data were freely available, no ethical approval was required. FIFA Women’s World Cup 2023 games were analysed using an optical tracking system that operated at 25 Hz (TRACAB, ChyronHego, Sweden). This system’s validity has been quantified to verify the capture process and the subsequent accuracy of the data. After system calibration (e.g., pitch locations of known distances) and various quality control processes (e.g., regularly checking the tracking of players during games), the data captured were analysed using match analysis software. This produced a data set on each player’s activity pattern during a match using female-specific speed zones, with players’ activities coded into five speed zones. Analyses primarily reported physical metrics that provided insights into the volume (total distance) and intensity of match-play (high-intensity and sprinting distance). Total distance represented the sum of the ground covered in all speed zones. High-intensity distance consisted of the aggregation of zones 4 and 5 (≥19.0 km·h⁻¹), while sprinting exclusively included zone 5 distance (≥23.0 km·h⁻¹). Additionally, these metrics were analysed based on possession status or whether the ball was out of play. Top speeds attained in games were also quantified across all positions. To further contextualise the physical data, FIFA’s Enhanced Football Intelligence metrics were also quantified. This included the coding of events such as the number of passes, successful passes, crosses, ball progressions, total offers made to receive and various movement types, in addition to applied pressures. Event definitions can be found in freely available documentation. The speed zones adopted at the FIFA Women’s World Cup 2023 were identical to those at the Canada 2015 and France 2019 editions, but the latter two utilised a different optical tracking system (STATS LLC, USA). Thus, it would be challenging to compare positions between tournaments. Consequently, a within-tournament analysis of selected positions at the high and low ends of the sprinting continuum was conducted instead. As the evolution of match physical performance is more pronounced in zone 5, only sprinting distances were evaluated in this way.
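A minimal sketch of such zone coding from a 25 Hz speed trace is given below. Only the zone 4 (≥19.0 km·h⁻¹) and zone 5 (≥23.0 km·h⁻¹) cut-offs are stated in the text; the lower zone boundaries in the sketch are illustrative placeholders, not FIFA’s definitions.

```python
import numpy as np

# Speed-zone cut-offs in km/h. The zone 4 (>=19.0) and zone 5 (>=23.0)
# thresholds come from the text; the bounds for zones 2-3 are assumptions.
ZONE_EDGES = [0.0, 7.0, 13.0, 19.0, 23.0, np.inf]

def distance_per_zone(speeds_kmh: np.ndarray, hz: float = 25.0) -> np.ndarray:
    """Accumulate distance (m) in each speed zone from a 25 Hz speed trace."""
    dt = 1.0 / hz                                       # seconds per sample
    step_m = speeds_kmh / 3.6 * dt                      # metres per sample
    zones = np.digitize(speeds_kmh, ZONE_EDGES[1:-1])   # 0..4 -> zones 1..5
    return np.array([step_m[zones == z].sum() for z in range(5)])

# Example: 90 s of synthetic speed data (km/h), capped at 32 km/h.
rng = np.random.default_rng(0)
trace = np.clip(rng.gamma(shape=2.0, scale=4.0, size=90 * 25), 0, 32)
dist = distance_per_zone(trace)
print(f"total: {dist.sum():.0f} m | high-intensity (z4+z5): {dist[3:].sum():.0f} m "
      f"| sprinting (z5): {dist[4]:.0f} m")
```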
To further aid this comparison, only relative measures were calculated (e.g., zone 5 distance as a percentage of total distance). Moreover, some positions were combined and/or omitted from the 2023 data, as previous analyses did not categorise players into eight outfield roles. This allowed the profiling of 2,154 player observations across the 2015, 2019 and 2023 tournaments, respectively (161, 199, 210 centre backs [CB]; 151, 175, 207 wide defenders [WD]; 160, 191, 178 defensive/central midfielders [DM/CM]; 98, 98, 97 wide midfielders/forwards [WM/WF]; and 73, 80, 76 centre forwards [CF]). Analyses were conducted using statistical software (SPSS Inc, Version 26.0, IBM Corp, USA). Descriptive statistics were calculated for each variable. To verify normality, z-scores were obtained by dividing the skewness and kurtosis values by their standard errors. Differences across role, possession status, half and tournament were determined using factorial analysis of variance (ANOVA). In the event of a significant difference occurring, univariate analyses using Bonferroni-corrected pairwise comparisons were employed. Statistical significance was set at P < 0.05. Quadrant plot analysis comprised a simple percentage distribution computation. The coefficient of variation (CV) was used to determine the data spread across metrics. Effect sizes (ES) were computed to determine the meaningfulness of any differences. The ES magnitudes were classed as trivial (< 0.2), small (> 0.2–0.6), moderate (> 0.6–1.2) and large (> 1.2). Pearson’s coefficients were used for correlation analyses, and the magnitudes of the associations were regarded as trivial (r < 0.1), small (r > 0.1–0.3), moderate (r > 0.3–0.5), large (r > 0.5–0.7), very large (r > 0.7–0.9), and nearly perfect (r > 0.9). Values are presented as means and standard deviations unless otherwise stated. Figures 1A-D benchmark and display the variation across each position from a physical perspective. Examples at both ends of the physical continuum are also highlighted to add context and aid interpretability. Figure 1A indicates that CM and DM covered 5–15% more total distance than CB, WD and CF (P < 0.01; ES: 0.6–1.8). Moreover, AM and WF had the highest coefficients of variation for total distance covered (CV: 9.0–9.8%) compared to CB, WD, DM, CM and CF (CV: 6.4–8.0%). Figure 1B reveals that the distances covered at high intensity (≥19.0 km·h⁻¹) were 18–89% greater in AM, WM, WF and CF compared to CB, WD, DM and CM (P < 0.01; ES: 0.5–2.0), while the distances covered sprinting (≥23.0 km·h⁻¹) were 88–163% higher in WD, AM, WM, WF and CF compared to CB, DM and CM. Moreover, CM and CF exhibited the greatest coefficients of variation (CV: 37.7–37.9% and 66.4–73.9%) for the distances covered at higher intensities (≥19.0 and ≥23.0 km·h⁻¹, respectively) compared to other positions (CV: 21.8–34.6% and 37.7–57.6%). Figure 1D revealed that the top speeds attained during games were higher for AM, WM, WF and CF compared to DM and CM (P < 0.05; ES: 0.5–0.7). These top speed efforts varied across positions (CB: 6.5%, WD: 6.2%, DM: 7.1%, CM: 6.6%, AM: 6.8%, WM: 5.8%, WF: 6.9%, CF: 6.9%). The top ten speeds indicated that offensive positions such as CF (30%), WF (20%) and AM (20%) accounted for 70% of these efforts. Figure 2 uses quadrants to compare each position against one another. Regarding defenders, both CB and WD demonstrated moderate-magnitude associations between total and high-intensity game distances (r = 0.37 and 0.36; P < 0.01).
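To illustrate two of the summary statistics used above, the following sketch computes a pooled-SD effect size (Cohen’s d) and a coefficient of variation; the samples are invented, not tournament data:

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardised mean difference with a pooled SD (Cohen's d)."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

def cv_percent(x: np.ndarray) -> float:
    """Coefficient of variation: SD as a percentage of the mean."""
    return 100.0 * x.std(ddof=1) / x.mean()

# Illustrative total-distance samples (km) for two positions.
rng = np.random.default_rng(3)
cm = rng.normal(10.8, 0.8, 118)   # central midfielders
cb = rng.normal(9.6, 0.6, 210)    # centre backs

print(f"Cohen's d = {cohens_d(cm, cb):.2f} (>1.2 is 'large' on the scale used)")
print(f"CV: CM = {cv_percent(cm):.1f}%, CB = {cv_percent(cb):.1f}%")
```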
CB primarily occupied the lower-left quadrant (lower-left [LLQ] = 87%, lower-right [LRQ] = 6%, upper-left [ULQ] = 5% and upper-right quadrant [URQ] = 2%). Although WD occupied the lower-left quadrant most, some were also found in the upper-left and upper-right quadrants (LLQ = 42%, LRQ = 14%, ULQ = 21% and URQ = 23%). Regarding midfielders, a moderate-magnitude correlation was found between DM total and high-intensity game distances (r = 0.44; P < 0.01). DM were primarily found in the lower-right quadrant (LLQ = 31%, LRQ = 50%, ULQ = 2% and URQ = 17%), while only a small-magnitude association was evident between CM total and high-intensity game distances (r = 0.23; P < 0.05). Although CM occupied the lower-right quadrant most, some also deviated into the lower-left and upper-right quadrants (LLQ = 32%, LRQ = 44%, ULQ = 3% and URQ = 21%). Both WM and AM demonstrated small-magnitude associations between total and high-intensity game distances (r = 0.14 and 0.22; P > 0.05). WM mainly occupied the upper-right quadrant (LLQ = 6%, LRQ = 13%, ULQ = 32% and URQ = 49%). Similarly, AM were also upper-right quadrant dominant, but more distribution occurred across the other quadrants (LLQ = 13%, LRQ = 18%, ULQ = 29% and URQ = 40%). Regarding attackers, WF demonstrated a large-magnitude association between total and high-intensity game distances (r = 0.51; P < 0.01). WF were mainly distributed in the upper-right quadrant (LLQ = 15%, LRQ = 6%, ULQ = 18% and URQ = 61%). In contrast, a small-magnitude correlation was evident between CF total and high-intensity game distances (r = 0.12; P > 0.05). CF were the position that fluctuated the most, with no clear majority quadrant (LLQ = 26%, LRQ = 20%, ULQ = 30% and URQ = 24%). Table 1 presents match events for each position at the FIFA Women’s World Cup 2023. Regarding distribution events, CB, WD, DM and CM passed more than AM, WF and CF (P < 0.01; ES: 1.0–1.6). More crosses were performed by WD, AM, WM and WF than CB and DM (P < 0.01; ES: 0.8–1.4), while ball progressions were greater for AM, WM, WF and CF than CB, WD and DM (P < 0.05; ES: 0.5–1.2). Total offers made to receive the ball were greater for DM, CM, AM, WM, WF and CF than CB and WD (P < 0.01; ES: 0.7–1.7). Regarding offer movement type, DM, CM, AM and CF moved more between the lines than CB, WD and WM (P < 0.01; ES: 1.0–1.9), while DM and CM moved more in front of the lines than other positions (P < 0.01; ES: 1.0–1.6). Movements in behind lines were more common for offensive roles such as AM, WM, WF and CF as opposed to defensive roles like CB, WD, DM and CM (P < 0.01; ES: 0.9–2.3). Total pressure applied was greater for all other positions versus CB and WD (P < 0.01; ES: 1.0–2.0). Direct pressure was highest for DM and CM compared to other positions (P < 0.05; ES: 0.5–1.3), and indirect pressure was greater for CM, AM, WM, WF and CF than CB and WD (P < 0.01; ES: 0.9–2.3). Defensive positions such as CB and DM covered a greater proportion of their overall distance out-of-possession compared to more offensive positions such as CF (42–43% vs 38%; P < 0.05; ES: 0.4–0.5). Although CF covered a higher proportion of their overall distance in-possession compared to CB, WD and DM, this just failed to reach statistical significance (40% vs 37–38%; P > 0.05; ES: 0.3–0.4).
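The quadrant distributions reported above can be reproduced in principle with a computation like the sketch below; the median split points are our assumption, as the cut-offs used to define the quadrants are not stated in the text:

```python
import numpy as np

def quadrant_distribution(volume: np.ndarray, intensity: np.ndarray) -> dict:
    """Assign each player observation to a volume/intensity quadrant and
    return the percentage falling in each. Upper/lower reflects intensity,
    left/right reflects volume; split points are the sample medians here."""
    v_cut, i_cut = np.median(volume), np.median(intensity)
    counts = {"LLQ": 0, "LRQ": 0, "ULQ": 0, "URQ": 0}
    for v, i in zip(volume, intensity):
        key = ("U" if i > i_cut else "L") + ("R" if v > v_cut else "L") + "Q"
        counts[key] += 1
    n = len(volume)
    return {k: round(100.0 * c / n) for k, c in counts.items()}

# Illustrative data: total distance (volume, km) vs high-intensity distance (km).
rng = np.random.default_rng(7)
vol = rng.normal(10.0, 1.0, 207)
hi = 0.4 * (vol - 10.0) + rng.normal(0.8, 0.2, 207)  # weakly correlated
print(quadrant_distribution(vol, hi))  # e.g., {'LLQ': 33, 'LRQ': 17, ...}
```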
Defensive positions such as CB and DM covered a greater proportion of their distance at higher intensities (≥19.0 and ≥23.0 km·h⁻¹) out-of-possession than offensive positions such as WM, AM, WF and CF (72–75% vs 40–48%; P < 0.01; ES: 1.9–2.4 and 75–82% vs 35–45%; P < 0.01; ES: 1.3–1.4, respectively). In contrast, WM, AM, WF and CF covered a higher proportion of their distance at higher intensities (≥19.0 and ≥23.0 km·h⁻¹) in-possession compared to CB and DM (51–59% vs 20–26%; P < 0.01; ES: 1.1–2.8 and 54–62% vs 15–22%; P < 0.01; ES: 1.5–2.4, respectively). Figures 3A-C highlight the half-by-half differences across positions on a per-minute basis. Figure 3A illustrates that most positions demonstrated a second-half reduction in total distance covered compared to the first half (P < 0.01; ES: 0.6–1.0), with WF the only exception (P > 0.05; ES: 0.4). Figures 3B-C demonstrate that a decline between halves for high-intensity distance (≥19.0 km·h⁻¹) was only evident for CF (P < 0.01; ES: 0.5), whilst a decline between halves for sprinting distance (≥23.0 km·h⁻¹) was only found for WD (P < 0.05; ES: 0.3). Figure 4 highlights a clear trend within each competition, as CB and DM/CM covered the lowest sprint distance as a proportion of total distance. Given the similarities across tournaments, one could gain longitudinal insights by comparing the magnitude of change across time for these roles. For instance, in Canada 2015, CB performed 11% more sprinting than DM/CM (P > 0.05; ES: 0.2). However, in France 2019, in addition to Australia and New Zealand 2023, CB performed 21–26% more sprinting than DM/CM (P < 0.01; ES: 0.3–0.4). Another obvious trend within each competition was that CF and WM/WF covered the most sprinting distance as a proportion of total distance. In Canada 2015, there was a negligible difference between these positions (ES: 0.0; P > 0.05). However, in France 2019, CF sprinted 11% less than WM/WF (ES: 0.2; P > 0.05), but in Australia and New Zealand 2023 this trend reversed, as CF covered 11% more sprinting than WM/WF (ES: 0.2; P > 0.05). Although this latter trend failed to reach statistical significance, it indicates a large relative shift (−11% to +11%). This study was the first to benchmark the position-specific match demands at the FIFA Women’s World Cup Australia and New Zealand 2023. This was accomplished by combining FIFA’s Enhanced Football Intelligence metrics with match physical performance measures. This added context to the physical trends so that more nuanced positional insights could be determined. Moreover, highlighting pertinent examples of individuals in selected positions may also aid our understanding of the dynamic interplay between physical, technical and tactical factors. The data demonstrated that central and defensive midfielders covered the greatest total distance and, in contrast, centre backs covered the least overall ground at the FIFA Women’s World Cup 2023. Similar findings were reported at both the Canada 2015 and France 2019 tournaments. As the differences were comparable across three Women’s World Cups (13–15% higher in central midfielders vs centre backs), it appears this trend remains stable at the highest standard of women’s football. Midfielders are renowned for their all-round work-rate, and to cite examples, Spain’s Teresa Abelleira and Zambia’s Ireen Lungu covered around 12.3–13.3 km during selected games.
Despite contrasting possession profiles, these two players set the upper total distance requirement for contemporary international female midfielders. These benchmarks could be related to the midfield position being dynamic during all phases of play, particularly at low to moderate speeds [2, 11–13, 24]. To support this assertion, FIFA’s Enhanced Football Intelligence metrics demonstrated that midfielders performed a high number of passes and offers to receive, especially in front of the lines, and applied pressures. In contrast, centre backs have a more sporadic activity profile, with more of their activity occurring out-of-possession, hence the lower total distance covered. Furthermore, centre backs are acutely influenced by the opposition’s attacking quality and work-rate. For example, the Philippines’ Jessika Cowart covered around 11.6 km against Switzerland, which was the highest total distance for this position in the competition. In that specific match, Switzerland’s attacking dominance resulted in their highest figures for movements in behind and final-third phase counts of their tournament. To nullify this, the Philippines’ defensive unit had to respond with their greatest number of applied pressures of their three games, which may explain Cowart’s physical output. On the contrary, Canadian centre back Kadeisha Buchanan covered only 7.8 km against Nigeria, which was the lowest total distance of the tournament. Buchanan may have been less active in that game as her team were consistently on the front foot, as evidenced by Canada registering their highest progression and final-third phase counts of their tournament. The examples above clearly highlight the impact of numerous contextual factors on the demands of certain playing positions. Researchers suggest that high-intensity running during matches is a valid measure of physical performance in women’s football because of its strong association with training status and its ability to distinguish between different standards of player. Moreover, others have demonstrated the instrumental role that intensity plays in the preamble to game-changing moments. Data revealed that wide midfielders and forwards covered around 40–90% more distance at high intensity (≥19.0 km·h⁻¹) than centre backs, in addition to central and defensive midfielders. The average high-intensity game distance using fixed speed zones for both of these wide roles was around 0.8 km. However, at the upper level, Zambia’s Racheal Kundananji and Haiti’s Roseline Éloissaint covered around 1.4–1.6 km, underscoring how demanding these wide roles are in the modern women’s game. Although physical fitness is related to the high-intensity distances covered by elite female players during matches, this relationship remains complex. Whereas superior intermittent endurance capacity has been found in female players operating in wide versus central roles, this work could now be considered outdated. Recent research has also demonstrated that the skeletal muscle phenotype of female players appears to be associated with their high-intensity match performances; thus, wide players could be more likely to possess a certain phenotype profile, although adopting such a reductionist perspective can be problematic in such a complex sport [15–17], as this finding is likely multifaceted.
For instance, the high-intensity nature of wide players could be related to the space afforded to them along the flanks, enabling them to accelerate to higher speeds when tactically required. Moreover, wide players may be more active at high intensity due to modern tactics, with teams now commonly employing compact mid-blocks (e.g., a narrow defensive shape in the middle third of the pitch) that prompt the opposition to use the flanks to create chances. In contrast, central players operate in more congested pitch areas, which limits their ability to accelerate into space, particularly if required to maintain tactical discipline and hold a position in central areas for extended periods. This may have resulted in central roles performing lower accumulated high-intensity distances of 0.4–0.6 km per match at the competition. A novel finding of the present study was that centre forwards covered the most sprint distance (≥23.0 km·h⁻¹) during matches. For instance, centre forwards sprinted an average of 263 m per game at the tournament, which was 7–8% more than their attacking counterparts. Data demonstrated that most of this sprinting occurred while their team was in possession of the ball. This is unsurprising, as a key role of a centre forward is to move rapidly to receive the ball in attacking areas, particularly making runs into the space in behind the opposition defence. Accordingly, FIFA's Enhanced Football Intelligence metrics indicated that centre forwards amassed around 45–80% more movements in behind than other attacking positions at this tournament. This was often coupled with defending intensely from the front via applied pressures. Both of these factors could go some way towards explaining the evolution observed in centre forwards' sprinting outputs. Indeed, centre forwards covered a staggering 39–163% more sprinting than defensive positions at the FIFA Women's World Cup 2023. This highlights the need for practitioners to develop drills that require players to sprint in a position-specific way. The top ten speeds attained across positions at the tournament revealed that some players reached speeds of >32.0 km·h⁻¹, which agrees with data from European competitions. There was a positional divide, with a higher proportion of these top speeds being produced by offensive as opposed to defensive players (70% vs 30%). Similarly, average top speeds attained in games were fastest for attacking midfielders, wide midfielders, wide forwards and centre forwards, while defensive and central midfielders had the lowest top speeds. Slight discrepancies are evident between the top speeds attained during sprint testing and those reported in the present study. Disparities would be expected given the different collection methods (tracking system vs timing gates) and the fact that top speeds attained during testing do not necessarily agree with those attained during match-play. Moreover, caution is needed when interpreting top speeds during match-play due to the limited sampling frequency of the optical tracking system used. Despite such complexities, these findings attest to the importance of speed development across all positions in women's football, especially for attacking players. Practitioners can use match physical performance trends as a blueprint to create position-specific drills. Although complex, these drills should tax the relevant physical attributes while simultaneously mimicking commonly occurring position-specific technical and tactical actions.
Over time this approach may develop specific adaptations in players that enable them to better fulfil their duties across the whole game (volume) and particularly during intensified match-play periods (intensity). To facilitate this process for women's football, it could be advantageous to relate distinct physical dimensions (volume vs intensity) when determining specific positional characteristics. Using such an approach indicated that most centre backs exhibited low volume and intensity characteristics, which agreed with observations from the FIFA Men's World Cup 2022. Wide defenders typically covered a low volume, but intensity varied somewhat. As offensive wing-backs, as opposed to traditional full-backs, are usually higher on the intensity continuum due to more overlapping, support play, dribbling, etc., this could account for some of this variation. Central and defensive midfielders' physical performances mirrored one another, implying that a prerequisite of both was to cover considerable volume during matches, although intensity varied more in central midfielders because box-to-box and more traditional midfielders were included in the same positional category. Wide midfielders, wide forwards and attacking midfielders exhibited both high volume and intensity qualities, but substantial variation existed for these roles. Finally, centre forwards were the position that fluctuated the most for volume and intensity. This may indicate the existence of various centre forward archetypes within the sample (e.g., false 9s, supporting forwards, target forwards, out-and-out strikers, etc.). Although a practitioner could extrapolate the degree of physical preparation needed from such trends, caution is warranted given the extensive variation across most positions. This is especially evident for centre forwards, as players resided fairly equally across quadrants. Given such a data spread, conditioning practices should always be aligned with the age and capabilities of the players in these roles, in addition to the playing style adopted by each team. Elite female players have been found to cover less distance, both in total and at higher intensities, in the second compared to the first half of matches. Previously, trends were based on a reasonably similar duration played in the two halves. However, new directives on added time in the FIFA Women's World Cup 2023 resulted in much longer second halves. If the intense nature of women's football is played over longer durations, then some degree of fatigue might be expected towards the end of games, especially for the most demanding positions. Accordingly, second-half declines in high-intensity and sprint distances on a per minute basis were more marked for attacking midfielders and centre forwards, while centre backs, defensive midfielders, central midfielders and wide forwards showed less pronounced second-half declines or, in some cases, maintained or even increased their second-half intensity. As high-intensity actions during elite female games have been found to heavily deplete muscle glycogen stores in Type II fibres, these positional trends seem logical. Thus, limited energy availability in the second half could have resulted in more pronounced performance declines for those sprinting more overall in tournament games (e.g., centre forwards and attacking midfielders).
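Returning to the volume–intensity approach described above, a minimal sketch of the quadrant classification is given below, assuming median splits on total and sprint distance; the column names are hypothetical and the original analysis may have used different cut-offs.

```python
import pandas as pd

def volume_intensity_quadrant(match_stats: pd.DataFrame) -> pd.Series:
    """Classify players into volume/intensity quadrants via median splits.

    `match_stats` has one row per player with 'total_dist_m' (a volume
    proxy) and 'sprint_dist_m' (an intensity proxy); names are assumptions.
    """
    vol_hi = match_stats["total_dist_m"] >= match_stats["total_dist_m"].median()
    int_hi = match_stats["sprint_dist_m"] >= match_stats["sprint_dist_m"].median()
    labels = {
        (True, True): "high volume / high intensity",
        (True, False): "high volume / low intensity",
        (False, True): "low volume / high intensity",
        (False, False): "low volume / low intensity",
    }
    return pd.Series(
        [labels[(v, i)] for v, i in zip(vol_hi, int_hi)],
        index=match_stats.index, name="quadrant",
    )
```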
Limited second-half energy availability may also partly explain why players in positions completing less sprinting across games were able to maintain or even increase their second-half sprint performances (e.g., centre backs and central midfielders). Wide forwards were the only position to deviate from this trend. Due to the demanding nature of this role, it could be assumed that wingers possess superior physical capabilities compared with other positions. However, this finding could also be related to wide forwards being more active during the second half of games, as the onus to create attacking opportunities increases, particularly via crosses. The evolution of match physical performances has been quantified in various competitions. Specifically in the women's game, the amount of sprinting increased by around 20–50% across various positions between the Canada 2015 and France 2019 FIFA editions. Thus, it is crucial to map such advancements, especially given the growth of the women's game in recent years. However, employing different optical tracking systems across major international tournaments can hinder the ability to survey such performance developments. The present study analysed the largest sample of international female players published to date across three FIFA Women's World Cup editions. Despite identical speed zones across competitions, the optical tracking systems differed and, unfortunately, no between-tournament positional comparisons could be made. Alternatively, a within-tournament positional comparison revealed that, relative to Canada 2015, centre backs and centre forwards demonstrated more pronounced changes in their relative sprint distance than other positions in France 2019 and in Australia and New Zealand 2023. Accordingly, the sprint profiles of these two positional adversaries could be inextricably connected (e.g., centre backs vs centre forwards). As centre forwards increasingly attack space or run with the ball, it follows that centre backs are required to react defensively through various runs to press, cover or track back [11–13, 17]. As the present study found diametrically opposed sprinting profiles in- and out-of-possession for these two roles, this could further support that assertion. Although the author has been deliberately cautious in this analysis by only observing within-competition positional trends for relative measures, the reader should still be cognisant of the numerous caveats associated with comparing trends from different technologies and view the trends with some caution. To provide a balanced perspective, the major shortcomings of the present work should be examined. Firstly, it is worth noting that the data provider assigned the specialised positional roles. These were based on the tactical systems adopted at the start of games, and the same players may have been assigned different roles across various matches. Although the author agreed with most role assignments by the data provider, some disagreement was evident at times. Secondly, the physical data were limited to locomotor metrics and thus omit crucial information on position-specific acceleration and change of direction profiles. Thirdly, the present study used both FIFA's Enhanced Football Intelligence metrics and pertinent individual examples to add much-needed context. However, to provide more nuanced and actionable insights, this analysis would preferably have required the physical data to be directly synchronised with various tactical phases and/or scenarios.
Fourthly, additional contextual factors (e.g., score, match importance, level of opponent and weather conditions) should have been explored to further understand the complexity associated with the demands of the game. Finally, although the optical tracking system used in the present study has been found to be valid, more research is needed to verify its reproducibility. In line with this, the reliability and validity of FIFA's Enhanced Football Intelligence metrics should also be quantified in future studies. This study demonstrated the marked differences in match demands across specialised positions at the FIFA Women's World Cup 2023. These trends could be used to benchmark international female players and to form a basic blueprint for training requirements that replicate the most important facets of performance across specific roles.
PMC11694210
Monitoring player load is a common practice in high-performance sports, as it helps coaching staff understand changes in player performance, prevent possible injuries, and avoid overloads. Padel studies examining external load have mainly focused on three areas: temporal structure, player movements, and game actions, including technical and tactical parameters. The current study addresses temporal and movement aspects. Regarding temporal aspects, professional point duration varies between 10 and 15 seconds, typically longer for women than for men. Match duration largely depends on the number of points played and thus on the level of competitiveness between the pairs. In terms of player movement, studies have primarily analysed distance covered, speed, and types of movement. Findings indicate that players cover between 2000 and 3000 m per match, half of which occurs during active play (when the ball is in play). The variation in these metrics depends on the competitive level of the match, temporal aspects, or the players' level. Furthermore, players spend the majority of match time at speeds below 3 km/h, and approximately 30% of the match at speeds between 3 and 6 km/h. This movement speed during active phases increases with the level of play. The data reported by previous studies confirm that padel is an intermittent sport characterized by repeated high-intensity efforts, including accelerations, decelerations, changes of direction, and stroke execution, dispersed over a variable timeframe. In other team sports, where accelerations and decelerations are considered important performance indicators, a significant number of studies have explored the specific characteristics of these types of movements. However, a gap in research exists regarding the acceleration and deceleration profiles demanded by padel, complicating the calibration of physical preparation, the implementation of injury prevention strategies, and the design of competition strategies. Traditionally, the analysis of external load in racket sports was limited to quantifying the distance covered, presenting various limitations, such as not considering other aspects related to the accelerations occurring in jumps and turns. Nevertheless, the use of electronic performance tracking systems (EPTS) in recent years has allowed for the analysis of other scarcely studied external load variables in padel, such as acceleration and deceleration profiles, explosive distances, and Player Load. This last metric has been reported as a reliable and valid indicator, showing a high correlation with physiological variables such as heart rate, V̇O2max, and subjective scales of perceived effort. For these reasons, player load has been the subject of research in a significant number of recent publications in sports such as football, basketball, and racket sports such as tennis. Although previous studies have analysed performance differences in padel players based on the outcome, no information has been reported related to accelerations and decelerations, explosive distance, or player load. Therefore, the objectives of this study were to describe the main external load variables in professional padel players during matches, recorded using an inertial device, and to establish whether there were differences between winners and losers. The equality between the winning and losing pairs increases as the rounds of the draw progress, with the final being the match with the greatest equality.
It was hypothesised that the winners of the matches would show greater mobility than the losers. A total of 83 male professional padel players, aged between 20 and 44 years, all ranked within the 40 to 216 range of the World Padel Tour professional circuit, were recorded. In terms of laterality, 63 players were right-handed and 20 were left-handed. The analysed matches included 9 finals and 14 semifinals from the Gold Circuit 24 World Padel Tour Next Season 2021–2022. This study was performed in accordance with the ethical standards of the Helsinki Declaration. All players voluntarily participated in the study, signing an informed consent form prior to their involvement. In this study, the WIMU Pro device (RealTrack Systems, Almería, Spain), regarded as a hybrid EPTS due to its integration of GNSS and LPS technology, was utilized. It enables performance monitoring in outdoor locations via GNSS technology and in indoor facilities where GNSS technology is limited. All data in our study were gathered from outdoor facilities. The WIMU has been demonstrated to have high validity in positioning and tracking. In each match, data were collected from each player with their informed and signed consent, strictly adhering to the protocol established by the corresponding federation. All matches were played outdoors on glass courts. Before the general warm-ups, players were fitted with a vest without the inertial device. After completing their specific warm-up exercises, and just 3 minutes before the match start, the device was placed in the vest, activated, and data recording began with the player's first serve of the match. Data recording ended with the last point of the match. During the game, the WIMU Pro device recorded the study's variables of interest, which were also monitored in real time using Svivo Server software to ensure accurate data recording. After the matches, the recorded data were analysed using Spro software and stored along with other contextual variables such as match results. In compliance with privacy standards, the federation was responsible for anonymizing the data before submission to the researchers for analysis. This study assessed movement variables related to external load, categorized by volume and intensity criteria, covering classic volume and intensity variables (e.g., distance covered and maximum speed), novel volume and intensity variables (e.g., explosive distance, high-speed running and Player Load), and the acceleration profile (accelerations and decelerations classified by distance); match outcome (winner vs loser) served as the independent variable. For data analysis, SPSS v 28.0 (IBM, USA) and RStudio (R-Tools Technology) were used. Median and interquartile range served as descriptive statistics. Initially, the Kolmogorov-Smirnov normality test was conducted. To compare load variables based on match outcomes, the Wilcoxon test was employed. Effect size was assessed using Pearson's r, categorized as: very weak (0.0 to < 0.2), weak (0.2 to < 0.4), moderate (0.4 to < 0.6), strong (0.6 to < 0.8), and very strong (0.8 to 1.0). The significance level was set at p < 0.05. Table 1 displays the descriptive values of the various competition load variables analysed. Notably, the median distance covered by players per match was 3430 m, with a per-hour match distance of 2401 m. The median value for players' maximum speed was 15.21 km/h, and the Player Load was 55 arbitrary units (a.u.). The median explosive distance covered by players during matches was 399 m, translating to 253 m per match hour. The median relative HSR (high-speed running) for players was 8 m. Table 2 displays the movement variable values relative to the match outcome.
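Before turning to those results, the sketch below illustrates the outcome comparison described in the statistical analysis above, assuming one paired winner–loser observation per match; the original analysis was run in SPSS, so this is only an illustrative recomputation of the Wilcoxon test and the effect size r (conventions for deriving r vary, and ties are handled differently by exact procedures).

```python
import numpy as np
from scipy import stats

def wilcoxon_outcome_test(winners: np.ndarray, losers: np.ndarray):
    """Wilcoxon signed-rank test plus the effect size r categories in the text.

    A sketch assuming paired winner-loser values (e.g., accelerations per
    hour per match); zero differences and ties are ignored here.
    """
    res = stats.wilcoxon(winners, losers)
    n = len(winners)
    mu = n * (n + 1) / 4                        # mean of the W statistic
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (res.statistic - mu) / sigma            # normal approximation of W
    r = abs(z) / np.sqrt(n)                     # one common convention for r
    cuts = [(0.2, "very weak"), (0.4, "weak"), (0.6, "moderate"),
            (0.8, "strong"), (1.01, "very strong")]
    label = next(lab for cut, lab in cuts if r < cut)
    return res.pvalue, r, label
```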
Winners covered a significantly greater relative distance compared to the losers. Similarly, they performed a significantly higher number of accelerations per hour (Mdn = 415.20) than the losers (Mdn = 382.35; p = 0.04; r = 0.22). For the rest of the mentioned variables, there were no significant differences between winners and losers. Figure 1 shows the number of accelerations and decelerations based on their distance, in relation to match outcomes. Winners performed a significantly higher number of accelerations per hour, with differences maintained in accelerations between 1 and 2 m (Mdn winners = 320.68; Mdn losers = 301.86; p = 0.04; r = 0.27). Notably, about 80% of accelerations occur within this distance range. Regarding decelerations, no significant differences were found between winners and losers, but winners had more decelerations overall and across all distance ranges. Non-significant differences with moderate effect sizes were observed in total decelerations (Mdn winners = 376.47; Mdn losers = 326.81; p = 0.09; r = 0.22) and in those from 1 to 2 m (Mdn winners = 284.27; Mdn losers = 255.76; p = 0.07; r = 0.23). Similar to accelerations, approximately 80% of decelerations occurred within 1 to 2 m. The purpose of this study was to analyse the competition load based on outcomes, using variables recorded with an inertial EPTS device related to both the volume and intensity of load, as well as to establish an acceleration profile in matches of professional players. Regarding the distance travelled, players in our study covered approximately 3440 m, while the distance per hour of play was 2407 m. These results differ significantly from those reported by Castillo-Rodríguez et al. in three non-elite levels of play, which indicated lower values, and also differ greatly from the results indicated by Ramón-Llin et al. in a professional circuit final where players exceeded 6000 m in two hours and 15 minutes. This may be explained by the fact that the total distance of the match depends greatly on the number of points contested, which affects the duration; in our case, it was greater than that reported by Castillo-Rodríguez et al. but less than that reported by Ramón-Llin et al. Additionally, in our case, the matches had an approximate duration of 85 minutes (median), which was higher than that observed in professional players. The maximum movement speed recorded by the players during the matches was 15.20 km/h (median values). These records are lower than those reported in elite male players, who reached speeds of 25 km/h. The explanation for these differences could be as follows: maximum speeds are usually reached when the player sprints to the net, and even more so when a player leaves the court to return a three-metre shot from the opponent. Thus, the differences in results between studies could be due to a greater presence in the sample of Ramón-Llin et al. of left-handed players on the right side. We believe that right-handed players, whether in the right or left position, when smashing will force the opposing player, probably on the left side, to leave the court, so that the player on the right side will not have to leave the court to return a three-metre smash when their opponents are right-handed. However, a left-handed player playing on the right side will, when smashing, probably force the opposing right-side player to leave the court through the three-metre opening and, consequently, increase the maximum speed recorded by the right-side player.
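As a rough illustration of how acceleration counts such as those in Figure 1 can be binned by event distance, the sketch below applies an assumed event definition (consecutive samples above a fixed acceleration threshold) to a speed trace; the manufacturer's proprietary event logic, thresholds and bin edges may all differ from these assumptions.

```python
import numpy as np

def accel_events_by_distance(speed_ms: np.ndarray, hz: int = 10,
                             thr: float = 2.0) -> dict:
    """Count acceleration events, binned by distance covered per event.

    `speed_ms` is a player's speed trace in m/s sampled at `hz` Hz; an
    event is a run of consecutive samples with acceleration above `thr`
    m/s^2 (illustrative definitions only).
    """
    acc = np.diff(speed_ms) * hz                 # instantaneous acceleration
    active = acc > thr
    bins = {"<1 m": 0, "1-2 m": 0, "2-3 m": 0, ">3 m": 0}
    i = 0
    while i < active.size:
        if active[i]:
            j = i
            while j < active.size and active[j]:
                j += 1
            # distance during the event: mean speed x event duration
            dist = speed_ms[i:j + 1].mean() * (j - i) / hz
            if dist < 1:   bins["<1 m"] += 1
            elif dist < 2: bins["1-2 m"] += 1
            elif dist < 3: bins["2-3 m"] += 1
            else:          bins[">3 m"] += 1
            i = j
        else:
            i += 1
    return bins
```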
The variation in play dynamics and player positions discussed above may also be influenced by the evolution of the game in recent years, especially considering that the data we are comparing were collected more than 10 years ago. These changes over time could further explain the discrepancies observed between the current study and earlier research. Moreover, these sprints to the net or exits from the court on a three-metre smash explain why our results recorded maximum accelerations of 4.61 m/s². It is difficult to compare these with other padel studies, as the only studies we found that established an acceleration profile for competition players involved wheelchair padel. However, compared to sports such as football, accelerations in padel are slightly lower than those recorded by Silva et al. in under-21 players, who showed maximum accelerations between 4.75 and 4.89 m/s², probably because the larger pitch dimensions allow a greater distance over which to accelerate, whereas padel players are heavily constrained by the dimensions of the court. Players in our study recorded values of 55.56 a.u. in player load. We have not been able to compare these results with studies in padel, but we have with football, basketball, and tennis. In the case of football, Reche-Soto et al. reported values close to 20 a.u. in third division players, while in the case of tennis, Perri et al. reported between 490 and 548 a.u. In basketball, several studies have analysed player load between quarters and across specific positions using WIMU Pro. The analysis of player load has been more complex because different companies use different algorithms to classify actions, and this limits comparability. Although all companies use accelerometer data from the vertical, horizontal, and medio-lateral planes to calculate player load, the calculations performed to extract the final external workload are quite different, which complicates the comparison between them. In the analysis according to the outcome, winners covered a significantly greater distance and performed a significantly higher number of accelerations per hour than losers, with 80% of accelerations occurring over distances of 1 to 2 m. There is some controversy regarding the distance reported by previous studies, possibly due to the often close competition between winners and losers. In sports like padel and tennis, when losers have covered more distance than winners, it has been attributed to successful strategies of moving the opponent. However, in our study, we believe that the higher number of accelerations and greater distances covered by the winners may be due to anticipation strategies to approach the net for volleys or quickly retreat for smashes or wall returns. This argument aligns with the findings of Courel-Ibáñez et al., who pointed out that winners made more winning shots from the net, such as smashes or cross-court volleys, and more movements from the net backward when opponents play a lob. In addition, these anticipatory strategies may be related to committing fewer unforced errors. Similarly, although we have not found any studies that analyse decelerations in padel for comparison, we think that the higher number of decelerations by winners in our study could be attributed to braking in movements towards the net after the serve, following a tray shot, or after a wall return. This study, while providing novel and valuable data, is not without limitations that deserve attention.
The main one is the measurement of competitive intensity, based solely on playing time, without considering the score or contested points, which may not fully reflect the complexity of performance. In addition, the generalization of the results is limited by the focus of the study sample on professional players, suggesting the need to address more diverse samples in future research. Furthermore, uncontrolled factors such as environmental conditions and the psychological state of competitors could influence performance. Hence, it would be advisable to include psychophysiological measurements and adopt qualitative approaches in future studies. There are some important implications of this study. The insights gained may help coaches, researchers, and professionals enhance training, performance, and strategy development in padel, particularly at the professional level. The study implies that professional matches call for detailed analysis of competition load, volume, and intensity, including the temporal structure of matches and the design of acceleration profiles. These insights will be very helpful to coaches who are preparing training programmes that simulate match activities. Coaches can, therefore, fine-tune training to improve player performance with respect to fine details of the game, such as distance covered, maximum speeds reached, and acceleration profiles, among others. The analysis of internal and external load is key to adjusting and conducting more specific training sessions. In addition to the distance travelled and playing time already reported by previous padel studies, maximum speeds, accelerations, and player load are parameters recently studied in other sports to determine player performance. In the review conducted, we found no study that reported these data in professional padel competition. This study provided precise results on distance travelled, maximum speed, and acceleration and deceleration profiles, which will allow coaches to determine the kinematic performance of their players in matches and training, in addition to designing specific tasks that meet the demands of competition. The comparative analysis indicated that winners exhibited greater mobility than losers.
PMC11694212
Lifestyle physical activity (PA) is associated with a myriad of physiological adaptations that benefit human health. PA is one of the most effective strategies to prevent and combat cardiometabolic alterations and is related to a 27% decrease in mortality risk. However, the underlying mechanisms that explain how PA enhances cardiometabolic health remain to be elucidated. Gut microbiota refers to the microbial communities colonising the gastrointestinal tract, which are indispensable in regulating host nutrition, metabolic function and the immunological response. Dysbiosis arises from an imbalance within microbial communities, influenced by various factors, including dietary patterns, sedentary or unhealthy lifestyles [6–8], and medication use. This imbalance, in turn, correlates with conditions such as obesity and cardiometabolic diseases. Recent evidence suggests that PA is one of the most significant lifestyle factors influencing gut microbiota diversity and composition in humans. For instance, case-control studies showed that faecal microbiota from athletes was much more diverse and had a higher proportion of several bacterial taxa than that of healthy sedentary individuals. Similarly, in football athletes, it was found that increased levels of PA promoted greater diversity of the faecal microbiota via the production of short-chain fatty acids by gut bacteria, enhancing overall health. Another cross-sectional study observed that premenopausal women meeting the World Health Organization PA recommendations had a greater relative abundance of the Akkermansia and Faecalibacterium genera than sedentary women. Evidence indicates that the Akkermansia and Faecalibacterium genera are associated with reduced inflammation and, therefore, may play a role in preventing the development of cardiometabolic diseases. Along these lines, a recent study showed that individuals with higher levels of PA exhibited a different Mediterranean diet pattern and faecal microbiota composition than individuals with obesity who reported lower levels of PA. Most studies investigating the relationship between PA and faecal microbiota composition have used self-reported questionnaires to determine PA levels. However, these instruments have the disadvantage of misclassifying PA levels and thus compromise the ability to detect valid associations between PA levels and faecal microbiota composition. Based on the aforementioned studies using self-reported data, we hypothesise that increased levels of PA, at different intensities, are associated with elevated faecal microbiota diversity and a greater prevalence of beneficial bacteria. Thus, through the utilization of objective measures of PA in the present study, we aimed to explore the association between the time spent in objectively measured PA at different intensities and faecal microbiota diversity and composition in a cohort of young individuals. A total of 92 (65 women) young healthy adults, aged 18–25 years, were included in the present cross-sectional study. This study was carried out within the framework of the ACTIBATE study, an exercise-based randomized controlled trial. All assessments were performed in Granada (Spain) between October and November 2016. Inclusion criteria were: being engaged in less than 20 min of moderate-vigorous PA on less than 3 days/week, having a stable body weight over the last 3 months (< 3 kg change), not smoking, not taking any medication (including antibiotics in the last 3 months), not presenting any acute or chronic illness and not being pregnant.
The study protocol and experimental design were applied in accordance with the latest revision of the ethical guidelines of the Declaration of Helsinki. The study was approved by the Ethics Committee on Human Research of the University of Granada (no. 924) and the Servicio Andaluz de Salud (Centro de Granada, CEI-Granada); all participants signed informed consent. PA variables were objectively measured with an accelerometer worn on the non-dominant wrist (ActiGraph GT3X+, Pensacola, FL) for 7 consecutive days (24 h/day). Detailed information about how to wear the accelerometer was given to participants, including the instruction to remove it during daily water-based activities, such as washing dishes or showering. A sampling frequency of 100 Hz was selected to store the raw accelerations of the accelerometers. We exported and converted the raw accelerations to the “.csv” format using ActiLife v.6.13.3 software (ActiGraph, Pensacola, FL, US). Afterwards, the “ggir” package in R software was used to process the raw “.csv” files. This processing consisted of: (i) auto-calibration of the raw accelerations according to the local gravitational acceleration, (ii) calculation of the Euclidean Norm of the raw accelerations Minus One G, with negative values rounded to 0 (ENMO), as described elsewhere, (iii) detection of non-wear time based on the raw acceleration of the three axes, (iv) detection of sustained abnormally high accelerations incompatible with human movement (i.e., related to device malfunctioning), (v) imputation of non-wear time and abnormally high accelerations, (vi) identification of waking and sleeping time based on an automatized algorithm guided by the participants' daily reports, and (vii) estimation of sedentary time and the time spent in light PA, moderate PA, vigorous PA, and moderate to vigorous PA using age-specific ENMO cut-points for a wrist-worn accelerometer. We measured the mean ENMO (mg) during waking time, which is considered an overall indicator of PA (overall PA). For the analyses we only included participants who wore the accelerometers for ≥ 16 h/day on at least 4 days (including at least 1 weekend day). The participants collected approximately 50 g of a faecal sample in sterile plastic containers, which were transported in portable coolers to the research centre. Faecal samples were stored at -80°C until DNA extraction. The QIAamp DNA Stool Mini Kit (QIAGEN, Barcelona, Spain) was used for DNA extraction, following the manufacturer's instructions. The samples were incubated at 95°C to ensure lysis of both gram-positive and gram-negative bacteria. Then, we quantified DNA with a NanoDrop ND1000 spectrophotometer (Thermo Fisher Scientific, DE, USA). Finally, DNA purity was determined by measuring the ratios of absorbance at A260/280 nm and A260/230 nm. The extracted DNA was amplified by polymerase chain reaction (PCR) with primer pairs – forward primer (5'CCTACGGGNGGCWGCAG3') and reverse primer (5'GACTACHVGGGTATCTAATCC3') – targeting the V3 and V4 hypervariable regions of the bacterial 16S rRNA gene.
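Returning briefly to the accelerometer processing described above, the following sketch shows the ENMO computation of step (ii) and the derivation of overall PA as mean ENMO in milli-g; the array layout is an assumption for illustration, and the "ggir" R package used in the study implements additional safeguards not shown here.

```python
import numpy as np

def mean_enmo_mg(acc_g: np.ndarray) -> float:
    """Mean ENMO in milli-g from calibrated raw accelerations.

    `acc_g` is an (n_samples, 3) array of tri-axial accelerations in g
    units. ENMO is the Euclidean norm of the three axes minus one g,
    with negative values rounded to zero, per the definition above.
    """
    enmo = np.linalg.norm(acc_g, axis=1) - 1.0   # subtract local gravity (1 g)
    enmo = np.maximum(enmo, 0.0)                 # round negative values to 0
    return float(enmo.mean() * 1000.0)           # convert g to mg

# Usage on waking-time samples only, e.g.:
# overall_pa = mean_enmo_mg(acc[waking_mask])
```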
All PCRs were executed in 25 µL reaction volumes incorporating 12.5 µL of 2X KAPA HiFi HotStart ReadyMix (KAPA Biosystems, Woburn, MA, USA), 5 µL of each forward and reverse primer (1 µM) and 2.5 µL of extracted DNA (10 ng) under the following cycling conditions: (a) denaturation at 95°C for 3 min, (b) cycles of denaturation at 95°C for 30 s, (c) annealing at 55°C for 30 s, (d) elongation at 72°C for 30 s, and (e) a final extension at 72°C for 5 min. To purify the PCR products from free primers and primer dimers, we used AMPure XP beads (Beckman Coulter, Indianapolis, IN, USA). Next, the index PCR attached dual indices and Illumina sequencing adapters using the Nextera XT Index Kit (Illumina, San Diego, CA, USA) on a thermal cycler, using the conditions previously mentioned. After that, AMPure XP beads (Beckman Coulter, Indianapolis, IN, USA) were used for purification of the pooled PCR products. The resultant amplicons were sequenced on a MiSeq using a paired-end (2 × 300 nt) Illumina MiSeq sequencing system (Illumina, San Diego, CA, USA). We analysed the FASTQ files with the “dada2” package in R software, obtaining 11,659,014 paired ends with an average of 126,728 ± 33,395 reads per sample. All samples surpassed the cut-off of 10,000 reads. Samples were resampled to a minimum sequencing depth of 30,982 reads using the “phyloseq” package in R software, returning 11,158 phylotypes. The “Classifier” function from the Ribosomal Database Project (RDP) was used to assign the taxonomic affiliation of phylotypes, based on naïve Bayesian classification with a pseudo-bootstrap threshold of 80%. We obtained a total of 209 genera belonging to 16 different phyla. The “seqmatch” function from RDP was performed to define the discriminatory power of each sequence read with the purpose of annotating species assignments; we executed the annotation according to previously published criteria. Microbial communities were analysed at different taxonomic levels (phylum to genus), calculating relative abundances expressed as percentages. We performed the analyses with those bacteria whose average relative abundance exceeded 0.5%. Next, alpha and beta diversities were estimated based on the identified microbial communities. Alpha diversity takes into account the number of different phylotypes and their relative abundances within a single sample, whereas beta diversity reflects differences in microbial community composition between individuals, that is, the degree to which samples differ from one another. Alpha diversity was assessed based on the Chao richness, Shannon, inverse Simpson and Camargo evenness indexes with the “microbiome” package in R software. Chao richness estimates diversity according to the number of different phylotypes in the community; that is, higher Chao richness indicates higher diversity in the community. Shannon diversity increases as both the richness and the evenness of the community increase; the inverse Simpson index is calculated from the classical Simpson diversity and indicates richness in a community with uniform evenness; and Camargo evenness determines the equitability of phylotype frequencies in a community. Beta diversity was assessed using permutational multivariate analysis of variance (PERMANOVA) based on Bray-Curtis dissimilarity.
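For reference, the sketch below implements the standard formulas behind these alpha diversity indexes for a single resampled sample; the R "microbiome" package used in the study may differ in implementation details, so this is illustrative only.

```python
import numpy as np

def alpha_diversity(counts: np.ndarray) -> dict:
    """Chao1, Shannon, inverse Simpson and Camargo evenness for one sample.

    `counts` holds phylotype read counts for a single resampled sample.
    """
    f1 = np.sum(counts == 1)                     # singletons
    f2 = np.sum(counts == 2)                     # doubletons
    counts = counts[counts > 0]
    p = counts / counts.sum()                    # relative abundances
    s = p.size                                   # observed richness
    chao1 = s + f1 * (f1 - 1) / (2 * (f2 + 1))   # bias-corrected Chao1
    shannon = -np.sum(p * np.log(p))             # rises with richness + evenness
    inv_simpson = 1.0 / np.sum(p ** 2)           # dominance-weighted richness
    # Camargo evenness: 1 minus the summed pairwise abundance differences
    camargo = 1.0 - np.abs(p[:, None] - p[None, :]).sum() / (2 * s)
    return {"chao1": chao1, "shannon": shannon,
            "inv_simpson": inv_simpson, "camargo_evenness": camargo}
```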
Body mass index (BMI) was calculated as weight (kg)/height (m²). Body composition was evaluated by dual energy X-ray absorptiometry (DEXA, HOLOGIC, Discovery Wi, Marlborough, MA). The lean mass index (LMI) and fat mass index (FMI) were calculated as lean body mass and fat body mass, respectively, in kg, divided by height in m². The fat mass percentage was determined as the body fat mass divided by the total body mass and multiplied by 100. Fasting blood samples were collected for assessment of the cardiometabolic profile. Serum glucose, total cholesterol, high-density lipoprotein cholesterol (HDL-C) and triglycerides were measured following standard methods using an AU5832 automated analyser (Beckman Coulter Inc., Brea, CA, USA). Low-density lipoprotein cholesterol (LDL-C) was estimated as [total cholesterol – HDL-C – (triglycerides/5)], in mg/dL. Serum insulin was measured using the Access Ultrasensitive Insulin chemiluminescent immunoassay kit (Beckman Coulter Inc., Brea, CA, USA). The homeostatic model assessment for insulin resistance (HOMA-IR) index was calculated as insulin (µU/mL) × glucose (mmol/L) / 22.5. Dietary intake was registered using three non-consecutive 24-hour recalls: 2 weekdays and a weekend day. These 24-hour recalls were performed in the laboratory via face-to-face interviews with dietitians. To improve the accuracy of food quantification, we used coloured photographs of different portion sizes of food during the interviews. All 24-hour recalls were analysed for total energy (kcal) and fat, protein, carbohydrate, and fibre intake (g) with EvalFINUT software, which is based on the United States Department of Agriculture (USDA) and “Base de Datos Española de Composición de Alimentos” (BEDCA) databases. This is a secondary study derived from the ACTIBATE trial; therefore, no sample size calculation was performed for this study. Data normality was explored using the D'Agostino & Pearson omnibus test, visual histograms and Q-Q plots (data not shown). None of the variables followed a normal distribution; therefore, data were presented as median and interquartile range, and non-parametric tests were used for all analyses. Moreover, no sex interaction was detected (all P > 0.05), so both sexes were pooled together. Spearman correlations were performed to investigate the correlation between the PA variables and faecal microbiota diversity, using the “psych” and “corrplot” packages in R software. Since faecal microbiota diversity can be modified by several factors, including sex, BMI and dietary intake, we repeated the aforementioned correlations adjusted for sex, BMI and dietary intake in separate models (data not shown). Moreover, we repeated this analysis adjusting for accelerometer non-wear time and glucose levels in separate models (data not shown) as possible confounders of the PA variables. Overall PA and the time spent in vigorous PA were divided into tertiles according to the number of participants with SPSS (SPSS v. 22.0, IBM SPSS Statistics, IBM Corp., Armonk, NY), because they were the only variables with a significant correlation with faecal microbiota diversity. The tertile values for overall PA were low (13.45–29.44 mg), intermediate (30.02–35.41 mg), and high (35.49–67.10 mg), whereas for the time spent in vigorous PA the values were low (0.02–0.83 min/day), intermediate (0.87–2.67 min/day), and high (2.75–14.40 min/day).
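A minimal sketch of this equal-sized tertile split is shown below; the variable names are illustrative, and pd.qcut reproduces the low/intermediate/high grouping by participant count described above.

```python
import pandas as pd

def add_tertiles(df: pd.DataFrame, col: str) -> pd.Series:
    """Assign equal-sized low/intermediate/high groups for one PA variable."""
    return pd.qcut(df[col], q=3, labels=["low", "intermediate", "high"])

# Usage (hypothetical column names):
# df["vpa_tertile"] = add_tertiles(df, "vigorous_pa_min_day")
# df.groupby("vpa_tertile")["shannon"].median()   # diversity across groups
```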
Tertiles of overall PA and the time spent in vigorous PA were compared using one-way PERMANOVA with 9,999 permutations for significance testing in the Paleontological Statistics (Past3) software for the calculation of beta diversity. Kruskal-Wallis tests were performed to investigate whether there were significant differences in body composition, dietary intake and cardiometabolic profile, as well as faecal microbiota alpha diversity and composition outcomes, across tertiles of overall PA and the time spent in vigorous PA. Analysis of covariance was used to compare the relative abundance of genera across tertiles of the time spent in vigorous PA adjusted for protein intake, with the data transformed by Blom's formula. All P values were corrected by the Benjamini and Hochberg multiple testing procedure to control the false discovery rate (FDR, shown as q-values). The level of significance was set at P < 0.05 and q < 0.05. R software (V.3.6.0; http://www.r-project.org ) and GraphPad Prism version 8.0.0 for Windows (GraphPad Software, San Diego, California, USA, http://www.graphpad.com ) were used for the statistical analyses and graphical plots. A total of 92 participants had data from the analysis of faecal microbiota diversity and composition, but only 88 participants (24 men, age 22.0 ± 2.0 years; and 64 women, age 21.6 ± 2.0 years) had valid PA measurements (i.e., they wore the accelerometer for ≥ 16 h/day on at least 4 days) and were finally included in the analyses. Table 1 shows the descriptive characteristics of the included participants (age 21.7 (19.8–23.9) years and BMI 23.6 (21.6–28.1) kg/m²), of whom 72.7% were women. We computed tertiles of overall PA and the time spent in vigorous PA and observed that, generally, body composition, dietary intake and cardiometabolic profile were similar across them ( Table S1 ), with the exception of protein intake and glucose levels (P = 0.018 and P = 0.003, respectively; Table S1 ). Overall PA and the time spent in vigorous PA were positively correlated with alpha diversity indexes, more specifically with the Shannon and inverse Simpson diversity indexes. Only the time spent in vigorous PA was positively correlated with the Chao richness index. However, we did not observe any significant correlation between other PA variables and alpha diversity indexes. The results were similar when sex, BMI, energy and macronutrient intake, as well as accelerometer non-wear time and glucose levels, were included as confounders in separate models (data not shown). Moreover, we found that individuals with high time spent in vigorous PA had higher Chao richness, Shannon and inverse Simpson diversity indexes than individuals with low and intermediate time spent in vigorous PA (all P ≤ 0.038; data not shown). However, there were no differences across tertiles of overall PA and the time spent in vigorous PA in beta diversity at any taxonomic level (all P ≥ 0.060; Table 2 ). We analysed the differences across tertiles of overall PA and the time spent in vigorous PA on faecal microbiota composition at all taxonomic levels. There were no significant differences across tertiles of overall PA in the relative abundance of bacteria at the different taxonomic levels. Similarly, we observed no differences across tertiles of time spent in vigorous PA in the relative abundance of bacteria at the phylum taxonomic level.
However, we observed that individuals with low time spent in vigorous PA had a higher relative abundance of the Gammaproteobacteria class ( Proteobacteria phylum) than individuals with intermediate time spent in vigorous PA. Moreover, individuals with high time spent in vigorous PA had a higher relative abundance of an unclassified Firmicutes class ( Firmicutes phylum) and the Porphyromonadaceae family ( Bacteroidetes phylum) than individuals with low time spent in vigorous PA. Finally, we found that individuals with intermediate time spent in vigorous PA had a higher relative abundance of the Alistipes genus ( Bacteroidetes phylum) than individuals with low time spent in vigorous PA. Interestingly, these same participants presented differences in protein intake (q = 0.011; Table S1 ). Thus, we repeated the analyses after adjusting for protein intake, and the differences in the relative abundance of the Alistipes genus disappeared (P = 0.080; Table S2 ). In the present study, overall PA and the time spent in vigorous PA were found to be positively correlated with alpha diversity indexes in young adults. Moreover, there were differences across the tertiles of time spent in vigorous PA in the relative abundance of the Gammaproteobacteria class ( Proteobacteria phylum), the Porphyromonadaceae family and the Alistipes genus (both Bacteroidetes phylum). These findings indicate that PA may play a role in faecal microbiota diversity and composition in young adults, although further studies are needed to confirm them. Our results showing a positive correlation between PA and alpha diversity agree with recent findings. However, the mechanisms by which PA may promote higher faecal microbiota diversity are unknown. A possible explanation could be changes in the gastrointestinal tract due to intrinsic adaptations to performing PA. Interestingly, from an ecological perspective, microbial diversity may be a key factor in allowing an ecosystem to continue operating properly. In fact, greater species diversity has been associated with a healthy host phenotype. This is due to the potential effects that the bacteria can exert via metabolites, such as short-chain fatty acids and neurotransmitters, acting locally and in extra-intestinal tissues of the host. Our data showed that the participants with low time spent in vigorous PA had a higher relative abundance of the Gammaproteobacteria class than individuals with higher time spent in vigorous PA. The relative abundance of the Gammaproteobacteria class ( Proteobacteria phylum) has been reported to be increased in obese mice and individuals with obesity, and in disease states such as metabolic diseases and intestinal inflammation. In fact, many common human pathogens, known as sulphur producers, are found in the Gammaproteobacteria class, for example, the Escherichia, Shigella , and Yersinia genera. In agreement with our findings, sedentary women and participants with low cardiorespiratory fitness had a higher relative abundance of the Gammaproteobacteria class than active women and participants with high cardiorespiratory fitness, respectively. Similarly, a very recent study performed in > 8,000 individuals using accelerometers observed that PA levels were differentially associated with faecal microbiota composition, suggesting that the higher the PA level, the higher the diversity. Moreover, several studies have shown that exercise seems to decrease the relative abundance of the Gammaproteobacteria class.
Thus, our results suggest that performing less than 1 min/day of vigorous PA could be related to having a higher relative abundance of the Gammaproteobacteria class, bacteria considered detrimental to health. In contrast, we observed that individuals with high and intermediate time spent in vigorous PA had a higher relative abundance of the Porphyromonadaceae family and the Alistipes genus (both Bacteroidetes phylum) than individuals with lower time spent in vigorous PA. Accordingly, in a cross-sectional study of professional martial arts athletes, the relative abundance of the Porphyromonadaceae family was higher in the higher-level athletes in comparison with the lower-level athletes. Moreover, regular swimming training and voluntary wheel running, both in mice, were able to increase the relative abundance of the Porphyromonadaceae family. In fact, it has recently been found that lean individuals had a significantly higher relative abundance of the Porphyromonadaceae and Rikenellaceae families than individuals with obesity. Of note is the fact that the Alistipes genus belongs to the Rikenellaceae family. In resistance-trained mice, the relative abundance of the Alistipes genus was positively correlated with resistance performance. In humans, the relative abundance of the Alistipes genus is increased after consuming an animal-based, protein-rich diet for 5 days. Certain species belonging to the Alistipes genus are involved in amino acid metabolism; specifically, they can hydrolyse tryptophan to indole. Since tryptophan is an essential amino acid that cannot be produced by animal cells, humans rely on dietary intake, mainly proteins, to incorporate it into the organism. In our study, the individuals with intermediate time spent in vigorous PA had a higher protein intake than individuals with low time spent in vigorous PA. In fact, when protein intake was included as a confounder, the differences in the relative abundance of the Alistipes genus between these individuals disappeared. Considering the relationship between the Alistipes genus and protein metabolism, and the results observed in the present study, it seems possible that these differences were explained by protein intake. Therefore, our data suggest that spending time in vigorous PA, in the range of 3–14 min/day, could be related to having a higher relative abundance of Porphyromonadaceae family bacteria, whereas protein intake seems to modulate the relative abundance of the Alistipes genus in individuals with intermediate time spent in vigorous PA. Even so, the possible effect of time spent in vigorous PA on the relative abundance of the Gammaproteobacteria class, Porphyromonadaceae family and Alistipes genus deserves further analysis. A limitation to consider in the current study is that it followed a cross-sectional design, which prevents a causal interpretation of our results. Well-designed randomized controlled trials should be carried out to elucidate the role of PA in faecal microbiota diversity and composition. In addition, we do not know whether our findings apply to older people or individuals presenting any metabolic disease. As for the strengths of this study, we sequenced the microbiota composition using the latest technology (Illumina platform) and annotations were made with RDP to the genus taxon level.
Moreover, PA was objectively measured by accelerometry over 7 consecutive days (24 h/day), and we used a cut-point-free approach to assess overall PA, since PA intensities estimated from cut-points might be biased by poor calibration studies. Our data showed that overall PA and time spent in vigorous PA were positively correlated with faecal microbiota diversity in young adults. Moreover, the individuals with low time spent in vigorous PA presented a higher relative abundance of the Gammaproteobacteria class, whereas the individuals with high time spent in vigorous PA had a higher relative abundance of the Porphyromonadaceae family. Altogether, these findings suggest that PA, especially of vigorous intensity, is related to faecal microbiota diversity and to the Gammaproteobacteria class and Porphyromonadaceae family in young adults. Further studies are needed to confirm this relationship.
PMC11694213
Handball is considered an intermittent team sport characterized by periods of high-intensity actions interspersed with lower-intensity activities. During a match, players must perform different general movements (e.g., walking, running, sprinting, jumping and changing direction) and handball-specific actions (e.g., passing, catching, throwing and blocking) with frequent and strenuous body contact against opponents. Therefore, analysing the physical performance of the players during official matches provides many advantages for coaches and practitioners to: (1) design short- and long-term training programmes to maximize performance, reduce injury risk and minimize the risk of overtraining, (2) adapt and periodize weekly training loads to manage stress and recovery, (3) design physical training interventions during the microcycle considering the player's role in the match (starter vs. non-starter), and (4) develop and implement individualized physical training programmes for each playing position. To accomplish this purpose, technical staff and sports scientists can use new monitoring tools with a good level of validity and reliability, such as a local positioning system (LPS) with ultra-wideband (UWB) technology or inertial measurement units (IMUs) (e.g., accelerometer, magnetometer and gyroscope), to measure and analyse different external load variables in real time. The physical demands in handball have been quantified across male and female competitions such as the ASOBAL Spanish League, the LIQUI-MOLY Handball-Bundesliga and the Spanish Women's 2nd Division, in addition to the European Champions League Final Four and men's and women's international tournaments. Summarizing this information, a recent systematic review indicated that elite handball players usually cover between 2000 and 4500 m per match, with high-intensity running and sprinting accounting for 5% to 15% of this distance [16–19]. Nevertheless, the external loads experienced by handball players are highly variable due to the influence of gender, competition level [4, 11–15] and contextual factors such as playing position [8–15] or match halves [16–20]. In recent years, the traditional approach to analysing the physical demands in handball has been based on quantifying the average values of different external load variables (e.g., total distance covered, high-speed running, accelerations and decelerations) over the complete match [8–11]. However, this approach does not provide accurate information about the physical demands of smaller phases or periods of the matches. As a solution to this problem, different researchers have analysed the differences between the first and the second half of matches [16–19]. In this regard, as in other team sports such as basketball, previous research has revealed that physical performance decreases throughout a handball match [12, 14–19]. More specifically, some studies have demonstrated that the time spent in high-intensity running and at very high metabolic power decreased from the first to the second half [16–20]. Similarly, initial values of PlayerLoad/min declined throughout the halves. Moreover, the total distance covered in the first ten minutes was slightly greater than that covered in the last ten minutes of the match.
Thus, according to previous research [14–20], this decline in physical performance throughout the match suggests that handball players probably experience fatigue during the match, as has been previously described in other team sports such as basketball and soccer [23–25]. However, to the best of our knowledge, there is limited research providing information about the fluctuations in physical performance during small phases or periods of women's handball matches. Therefore, the aims of the present study were: (1) to analyse physical performance fluctuations throughout match play in women's handball, and (2) to investigate whether physical performance fluctuations are affected by contextual factors (i.e., level of the opponent and playing position). We conducted a retrospective observational study to analyse physical performance fluctuations registered during 5 min fixed phases in official women's handball matches. The LPS data collected correspond to the average values of 5 min fixed phases registered during 13 official home matches from the Spanish 2nd Division in the 2021–2022 season. Twenty-two female handball players from the same team participated in this study. Table 1 shows the anthropometric characteristics of the players for each playing position. During the season, players typically completed four or five handball training sessions, two or three strength training sessions and one match per week. All players were informed of the study requirements and provided written informed consent prior to the start of the study. Additionally, all the ethical procedures used in this study were in accordance with the Declaration of Helsinki and were approved by the Ethics Committee of the European University of Madrid (CIPI/18/195). The LPS system (WIMU PRO, RealTrack Systems SL, Almería, Spain) was installed on the official handball court where the team played their home matches, according to the user manual and previous studies. All players had already been familiarized with the data-collection procedures during previous training sessions and friendly matches. Each player was fitted with a device on her back with an adjustable vest. The manufacturer's specific software (SPRO, version 958, RealTrack Systems SL, Almería, Spain) was used to calculate the perimeter of the court to determine the effective playing time. Consistent with previous studies, playing time was recorded only when the players were inside the court, omitting periods when the match was interrupted. Additionally, only players completing a minimum of 60% of each fixed phase (≥ 3 min of each 5 min time window) were included in the analysis. Due to stoppages during match play, the duration of the match halves could be longer than 30 minutes. After the match, the LPS files were exported to a USB memory device and analysed using the manufacturer's specific software. Finally, raw data were exported post-match in Excel format and imported into the statistical software for analysis. At the end of this process, a total of 1166 individual LPS registers from 13 official home matches were collected. Two contextual factors were considered: (1) level of the opponent; considering the final ranking of each team in the competition, we established three tiers: 'high-level teams' (HLT) (1st to 5th place), 'middle-level teams' (MLT) (6th to 10th place), and 'low-level teams' (LLT) (11th to 14th place).
These categories are similar to those reported previously [9, 26–28]; (2) playing positions: backs, pivots and wings. The number of matches and individual LPS registers for each contextual factor and playing position are shown in Table 2 . Three external load variables were collected: (1) total distance (TD), the total distance covered by the player, expressed in metres; (2) high-speed running (HSR), the distance covered above 18.1 km/h, expressed in metres; (3) PlayerLoad (PL), expressed in arbitrary units (a.u.) and calculated as the square root of the sum of the squared instantaneous rates of change in acceleration in each of the three planes, divided by 100. Data in the text and figures are presented as means and standard deviations (M ± SD). Before carrying out the analyses, the Kolmogorov-Smirnov test was performed to confirm the normality of the data distribution. Variance and sphericity assumptions were checked with the Levene and Mauchly tests. In relation to the first aim of the study, a repeated-measures analysis of variance (ANOVA) with the Tukey post hoc test was used to examine the physical performance fluctuations (i.e., TD, HSR and PL) between different 5 min fixed phases of the match. Considering the second aim of the study, a two-way ANOVA with the Tukey post hoc test was used to evaluate the interaction between match phases and contextual variables (i.e., level of the opponent and playing positions) on the external load experienced by handball players. Furthermore, partial eta-squared (ηp²) was calculated for group effects with the following interpretation: > 0.01 small, > 0.06 moderate, and > 0.14 large. Cohen's effect sizes (ES) were calculated and interpreted using Hopkins' categorization criteria: d > 0.2 as small, d > 0.6 as moderate, d > 1.2 as large, and d > 2.0 as very large. The level of significance was set at p < 0.05 and the statistical software used was SPSS for Windows (Version 26, IBM Corp., Armonk, NY, USA). Physical performance fluctuated according to the match phase in all external load variables: TD (F = 10.30, p < 0.001, ηp² = 0.106), HSR (F = 9.13, p < 0.001, ηp² = 0.049) and PL (F = 11.22, p < 0.001, ηp² = 0.094). More specifically, the match phase with the greatest mean physical performance was usually the first 5 min phase of the match (TD: 381.7 ± 95.9 m; HSR: 44.5 ± 37.8 m; PL: 6.6 ± 1.8 a.u.). However, the lowest mean physical performance was observed in the last 5 min phase of the first half (TD: 258.1 ± 85.6 m; PL: 4.1 ± 1.4 a.u.), except for HSR, whose lowest value was registered in the last 5 min phase of the second half (16.3 ± 8.1 m). Additionally, players registered moderately lower values of TD (258.1 ± 85.6 m vs. 342.6 ± 104.2 m, p < 0.001, ES = 0.83) and PL (4.1 ± 1.4 a.u. vs. 5.8 ± 1.8 a.u., p < 0.001, ES = 0.98) during the last 5 min phase of the first half compared to the first 5 min phase of the second half. The level of the opponent had a significant effect on all external load variables: TD (F = 8.07, p < 0.001, ηp² = 0.012), HSR (F = 5.61, p = 0.004, ηp² = 0.009) and PL (F = 3.87, p = 0.021, ηp² = 0.006). More specifically, matches involving LLT registered moderately more TD in the first (410.2 ± 68.1 m) and the second (365.8 ± 73.9 m) 5 min phases of the match compared to MLT (366.1 ± 96.5 m, p < 0.05, ES = 0.49; 338.5 ± 78.0 m, p < 0.05, ES = 0.51, respectively). Additionally, matches involving LLT registered moderately more TD (296.5 ± 98.0 m) and PL (4.7 ± 1.6 a.u.)
in the last 5 min phase of the first half compared to HLT (247.0 ± 89.0 m, p = 0.009, ES = 0.74; 3.9 ± 1.6 a.u., p = 0.032, ES = 0.71, respectively) and MLT (237.9 ± 62.6 m, p = 0.008, ES = 0.82; 3.8 ± 0.9 a.u., p = 0.01, ES = 0.80, respectively). In contrast, matches involving HLT registered moderately more TD (326.6 ± 103.0 m) and PL (5.5 ± 1.9 a.u.) in the last 5 min phase of the second half compared to MLT (262.7 ± 75.4 m, p = 0.066, ES = 0.62; 4.0 ± 1.3 a.u., p = 0.011, ES = 0.81, respectively). However, no significant interactions between level of the opponent and match phases were observed for any variable. Playing position had a significant effect on all external load variables: TD (F = 153.29, p < 0.001, ηp² = 0.193), HSR (F = 539.53, p < 0.001, ηp² = 0.452) and PL (F = 122.44, p < 0.001, ηp² = 0.159). Specifically, pivots showed the lowest physical performance in all 5 min phases of the match, whereas wings showed the highest physical performance in all variables. Additionally, pivots presented high variability between the most and the least intense 5 min phase, while wings showed the lowest (Table 3). However, no significant interactions between playing positions and match phases were observed for any variable. The aim of the study was to analyse physical performance fluctuations throughout match play in women's handball and to investigate whether these fluctuations are affected by contextual factors (i.e., level of the opponent and playing positions). The main findings associated with physical performance fluctuations indicated that: (1) the highest values of TD, HSR and PL were registered during the first 5 min phase of the match, regardless of the level of the opponent or playing position; (2) the lowest values of TD and PL were registered in the last 5 min phase of the first half, whereas for HSR they were registered in the last 5 min phase of the second half. Furthermore, the results connected to contextual factors were as follows: (1) there were significant differences with small effect sizes between levels of the opponent (HLT vs. MLT vs. LLT) for the first and the second 5 min phases of the first half in TD, and for the last 5 min phase of each half in TD and PL; (2) there were significant differences with large effect sizes between playing positions (back vs. pivot vs. wing) for all 5 min phases in TD, HSR and PL. In relation to physical performance fluctuations throughout the match, TD, HSR and PL were moderately higher during the first 5 min phase compared to the subsequent 5 min phases of the match, except for TD during the first 5 min phase of the second half, where the values were similar to the first 5 min phase of the match. This indicates that, although the players started by testing the ball and shaking hands for approximately 30 seconds, the first 5 min phase of the first half was the most physically intense phase of the match. These results are consistent with previous findings from time-motion analysis and LPS analysis in handball . There are many potential causes of these fluctuations in physical performance across the entire match. Firstly, at the beginning of the match, the teams try to dominate the opponents to obtain an advantage in the match score . Secondly, the goal difference, especially if the match is already decided early in the second half, could affect the players' pacing strategies and the coach's decisions (e.g., experimenting with new tactics or systems, or introducing weaker or less-fit players) .
Thirdly, the most likely reason, and one widely supported by scientific evidence, is that handball players experience fatigue during the match [14–20] and, consequently, their physical performance is reduced . Our results suggest that handball players must be physically well prepared to tolerate an intense pace in the first stages of the match and to resist fatigue as the match progresses. To accomplish this purpose, physical trainers and coaches should incorporate different interventions: (1) implement a suitable warm-up protocol before the match ; (2) employ substitutions as an effective tool to distribute playing time among a larger number of players ; (3) improve the aerobic-anaerobic capacity and repeated-sprint ability of the players . In relation to the halves of the match, the first 5 min phase of each half registered moderately higher values of TD, HSR and PL compared to the subsequent 5 min phases of each half. Previous handball studies have reported results similar to these findings . There are many factors that could explain these declines in physical performance within each half of the match: (1) repeated high-intensity actions (e.g., decelerations and changes of direction) associated with high eccentric contractions that generate substantial neuromuscular fatigue and tissue damage, especially if these high forces cannot be attenuated efficiently ; (2) multiple collisions and contacts associated with one-on-one situations, which may produce tissue damage, inflammatory responses and neuromuscular impairments ; (3) dehydration and hyperthermia, which affect cellular metabolism and, accordingly, performance . Nevertheless, the first 5 min phase of the second half registered moderately higher values of TD, HSR and PL compared to the last 5 min phase of the first half, although the difference was only significant for TD and PL. These data suggest that the beginning of the second half is a very physically demanding phase, although the HSR values are lower compared to the first 5 min phase of the match, probably because the teams do not play at a high pace to dominate the opponents. Consequently, it could be suggested that half-time provides an opportunity to partially recover from the transient fatigue produced during the first half. Therefore, handball coaches and practitioners should consider the half-time break as a window of opportunity to include some strategies (e.g., rehydration, re-fuelling, ergogenic aids, heat maintenance, re-warm-up activities) to enhance or maintain physical performance at the beginning of the second half . Lastly, HSR values were lower in all 5 min phases of the second half compared to the first half, especially after the middle of the second half. This could be due to a slower pace of play and fewer counter-attacks as a specific strategy of the players to reduce the number of technical mistakes at the end of the match . Positional differences were found in TD, HSR and PL, with wing players displaying the highest values and pivots the lowest values in all 5 min fixed phases. These results could be related to the technical and tactical demands of each playing position and their location on the court , because the playing area is longer along the outer lanes than in the central area of the court owing to the design of the goal areas, enabling wings to cover larger distances . Furthermore, pivots presented a greater difference between the most and the least intense 5 min phase, while wings showed a smaller difference.
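As a concrete illustration of the PlayerLoad definition given in the methods, the following minimal Python sketch accumulates PL from tri-axial accelerometer samples. Commercial implementations differ in sampling rate and scaling details, so this is an approximation of the general technique under our own assumptions, not the vendor's exact algorithm.

```python
import numpy as np

def player_load(ax: np.ndarray, ay: np.ndarray, az: np.ndarray) -> float:
    """Accumulated PlayerLoad (arbitrary units) from tri-axial accelerometry.

    Follows the definition quoted in the text: the square root of the sum
    of the squared instantaneous rates of change in acceleration in each
    of the three planes, divided by 100, accumulated over time.
    """
    dax, day, daz = np.diff(ax), np.diff(ay), np.diff(az)    # rates of change
    instantaneous = np.sqrt(dax**2 + day**2 + daz**2) / 100  # per-sample load
    return float(instantaneous.sum())

# Example with fabricated 100 Hz accelerometer data for one second.
rng = np.random.default_rng(0)
ax, ay, az = (rng.normal(0, 1, 100) for _ in range(3))
print(f"PlayerLoad: {player_load(ax, ay, az):.2f} a.u.")
```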
Several insights can be drawn from our findings: (1) the large variability between the most and the least intense 5 min phase reflects the great fluctuations in physical performance throughout a handball match ; (2) the variability in physical performance is highly playing-position-dependent ; (3) the technical staff should design and implement individualized physical training programmes for each playing position. Regarding level of the opponent, matches involving LLT registered slightly more TD in the first and the second 5 min phases of the match compared to MLT, but not significantly more compared to HLT. In contrast, matches involving HLT registered moderately more TD and PL in the last 5 min phase of the second half compared to MLT. We hypothesized that these results could be explained by two reasons: (1) LLT try to obtain an advantage in the match score at the beginning of the match; (2) matches involving HLT usually finished with a small goal difference; therefore, HLT players slowed the pace of the match in the middle of the second half to preserve energy for the last minutes of the match. Consequently, handball coaches should consider the level of the opponent when developing game plans and player rotation strategies. However, this study has some limitations. First, the external load was only monitored during home matches, which means that match location could have influenced our results. Second, only data from 22 players were analyzed, all of whom played in the same club. Third, an analysis of specialist players (offensive or defensive) and goalkeepers was not performed. Fourth, the playing position of each player was established exclusively according to her position in attack, without taking into account her defensive position. Fifth, as this was an observational study, player rotations could not be controlled or influenced by the researchers. Finally, physical performance during fixed phases was not as high as the worst-case scenarios or most demanding passages calculated using the rolling average method . Hence, future research should investigate the worst-case scenarios by analyzing different women's competitions and including a larger number of teams. In addition, future studies should include away matches. The present study found that physical performance fluctuated throughout the match and that these fluctuations were affected by the level of the opponent and playing positions. Overall, the highest values of TD, HSR and PL were registered during the first 5 min phase of the match, while the lowest values of TD and PL were registered in the last phase of the first half and, for HSR, in the last phase of the second half. Regarding the level of the opponent, low-level teams elicited higher TD in the first 10 min of the match. In contrast, matches involving high-level teams registered more TD and PL in the last 5 min phase of the match. Additionally, playing position had a significant effect on physical performance fluctuations. Wings showed the highest physical performance in all 5 min phases of the match, whereas pivots showed the lowest. Consequently, physical trainers and handball coaches should incorporate different interventions to minimize the occurrence of fatigue within and between halves (e.g., player substitutions). Likewise, they should consider the half-time break as a window of opportunity to include some strategies (e.g.
heat maintenance or re-warm-up) to enhance or maintain physical performance at the beginning of the second half.
|
Other
|
other
|
en
| 0.999995 |
PMC11694313
|
Erosive pustular dermatosis of the scalp (EPDS) is an under-recognized inflammatory condition that presents with localized areas of pustules or crusts overlying eroded plaques or nodules, with subsequent scarring alopecia. 1 EPDS is a diagnosis of exclusion; the presentation can often be mistaken for cutaneous malignancy, and clinical correlation is critical as histopathology is non-specific. 2 On histology, characteristic findings include an atrophic epidermis and a mixed dermal inflammatory infiltrate of lymphocytes, neutrophils and occasional foreign body giant cells. 3 The mainstay treatments are corticosteroids, antibiotics, calcineurin inhibitors, nonsteroidal anti-inflammatory drugs, retinoids and photodynamic therapy. 4 Herein, we report a patient with erosive pustular dermatosis who demonstrated resistance to several mainstay treatments typically used for EPDS. A 72-year-old male with a history of basal cell carcinoma (BCC), actinic keratoses (AK) and seborrheic keratoses presented to the clinic with persistent crusts and a new erosion with eschar on the left temporal region, with no oozing or discharge. The initial biopsy revealed a foreign body reaction. In addition to counselling on continuing topical mometasone furoate cream, he was started on Doxycycline 100 mg. Concurrently, he was treated for underlying AK with cryotherapy. After 3 months, antibiotics were discontinued because of minimal relief and the formation of new erosions and crusts. At his request, he was started on Colchicine but noticed no improvement in the left temple erosions with eschar. Six months later, he was started on a 6-month course of oral isotretinoin; during this time, small erosions started drying up and healing underneath the overlying haemorrhagic crust. As overall improvement remained limited, isotretinoin was discontinued, and the patient was started on topical Calcipotriene ointment in conjunction with Dapsone gel. Three months later, gradual improvement was noted as old crusts healed and filled in. Debridement of eschar was offered at several follow-up appointments but promptly refused. In October 2022, 18 months after his initial presentation, there were new lesions on the vertex scalp and on the right temple. At this time, he was started on prednisone and Protopic (tacrolimus) ointment; these have been used as treatment modalities for EPDS in previous case reports. 5 After improvement, the patient returned to a higher dose of isotretinoin (40 mg) and continued with topical Dapsone 5% gel. Once improvement plateaued, a second biopsy was performed, which revealed a BCC. After the BCC was excised, he developed some other areas of eschar that settled with topical Dapsone. Recently, he has developed some new areas on the vertex scalp and right temple. Primarily, EPDS affects older men who have accumulated sun damage on a bald scalp; however, females are also affected by this condition. 6 EPDS is an uncommon and often overlooked disease. Firstly, the diagnosis is often missed; there is a higher incidence of other cutaneous diseases affecting the same area, such as actinic keratosis, BCC and squamous cell carcinoma. 3 Secondly, the clinical, dermoscopic and histopathologic findings are non-specific. Clinically, EPDS features are consistent with chronic pustules, erosions and crust. 3 On dermoscopy, there is an absence of follicular ostia, with dilated vessels and perifollicular serous crusts.
The histopathology is non-specific, including an atrophic epidermis and chronic inflammation consisting of lymphocytes, neutrophils and occasional foreign body giant cells. 7 The goals of therapy are to reduce inflammation, heal erosions and prevent the progression of scarring alopecia in order to minimize permanent hair loss. 1 The first line of management is topical corticosteroids and immunomodulating topical therapy. 6 As steroid-sparing agents, topical calcineurin inhibitors, topical calcipotriol and retinoids have been used. Other therapeutic agents that have achieved complete resolution include photodynamic therapy, topical antibiotics such as Dapsone, topical tacrolimus, laser and wound dressings. 8 EPDS can be a chronic, recurring disease, and the duration of therapy can range from weeks to months. In cases of refractory disease progression, combination therapy is common, usually in conjunction with a topical corticosteroid. 4 With refractory EPDS, there may be benefit in trialling topical calcipotriol, short courses of systemic steroids, Doxycycline, isotretinoin, acitretin or Dapsone. 1 In our patient, most of these therapies were tried before therapeutic relief was achieved. In Australia, extensive management guidelines have been established that consider factors such as skin atrophy, response rate to topical corticosteroids, recurrence and refractory disease. 7 In cases of no response to therapy, a biopsy should be considered to rule out underlying malignancy.
|
Clinical case
|
biomedical
|
en
| 0.999997 |
PMC11694363
|
Use of benzodiazepines is common among people with substance use disorders (SUD) [ 1 – 3 ], with a high prevalence ranging between 61% and 94% reported in patients with opioid use disorders . Historically, benzodiazepine dependence has been recommended to be managed with short-term tapering regimens combined with non-pharmacological and psychological interventions . The effect and safety of short-term detoxification, including benzodiazepine tapering, in patients with SUD are uncertain . Tapering interventions and complete abstinence-based strategies have often not been successful, especially among those with severe and long-term dependence . Short-term detoxification has even been associated with an increased risk of life-threatening complications such as seizures and psychotic reactions . Although substitution treatment is not recommended, a large proportion of benzodiazepine-dependent patients have continuously been prescribed benzodiazepines, mainly by primary care physicians . Recent research has also confirmed such prescriptions among 30–50% of Norwegian patients undergoing opioid agonist therapy (OAT) . The Norwegian OAT guidelines recommend gradual tapering and discontinuation of benzodiazepines as the main approach, but also propose considering benzodiazepine stabilizing agonist treatment for patients with severe and long-lasting dependence who have not succeeded in tapering . It is acknowledged that there is limited research supporting this recommendation, although user-reported and clinical experiences indicate that such approaches may improve quality of life for some patients with long-term and regular use of benzodiazepines. Reports have shown that concurrent dependence on benzodiazepines and other addictive substances in patients undergoing OAT complicates the treatment course and reduces the chances of improving health and quality of life . These patients more frequently demonstrate symptoms of mental illness, use multiple substances, and have more impaired psychosocial functioning compared to patients without benzodiazepine dependence . It is uncertain whether these findings are due to differences between patient groups (e.g., in the extent of mental illness) and/or use of benzodiazepines per se. Clinical experience indicates that the majority relapse and continue to use benzodiazepines acquired illicitly despite tapering attempts . There is a lack of evidence on the effect and safety of standard treatment approaches as well as benzodiazepine substitution treatment in populations with severe SUD, including those undergoing OAT . In a small, non-randomized controlled study of patients undergoing methadone maintenance treatment (total n = 66; 33 in each group), the proportion using illegally acquired benzodiazepines was lower among patients receiving benzodiazepine substitution treatment than among patients who tapered and discontinued these agents (77% vs. 27% after 2 months, and 65% vs. 14% after 12 months, tapering vs. substitution, respectively) . Thus, there is an urgent need for randomized controlled trials on the benefits and risks of benzodiazepine stabilizing agonist treatment for patients with benzodiazepine dependence undergoing OAT, compared with standard treatment approaches. Use of benzodiazepines may in turn be related to several clinical characteristics and symptoms. Generally, it is associated with impaired cognitive functioning, in addition to an increased risk of violent behavior .
Some register-based studies have shown that patients undergoing OAT who were prescribed benzodiazepines had an increased risk of overdose death compared with those without such prescriptions . In a Norwegian study, clonazepam was often identified as a contributing agent in emergency-unit admissions for drug intoxication . In another study, it was frequently found alongside opioids in toxicological analyses related to overdose deaths . This is contrary to the Norwegian prescribing pattern in 2014–2015, where the prescription rates of both diazepam and oxazepam (less potent than clonazepam) were 50% higher than that of clonazepam. Police confiscations clearly confirm the distribution of the most commonly illicitly acquired benzodiazepines (among others clonazepam and alprazolam, but not diazepam or oxazepam) in drug-related intoxications . Additionally, contaminated drug supplies have recently become an increasing and worrying issue in Norway and some other European countries, where potent synthetic opioids such as nitazenes have been detected in substances sold as other opioids or benzodiazepines . These findings support the view that the increased risk is related to use of more potent, illicitly acquired and in some cases contaminated benzodiazepines such as clonazepam and alprazolam, rather than controlled use of prescribed ones such as diazepam and oxazepam, which are usually less potent and seldom used illicitly in Norway. On the other hand, a large cohort study of 353,576 patients receiving stable long-term treatment with benzodiazepines showed that discontinuation was associated with small absolute increases in mortality and other potential harms, including nonfatal overdose, suicide attempt, suicidal ideation, and emergency department visits . These results suggest that benzodiazepine discontinuation among patients prescribed stable long-term treatment may be associated with unanticipated harms, and that efforts to promote discontinuation should carefully weigh the potential risks of discontinuation against those of continuation. Despite the high disease burden and related risks among people with benzodiazepine dependence, evidence on effective treatment methods is lacking. We will conduct a multi-center randomized controlled trial in patients undergoing OAT with long-lasting and hard-to-treat concurrent benzodiazepine dependence to study the efficacy and safety of benzodiazepine agonist treatment. The overall objective of the trial is to investigate the effect of stabilizing agonist treatment with prescribed benzodiazepines (diazepam or oxazepam) in reducing the use of illicitly acquired potent benzodiazepines (clonazepam and alprazolam) among people on OAT with benzodiazepine dependence, compared to tapering. The central hypothesis is that stabilizing treatment with less potent prescribed benzodiazepines such as diazepam and oxazepam in patients with severe and long-lasting benzodiazepine dependence will result in a significant reduction in the use of illicit and more potent benzodiazepines such as clonazepam and alprazolam. Accordingly, this may reduce risk and improve health and quality of life in this population. The primary objective of the study is to assess the efficacy of stabilizing agonist treatment with daily doses of 15–30 mg diazepam, or equipotent doses of 50–100 mg oxazepam, in reducing the use of illicitly acquired potent benzodiazepines, measured over the 24-week intervention.
Secondary objectives are to compare mental health symptoms, cognitive functioning, and health-related quality of life between the intervention and control groups, as well as to study differences in violence risk, overdoses, and adverse events. Other secondary objectives are to examine differences in treatment retention and satisfaction, and in use of alcohol and other substances, between the groups. This project is a multi-center, randomized controlled, open, flexible-dose trial comparing a 26-week stabilizing agonist treatment using diazepam (15–30 mg a day) or oxazepam (50–100 mg a day) with a maximum 20-week gradual tapering using the same medications and equivalent initial doses. The study will be conducted in outpatient OAT clinics in six Norwegian cities/counties (Bergen/Vestland, Tønsberg/Vestfold, Skien/Telemark, Fredrikstad/Østfold, Tromsø/Troms and Lillestrøm/Akershus). In total, 108 patients will be recruited. Potential participants will be prescreened in person by a research nurse or OAT staff and, if potentially eligible, invited to attend a formal screening, i.e., an eligibility assessment by a physician in line with current clinical practice and the trial protocol. The Department of Addiction Medicine, Haukeland University Hospital (Vestland county, city of Bergen, Norway) is responsible for the treatment and follow-up of more than 1000 patients with opioid dependence receiving OAT, of whom almost 35% receive methadone while the remainder mainly receive buprenorphine preparations and, rarely, oral depot morphine. All medical interventions are integrated with psychosocial care provided in multidisciplinary outpatient clinics as part of the specialist health care system. The outpatient OAT clinics are staffed by medical consultants specialized in addiction medicine in addition to nurses, social workers, and psychologists. Depending on the overall functioning level and decisional capacity, the follow-up of the patients ranges from daily observed medication and consultations to weekly take-home doses. All the clinical measurements and laboratory data are recorded in the hospital journal system. A similar OAT model and organization is also applied in the other involved counties in the South-East and North regions as part of the specialist health care system in Norway. These centers include approximately 2000 OAT patients in total, with similar clinical and sociodemographic characteristics. The applied OAT platform allows frequent and close clinical observation and follow-up of the participants to increase safety and to ensure a high quality of collected data. The study population is treatment-seeking adults undergoing OAT who fulfil all the inclusion criteria and do not have any of the exclusion criteria. Inclusion criteria are:
- Benzodiazepine dependence according to the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10), under the following conditions: minimum duration of ≥ 5 recent years (as a cut-off for severity of dependence); self-reported use of ≥ 5 days a week during the last month; minimum daily dose equivalent to ≥ 15 mg diazepam
- At least one previous failed attempt at outpatient or inpatient tapering of benzodiazepines
- Capable of giving signed informed consent, which includes compliance with the requirements and restrictions listed in the informed consent form and in the study protocol
Use of illicitly acquired potent benzodiazepines (clonazepam and/or alprazolam) will be verified by a highly specific urine drug screening method (UPLC-MS/MS) at baseline.
Failure to respond to previous treatment is defined as a relapse to dependent use during tapering or after completing treatment. Exclusion criteria are:
- Severe respiratory failure (Global Initiative for Chronic Obstructive Lung Disease (GOLD) grade 3–4)
- High risk of violent behavior (current violent episodes, i.e., during the last 6 months)
- High risk of substance-related overdose (current overdoses, i.e., during the last 3 months)
- Severe cognitive impairment (IQ < 70; assessed if needed based on clinical judgement)
- Severe psychosis (current psychotic symptoms and functioning, i.e., during the last 6 months, based on clinical judgement)
- Severe depression and high suicide risk (current episodes, i.e., during the last 6 months, based on clinical judgement)
- Patients who are already stabilized with continuously prescribed benzodiazepines (including diazepam or oxazepam during the last 4 weeks prior to baseline assessment)
- Pregnancy and breastfeeding (female participants should use a safe method of contraception; in doubtful cases, a negative pregnancy test will be required)
- Impaired ability to understand or consent, or unwillingness to collaborate in the follow-up required by the study protocol
The risk of possible drug interactions will be considered individually at the time of eligibility assessment, which in some cases may result in exclusion from participation. Combining benzodiazepines with opioids, alcohol, and other central nervous system depressants may cause respiratory depression and increase the risk of overdose. Otherwise, there are few known clinically important interactions between benzodiazepines and other medications. Participants will receive comprehensive information about the study during recruitment visits to ensure informed consent and assent. Trained research staff will obtain written informed consent from patients who wish to participate. Participants will be asked whether they are willing to take part in an ancillary qualitative study investigating self-perceived effects of the interventions; if so, an additional written informed consent will be obtained. The comparator is tapering with diazepam or oxazepam over a maximum of 20 weeks according to the clinical procedure applied in OAT clinics, which is based on the Norwegian OAT guidelines recommending gradual tapering and discontinuation of benzodiazepines as the standard treatment of benzodiazepine dependence. Participants in the intervention arm will receive stabilizing agonist treatment with 15–30 mg/day diazepam or equivalent dosages of 50–100 mg/day oxazepam for 26 weeks. Participants in the standard arm will receive tapering with diazepam or oxazepam over a maximum of 20 weeks as described above. The type of benzodiazepine prescribed (diazepam or oxazepam), the starting dosages, and the duration of tapering will be based on the degree of dependence, the dosages of illicit benzodiazepines used prior to study entry, and the individual's clinical condition . Accordingly, a customized tapering plan will be suggested to each participant within the framework of the study procedures (see the illustrative sketch below). For both arms, treatment initiation and follow-up will be conducted at the OAT outpatient clinic where participants already receive OAT and relevant care, including voluntary psychosocial interventions.
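Purely as an illustration of what a gradual taper might look like (the protocol individualizes every plan, so this is not a prescriptive schedule), a minimal Python sketch with an assumed function name and step size:

```python
def linear_taper(start_mg: float, weeks: int = 20, step_weeks: int = 2) -> list[float]:
    """Illustrative linear taper: reduce the dose in equal steps so that it
    reaches zero after the final step. The trial protocol customizes the
    start dose, step size, and duration per participant; this is only a
    sketch of one possible schedule under our own assumptions.
    """
    n_steps = weeks // step_weeks
    decrement = start_mg / n_steps
    schedule, dose = [], start_mg
    for _ in range(n_steps):
        schedule.append(round(dose, 1))
        dose -= decrement
    return schedule

# Example: tapering from 30 mg diazepam over 20 weeks in 2-week steps.
print(linear_taper(30.0))
# [30.0, 27.0, 24.0, 21.0, 18.0, 15.0, 12.0, 9.0, 6.0, 3.0]
```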
The study medications will be prescribed by the physicians in the OAT clinics in accordance with the protocol and will be delivered either at the OAT clinic or at a pharmacy, in line with the applied standards for OAT follow-up. Commercial tablets and standard medication labels will be used, tagged with trial log numbers. All costs related to the medications, preparation, and observed intake will be covered by the OAT clinics (through publicly assigned funds). The medications will be used in line with national guidelines, and the project will strive to comply with Good Clinical Practice (GCP) . The stop criteria for individual participants are defined based at least on non-compliance, unexpected adverse events, or other safety considerations such as use of large amounts of highly potent street benzodiazepines with clinically observed signs of overdosing. In addition, medication can be withheld in the case of deviation from urinary test procedures that could affect measurement of the trial's primary outcome. Participants who have discontinued protocol-based treatment will be motivated to continue to participate in all remaining research interviews and assessments. For participants who revoke their consent for the entire study, no further data will be collected. The choice between oxazepam and diazepam will be individualized based on patient preferences, medical history, and present health condition. The prescription method will also be assessed individually, based on the patient's treatment course (i.e., prescribed by the clinic physician and ordered through the relevant clinic or pharmacy). All participants will be encouraged to initiate treatment according to the protocol, with data collected on prescription pickup and treatment initiation dates. In addition, for those receiving medications in the clinics, the frequency and observation of doses taken will be noted. Self-reported data on medication adherence and compliance will be obtained for all participants. Medication adherence and compliance will be assessed through a combination of prescription pickup frequency, self-reported adherence, observed intake, and urinary tests (see the sketch below). The authorized clinical study-monitoring bodies in each health county will visit the study sites on a regular basis to verify the following: the informed consent process, reporting of adverse events and all other safety data, adherence to the protocol, maintenance of required regulatory documents and facilities, and data completion in the case report forms (CRFs), including source data verification. Additionally, a data monitoring committee (DMC) comprising two independent professionals (a clinician and a researcher) and a statistician will ensure the safety and wellbeing of trial patients and will assist and advise the coordinating and principal investigators to protect the validity and credibility of the trial. All participants will receive OAT and the related care as usual. Study medications will mainly be administered under the supervision of OAT clinic or pharmacy staff. Based on individual assessment, participants may receive some of the doses as take-home doses for self-administration. Assessment of observed intake frequency and take-home doses will follow the same agreement that applies to OAT medications, and any changes will be assessed by the prescribing OAT physician after discussion in the interdisciplinary team. Project managers will be informed of any changes.
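To illustrate the adherence rule referenced later in the protocol (medication taken at least 6 days per week, i.e., > 80%, throughout the treatment period), here is a minimal sketch; the function name and weekly-count input format are our assumptions:

```python
def adherent(days_taken_per_week: list[int], threshold_days: int = 6) -> bool:
    """Adherence as described in the protocol: medication taken at least
    6 days per week (> 80%) throughout the treatment period. Input is a
    weekly count of days with observed or self-reported intake.
    """
    return all(days >= threshold_days for days in days_taken_per_week)

# Example over four weeks: one week with only 5 intake days fails the rule.
print(adherent([7, 6, 7, 6]))  # True
print(adherent([7, 5, 7, 6]))  # False
```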
In any case, all participants should attend the OAT outpatient clinic daily, or at least once weekly, for clinical observation. Regardless of the delivery agreement, every participant will undergo frequent clinical observations and assessments during the first 2 weeks after treatment initiation (in both study arms) to ensure treatment safety. After this period, they follow the delivery agreement as before. Clinical and biological follow-up of participants and systematic reporting of potential adverse effects will be organized according to international GCP guidelines. The participants will be assigned to the scheduled assessments by research nurses and/or clinic physicians in line with the study protocol. The assessments will be performed at baseline and at week 24, in addition to a follow-up assessment at week 52 after entering the trial. The OAT staff or research nurses will follow up the participants weekly at OAT clinics, or by home-based visits if needed, with consultations including self-reports on illicit use of benzodiazepines and other substances and adverse events, as well as monthly randomized urine drug screening tests during the trial period .

Fig. 1 Flow-chart of the study procedures. Legend: Potential participants will be screened for eligibility. Individuals who meet the eligibility criteria and provide written informed consent to participate will be randomized either to a stabilizing dose with diazepam or oxazepam, or to tapering using the same medications. The primary endpoint is at week 24, with a follow-up visit at week 52. OAT: opioid agonist treatment

Table 1 Schedule of activities during the study period (X marks when each activity is performed; assessments by research nurse, physician, or clinic staff)

Activity                 | Screening | Baseline | Week 24 | Week 52 | Weekly follow-up
Eligibility criteria     | X         |          |         |         |
Written informed consent | X         |          |         |         |
Randomization            |           | X        |         |         |
Observed urinary tests   |           | X        | X       | X       | X (monthly)
Self-reported drug use   |           | X        | X       | X       | X
SCL-10                   |           | X        | X       | X       |
EQ-5D-5L                 |           | X        | X       | X       |
BVC                      |           | X        | X       | X       |
Reaction time            |           | X        | X       | X       |
Treatment satisfaction   |           | X        | X       | X       |
Days out of treatment    |           | X        | X       | X       | X
Recorded overdose        |           | X        | X       | X       | X
Adverse effects          |           |          | X       |         | X

SCL-10: Hopkins symptom checklist; EQ-5D-5L: 5-level EuroQol 5-dimension questionnaire; BVC: Brøset violence checklist

At the end of the trial period, each participant will be individually assessed for further clinical follow-up as indicated. Participants who receive stabilizing treatment with prescription benzodiazepines during the 26-week study period will undergo individual clinical assessments upon trial completion. Some participants may continue with stabilizing treatment, while others will have their prescriptions tapered and discontinued following current guidelines. Participants in the standard treatment arm will receive ongoing treatment and follow-up using conventional approaches after tapering of the medications. Participants will be informed about these individual assessments at study entry. Clinical observations throughout the study period will provide insights into outcomes for participants in each study arm. Decision-making following the project's conclusion will be based on these outcomes, in line with the conditions governing benzodiazepine continuation outlined in the study protocol. Assessments will occur both upon completion of the intervention (after week 26) and at the end of the project period (after week 52).
The primary outcome measure is the difference between the groups in the use of illicit benzodiazepines, based on supervised urine drug screening tests, measured during the 24-week trial. Secondary outcome measures collected at baseline and week 24 will include:
- Mental health symptom score using the Hopkins symptom checklist (SCL-10)
- Health-related quality of life score using the 5-level EuroQol 5-dimension questionnaire (EQ-5D-5L)
- Reaction time for cognitive performance using a simple reaction time test
- Risk of violent behavior using the Brøset violence checklist (BVC)
- Satisfaction with the treatment using a visual analog scale (VAS) from 0 to 10
- Retention rate in OAT (number of drop-out days during the trial period)
- Self-reported frequency of use of alcohol and illicit substances including benzodiazepines, and urine drug screening for alcohol and illicit substances other than benzodiazepines
- Number of non-fatal overdoses and deaths (if any)
- Cost-effectiveness of the intervention
The time schedule of enrolment, interventions, assessments, and visits for participants is shown in Table 1 . The sample size calculation is based on clinical experience, as no empirical estimates exist. The prevalence of use of illicit benzodiazepines during standard treatment is assumed to vary from 50% to 80%. We used an "illegal" index, defined as the proportion of positive urinary tests confirming the use of illicit benzodiazepines, which is considered continuous. We set the minimal clinically relevant difference between the groups in the illegal index at week 24 to 0.3, and the required power to 0.8 at a two-sided significance level of 0.05. There are no valid data from which to estimate the standard deviation. Assuming a risk of 0.5 to 0.8 for the use of illicit benzodiazepines in the standard treatment group (p1), and an expected difference of 0.3 between the two arms (p1 - p2), we need 43 patients per group . Assuming a drop-out rate of 20%, we need 54 patients in each group, 108 in total, to achieve sufficient power .

Fig. 2 Study sample size

Enrollment commenced in autumn 2022 and will continue until the required number of eligible participants is enrolled in the trial. For both arms, all the clinical stages of the study (recruitment, information, obtaining written consent, clinical interviews, completing the study surveys using appropriate instruments, and treatment) will be performed by research nurses and/or physicians to ensure independence. Six OAT sites are involved in conducting the trial to obtain the required sample size. Eligible participants will be randomized digitally with a 1:1 ratio to intervention or standard treatment. A computer-generated, blocked, site-stratified randomization schedule will be developed by an independent statistician and uploaded to the study database. Randomization will be performed by the research nurse or the trial site investigator within the study database and technically secured against manipulation in the database. The allocation sequence will be generated by an independent statistician. The research nurses or physicians will enrol participants in the study and assign them to the interventions. An independent statistician not involved in providing the clinical data will analyze the collected data. Complete blinding is not considered feasible, although some masking measures will be taken, such as blinding the data analysts to the trial arms.
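The sample size reasoning above can be reproduced approximately with a standard two-sample normal-approximation formula. Since the protocol states that no valid estimate of the standard deviation exists, the sketch below assumes sd = 0.5 (the theoretical maximum for a proportion-like index); this is our assumption, and it yields figures close to, though not exactly matching, the reported 43 and 54 per group:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta: float, sd: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Two-sample normal-approximation sample size for a continuous outcome.

    delta: minimal clinically relevant difference between group means.
    sd: assumed common standard deviation (sd = 0.5 is our assumption,
        not a figure stated in the protocol).
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for two-sided alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

n = n_per_group(delta=0.3, sd=0.5)      # 44 per group
n_with_dropout = ceil(n / (1 - 0.20))   # inflate for 20% drop-out -> 55
print(n, n_with_dropout)
```

With these inputs the formula gives 44 per group and 55 after drop-out inflation; small rounding or z-value differences plausibly explain the protocol's figures of 43 and 54.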
Randomization will be disclosed to the researchers, participants, and clinical staff providing treatment and follow-up. Patients will be informed of the key elements of the intervention or standard treatment and follow-up they are randomized to, but not of the study hypotheses. The trial is not blinded for the participants, researchers, or caregivers. However, the trial statistician will be blinded to the allocation and unblinded only after data analysis is completed. To ensure correct adherence to standard operating procedures (SOPs), all system users will be trained and evaluated on a regular basis. The research nurses will collect the trial data at baseline and at trial week 24, in addition to weekly self-reported substance use, adverse events, and other research-related data, as well as the randomized monthly urinary screening tests according to Table 1 . The urine drug screening tests, the measure of the primary outcome, will be taken at the OAT clinics or other designated units under staff observation. A highly specific laboratory analysis method (UPLC-MS/MS) will be used to differentiate between the benzodiazepine types identified in the urinary samples (the illicitly acquired ones, i.e., clonazepam and alprazolam, and the prescribed ones, i.e., diazepam and oxazepam). The samples will be taken in a randomly chosen week within each 4-week period. This randomization will be programmed by an independent researcher and aims to reduce the effect of any variation in the intake of illicit benzodiazepines between the intervention and the standard arm in relation to the time of sampling. Patients will be informed on the Monday of the selected week that they need to provide an observed urinary test before Friday of that week. If a patient does not show up for a urine sample, they will receive a warning to provide it the subsequent week (up to once). In the event of a missing urine sample or two delays in a month, study medication will be stopped. Other outcomes will be assessed using standard or semi-structured questionnaires as listed in Table 1 . The study assessments will be completed for each participant at scheduled visits during the study. The secondary endpoints assessed are as follows. Treatment initiation will be assessed as the proportion taking at least one tablet, while medication adherence will be assessed as the proportion taking their medicines at least 6 days per week (> 80%) throughout the entire treatment period. Both outcomes will be assessed and reported as a combination of self-reported adherence and observed intake. Weekly self-reported substance use will be measured each week and at week 24. This includes any use, and frequency of use (days a week), of illicitly acquired opioids, cannabinoids, and benzodiazepines, as well as alcohol. For benzodiazepines, more detailed information on the type and quantity (mg) of the benzodiazepines used illicitly during the last week will be collected, in addition to the frequency of use (days a week). Mental health symptoms will be assessed using the Hopkins symptom checklist (SCL-10) at baseline and at week 24. The SCL-10 is a structured self-administered instrument measuring symptoms of mental health disorders and psychological distress. Scores range from 1 (not bothered at all) to 4 (extremely bothered) for each item. To derive the mean item score, the item scores are summed and divided by the number of items. A mean score of 1.85 or higher indicates the presence of symptoms of mental and psychological disorders.
The mean scores at week 24 will be compared between the two study arms. HRQoL will be assessed using the EQ-5D-5L at baseline and week 24. The instrument describes and values health-related quality of life and consists of the EQ visual analog scale and the EQ-5D descriptive system. The EQ VAS records the patient's self-reported health on a vertical scale ranging from 0 ("Worst Health You Can Imagine") to 100 ("Best Health You Can Imagine"), reflecting the patient's own opinion of their health. The descriptive system includes five dimensions (5D): mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. Each dimension has five levels (5L): no problems, minor problems, moderate problems, serious problems, and extreme problems. Patients indicate their health status by selecting the most appropriate statement in each dimension, resulting in a 5-digit number describing their health state. The mean HRQoL score will be converted to a utility index (ranging from 0 to 1, where 1 is optimal and 0 is worst) using population-based weightings, which is standard in generic HRQoL methods. Data at week 24 will be compared between the two study arms. Risk of violent behavior will be assessed using the BVC at baseline and week 24. The BVC is a 6-item checklist designed to predict imminent violent behavior within a 24-h perspective. The six items are confusion, irritability, boisterousness, physical threats, verbal threats, and attacks on objects. Each item is scored 0 for the absence and 1 for the presence of the behavior, with a maximum total score of 6. The mean scores at week 24 will be compared between the two study arms. Cognitive performance will be based on a simple reaction time test (administered three consecutive times, with reaction time recorded in milliseconds) at week 24, and the mean scores will be compared between the two study arms. Satisfaction with the treatment will be compared between the two study arms at week 24 after initiation of treatment using a VAS from 0 to 100, where 0 means no satisfaction and 100 means very satisfied. The mean scores at week 24 will be compared between the two study arms. Study participants will be encouraged to remain in the study and complete follow-up. They will also receive limited economic compensation (500 Norwegian kroner) for their time and to increase motivation to complete the data collection, measurements, and assessments. A list of outcome data will be collected for participants who discontinue or deviate from the intervention protocols. All the OAT clinics at the participating sites will use electronic CRFs entered online in a study database through the data collection software (Viedoc®). Alternatively, paper-based CRFs will be used at some sites, and the collected data will then be entered into the central electronic CRF by the coordinating research nurse. The software is accredited by Helse Vest for national health research and clinical integration. A central data manager at Helse Bergen will assist in designing the CRF, train data collectors in its use, and aid data export. Only authorized study personnel, including the principal investigator, research nurses, and coordinating physicians, will have access to CRFs and supporting documents. Data capture and storage will be undertaken using computer systems compliant with GCP.
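As a small worked example of the SCL-10 scoring described above (items scored 1 to 4, mean item score, cutoff at 1.85), a minimal sketch with an assumed input format:

```python
def scl10_mean_score(item_scores: list[int]) -> tuple[float, bool]:
    """Mean item score for the SCL-10 and the symptom flag described in the
    protocol: each of the ten items is scored from 1 (not bothered at all)
    to 4 (extremely bothered); a mean of 1.85 or higher indicates the
    presence of mental health / psychological distress symptoms.
    """
    if len(item_scores) != 10 or not all(1 <= s <= 4 for s in item_scores):
        raise ValueError("SCL-10 requires ten items scored 1-4")
    mean = sum(item_scores) / len(item_scores)
    return round(mean, 2), mean >= 1.85

# Example: a respondent with mostly mild complaints and two moderate items.
print(scl10_mean_score([1, 2, 1, 2, 1, 3, 1, 3, 2, 2]))  # (1.8, False)
```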
The research data will be stored on an encrypted research data server, with access limited to the principal investigators and the coordinating research nurse. Not applicable: genetic specimens will not be taken in this trial. Analysis methods will follow the CONSORT guidelines as far as possible . All tests will be two-sided. Descriptive results and the estimated efficacy will be presented with 95% confidence intervals (CI). Categorical variables will be summarized as percentages, and continuous variables as medians with interquartile ranges or, for variables with a Gaussian distribution, as means with standard deviations. Analyses of the primary endpoint will be undertaken on an intention-to-treat (ITT) basis and reported as such. All randomized and eligible participants will be analyzed based on their initial group allocation. The main endpoint is the difference in an illegal index, which measures the cumulative use of illicit benzodiazepines (as assessed by monthly supervised urinary tests) at the end of 24 weeks. The illegal index for urinary tests is defined as the proportion of positive urine tests for illicit benzodiazepines over the 24-week trial and is considered continuous. The difference between the groups at the primary time point (week 24) will be assessed using a t-test and ANCOVA. The risk difference with 95% confidence intervals will be reported. For the analyses of secondary endpoints, appropriate methods will be used based on the type of outcome, including ANCOVA, t-tests, and risk differences with confidence intervals. We have two types of secondary outcomes. The first type includes monthly urinary tests and weekly self-reports on illicit drug and alcohol use, weekly self-reports on illicit benzodiazepine use, SCL-10, EQ-5D-5L, BVC, reaction time, treatment retention, overdose, and treatment satisfaction. These outcomes will be analyzed using ANCOVA, i.e., a linear regression model of the outcome at the primary time point (week 24) on the randomization group, adjusted for the baseline value of the outcome. The mean difference with confidence interval, the coefficient with confidence interval, and the p-value will be reported. The second type includes adverse events. These outcomes will be analyzed using the t-test for differences in the outcome at the primary time point (week 24). The mean difference with confidence interval and the p-value will be reported. Additionally, we will estimate a linear mixed effects model for continuous outcome variables at all time points, depending on time, randomization group, and the interaction between time and randomization group, adjusted using an individual random intercept. We will use a simple contrast. For outcomes appearing to have a linear dependence on time, we will also use a linear contrast. For all regression models, we will present the regression table, and for the linear mixed model, we will provide graphics with the mean and 95% confidence interval for each follow-up time point. The ITT analyses will be repeated on the per-protocol dataset; these per-protocol analyses form part of the sensitivity analyses. An interim analysis for efficacy of the primary endpoint will be performed when 50% of the planned sample size has been assessed at week 24. We will use a group sequential design without futility stopping and the O'Brien-Fleming alpha spending approach. The interim analysis will be performed by the DMC, which will give a recommendation on whether the study should be continued or stopped.
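To make the planned primary analysis concrete, the following sketch runs the ANCOVA and the linear mixed model described above on fabricated toy data (variable names and values are illustrative assumptions, not trial data), using statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated data: one row per participant with baseline and week-24
# values of the illegal index (proportion of positive urine tests).
df = pd.DataFrame({
    "group": ["stabilizing"] * 4 + ["tapering"] * 4,      # randomized arm
    "baseline": [0.9, 0.8, 1.0, 0.7, 0.9, 0.8, 1.0, 0.7],
    "week24":   [0.3, 0.2, 0.5, 0.1, 0.7, 0.6, 0.9, 0.5],
})

# ANCOVA as described: week-24 outcome regressed on randomization group,
# adjusted for the baseline value of the outcome.
ancova = smf.ols("week24 ~ group + baseline", data=df).fit()
print(ancova.summary().tables[1])  # group coefficient = adjusted difference

# Linear mixed model over all time points (long format) with a random
# intercept per participant, as outlined in the protocol.
long = df.reset_index().melt(id_vars=["index", "group"],
                             value_vars=["baseline", "week24"],
                             var_name="time", value_name="index_value")
mixed = smf.mixedlm("index_value ~ time * group", data=long,
                    groups=long["index"]).fit()
print(mixed.summary())
```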
Even though futility stopping is not part of the design, the DMC can recommend stopping the study for futility . All patients enrolled in the study will be evaluated with respect to safety-related outcomes according to the treatment they receive. Safety analyses will include summaries of the incidence of all adverse events and serious adverse events that are possibly or probably related to the study intervention and occur during the study treatment period or within 30 days of the last dose of study treatment. The safety analysis will be specified by the DMC. Sub-group analyses will be conducted for exploratory purposes only, owing to their inherently low statistical power. These analyses will focus on the primary endpoint and confirmatory secondary endpoints to assess the consistency of the investigational intervention effect across subgroups such as age groups and sex. If a subgroup contains fewer than 10% of the total participants, the subgroup categories may be redefined. The analyses will be conducted using a test for heterogeneity, and the results will be presented in forest plots showing the estimated study arm difference and 95% confidence intervals. Further details on the statistical analysis will be provided in the statistical analysis plan. A cost-effectiveness analysis will be conducted. Effectiveness will be measured in quality-adjusted life years (QALYs), building on the primary and secondary results combined with a prospective Markov model. Cost data will be collected alongside the clinical trial, employing both healthcare and societal perspectives. The analysis will follow current guidelines and will be conducted by a health economist . Participants who withdraw from the study treatment will not be censored, as treatment discontinuation is likely to be related to allocation. Deaths will be censored at the last outcome measurement. We will also conduct a sensitivity analysis in which a missing urine sample is assumed to be similar to the last urine sample taken (to manage missing data for the primary outcome measure). The robustness of the primary outcome will be checked with sensitivity analyses considering censoring and adjusting for potential baseline imbalances. There are no plans to grant public access to the full protocol, participant-level dataset, or statistical code; however, this will be considered by the sponsor if indicated. The trial will be administered and coordinated by the Bergen Addiction Research Group (BAR) and the Norwegian Research Centre for Agonist Treatment of Substance Use Disorders (NORCATS) at the Department of Addiction Medicine, Haukeland University Hospital, Vestland, Bergen, in collaboration with the other sites in Vestfold, Telemark, Østfold, Troms, and Akershus. The research consortium brings together, and benefits from, expertise in addiction medicine and clinical trial implementation in interdisciplinary and highly specialized clinics, research expertise from BAR and NORCATS at the Department of Addiction Medicine at Haukeland University Hospital and the University of Bergen, and user expertise from the user organization ProLAR, which is represented in the department. The project group is thus multidisciplinary, involving national and local collaborators, including several partners with extensive clinical experience, members with user perspectives, and researchers from related fields. The coordination unit will be led by the principal investigator in close cooperation with the project manager and will have ongoing communication as well as regular (i.e., monthly) meetings with the other sites.
The Steering Committee (SC) will be responsible for the conduct of the study. The coordination unit will report regularly to the SC with updates on trial progress and potential issues. The SC is the trial's decision-making body for all scientific and administrative aspects. It will send technical reports to the funder, ethical committees, and regulatory bodies, particularly the monitoring organs in Helse Bergen, and financial reports to the funder. SC participants will meet on regular conference calls with customized frequencies. The coordinating and managing unit will oversee the day-to-day conduct of the trial and will meet weekly to review trial conduct. This unit comprises principal investigators, coordinating research nurses, and study physicians. It will centralize all study information and will report to the steering committee, which includes the sponsor, the national coordinating investigator, and the responsible principal investigators. The coordination unit will coordinate the analysis and writing up of the different outcomes from the study. The trial coordination will also rely on other units, which can be considered SC sub-committees. These are as follows: the clinical care unit, which will oversee elaboration of the clinical procedures for the trial and will also support the local physicians on the treatment protocols; physicians in the coordination unit and the participating sites, together with the coordinating research nurse, will be members of this unit. The study site units will oversee day-to-day implementation at each study clinic in collaboration with the coordination unit; the clinic leaders will generally be involved in the site units. The main aim of the DMC is to ensure the safety and wellbeing of trial patients and to assist and advise the coordinating investigator, steering committee, and principal investigators, so as to protect the validity and credibility of the trial. The DMC will comprise two independent clinicians and researchers and a statistician. An agreed DMC charter will describe the roles and responsibilities of the committee, including the timing of meetings, methods of providing information to and from the DMC, frequency and format of meetings, statistical issues, and relationships with other committees. The charter will be in place before the first patient is included. Potential adverse events among participants will be managed according to the treatment guidelines. A low rate of adverse effects and toxicity is expected with the applied benzodiazepines. Complete lists of the reported adverse effects are given in the summaries of product characteristics for oxazepam and diazepam, which serve as the reference safety information. All serious adverse events (SAE) reported to the sponsor will be assessed against the reference safety information to determine whether the event is unexpected (a suspected unexpected serious adverse reaction, "SUSAR") or not. Regular clinical observations by the OAT clinic staff and physicians will secure prompt identification of potential adverse events. Any side effects or suspicious clinical conditions, such as symptoms of intoxication, observed by the clinic staff or registered via the weekly questionnaires will promptly be reported to the responsible clinic physicians and the trial investigator. Emergency intoxication care and acute antidote medication (naloxone) will be available at the involved OAT clinics, and transport to the emergency unit will be arranged when close and continuous clinical monitoring is needed.
Clinical and biological safety will be assessed according to standardized toxicity scales. All unexplained grade III or IV events will lead to temporary interruption of the study medication until a new assessment is made by the clinic physician and study investigators. A rapid reporting system for the management of SAEs and SUSARs will be available. The decision to initiate the study will be taken by agreement between the coordination unit and the sponsor. The study sponsor may conduct audit visits at sites to ensure the trial is performed according to the protocol and GCP guidelines. Any important protocol modifications (e.g., changes to eligibility criteria, outcomes, or analyses) will be communicated to relevant parties (e.g., investigators, ethical committees, trial participants, trial registries, journals, and regulatory organs). Any changes to the protocol will first be notified to the sponsor and funder; the principal investigator will then notify the study centers. A copy of the revised protocol will be sent to the principal investigator to add to the Investigator Site File. Any deviations from the protocol will be fully documented using a breach report form. The protocol will be updated in the clinical trial registry. The findings will be presented at relevant national and international conferences and will also be presented to politicians and the health and welfare administration at all levels. The topic of this study is considered relevant to a public audience, and several of the researchers have extensive experience in communicating research to the public through various media, as well as in informing patients and other health workers. The results from the study will also be published in peer-reviewed scientific journals in line with the ICMJE guidelines. Open access journals indexed in PubMed/Medline will be preferred. The project benefits from strong user involvement and close collaboration with the peer organization ProLAR ( http://prolar.no ) through user representatives at the Department of Addiction Medicine, Haukeland University Hospital, Bergen. Each site will also have the possibility to involve local user representatives in the trial. The peer group is involved in the planning, design, recruitment, and implementation of the study. This is the first randomized controlled trial of stabilizing benzodiazepine agonist treatment for benzodiazepine dependence. The research project will provide knowledge on the impact of such an intervention on patient outcomes. We will assess the efficacy and safety of stabilizing treatment with prescribed benzodiazepines compared with benzodiazepine tapering and discontinuation, with respect to the use of illicit benzodiazepines, the related risks such as overdoses due to contaminated drug supplies, and the well-being of patients with concurrent benzodiazepine and opioid dependence undergoing OAT. Our trial involves some limitations and several strengths. It is difficult to ensure complete blinding in this trial; however, some masking measures will be taken, including blinded assessment of the study analyses by independent research staff. The study is funded from public sources to ensure independence from pharmaceutical companies. We also have a biological primary outcome; thus, substantial information biases are considered unlikely. The study is individually randomized, which minimizes potential confounding.
The proposed sample size is powered to detect a medium effect size in the between-group difference in the primary outcome. Participants will be closely monitored with weekly clinic visits and research reviews at the outpatient OAT clinics where they are usually treated and followed up; this will promote participants' safety and retention in the study. The study protocol proposes to deliver the medication in an outpatient setting, allowing participants take-home doses, to more closely mimic the service delivery situations in which the medication will be used. However, the absence of a structured psychosocial intervention as a supplementary treatment and the lack of fully observed daily dosing are among the trial's limitations. Other limitations include the non-blinded design and a possible (although low) risk of use of illicitly acquired diazepam and/or oxazepam, which cannot be detected by urinary analyses. However, the results of self-reported use of illicit benzodiazepines can provide additional information on drug use patterns. One could also argue that randomization could have been stratified not only by site/city but also by OAT medication. This, however, would have produced a large number of randomization strata and many incomplete randomization blocks, with the risk of less balanced groups. If the intervention is found to be efficacious and safe, it will be considered one of the options in the standard treatment of patients with opioid and benzodiazepine co-dependency. Recruiting started in September 2022. The current protocol is version 5.0 of 29-8-2024. As of this writing, 67 patients have been included. Patient recruitment is estimated to be completed around December 2025.
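For illustration only: the protocol does not state its exact power assumptions here, but a conventional reading of "medium effect size" (Cohen's d = 0.5) with a two-sided α of 0.05 and 80% power implies roughly 64 participants per group for a two-sample t-test; all parameter values in this sketch are assumptions, not the trial's actual calculation.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Assumed values: medium effect (Cohen's d = 0.5), two-sided alpha = 0.05,
# power = 0.80. The protocol's actual assumptions may differ.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80, alternative="two-sided"
)
print(math.ceil(n_per_group))  # ~64 per group under these assumptions
```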
PMC11694364
Thyroid eye disease (TED), an inflammatory autoimmune disorder that affects orbital and periorbital tissue, is characterized by unilateral or bilateral proptosis. TED affects people not only cosmetically but also physically. Periorbital expansion causes diplopia, and progression to dysthyroid optic neuropathy can threaten eyesight. Orbital fat proliferation plays a critical pathological role in the periorbital expansion of TED. However, whether a correlation exists between obesity and orbital fat accumulation in thyroid-associated orbitopathy remains undetermined. Studies have identified adipogenesis as the mechanism of orbital fat deposition in TED. Lacheta et al. noted the capacity of orbital fibroblasts to differentiate into adipocytes. Khong et al. also reported enhanced adipogenesis in thyroid ophthalmopathy, including the proliferation and differentiation of adipocytes. Adipogenesis, namely the proliferation of adipocyte precursor cells and their differentiation into mature adipocytes, contributes to obesity. Therefore, we hypothesized that obesity may be associated with orbital fat expansion in TED. Our study aims to investigate the association between obesity and proptosis in TED. We evaluated whether participants who underwent orbital fat decompression surgery had a greater average body mass index (BMI). The study also explored the association of obesity with proptosis severity and the correlation between obesity and the fat volume removed during orbital fat decompression surgery. This cross-sectional, observational study retrospectively enrolled participants who received orbital fat decompression surgery between January 2015 and February 2022 in a single tertiary referral center. The study followed the tenets of the Declaration of Helsinki. Institutional Review Board approval by National Cheng Kung University Hospital was obtained, and the requirement for informed consent was waived. All participants had a diagnosis of TED and had at least one of the following surgical indications: (1) proptosis of either eye with a protrusion value over 18 mm as measured by a Hertel exophthalmometer, namely the upper limit of the normal Hertel value in the Chinese population, (2) an asymmetry of 2 mm or greater between the protrusion of the two eyes, or (3) self-reported unsatisfactory and disfiguring exophthalmos. This study excluded patients who (1) had previously received this surgery or (2) had undergone simultaneous bone decompression surgery. In the case of bilateral fat decompression surgery, only the eye with more severe proptosis was selected, because of between-eye correlations and the statistical assumption of independence of the data. All enrolled participants underwent orbital fat decompression surgery by a single surgeon (CCL). An anterior orbitotomy was performed under general anesthesia with a horizontal incision made through the conjunctiva of the lower fornix. Hypertrophic fat, including the medial, middle, and lateral fat pads, was then removed for orbital decompression. The surgical goal was to reduce proptosis to 15 mm, namely the average Hertel exophthalmometric value in the healthy Chinese population, or to achieve bilateral symmetry in patients with an asymmetry of 2 mm or greater between the eyes. The desired volume of adipose tissue was calculated with the predictive equation developed by Liao et al., assuming a 0.8 mm Hertel change for each milliliter of orbital fat resected. Fig. 1: Exposure of hypertrophic fat in orbital fat decompression surgery.
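As a rough illustration of the volume planning described above (the exact predictive equation by Liao et al. may differ), the cited ~0.8 mm of Hertel reduction per milliliter of resected fat implies a simple calculation; the function below is a hypothetical sketch, not clinical guidance.

```python
def desired_fat_volume_ml(current_hertel_mm, target_hertel_mm=15.0,
                          mm_per_ml=0.8):
    """Estimate the orbital fat volume (ml) to resect, assuming ~0.8 mm of
    Hertel reduction per milliliter of fat removed and a 15 mm target."""
    reduction_needed_mm = max(current_hertel_mm - target_hertel_mm, 0.0)
    return reduction_needed_mm / mm_per_ml

# Example: a 19 mm preoperative Hertel reading -> (19 - 15) / 0.8 = 5.0 ml.
print(desired_fat_volume_ml(19.0))
```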
The primary outcome measure was participant BMI. Secondary outcome measures included the preoperative protrusion value, the orbital fat volume removed during surgery, and preoperative thyroid status. Demographic data were obtained and reviewed from medical records. BMI was measured and calculated on the day of surgical admission. BMI levels were categorized as normal (BMI ≥ 18.5 and < 24 kg/m²), overweight (BMI ≥ 24 and < 27 kg/m²), and obese (BMI ≥ 27 kg/m²) as defined by the Health Promotion Administration, Ministry of Health and Welfare, Taiwan. The reference BMI of the general Taiwanese population was documented according to the 2017 to 2020 Nutrition and Health Survey in Taiwan (NAHSIT), administered by the Health Promotion Administration, Ministry of Health and Welfare, Taiwan. The preoperative protrusion value was measured with a Hertel exophthalmometer within 1 month prior to surgery. The removed orbital fat volume was measured directly with a syringe from the surgical specimen. Thyroid status was defined by the blood thyroid-stimulating hormone (TSH) level measured within 3 months prior to surgery and was categorized into hyperthyroidism (TSH < 0.25 µU/ml), euthyroidism (TSH ≥ 0.25 and ≤ 4 µU/ml), and hypothyroidism (TSH > 4 µU/ml) [13–15]. Statistical analyses were performed using SPSS version 17.0 (SPSS, Chicago, IL, USA). We compared the mean BMI of enrolled participants with that of the general Taiwanese population using a one-sample t-test. Subgroup analyses by sex and age were performed. We applied a binomial test to compare the proportion of participants with overweight or obesity with the corresponding proportion in the general Taiwanese population. We compared proptosis severity and the average fat volume removed between the groups with and without overweight, and between the groups with and without obesity, using a two-sample t-test. We assessed the association between BMI and the fat volume removed with the Pearson correlation coefficient. We examined the contributions of different factors, including age, sex, and BMI, to proptosis severity and removed fat volume through multivariable linear regression analyses. A one-way analysis of variance was performed to compare the BMI of participants with varying thyroid statuses. Statistical significance was indicated at P < 0.05. A total of 87 participants, including 32 men (36.8%) and 55 women (63.2%), were included in this study. The average age of participants was 48.66 ± 12.67 years (range = 23 to 87 years). The average BMI was 25.59 ± 4.36 kg/m². Among the participants, 30 (34.5%) had overweight (BMI ≥ 24 and < 27 kg/m²) and 24 (27.6%) had obesity (BMI ≥ 27 kg/m²). The average proptosis value measured by a Hertel exophthalmometer was 18.96 ± 3.52 mm. The study noted a significantly greater average BMI (25.59 ± 4.36 kg/m²) in enrolled participants compared with the general adult Taiwanese population (24.5 kg/m²; P = 0.012; Table 1). A subgroup analysis by sex also indicated a greater average BMI among male (26.78 ± 4.16 kg/m²) and female (24.89 ± 4.36 kg/m²) participants versus their counterparts in the Taiwanese adult population (25.3 kg/m² for men and 23.8 kg/m² for women; P = 0.026 and 0.035, respectively; Table 1).
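The comparison just reported can be illustrated with a one-sample t-test against the population reference mean; this sketch assumes a hypothetical file of the participants' BMI values.

```python
import numpy as np
from scipy import stats

# Hypothetical array of the 87 participants' BMI values (kg/m²); 24.5 is
# the NAHSIT reference mean for the general Taiwanese adult population.
bmi = np.loadtxt("participant_bmi.txt")

t_stat, p_value = stats.ttest_1samp(bmi, popmean=24.5)
print(f"t = {t_stat:.2f}, two-sided P = {p_value:.3f}")
```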
The subgroup analysis by age indicated a significantly higher BMI for participants aged 19 to 44 years versus their counterparts in the Taiwanese adult population (25.96 ± 4.87 kg/m² vs. 24.3 kg/m² for the Taiwanese population aged 19 to 44; P = 0.027; Table 1).

Table 1. Average BMI (mean ± SD, kg/m²) of the study sample and of the general Taiwanese population for comparison:
- All: 25.59 ± 4.36 (n = 87) vs. 24.5 ± 0.12; P = 0.012*
- Sex, male: 26.78 ± 4.16 (n = 32) vs. 25.3 ± 0.16; P = 0.026*
- Sex, female: 24.89 ± 4.36 (n = 55) vs. 23.8 ± 0.13; P = 0.035*
- Age 19–44: 25.96 ± 4.87 (n = 34) vs. 24.3 ± 0.19; P = 0.027*
- Age 45–64: 25.35 ± 4.08 (n = 47) vs. 24.7 ± 0.15; P = 0.139
- Age ≥ 65: 25.25 ± 3.97 (n = 6) vs. 25.1 ± 0.09; P = 0.465
(Reference population: 2017 to 2020 Nutrition and Health Survey in Taiwan, administered by the Health Promotion Administration, Ministry of Health and Welfare, Taiwan; * P < 0.05.)

The study sample had a significantly greater proportion (62.1%) of people with overweight and obesity (BMI ≥ 24 kg/m²) compared with the Taiwanese adult population (50.7%; P = 0.022; Table 2). The subgroup analysis by sex indicated a significantly greater proportion (75%) of men with overweight and obesity (58.8% for the Taiwanese adult population; P = 0.043; Table 2). The study sample also had a nonsignificantly greater proportion of individuals with obesity (27.6%) than the general population in Taiwan (23.9%; P = 0.244; Table 3).

Table 2. Proportions of people with overweight and obesity (BMI ≥ 24 kg/m²) in the study sample and in the general Taiwanese population:
- All: BMI < 24, 37.9% (n = 33); BMI ≥ 24, 62.1% (n = 54); reference population BMI ≥ 24, 50.7%; P = 0.022*
- Male: BMI < 24, 25% (n = 8); BMI ≥ 24, 75% (n = 24); reference, 58.8%; P = 0.043*
- Female: BMI < 24, 45% (n = 25); BMI ≥ 24, 55% (n = 30); reference, 42.8%; P = 0.053
(Reference population: 2017 to 2020 NAHSIT, Health Promotion Administration, Ministry of Health and Welfare, Taiwan; * P < 0.05.)

Table 3. Proportions of people with obesity (BMI ≥ 27 kg/m²) in the study sample and in the general Taiwanese population:
- All: BMI < 27, 72.4% (n = 63); BMI ≥ 27, 27.6% (n = 24); reference population BMI ≥ 27, 23.9%; P = 0.244
- Male: BMI < 27, 59.4% (n = 19); BMI ≥ 27, 40.6% (n = 13); reference, 28.3%; P = 0.091
- Female: BMI < 27, 80% (n = 44); BMI ≥ 27, 20% (n = 11); reference, 19.6%; P = 0.524
(Reference population: 2017 to 2020 NAHSIT, Health Promotion Administration, Ministry of Health and Welfare, Taiwan.)

The group with overweight (BMI ≥ 24 kg/m²) had a significantly greater proptosis value (19.52 ± 3.52 mm) than that without overweight (18.05 ± 3.37 mm; P = 0.029; Table 4). Similarly, the group with obesity (BMI ≥ 27 kg/m²) had a significantly greater proptosis value (21.25 ± 3.76 mm) than that without obesity (18.09 ± 3.02 mm; P < 0.001; Table 4). A multivariable linear regression analysis suggested a positive correlation of BMI (beta coefficient 0.416; P < 0.001), and a negative correlation of age (beta coefficient −0.295; P = 0.002), with proptosis severity (Table 5).
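A minimal sketch of the multivariable model reported in Table 5, assuming a hypothetical per-eye dataset; the variables are z-scored first so that the fitted coefficients are standardized betas, matching how the table reports them.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per included eye, with columns
# 'proptosis_mm', 'age', 'sex' (coded 0/1), and 'bmi'.
df = pd.read_csv("ted_cohort.csv")

# Z-score all variables so the coefficients are standardized betas.
z = (df - df.mean()) / df.std()

model = smf.ols("proptosis_mm ~ age + sex + bmi", data=z).fit()
print(model.params)   # standardized coefficients
print(model.pvalues)
```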
Table 4. Proptosis value (mean ± SD, mm; measured by Hertel exophthalmometer) between groups with and without overweight, and with and without obesity:
- BMI < 24 (n = 33): 18.05 ± 3.37 vs. BMI ≥ 24 (n = 54): 19.52 ± 3.52; P = 0.029*
- BMI < 27 (n = 63): 18.09 ± 3.02 vs. BMI ≥ 27 (n = 24): 21.25 ± 3.76; P < 0.001*
(* P < 0.05.)

Table 5. Multivariable linear regression analysis of the independent factors for proptosis severity (B = standardized beta coefficient):
- Age: B = −0.295; P = 0.002*
- Sex: B = −0.180; P = 0.055
- BMI: B = 0.416; P < 0.001*
(* P < 0.05.)

The study noted that a greater fat volume (3.81 ± 1.14 ml) was removed from the group with overweight (BMI ≥ 24 kg/m²) in orbital fat decompression surgery than from the group without overweight (3.61 ± 1.19 ml; P = 0.24; Table 6). A significantly greater orbital fat volume (4.61 ± 1.17 ml) was removed from the group with obesity (BMI ≥ 27 kg/m²) than from the group without obesity (3.57 ± 1.12 ml; P = 0.021; Table 6). A multivariable linear regression analysis predicting the fat volume removed in orbital fat decompression surgery revealed that BMI was the only factor significantly associated with removed orbital fat volume (P = 0.02; Table 7). A positive correlation between BMI and removed fat volume was demonstrated.

Table 6. Removed orbital fat volume (mean ± SD, ml) of groups with and without overweight, and with and without obesity:
- BMI < 24 (n = 33): 3.61 ± 1.19 vs. BMI ≥ 24 (n = 54): 3.81 ± 1.14; P = 0.235
- BMI < 27 (n = 63): 3.57 ± 1.12 vs. BMI ≥ 27 (n = 24): 4.16 ± 1.17; P = 0.021*
(* P < 0.05.)

Table 7. Multivariable linear regression analysis of the independent factors for removed orbital fat volume (B = standardized beta coefficient):
- Age: B = −0.007; P = 0.489
- Sex: B = −0.232; P = 0.389
- BMI: B = 0.074; P = 0.020*
(* P < 0.05.)

Fig. 2: Relationship between BMI and removed orbital fat volume in orbital fat decompression surgery (correlation coefficient = 0.291, P = 0.005).

Twenty-four study participants exhibited euthyroidism (TSH ≥ 0.25 and ≤ 4 µU/ml), 20 exhibited hyperthyroidism (TSH < 0.25 µU/ml), and 11 exhibited hypothyroidism (TSH > 4 µU/ml). The TSH levels of the remaining 32 participants were not available. A one-way analysis of variance did not detect a significant difference in average BMI among participants with different thyroid statuses (P = 0.449; Table 8).

Table 8. Average BMI (mean ± SD, kg/m²) of the hyperthyroid, euthyroid, and hypothyroid groups:
- TSH < 0.25 µU/ml (n = 20): 25.13 ± 4.95
- 0.25 ≤ TSH ≤ 4 µU/ml (n = 24): 26.98 ± 5.22
- TSH > 4 µU/ml (n = 11): 25.96 ± 3.50
(P = 0.449.)

The present study revealed an association between obesity and exophthalmos in TED. Participants who underwent orbital fat decompression surgery had a greater average BMI than the general population of Taiwan. Participants with obesity exhibited significantly more severe proptosis. Furthermore, a significantly greater orbital fat volume was obtained from the group with obesity in orbital fat decompression surgery. Participants who received orbital fat decompression surgery had a significantly greater average BMI than the general population of Taiwan. We also observed a greater proportion of overweight and obesity in the study sample. Adipogenesis is the process during which preadipocytes mature into adipocytes, resulting in fat expansion. When whole-body metabolic homeostasis is altered, excess adipose tissue deposition results in the development of obesity. Crisp et al.
and previous studies have noted the potential role of orbital adipocytes in the increased orbital volume of TED. Lu et al. also demonstrated the association of obesity-related factors, including BMI, with Graves' orbitopathy. Therefore, we hypothesized that participants with obesity may experience more active adipogenesis and have a greater tendency toward fat deposition, and thus experience more severe proptosis in TED. Notably, the age subgroup analysis indicated a significantly greater average BMI in the study sample among individuals aged 19 to 44 relative to their counterparts in the general population, which agrees with findings in the literature. Enlargement of the extraocular muscles predominates among older patients; by contrast, fatty hypertrophy predominates among younger patients. Ugradar et al. also observed a negative correlation between orbital fat volume and age. The difference in predominant type by age may result from the diminishing adipogenic potential of orbital fibroblasts with aging. The study revealed an association of proptosis severity with overweight and obesity: participants with overweight and obesity tended to have greater proptosis values. Proptosis severity is reported to be closely related to orbital adipose tissue volume. We therefore hypothesized that participants with a greater BMI, and consequently a higher potential for orbital fat deposition, would develop more severe proptosis. With a multivariable linear regression analysis, the study also demonstrated that BMI was an independent factor positively correlated with proptosis severity. Although age was also found to be a significant factor negatively correlated with proptosis severity, we inferred that this was due to adipose tissue atrophy and fat volume loss with aging. Additionally, age-related anterior herniation of orbital fat through the infraorbital space, due to weakening of supportive tissue (including bony resorption of the orbital rim, a loosened capsulopalpebral fascia, and loss of muscle tone), also makes proptosis less prominent in the elderly. Instead of intraorbital deposition, which can lead to proptosis, orbital fat in the elderly herniates anteriorly, resulting in baggy eyelids. To investigate whether orbital fat volume contributed to the greater proptosis value in the group with obesity, the study established a positive correlation between BMI and the fat volume removed in orbital fat decompression surgery. A significantly greater fat volume was removed in the group with obesity. The technique of orbital fat decompression surgery effectively reduced the degree of proptosis in TED through fat removal. The activation of further adipogenesis in participants with obesity made a greater volume of fat available for manipulation and removal during orbital fat decompression surgery. In sum, when taking both proptosis severity and removed orbital fat volume into account, BMI was the only common significant factor in the study. No significant association between thyroid status and BMI was found in the present study, although previous studies have observed lower BMI and blood lipid levels in subclinical hyperthyroidism. We attribute this result to several factors. Due to the retrospective design of this study, data on the thyroid status of more than one-third of the participants were missing, and the analysis was therefore limited by the relatively small sample size.
Furthermore, most participants were receiving, or had previously received, medical treatment for dysthyroidism, and antithyroid drugs can interfere with the reliability of the measurements. The present study revealed the association between obesity and exophthalmos in TED from multiple aspects; however, it has several limitations. First, although a correlation was indicated between obesity and proptosis in TED, the cause-and-effect relationship remains undetermined. Moreover, surgical specimen measurements were used instead of radiological measurements of orbital fat volume. This method potentially led to an underestimation of orbital fat volume, particularly when the ophthalmologist encountered technical difficulties during fatty decompression. In conclusion, our study demonstrated an association between obesity and orbital fat expansion and proptosis in TED. Because orbital fat deposition in thyroid-associated orbitopathy is correlated with obesity, weight control is a potentially crucial strategy to prevent patients with thyroid orbitopathy from developing severe exophthalmos. We propose an emphasis on weight control in the routine care of patients with thyroid disorders. Future prospective studies are required to establish the effect of body weight reduction or physical activity on TED activity.
PMC11694365
Urolithiasis is the most common indication for urological surgery. Elderly patients, with a generally weakened immune system, face a higher risk of severe infections due to persistent urolithiasis. Delayed treatment can result in urinary tract infection, obstruction, and potentially secondary damage to the kidneys, a particularly dire situation for elderly patients. Ureteroscopic holmium laser lithotripsy (UHLL) is a highly effective treatment for ureteral stones. Currently, general anesthesia with mechanical ventilation is the standard approach for this procedure in elderly patients. However, elderly patients managed with this method may experience prolonged recovery from anesthesia and postoperative pulmonary complications. Additionally, endotracheal intubation and laryngeal mask placement can cause damage to the teeth, throat mucosa, or trachea. All of these clinically relevant complications can be seriously detrimental to elderly patients' comfort, affect their sleep and eating, and even pose a threat to perioperative rehabilitation. Therefore, it is crucial to develop an anesthesia protocol that can accelerate elderly patients' recovery after urological surgery. The guidelines from the European Association of Urology (EAU) recommend the use of local anesthesia and intravenous anesthesia, especially in female patients. While UHLL under local anesthesia has been previously explored, it can sometimes induce anxiety, delirium, hypertension, and pain. Intravenous sedation with nasal oxygen can facilitate this procedure, yet risks remain, as physiological changes in elderly patients, including loss of static lung compliance, stiffness of the chest wall, and reduced alveolar surface area, may limit oxygen storage and potentially cause hypoxia during surgery. As current anesthesia management methods are still not optimal, we innovatively combined intravenous anesthesia with high-flow nasal cannula (HFNC) oxygen therapy for this surgery. HFNC provides high-flow oxygen to maintain optimal oxygenation, generates positive airway pressure contributing to lung recruitment, and offers heated and humidified oxygen-enriched air to preserve the integrity of the ciliary mucous system during anesthesia. Recently, HFNC oxygen therapy combined with intravenous anesthesia has demonstrated positive outcomes in preventing postoperative respiratory complications in various surgeries with low neuromuscular blockade requirements, including thoracic, laryngeal, and tracheoscopic procedures. However, the benefits of intravenous anesthesia with HFNC for elderly patients undergoing UHLL remain unexplored. The main objective of this randomized controlled trial was to answer the research question of whether intravenous anesthesia with HFNC, compared with laryngeal-mask-airway-assisted mechanical ventilation under general anesthesia, could improve postoperative recovery quality among elderly patients undergoing UHLL. Patients aged 60 to 85 years, with an American Society of Anesthesiologists (ASA) physical status of I–III, a body mass index (BMI) between 18 and 30 kg/m², and stones of 4 mm to 15 mm, who were undergoing elective UHLL, were recruited for this randomized controlled trial. The exclusion criteria included impacted stones, sepsis, sleep apnea syndrome, asthma, severe respiratory insufficiency, a history of COVID-19 infection, recent myocardial infarction, uncontrolled hypertension, high-risk coronary artery disease, severe hepatic and renal insufficiency, and gastroesophageal reflux.
Randomization (1:1 ratio, with block sizes of 2 and 4, stratified by sex) was performed using the Sealed Envelope online randomization tool ( https://www.sealedenvelope.com/simple-randomiser/v1/lists ). The randomization was stratified by sex because female patients tend to experience poorer postoperative recovery. The resulting random allocations were enclosed within sequentially numbered opaque envelopes and sealed before surgery began. After the induction of anesthesia, an independent researcher, who was unaware of the randomization process, opened the envelopes and assigned patients to either the HFNC group or the laryngeal mask airway (LMA) group. The anesthesiologists responsible for patient care were informed of the study medications and anesthesia method, while the surgeons and other members of the healthcare team remained uninformed. The subjects, clinicians (excluding the anesthesiologists), and investigators involved in patient recruitment and outcome assessment were completely blinded to the group assignments. The two assessors did not have access to anesthesia records and were not involved in direct patient care. Baseline HR, arterial blood pressure (via radial artery cannulation), 3-lead electrocardiography (ECG), peripheral oxygen saturation (SpO₂), and bispectral index (BIS; Medtronic, Minneapolis, MN, USA) were monitored throughout the procedure. After the subjects were positioned in the operating room, a single 10 ml dose of 2% lidocaine was administered and retained in the urethra for 10–15 min. In the LMA group, general anesthesia was induced using i.v. propofol 1–2 mg/kg and fentanyl citrate 0.5–1.5 µg/kg. Neuromuscular blockade was achieved with cisatracurium besylate 0.15 mg/kg for laryngeal mask placement. After laryngeal mask placement, mechanical ventilation was set to the IPPV (intermittent positive pressure ventilation) mode, with a tidal volume of 6 to 8 ml/kg, appropriate positive end-expiratory pressure, and a respiratory frequency of 12–16 breaths/min. A normal end-tidal carbon dioxide (CO₂) tension (35 to 45 mmHg) was maintained intraoperatively by adjusting the respiratory frequency and the tidal volume. Anesthesia was maintained using remifentanil hydrochloride 0.1–0.3 µg/kg/min and propofol 3–5 mg/kg/h, with adjustments to keep BIS values between 40 and 60. During surgery, cisatracurium 0.03 mg/kg bolus injections were administered as needed for neuromuscular blockade. After surgery, patients received reversal agents at the anesthetist's discretion. In the HFNC group, general anesthesia was induced using i.v. fentanyl 0.5–1.5 µg/kg and propofol 1–2 mg/kg. Anesthesia was maintained at the targeted depth (BIS 50–80) by manually adjusting i.v. propofol 1–2 mg/kg/h and remifentanil 0.1–0.3 µg/kg/min. No neuromuscular blockers were used, allowing subjects in the HFNC group to maintain spontaneous breathing. Oxygen was provided via high-flow nasal cannula with humidified oxygen therapy (30.0 L/min; FiO₂, 100%; gas temperature, 37.0 °C). If the pulse oxygen level fell below 93%, the attending anesthesiologists increased the oxygen flow to maintain normal oxygenation. Tracheal intubation could be considered if necessary for safety. If movement occurred during the operation, a single 5 ml dose of 2% lidocaine was administered and retained in the urethra for 3 min, and an additional 30–50 mg of propofol was given intravenously.
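A minimal sketch of 1:1 block randomization with block sizes of 2 and 4 within one stratum, mirroring the scheme described above; this is illustrative only (the trial used the Sealed Envelope tool), and the stratum sizes and seed are hypothetical.

```python
import random

def block_allocations(n_per_stratum, arms=("HFNC", "LMA"),
                      block_sizes=(2, 4), seed=2023):
    """Generate a 1:1 allocation list for one stratum (e.g., one sex)
    using randomly chosen permuted blocks of size 2 or 4."""
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_per_stratum:
        size = rng.choice(block_sizes)    # block size of 2 or 4
        block = list(arms) * (size // 2)  # balanced 1:1 within the block
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_per_stratum]

# One independent list per stratum (hypothetical counts).
male_list = block_allocations(60, seed=2023)
female_list = block_allocations(40, seed=2024)
```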
For both groups, dexamethasone (5 mg) and ondansetron (4 mg) were administered to prevent postoperative nausea and vomiting (PONV). Patients experiencing intraoperative blood pressure below 80% of the baseline value were administered intravenous ephedrine (5 mg) or phenylephrine (40 µg), or underwent rapid fluid replacement. After surgery, patients were transferred to the Post-Anesthesia Care Unit (PACU), where the level of anesthesia recovery was assessed using the Aldrete score. The primary outcome was the QoR-15 score after surgery, as defined by the QoR-15 questionnaire. This global assessment tool evaluates postoperative recovery across five dimensions: physical comfort (5 items), physical independence (2 items), emotional state (4 items), psychological support (2 items), and pain (2 items). Each item is rated on an 11-point scale, with higher scores indicating a greater frequency of positive outcomes and a lower frequency of negative outcomes. The overall score ranges from 0 (indicating the poorest quality of recovery) to 150 (indicating the best quality of recovery). Secondary outcomes primarily included the length of PACU stay, time to out-of-bed mobilization, airway dryness scores, the rates of postoperative sore throat, cough, and sputum, and surgeons' satisfaction (the three surgeons had equivalent professional titles, extensive clinical experience, and a rigorous approach to research). Safety outcomes were assessed, including hypoxemia (defined as a pulse oxygen level < 93% for at least 1 min), hypotension (defined as a reduction in mean arterial pressure (MAP) > 30% of the baseline value for at least 1 min), hypertension (defined as an increase in MAP > 30% of baseline for at least 1 min), bradycardia (defined as heart rate (HR) < 50 beats per minute for at least 1 min), and tachycardia (defined as HR > 100 beats per minute for at least 1 min) during surgery and in the PACU. Sedation in the PACU (defined as an Observer's Assessment of Alertness/Sedation Scale (OAA/S) score ≤ 3) and the occurrence of headache, dizziness, nightmares, or hallucinations within 0–24 h after surgery were also considered. Symptoms (nausea, vomiting, nightmares, hallucinations, and delirium) were documented during ward visits by blinded assessors. Hemodynamic events, interventions, sedation in the PACU, and the duration of PACU stay were recorded in the Surgical Anesthesia Information System (Hangzhou Zejin Information Technology Co., Ltd, Hangzhou, China). Postoperative opioid consumption, rescue analgesia, occurrences of sore throat, cough, and sputum, and the length of postoperative hospital stay were documented in electronic medical records and nursing notes. Seven-day follow-up data were obtained via telephone. All information was collected in the electronic case report form, which was reviewed by the principal investigator (JL) and an independent data monitoring committee. Our primary outcome was the quality of recovery (QoR-15) score 24 h post-surgery. The minimum clinically important difference (MCID) in the QoR-15 score after surgery is 8, and the standard deviation (SD) of QoR-15 scores (range, 0–150) is typically 10–16. We considered a difference in mean QoR-15 scores between groups of 8 to be clinically significant. We selected an SD of 13 to best reflect our study population. Assuming a two-sided α of 0.05 and a power of 80%, sample sizes were calculated using PASS 15 software, resulting in N1 = 43 for the LMA group and N2 = 43 for the HFNC group.
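The reported PASS calculation can be cross-checked from the stated assumptions (MCID = 8 points, SD = 13, i.e., Cohen's d ≈ 0.62; two-sided α = 0.05; power = 80%); a sketch in Python with statsmodels reproduces approximately 43 participants per group.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Stated assumptions: MCID = 8 points, SD = 13, two-sided alpha = 0.05,
# power = 0.80, for a two-sample t-test.
effect_size = 8 / 13  # Cohen's d ~= 0.615
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(math.ceil(n_per_group))  # ~43 per group, matching the PASS result
```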
Considering a possible dropout rate of 20%, a minimum of 53 subjects was required for each group, totaling at least 106 study subjects. The assessment of data distribution was conducted using the Shapiro-Wilk test, and results are presented as mean (standard deviation [SD]), median (interquartile range [IQR]), or number (%), as appropriate. Perioperative data and study outcomes were compared using the Mann-Whitney U test, chi-square test, or Fisher's exact test, as appropriate. The estimated effect size is reported as a risk difference or relative risk for binary outcomes, a hazard ratio for time-to-event outcomes, and mean or median differences for continuous data, with confidence intervals (CI). We defined subjects with BMI > 25 as overweight. We dichotomized the QoR-15 score at 24 h post-surgery into poorer early postoperative recovery (QoR-15 score < 121, indicating moderate-to-poor quality of recovery) and better early postoperative recovery (QoR-15 ≥ 121). Furthermore, prespecified subgroup analyses of the primary outcome were performed by sex (female vs. male), current smoking status (no vs. yes), and BMI (≤ 25 vs. > 25). For the secondary outcomes, multiple comparisons were not corrected; thus, these results should be interpreted as exploratory. No interim analysis or missing-data imputation was performed. The statistical analysis was performed using SPSS software (version 25.0, IBM SPSS Inc). A two-sided P-value of less than 0.05 was considered indicative of a statistically significant difference. From July 2023 to December 2023, a total of 106 patients were screened. Of these, 6 were excluded, and 100 were randomly assigned to either the HFNC or LMA group. All randomized patients received their assigned anesthesia regimens during surgery. Finally, 96 patients were included in the analysis; 3 were excluded because the operation time exceeded 1 h, and 1 patient required intubation. Fig. 1: Trial flow diagram (LMA, laryngeal mask airway; HFNC, high flow nasal cannula). The baseline characteristics of the two groups are presented in Table 1. The mean (SD) age was 73 (4) years in the HFNC group and 72 (3) years in the LMA group. BMI averaged 23.8 (2.3) kg/m² in the HFNC group and 24.4 (2.8) kg/m² in the LMA group. Most patients were classified as ASA physical status II (86.5%). In both groups, 60.4% (58 of 96) of patients were male. The two groups had comparable preoperative baseline cardiorespiratory function, and stone location and size were similar between the groups. Overall, patient demographics, surgical and anesthetic characteristics, laboratory biochemical indexes, and respiratory-related indexes were similar between the groups.
Table 1. Patients and baseline characteristics (LMA, n = 48 vs. HFNC, n = 48):
- Age, y: 72.0 (3.6) vs. 73.0 (4.0); P = 0.192
- BMI, kg/m²: 24.4 (2.8) vs. 23.8 (2.3); P = 0.234
- ASA status I/II/III: 2 (4.2%)/42 (87.5%)/4 (8.3%) vs. 1 (2.1%)/41 (85.4%)/6 (12.5%); P = 0.803
- Sex, male/female: 29 (60.4%)/19 (39.6%) vs. 29 (60.4%)/19 (39.6%); P > 0.99
- Smoking: 16 (33.3%) vs. 15 (31.3%); P = 0.827
- Breath-holding test < 30 s/≥ 30 s: 5 (10.4%)/43 (89.6%) vs. 8 (16.7%)/40 (83.3%); P = 0.371
- Pulmonary function, L: FVC 3.11 [2.7–3.4] vs. 3.06 [2.7–3.3], P = 0.965; FEV1 0.67 [0.6–0.7] vs. 0.67 [0.6–0.7], P = 0.437
- Cardiovascular comorbidity: hypertension 20 (41.7%) vs. 18 (37.5%), P = 0.676; CHF 0 vs. 0; arrhythmia 8 (16.7%) vs. 10 (20.8%), P = 0.601
- Hemoglobin, g/dL: 13.68 (1.3) vs. 13.81 (1.4); P = 0.628
- Stone location, proximal/mid/distal: 3 (6.3%)/9 (18.8%)/36 (75%) vs. 5 (10.4%)/8 (16.7%)/35 (72.9%); P = 0.883
- Stone size ≤ 10 mm/> 10 mm: 40 (83.3%)/8 (16.7%) vs. 42 (87.5%)/6 (12.5%); P = 0.563
- Preoperative QoR-15 score: 140.5 [138.25–143.75] vs. 141.0 [135.25–144.0]; P = 0.869
- Preoperative pH: 7.39 [7.36–7.42] vs. 7.36 [7.35–7.41]; P = 0.125
- Preoperative PaO₂, mmHg: 76.73 (4.84) vs. 76.10 (5.58); P = 0.367
- Preoperative P/F, mmHg: 365.38 (22.93) vs. 362.40 (26.58); P = 0.367
- Preoperative PaCO₂, mmHg: 39 [36–42] vs. 36 [35–41]; P = 0.097
- Preoperative RR: 16 [15–16] vs. 16 [16–16]; P = 0.388
- Preoperative airway dryness score, mouth: 4 [4–5] vs. 4 [4–4]; P = 0.054
- Preoperative airway dryness score, throat: 4 [4–4.8] vs. 4 [4–4]; P = 0.218
Data are mean (SD), median [IQR], or n (%). Abbreviations: SD, standard deviation; IQR, interquartile range; BMI, body mass index; FVC, forced vital capacity; FEV1, forced expiratory volume in 1 s; CHF, chronic heart failure; pH, potential of hydrogen; PaO₂, partial pressure of oxygen; P/F, PaO₂/fraction of inspired oxygen ratio; PaCO₂, arterial partial pressure of carbon dioxide; RR, respiration rate.

Table 2. Perioperative data (LMA, n = 48 vs. HFNC, n = 48; OR or difference (95% CI); P-value):
- Postoperative pH: 7.39 [7.36–7.41] vs. 7.38 [7.35–7.40]; 0.01 (0 to 0.02); P = 0.137
- Postoperative PaO₂, mmHg: 74 (5) vs. 75 (5); 1.07 (−2.93 to 1.31); P = 0.920
- Postoperative P/F, mmHg: 353 (25) vs. 357 (24); 5.08 (−13.96 to 6.22); P = 0.920
- Postoperative PaCO₂, mmHg: 44 [41–47] vs. 45 [42–48]; −1 (−3 to 0); P = 0.116
- Postoperative RR: 16 [16–16] vs. 16 [16–16]; 0 (0–0); P = 0.894
- Body movement: 1 (2.1%) vs. 3 (6.25%); 3.13 (0.31–31.25); P = 0.610
- Remifentanil consumption, µg: 131 [104–158] vs. 74 [49–112]; 49 (32–67); P < 0.001
- Fentanyl consumption, µg: 50 [46.3–60] vs. 55 [50–60]; 0 (−5 to 0); P = 0.293
- Propofol consumption, mg: 205 [183–227] vs. 169 [134–219]; 30 (10–47); P = 0.005
- Length of surgery, min: 31 (10) vs. 32 (10); −1.40 (−5.45 to 2.65); P = 0.495
- Stone-free rate: 100 vs. 100
Data are mean (SD), median [IQR], or n (%). Abbreviations: CI, confidence interval; other abbreviations as in Table 1.

Table 3. Primary outcomes, secondary outcomes, and safety outcomes (LMA, n = 48 vs. HFNC, n = 48; OR or difference (95% CI); P-value):
Primary outcome (QoR-15 score):
- POD1: 125.5 [118.3–130.0] vs. 136.5 [126.3–139]; −9 (−11 to −5); P < 0.001
- POD2: 134.5 [123.3–146.5] vs. 141.5 [125.8–145]; −1 (−7 to 2); P = 0.523
- POD3: 142.5 [135–145] vs. 141.5 [132–149]; −1 (−4 to 3); P = 0.577
- POD7: 142.5 [135.3–145] vs. 142 [138–149]; −1 (−4 to 1); P = 0.270
Secondary outcomes:
- Duration of PACU stay, min: 24.8 (3.1) vs. 13.2 (2.8); 11.60 (10.42–12.79); P < 0.001
- Time to first out-of-bed, min: 94.9 (3.2) vs. 63.1 (3.1); 31.81 (30.57–33.05); P < 0.001
- Length of hospital stay, d: 2 [2–2] vs. 2 [2–2]; 0 (0–0); P = 0.849
- Sore throat: 7 (14.6%) vs. 0 (0%); P = 0.019
- Cough and phlegm: 8 (16.7%) vs. 1 (2.1%); 9.4 (1.13–78.41); P = 0.036
- Airway dryness score, mouth: 30 min after surgery 5 [4.3–6] vs. 3 [2–4], 2 (1–3), P < 0.001; 1 h after surgery 4 [3–5.8] vs. 3.5 [2–5], 1 (0–1), P = 0.058; 3 h after surgery 2 [2–2] vs. 2 [1–2], 0 (0–0), P = 0.096
- Airway dryness score, throat: 30 min after surgery 5 [4–7] vs. 3 [2–4], 2 (1–3), P < 0.001; 1 h after surgery 4 [3–4.8] vs. 4 [3.3–4], 0 (0–0), P = 0.919; 3 h after surgery 2 [2–2.8] vs. 2 [1–2], 0 (0–0), P = 0.109
- Surgeons' satisfaction (P = 0.504): not satisfied 0 vs. 0; satisfied 6 (12.5%) vs. 4 (8.3%); totally satisfied 42 (87.5%) vs. 44 (91.7%)
Safety outcomes:
- Hypotension: 5 (10.4%) vs. 4 (8.3%); 0.782 (0.197–0.859); P > 0.99
- Bradycardia: 3 (6.3%) vs. 4 (8.3%); 1.364 (0.288–6.448); P > 0.99
- Hypertension: 1 (2.1%) vs. 2 (4.2%); 2.043 (0.179–23.319); P > 0.99
- Tachycardia: 2 (4.2%) vs. 3 (6.3%); 1.533 (0.245–9.614); P > 0.99
- Interventions for haemodynamic events: 9 (18.8%) vs. 7 (14.6%); 0.740 (0.251–2.180); P = 0.584
- Hypoxemia: 2 (4.2%) vs. 3 (6.25%); 1.53 (0.45–9.61); P > 0.99
- Sedation in PACU: 4 (8.3%) vs. 2 (4.2%); 0.478 (0.083–2.744); P = 0.673
- PONV within 0–48 h: 3 (6.3%) vs. 2 (4.2%); 0.652 (0.104–4.089); P > 0.99
- Nightmare or hallucination: 0 (0) vs. 0 (0)
- Delirium: 0 (0) vs. 0 (0)
Data are mean (SD), median [IQR], or n (%). Abbreviations: CI, confidence interval; QoR-15, Quality of Recovery-15 questionnaire; PACU, post-anesthesia care unit; PONV, postoperative nausea and vomiting.

Compared with the LMA group, the HFNC group had lower opioid consumption throughout anesthesia and surgery. In the HFNC group, the median [IQR] dose of remifentanil was 74 [49–112] µg; in the LMA group, it was 131 [104–158] µg. The mean length of surgery was 31 (10) min in the LMA group and 32 (10) min in the HFNC group. Body movement occurred in three patients in the HFNC group and in one patient in the LMA group; this difference was not statistically significant. In terms of lung function assessment, the two groups had comparable results, including pH (potential of hydrogen), PaO₂ (partial pressure of oxygen), P/F (oxygenation index), PaCO₂ (partial pressure of carbon dioxide), and RR (respiration rate) during the perioperative period (Table 2). The primary outcomes are presented in Table 3 and Fig. 2. Compared with the LMA group, the HFNC group had a significantly higher postoperative QoR-15 score (125.5 [118.3–130.0] vs. 136.5 [126.3–139.0]; median difference = −9; 95% confidence interval [CI], −11 to −5; P < 0.001) in the first 24 h post-surgery. The differences were not statistically significant on postoperative days 2, 3, and 7. Fig. 2: Comparison of the QoR-15 scores of the LMA and HFNC groups at PRE, POD1, POD2, POD3, and POD7. Data are expressed as median (horizontal bar), interquartile range (box), and outliers (points). QoR-15, Quality of Recovery-15 questionnaire; LMA, laryngeal mask airway; HFNC, high flow nasal cannula; PRE, pre-operation; POD, postoperative day. The length of PACU stay was longer in the LMA group (mean difference = 11.6 min; 95% CI, 10.4–12.8 min). The time to first out-of-bed mobilization in the HFNC group was significantly shorter than in the LMA group (mean difference = 31.8 min; 95% CI, 30.6–33.1 min). There was no significant difference in the length of postoperative hospital stay or surgeons' satisfaction between the two groups. Compared with the LMA group, airway dryness scores (mouth and throat) 30 min after surgery were lower in the HFNC group. During the first 24 h after surgery, 7 patients (14.6%) in the LMA group experienced sore throat compared with none in the HFNC group (P = 0.019). Cough and sputum occurred in 8 patients (16.7%) in the LMA group vs. 1 patient (2.1%) in the HFNC group (OR = 9.4; 95% CI, 1.1–78.4; P = 0.036). All safety outcomes exhibited no significant differences between the two groups (Table 3).
Hypotension occurred in 8.3% of subjects in the HFNC group and 10.4% in the LMA group, with bradycardia occurring infrequently in both groups. Transient hypoxia events during the operation were treated in 6.3% of patients in the HFNC group and 4.2% in the LMA group. Additionally, two patients in the LMA group developed postoperative hypoxia upon removal of the laryngeal mask. In predefined subgroup analyses, the two groups were comparable in terms of treatment effects on postoperative QoR-15 scores within the subgroups of sex (female vs. male, P = 0.649), smoking status (no vs. yes, P = 0.882), and BMI (≤ 25 vs. > 25, P = 0.577). Fig. 3: Subgroup analysis of the QoR-15 scores at POD1 (BMI, body mass index; CI, confidence interval). In this randomized controlled trial, we demonstrated that monitored anesthesia care with HFNC led to a statistically and clinically significant increase in the quality of postoperative recovery within the first 24 h among older patients undergoing UHLL, compared with laryngeal mask airway anesthesia. Additionally, monitored anesthesia care with HFNC decreased opioid consumption, airway dryness scores, the length of PACU stay, and the time to out-of-bed mobilization. The safety outcomes were comparable between the two anesthesia regimens. Over the past few years, HFNC oxygen therapy has been applied in various medical procedures, including anesthesia induction, sedation, gastroenteroscopy, and fiberoptic bronchoscopy. Additionally, a randomized study indicated that HFNC enhances arterial oxygenation during one-lung ventilation in non-intubated thoracoscopic surgery. Transnasal Humidified Rapid-Insufflation Ventilatory Exchange (THRIVE), a form of HFNC, could even keep patients with mild systemic disease and a BMI < 30 well oxygenated for up to 30 min. These studies suggest that HFNC oxygen therapy is feasible and safe for surgery, potentially providing optimal oxygenation levels and positive airway pressure while maintaining the integrity of the ciliary mucous system. Moreover, another study showed that HFNC had the potential to reduce postoperative recovery time, decrease the occurrence of agitation, and enhance lung function and oxygenation status during the anesthesia recovery period. In our study, 48 elderly patients received monitored anesthesia care with HFNC; all successfully completed the operation, and the success rate of lithotripsy was 100%. A BIS level between 40 and 60 is typically considered the ideal anesthesia state. Monitored anesthesia care with HFNC provides smoother airway management than laryngeal mask-assisted mechanical ventilation under general anesthesia, and the risk of laryngospasm is much lower with HFNC than with an LMA. Therefore, in our study, we implemented a novel strategy of maintaining a higher BIS range of 50–80 in the HFNC group. Research suggests that maintaining BIS in the ranges of 61–70 and 71–80 can lead to reduced propofol usage during surgery, better quality of recovery among elderly patients, and fewer incidents of hypotension. Based on experience with ureteroscopic lithotripsy using 100 µg fentanyl induction under sedation and anesthesia, elderly patients require appropriate dosage adjustments; the median [IQR] fentanyl dose was 55 [50–60] µg in the HFNC group and 50 [46.25–60] µg in the LMA group. We opted to combine remifentanil with propofol for anesthesia maintenance.
This pairing offers fast action, rapid metabolism, easy control, and a good safety profile, and it decreases postoperative fatigue, thereby promoting faster recovery. The primary outcome in our trial used the QoR-15, which has demonstrated strong efficacy, reliability, and sensitivity. Chazapis et al. reported that most patients' QoR-15 scores had returned to baseline levels by 48 h postoperatively and exceeded pre-surgery levels by seven days. Hence, focusing on QoR-15 scores obtained before surgery or within the first seven days post-surgery yields more accurate results. Our trial findings provide compelling clinical evidence that monitored anesthesia care with HFNC notably elevates the quality of recovery within 24 h postoperatively compared with the LMA group. The secondary outcomes also indicated that monitored anesthesia care with HFNC could enhance recovery within the first 24 h after surgery in older patients undergoing UHLL. A certain level of neuromuscular blockade is often needed when using an LMA during general anesthesia. However, insufficient recovery from neuromuscular blocking drugs is linked to adverse outcomes, such as upper airway obstruction, reintubation, atelectasis, pneumonia, prolonged stay in the PACU, and reduced patient satisfaction. Additionally, the increased use of opioids during anesthesia might adversely impact the immune function of older patients, potentially detracting from the quality of early recovery. The findings in Table 3 indicate a notable decrease in airway dryness scores, postoperative sore throat, and postoperative cough and sputum in the HFNC group. Moreover, patients in this group demonstrated improved communication with medical staff, relatives, and friends during the early postoperative period. These positive changes enhanced the quality of early recovery in three QoR-15 clusters: psychological and emotional state, pain, and physiological adaptability. The length of PACU stay and the time to out-of-bed mobilization were both reduced in the HFNC group, which is essential for the expulsion of residual crushed stones in the urine after UHLL. The QoR-15 has strong construct validity, revealing a negative correlation with both the duration of surgery and total opioid use. Thus, we standardized the duration of surgery in this study to minimize bias between the groups. In line with reported studies, our results showed that monitored anesthesia care with HFNC could decrease total opioid use, enhancing recovery after surgery. In addition, previous evidence indicates that women experience worse postoperative recovery [37, 38]. To avoid sex bias, we performed a prespecified stratified analysis, which showed that the treatment effects of the LMA versus HFNC groups on QoR-15 scores did not differ between the sex subgroups. This study has several limitations. Firstly, it was conducted at a single hospital, raising questions about its generalizability beyond this specific setting; a further protocol for a multi-center trial is planned. Secondly, only ASA I–III patients with a BMI below 30 were included, limiting the applicability of the findings to patients with advanced cardiopulmonary disease or a higher BMI. Thirdly, only the LMA group received neuromuscular blocking drugs; the better outcome in the HFNC group could therefore be due to these patients not having received any neuromuscular blocking agents.
Although reversal agents could be used in the study protocol at the anesthesiologist's discretion, the possibility of residual neuromuscular blockade remains. In future studies, the LMA group could be managed without neuromuscular blocking agents, which would rule this out as a potential confounding factor. Lastly, we did not measure transcutaneous carbon dioxide in real time during surgery to detect hypercapnia, as this study mainly focused on postoperative recovery. However, previous research has shown that HFNC applied to spontaneously breathing patients can increase CO₂ clearance and decrease the respiratory rate through dead-space washout. In conclusion, monitored anesthesia care with HFNC can improve the postoperative recovery quality of older patients undergoing UHLL and decrease opioid use, airway dryness scores, the length of PACU stay, and the time to out-of-bed mobilization. This regimen is feasible, safe, and beneficial for UHLL.
PMC11694379
Telemedicine is the use of communication technology to provide healthcare services remotely. With the rise in technological advancements, the sharing of medical information over large distances has progressed through telegraphs, telephones, and the internet. In modern times, healthcare providers can deliver care directly to patients in the comfort of their own homes via live chat services that allow real-time one-on-one communication. Through various telemedicine methods, including telephone calls and electronic health records, patients can schedule appointments, access their medical history, and communicate with their providers. This can be both time- and cost-effective for the patient, as it removes the barrier of physically traveling to the provider's office to receive information and care that could otherwise be communicated virtually. One month after the Centers for Disease Control and Prevention declared COVID-19 to be a pandemic in April 2020, the number of telemedicine insurance claims in the United States (US) increased from 0.15% to 13%, as healthcare departments transitioned from in-person to remote encounters due to concerns regarding SARS-CoV-2. Prior to the pandemic, however, telemedicine utilization was already on the rise, increasing in US hospitals by 41% from 2010 to 2017, while also being employed by 61% of healthcare institutions across the nation. This can partly be explained by the modality's affordability (versus in-person encounters) and convenience, among other benefits. However, there are concerns regarding the efficacy of some forms of telemedicine. For instance, the direct-to-consumer version of the modality does not necessarily rely on a patient's usual clinician, and it may lack appropriate medical tests and equipment, thereby limiting the data available for making diagnoses and aptly prescribing medications. Similarly limiting providers' ability to serve them well, patients have also been noted to downplay the signs and symptoms of their ailment during telephone consultations. Nonetheless, as telemedicine's quality continues to be refined and its utilization expands, it is prudent to begin identifying the interface traits that patients value, to maximize its scope of use. This is particularly relevant for the field of rural vascular surgery, as a recent study has indicated that the modality is acceptable for the follow-up care of chronic venous disease (CVD). Generally speaking, telemedicine services can deliver high-quality care for patients with venous disease in a safe and coordinated manner, and varicose vein patients have demonstrated high satisfaction with telemedicine over the traditional healthcare delivery model. Given telemedicine's affordability, it could help address CVD treatment costs, estimated at around $150 million to $3 billion, for the approximately 40% of Americans affected by the condition. Additionally, telemedicine can improve the quality of care and access in rural areas, but limited evidence is available on how telemedicine is perceived by these patients. Although previous studies have focused on telemedicine's efficacy in specific fields and on patient satisfaction, to the best of our knowledge, few studies have evaluated which traditional healthcare aspects rural patients would appreciate within the interface.
Moreover, systematic reviews have found no substantial differences between telephone and video telehealth appointments, especially with regard to clinical effectiveness and patient satisfaction . During the early weeks of telemedicine's expansion, telephone visits were the most common form of the modality . As time progressed, video visits became more prevalent; however, they have also become a barrier for underserved populations . Patients using video visits have been more likely to be White, enrolled in commercial insurance, and living in areas with higher income and broadband access, whereas patients who are older than 65 years, Black, Hispanic, or from areas with low broadband access are less likely to use video visits . Compared with telephone visits, video visits require a more complex setup and broadband internet access, which may present barriers for older adults, racial/ethnic minorities, and those with limited English proficiency (LEP) . Accordingly, Valley Vein Health Center (VVHC) created a patient survey to gain insight into rural CVD patients' views on telemedicine delivered through telephone visits. This paper aims to highlight variables that rural patients with CVD rated favorably when using telemedicine services. The study was conducted from January to February 2021 at VVHC, a rural outpatient clinic with seven separate locations serving Central California. All surveys were conducted in accordance with relevant guidelines and regulations approved by the Valley Vein Health Center Ethics and Institutional Review Board (IRB) Committee. A convenience sampling method, using a predetermined survey, was used with an informed consent process that was a voluntary, opt-in, consent-by-completion approach for all subjects. The survey was developed through a comprehensive review of primary research and review articles from international sources, including data from the United States, the United Kingdom, and Iran, which had previously identified variables contributing to in-person care hesitancy. The 21 survey items were constructed based on these established variables. Specifically, 18 of the items were derived from the most prevalent in-person care hesitancy variables, which had previously been grouped in the literature into three major categories: provider confidence, patient-physician rapport, and treatment accessibility. These categorizations were supported by existing research findings and were used as the basis for structuring the survey. The remaining three items were designed to assess patient opinions specific to their individual healthcare encounters. Overall, these predetermined survey questions were intended to mitigate the challenge of receiving incomplete questionnaires (i.e., surveys returned with many unanswered items), which may have arisen from free-response questions. The survey was tested and reviewed by 15 staff members and 30 patients for clarity and validity before a final set of questions was approved. Additionally, a statistician, a patient advocate, and VVHC's two clinicians reviewed the questionnaire for appropriateness. The finalized survey had two versions available in English and Spanish: one for patients who completed their appointment and one for patients who canceled/delayed it. The latter survey contained statements that generally opposed those found in the "completed appointment" survey.
For the ranking of individual survey items, a five-point Likert rating scale was employed: a rating of five indicated that a statement was favorable or true, whereas a rating of one meant that a statement was unimportant or false (see Appendix 1). Using a convenience sampling method with a voluntary, opt-in, consent-by-completion approach, 153 distinct patients who had attended (n = 120) or canceled/rescheduled (n = 33) an over-the-phone telemedicine appointment completed the survey. Questionnaires were administered over the phone, either before or after patients' scheduled consultations with their provider, by the research assistants at VVHC as well as by a researcher (EG). For the cancellation/rescheduling surveys, the research assistants and researcher (EG) contacted patients who had delayed a healthcare encounter at VVHC during the year preceding the study's conclusion date. All surveys were conducted over the phone, and the reasons for the study, the questions it hoped to answer, and its benefits were explained to patients. Paired difference t-tests were utilized to compare the general variable categories to one another. A t-test of the correlation between each pair of categories (i.e., confidence vs. rapport, confidence vs. accessibility, and rapport vs. accessibility) was employed to determine whether an observed correlation was significantly different from zero. Additionally, least significant difference (LSD) tests modeled after Bonferroni's LSD in analysis of variance (ANOVA) were applied to the survey data to determine statistical significance when comparing the favorability of individual categories to one another. Patients surveyed at VVHC were 82.4% female (n = 127) and 17.6% male (n = 26); 69.3% were Hispanic/Latino (n = 106), 22.2% were White (non-Hispanic/Latino) (n = 34), 3.3% were Indian/Pakistani/Punjabi (n = 5), 2% were Black/African American (n = 3), 2% were Asian (n = 3), and 1.2% were American Indian/Alaska Native (n = 2). Of these individuals, 24.9% (n = 38) completed an English version of the questionnaire, while 75.1% (n = 115) completed a Spanish version. The following proportions of patients belonged to the respective age groups: 3.9% were 18–30 (n = 6), 15.7% were 31–40 (n = 24), 21.6% were 41–50 (n = 33), 26.8% were 51–60 (n = 41), 21.6% were 61–70 (n = 33), 8.4% were 71–80 (n = 13), and 2% were 81+ (n = 3). Overall, 6.5% (n = 10) were new to our telemedicine services, while 93.5% (n = 143) were returning telemedicine patients. When seeking to identify the top patient preferences, LSD tests revealed variables 1–14 (see Table 1) to be significantly different from variables 15–18, although there were no clear distinctions within each group of variables (1–14 and 15–18). There are some negligible differences among the variables at the edges of 1–14, but these individual statements' statistical overlap essentially leaves them as a single large group whose items differ only in non-significant or barely significant ways. For the favorability of these traits to be statistically different from one another, the difference between the means of the individual items being compared (e.g., 1 vs. 2, 1 vs. 3, or 2 vs. 3, etc.) must be greater than the LSD of 0.242 for variables 1–14; the aforementioned means refer to the overall average Likert rating that a distinct statement received.
Similarly, for any item to be statistically distinguishable from traits 15–18, the difference between the means of the individual statements being compared must be greater than the LSD of 0.403. Nonetheless, based on the mean Likert ratings of patient responses, the top favored variables are: services with providers who are kind and helpful (n = 119, x̄ = 4.966), visits with physicians who are considered knowledgeable (n = 119, x̄ = 4.958), the implementation of technology that acknowledges COVID-19 safety concerns (n = 119, x̄ = 4.950), the provision of care by staff who are considered knowledgeable (n = 120, x̄ = 4.917), and appointments with physicians and staff who are perceived as trustworthy (n = 119, x̄ = 4.916). The mean Likert values for all variables, as well as examples of significant values calculated from the difference between these means, are summarized in Table 1 (variables valued by patients who completed telemedicine visits versus those who did not complete telemedicine visits). When reviewing the patient opinion statements, our data suggested a belief that telemedicine encounters were as good as in-person visits (n = 118, x̄ = 4.932) and that such encounters provided patients with the confidence to proceed with future, in-person vein treatments (n = 117, x̄ = 4.744). Additionally, patients expressed feeling as though their personal information was safe (n = 117, x̄ = 4.897). Overall, completed surveys revealed that telemedicine is a promising modality for phlebology consultations (n = 118, x̄ = 4.814), with only three respondents indicating that they are unlikely to use such services in the future. In reviewing the mean Likert ratings, patients indicated some accessibility variables to be more important than others; the more important ones were termed "novel," since these variables have not been widely noted by previous studies to be valued by patients who utilize telemedicine. The less appreciated variables were termed "base," as they generally coincide with variables that were prevalent in past studies. Novel accessibility variables include ease in contacting providers and flexibility in encounter availability. Base accessibility variables include patient referrals, appointments not requiring a commute, the provision of devices to facilitate telemedicine interaction (if necessary), and financial assistance for the service. These and all other survey items are individually listed in Table 1. Additionally, paired difference t-tests suggest that patients generally placed less value on accessibility variable statements than on patient-physician rapport and confidence variable statements. When comparing the novel and base accessibility variable statements to the confidence variable statements, patients indicated the latter to be more valued than either of the former (x̄d = 0.076, SD = 0.399, t = 2.082, p = 0.039 and x̄d = 0.739, SD = 0.924, t = 8.763, p = 1.563E-14, respectively; x̄d: mean of the differences of the average Likert ratings that the two compared variable statements received from each patient). When comparing the patient-physician rapport variable statements to the base accessibility variable statements, patients indicated the rapport variable statements to be more favored (x̄d = 0.713, SD = 0.908, t = 8.606, p = 3.641E-14).
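The comparison machinery just described (paired difference t-tests between category ratings and LSD checks between item means) can be illustrated with a short Python sketch. This is not the authors' code: the rating vectors are simulated placeholders, and only the LSD threshold of 0.242 and the two example item means are taken from the results above.

```python
# Minimal sketch of the comparisons described above (simulated placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
confidence = rng.normal(4.9, 0.3, 119).clip(1, 5)     # per-patient category means
accessibility = rng.normal(4.2, 0.9, 119).clip(1, 5)

# Paired difference t-test between two variable categories, as in the text:
t, p = stats.ttest_rel(confidence, accessibility)
print(f"mean difference = {np.mean(confidence - accessibility):.3f}, "
      f"t = {t:.3f}, p = {p:.3g}")

# LSD-style check between two individual items (means from the results above):
LSD = 0.242                       # threshold reported for variables 1-14
mean_kind_helpful, mean_trustworthy = 4.966, 4.916
print("statistically distinct:", abs(mean_kind_helpful - mean_trustworthy) > LSD)
```

Consistent with the reported results, the two top-rated items differ by far less than the LSD, so they cannot be distinguished statistically.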
Lastly, when comparing the confidence and rapport variable statements to one another, as well as the rapport and novel accessibility variable statements, there was no statistically significant difference between either compared group (x̄d = 0.026, SD = 0.292, t = 0.968, p = 0.335 and x̄d = 0.050, SD = 0.370, t = 1.481, p = 0.141, respectively). This suggests that patients believed these pairs of variable categories were approximately equally valuable when utilizing telemedicine through telephone appointments. When analyzing the relationships between the general classes of variable categories (i.e., patient-physician rapport vs. confidence vs. accessibility), t-tests of correlations revealed the following values: confidence and rapport (r = 0.616, t = 10.790, p = 2.651E-19), confidence and novel accessibility (r = 0.525, t = 7.882, p = 1.789E-12), and rapport and novel accessibility (r = 0.612, t = 10.643, p = 5.938E-19). However, confidence and base accessibility showed a much lower correlation (r = 0.112, t = 1.238, p = 0.218), and rapport and base accessibility also had a relatively weak correlation (r = 0.182, t = 2.040, p = 0.044). These results suggest that although some variables from a given category were individually ranked as being more valued than others, no one class of variables is independently favored over another, which is to say that patients appreciate the presence of all listed variables in a telemedicine service. Finally, although not statistically significant, the cancellation/rescheduling surveys suggest a possible trend that patients may be more likely to cancel/delay their telemedicine appointment if hours of operation are inflexible (n = 33, x̄ = 1.970), if they are unable to pay for the encounter (n = 33, x̄ = 1.879), or if their provider is not easily accessible (n = 33, x̄ = 1.788). Further research is needed to confirm this trend. The study's purpose was to evaluate patients' views on novel accessibility variables of telemedicine and the attributes of telemedicine that resonate with patients. One of this modality's key benefits is accessibility: patients do not need to physically enter a facility to receive medical care. Rather, with the appropriate resources, they can have remote encounters that meet their needs. However, our results suggest that accessibility is not solely about time and distance commuted, but also about flexible scheduling and how easily a medical professional can be reached. As such, the utilization of telemedicine seems to follow the low-hanging fruit principle, in which people tend to fulfill the tasks that are easy or convenient before attempting more difficult ones. Thus, the use of telemedicine services can result in patients being more consistent with their doctor visits and, possibly, more compliant with their medical follow-ups. Our findings also indicate that the more valued accessibility elements are those that give patients more control over appointment scheduling and the frequency with which telemedicine encounters/communications are available. This may be especially true for lower socioeconomic populations who may not have the ability to forgo a day of work or afford childcare to attend a healthcare appointment . Telemedicine allows patients to access medical care on their own timetable, rather than having to coordinate their personal life around their medical condition. With such freedom, appointments would become less burdensome and patients would be more likely to attend them.
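The t statistics reported for these correlations follow from the standard test of whether a correlation coefficient differs from zero, t = r * sqrt(n - 2) / sqrt(1 - r^2), with n - 2 degrees of freedom. A small sketch of that test (the per-pair sample size here is an assumption, since the exact n is not restated):

```python
# Sketch of a t-test for whether a correlation differs from zero.
import math
from scipy import stats

def correlation_t_test(r: float, n: int):
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
    p = 2 * stats.t.sf(abs(t), df=n - 2)   # two-sided p-value
    return t, p

# Illustrative: the confidence-rapport correlation reported above, with an
# assumed n of 120 completed-survey respondents.
t, p = correlation_t_test(r=0.616, n=120)
print(f"t = {t:.3f}, p = {p:.3g}")
```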
In turn, this would help remedy previously identified traditional healthcare barriers, which include patients being too busy to schedule/attend an appointment and a lack of patient access to treatments during regular hours of operation [20–23]. As such, it is advisable for telemedicine providers to survey their patient base to better understand their availability and thereby promote the service's utility. However, healthcare systems should also consider patients' digital literacy and whether they have the necessary resources to use the modality. Thus, further investigation is necessary to identify feasible methods for directly assisting disadvantaged patients. Three of the top five patient-valued variables belonged to the confidence category, suggesting that confidence in a provider is an attribute patients are keen to possess through telemedicine care. In descending order of mean Likert ratings, these variables include services with (1) physicians and (2) staff who are perceived to be knowledgeable, and (3) physicians and staff who are perceived as trustworthy. These findings support those of previous studies [22, 24–26], which emphasize that medical personnel's skillful demonstration of clinical knowledge is attractive when individuals select a particular healthcare institution. As such, telemedicine providers should be urged to invest time in establishing confidence between themselves and their patients. One potential method involves physicians incorporating positive communication behaviors (e.g., providing opportunities for patient engagement, encouraging patients, and ensuring patients understand diagnoses) into their practice. Past studies found the use of such techniques to be directly correlated with patients' perception of medical providers as competent, trustworthy, and kind . Each of these latter traits, as uncovered by McCroskey and Teven , ultimately contributes to credibility. With a heightened perception of physician credibility, Paulsel et al. state that overall patient satisfaction with a service will increase, while patients' perception of their quality of care also improves. Therefore, by taking time to apply positive communication techniques, physicians can demonstrate to their patients that they are well qualified to tend to their medical needs. This is also the case with telemedicine visits, as our findings indicate that the same confidence-based variables are the most valuable for patients during their visits. In turn, positive communication techniques would help ameliorate previously identified in-person care hesitancy variables, which include fears of being misdiagnosed, subjected to unnecessary tests, and prescribed unnecessary medications . Although physicians may feel as though they lack the time to incorporate such confidence- and rapport-building measures into their practice, Desjarlais-deKlerk and Wallace discovered that doing so would take roughly the same time as it would if physicians strictly dispensed the required information and limited patients' engagement in an interaction . With some forms of telemedicine being inherently less personal (e.g., video or phone calls), failing to incorporate such measures can hinder medical providers' ability to establish a confident and longitudinal relationship with their patients.
Yet, an additional benefit of granting patients the opportunity to engage during medical encounters is that it leads them to feel respected and as though their concerns have been acknowledged . Meeting these latter two conditions was of great value for our study's respondents, with the provision of care by physicians who are kind and helpful in addressing patients' needs being the highest-rated variable of our questionnaire. According to Moore et al. , such rapport-building practices are invariably important to the patient-physician relationship. This is especially true with new patients, as an individual's most recent experience with a medical provider influences whether they will continue pursuing future care with them. Furthermore, a favorable perception of their physician and their interactions tends to lead to increased levels of compliance as patients progress to the following steps of their care plan . Thus, to ensure the success of a telemedicine service (which is inherently less personal than in-person care), we reiterate the importance of exhibiting positive communication behaviors during all encounters. In conclusion, our results suggest that rural patients favor the following general characteristics in telemedicine: flexible encounters and providers who strive to build trust and rapport. Future research to further understand why patients attend, cancel, or reschedule appointments using a free-response method is recommended, in addition to the assessment of patients' technological fluency, to improve the efficacy and reach of telemedicine visits. One study limitation is participant response bias, as surveys were not administered anonymously and those conducted for the "completed appointment" category were done rather close in time to a patient's appointment. Questionnaires could also have been written in a more neutral manner, ensuring identical questions across versions. This study also did not assess the extent of patients' previous experiences with telemedicine, which may have influenced their views. Additionally, our sample size for the cancellation/rescheduling survey was too small to attain significant results. Finally, while our research primarily focused on patient views of telemedicine, it did not explore medical providers' perspectives. Additional studies should be conducted to tease out their views on its accessibility benefits, the feasibility of meaningful rapport-building through such services, and whether telemedicine can be an effective adjunct to their field. Supplementary Material 1.
PMC11694382
The field of health policy and systems research (HPSR) has grown considerably over the past two decades . Substantial growth has been seen in donor funding, publications and research capacity globally . Since its inception in 1999, the Alliance for Health Policy and Systems Research (the Alliance) has been promoting the generation and use of HPSR as a means to strengthen health systems in low- and middle-income countries (LMICs) . The Alliance has been a key funder of HPSR, supporting close to 1600 researchers in 79 LMICs over the last 20 years . During its first 10 years of operation, the Alliance worked specifically to develop the research capacity of younger researchers in LMICs through its grants . In the past decade, the Alliance has broadened its focus but maintained a strong emphasis on the generation of knowledge in HPSR . Given the rapid growth of HPSR, it is important to monitor the research environment, especially the evolution of HPSR research outputs in LMICs. Bibliometrics is "the application of mathematical and statistical methods to books and other media of communication" and can be used to study the productivity of scientific literature on a given topic . Bibliometric analysis is useful for revealing historical trends, evaluating the strengths and weaknesses of an evidence base, and tracing the emergence of new disciplines, and is thus appropriate for evaluating HPSR knowledge production . Bibliometric analyses have been used to track literature development in the health policy , health services research and health systems fields. A few previous studies have examined trends in HPSR research output and the structural properties of the HPSR coauthorship network [1, 11–13]. The first bibliometric analysis of HPSR publications found that publications increased from 2003 to 2009, but only 10% of publications were from LMICs, and of those, only 4% were led by authors from low-income countries . A second analysis covered the longer period of 1990–2015, reporting that the pace of output of HPSR papers with an LMIC topic and LMIC lead author was greater than the rate of increase of PubMed publications overall . The coauthorship network analysis from that investigation found that the global connectivity and lead authorship of upper-middle-income country authors publishing on HPSR were comparable to those of high-income country authors, while low-income and lower-middle-income country authors were lagging behind . Finally, HPSR publications in 47 LMICs were investigated for the period 2010–2020, where an increasing trend was seen and India, China and Brazil had the highest numbers of publications . The present study is part of a larger study in which we were engaged by the Alliance to assess the impact and contributions of the Alliance's HPSR projects in 11 LMICs where the Alliance's funding was concentrated during the past two decades . For this study, we applied bibliometric methods to evaluate HPSR scientific output over time. The overall goal of the study was to generate quantitative metrics to assess the evolution of HPSR publications in the 11 target countries. We also wanted to assess the role of the Alliance's grant support in HPSR growth over time. Evaluation of HPSR evidence production, especially in LMIC settings, is crucial to understand the growth of the discipline, assess gaps and plan for future investment. We developed a search strategy in consultation with a library information specialist at the George Washington University.
We used broad keywords to ensure the inclusion of the maximum number of publications. Our search terms included the names of the 11 target countries (“Brazil” OR “China” OR “Ghana” OR “India” OR “Lebanon” OR “Mexico” OR “Nigeria” OR “Pakistan” OR “South Africa” OR “Uganda” OR “Vietnam”) in combination with three terms for HPSR (“health policy and systems research” OR “health systems” OR “health policy research”). All searches were conducted in July 2020. We ran searches in PubMed, Global Health and Global Index Medicus for the dates 1 January 1999 to 1 May 2020; the end date was selected to allow us to capture as many publications as possible. For Google Scholar, studies recommend using the first 200–300 most relevant search results . We retrieved a large number of records from Google Scholar so we decided to screen the first 1500 records sorted by relevance, and include in the analysis the first 999 most relevant records. The search results were transferred to Covidence , a well-recognized web-based platform used for the management of search results and data extraction for systematic and scoping literature reviews. Title and abstract screening were conducted independently by three team members (EKC, SK and HER) with a fourth team member (NP) reviewing and resolving conflicts. Random checks in which the three reviewers independently categorized the same set of records were conducted to ensure interrater reliability. Publications were included if an abstract was available in English, if the date of publication was between 1 January 1999 and 1 May 2020, if the content was one of the 11 target countries and if the content was HPSR. HPSR was categorized into eight thematic areas: governance and leadership, health policy, health workforce, health financing, health information, medicines and supplies, health systems and health policy research. These categories were loosely based on the WHO’s six building blocks of health systems . We made a distinction between HPSR content (which was included) and health services research (HSR) content (which was excluded). We categorized HSR according to the framework from Sheikh et al. to include studies focused on the effectiveness of specific health interventions, clinical service delivery, translation of research into clinical practice and implementation science. This allowed the setting of manageable boundaries for our study and was consistent with framing HPSR as a field shaped by questions rather than by specific methodologies . Records were excluded for the following reasons consistent with the outlined inclusion criteria: the focus of the publication was HSR and not HPSR, the title and abstract were in a language other than English, the publication was not related to HPSR, the publication was not related to any of the 11 target countries, no abstract was available for the publication or the date of publication was outside of the study period. During the title and abstract screening, data were extracted from each publication on the thematic area, focal country or countries, year of publication and authors. To analyse the impact of Alliance funding on the research environment, we needed to access the names of researchers funded by Alliance grants. The Alliance provided a complete database of their grants in the 11 target countries in the years 1999–2020, which included 247 grants and 343 primary investigators and co-investigators (hereafter referred to collectively as investigators). 
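Although the searches themselves were run manually in each database, the Boolean strategy described at the start of this section is simple enough to express programmatically. Below is a hedged sketch of how the country and HPSR terms could be combined and submitted to PubMed through NCBI's public E-utilities; it illustrates the structure of the strategy, not the procedure the authors executed.

```python
# Illustrative sketch: the review's Boolean strategy as a scripted PubMed query.
import requests

countries = ["Brazil", "China", "Ghana", "India", "Lebanon", "Mexico",
             "Nigeria", "Pakistan", "South Africa", "Uganda", "Vietnam"]
hpsr_terms = ['"health policy and systems research"', '"health systems"',
              '"health policy research"']
query = f'({" OR ".join(countries)}) AND ({" OR ".join(hpsr_terms)})'

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={
        "db": "pubmed",
        "term": query,
        "datetype": "pdat",
        "mindate": "1999/01/01",   # study window: 1 Jan 1999 to 1 May 2020
        "maxdate": "2020/05/01",
        "retmax": 0,               # only the record count is needed here
        "retmode": "json",
    },
    timeout=30,
)
print("records found:", resp.json()["esearchresult"]["count"])
```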
Unfortunately, 18 grants were missing the names of all investigators and one investigator name was missing the given name; thus, we approached the analysis with a list of 229 grants and 342 investigators. The number of grants and investigators in each country varied. We conducted data analyses using Microsoft Excel and Stata statistical software, release 16 . The records identified by the systematic review were analysed to assess: (1) publications over time, (2) publications by thematic area and (3) publications by country. We then separated the records into publications authored by Alliance-funded investigators and publications by authors not funded by the Alliance. We looked again at publications over time, by thematic area and by country. The database searches returned 6362 records with 1483 duplicates (23%), which were removed, resulting in a final set of 4879 records that were included in the next stage . After title and abstract screening, 3521 (72%) records were excluded from further analysis. The resulting 1359 records remaining after the screening were included in the bibliometric analysis.

Fig. 1 Flowchart of the systematic review process

HPSR publications across the 11 target countries increased steadily from 1999 to 2020, with a low of 2 publications per year in 1999 and a high of 133 publications per year in 2017 . The average increase in publications per year from 1999 to 2019 (the last full year) was 34%.

Fig. 2 HPSR publications per year, 1999–2020. *1 January–1 May 2020

Across the 11 target countries, the health systems and health workforce thematic areas had the most publications in the study period, with 308 (23%) and 258 (19%), respectively (Table 1). There were few publications in the health information (n = 44, 3%), health policy research (n = 89, 7%), and medicines and supplies (n = 98, 7%) thematic areas.

Table 1 HPSR publications by thematic area, 1999–2020

Thematic area | Number | Percentage
Health systems | 308 | 22.7
Health workforce | 258 | 19.0
Health financing | 198 | 14.6
Health policy | 189 | 13.9
Governance and leadership | 175 | 12.9
Medicines and supplies | 98 | 7.2
Health policy research | 89 | 6.5
Health information | 44 | 3.2

Across the past two decades, there has been a wide range in the number of HPSR publications by country, with Brazil having the most publications (n = 391, 26%) and Lebanon having the fewest (n = 15, 1%; Table 2). Along with Brazil, South Africa and India also had high numbers of publications. South Africa (3.89), Uganda (3.12) and Ghana (2.89) had the highest rates of publications per million population.

Table 2 HPSR publications by country and per million population, 1999–2020

Country | Number of publications* | Percentage of publications | Publications per million population
Brazil | 391 | 25.5 | 1.82
South Africa | 231 | 15.1 | 3.89
India | 222 | 14.5 | 0.16
Uganda | 143 | 9.3 | 3.12
Mexico | 127 | 8.3 | 1.00
China | 112 | 7.3 | 0.08
Nigeria | 100 | 6.5 | 0.47
Ghana | 95 | 6.2 | 2.89
Pakistan | 76 | 5.0 | 0.33
Vietnam | 22 | 1.4 | 0.23
Lebanon | 15 | 1.0 | 2.68
*Sum of individual country values is more than total publications because some publications featured multiple countries

Of the 1359 HPSR publications analysed, 261 (19%) were authored by Alliance-funded investigators (Table 3). These 261 publications were authored by 123 unique individuals out of 342 total Alliance-funded investigators, meaning that 36% of Alliance-funded investigators authored a publication.
Governance and leadership (n = 42, 24%), health policy research (n = 21, 24%) and health financing (n = 46, 23%) were the thematic areas with the highest percentages of publications authored by Alliance-funded investigators (Table 3). The health information thematic area had the lowest percentage of publications by Alliance-funded investigators, at only 7% (n = 3).

Table 3 HPSR publications by thematic area and funding, 1999–2020

Thematic area | Alliance-funded publications | Alliance-funded percentage | Non-Alliance-funded publications | Non-Alliance-funded percentage | Total publications
Health systems | 43 | 14.0 | 265 | 86.0 | 308
Health workforce | 51 | 19.8 | 207 | 80.2 | 258
Health financing | 46 | 23.2 | 152 | 76.8 | 198
Health policy | 39 | 20.6 | 150 | 79.4 | 189
Governance and leadership | 42 | 24.0 | 133 | 76.0 | 175
Medicines and supplies | 16 | 16.3 | 82 | 83.7 | 98
Health policy research | 21 | 23.6 | 68 | 76.4 | 89
Health information | 3 | 6.8 | 41 | 93.2 | 44

Lebanon, India and Nigeria had the highest percentages of HPSR publications authored by Alliance-funded investigators, with 53% (n = 8), 32% (n = 70) and 29% (n = 29), respectively (Table 4). Brazil had the lowest percentage of Alliance-funded investigator authorship at 2% (n = 9).

Table 4 HPSR publications by country and funding, 1999–2020

Country | Alliance-funded publications* | Alliance-funded percentage | Non-Alliance-funded publications* | Non-Alliance-funded percentage | Total publications*
Brazil | 9 | 2.3 | 382 | 97.7 | 391
South Africa | 64 | 27.7 | 167 | 72.3 | 231
India | 70 | 31.5 | 152 | 68.5 | 222
Uganda | 38 | 26.6 | 105 | 73.4 | 143
Mexico | 19 | 15.0 | 108 | 85.0 | 127
China | 15 | 13.4 | 97 | 86.6 | 112
Nigeria | 29 | 29.0 | 71 | 71.0 | 100
Ghana | 26 | 27.4 | 69 | 72.6 | 95
Pakistan | 17 | 22.4 | 59 | 77.6 | 76
Vietnam | 2 | 9.1 | 20 | 90.9 | 22
Lebanon | 8 | 53.3 | 7 | 46.7 | 15
*Sum of individual country values is more than total publications because some publications featured multiple countries

In this analysis, we assessed HPSR publications across 11 LMICs over the past two decades to understand the evolution of HPSR in these countries and the role played by the Alliance. We found that, across the 11 target countries, HPSR publications have steadily and substantially increased since 1999, at an average rate of 34% per year. This rate is higher than the estimated growth rate of all science publications of 3% per year , indicating that HPSR publications in these 11 countries are increasing at a faster pace than global scientific output in general. Overall, HPSR publications in the 11 countries were heavily concentrated in two thematic areas: health systems and health workforce. Adam et al. similarly found that human resources was one of the most popular topics in HPSR publications related to LMICs between 2003 and 2009 . We found the fewest publications in the area of health information, which includes health information and surveillance systems, standardized tools and instruments, and international health statistics . This indicates an area of HPSR that may need increased funding and attention. Output of HPSR publications varied considerably between countries from 1999 to 2020. There was a relatively high number of HPSR publications in Brazil, South Africa and India, while Lebanon, Vietnam, Pakistan and Ghana had low numbers of HPSR publications. While we do not have data on the total number of HPSR investigators in the 11 countries, looking at publication rates per million population can provide interesting insight.
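The derived columns in Tables 2 and 4 are simple ratios, sketched below for a few countries. The population figures are rough mid-period assumptions chosen for illustration (they are not reported in the study); the publication counts come from the tables above.

```python
# Sketch of the derived metrics: publications per million population (Table 2)
# and the Alliance-funded share of output (Table 4).
counts = {"Brazil": 391, "South Africa": 231, "Uganda": 143, "Lebanon": 15}
population_millions = {"Brazil": 215, "South Africa": 59.4,     # assumed values
                       "Uganda": 45.8, "Lebanon": 5.6}
alliance_funded = {"Brazil": 9, "South Africa": 64, "Uganda": 38, "Lebanon": 8}

for country, n in counts.items():
    per_million = n / population_millions[country]
    share = 100 * alliance_funded[country] / n
    print(f"{country}: {per_million:.2f} publications/million, "
          f"{share:.1f}% Alliance-funded")
```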
Lebanon and Ghana, two countries with low overall HPSR publication output, have high rates of publications per million population. Alternatively, India had high publication output compared with the other countries, but one of the lowest rates of publications per million population. This potentially indicates that some of the trends seen in publication at a country level are driven by other factors that are not measured in this analysis. When we analysed publications by author name, we found that 19% of all HPSR publications during the time period in these countries were authored by Alliance-funded investigators. We cannot say for sure whether an Alliance-funded project was the topic of these papers or whether Alliance funding had any impact on the publication process. Nonetheless, it is clear that researchers funded by the Alliance were productive in terms of evidence generation, fulfilling a primary objective of Alliance support. Lebanon had a high share (over 50%) of HPSR publications authored by Alliance-funded investigators, suggesting that the Alliance played a substantial role in research production in this country. On the other hand, Brazil had more HPSR publications than any other country, and Alliance-funded investigators represented a small proportion of the HPSR output, at 2%. It seems that the research environment in each country was at a different stage of development. The main strength of this analysis is its rigorous methodology. After conducting the literature searches, we took the further step of screening our search results by title and abstract, which ensured that the maximum number of relevant publications were included in the review while omitting publications that did not focus on HPSR. The thorough screening process allowed us to use broad search terms and cast a wide net for literature. The inclusion of grey literature via Google Scholar also added depth to our analysis beyond previous HPSR bibliometric analyses. This bibliometric analysis had several limitations. First, the exclusion of publications without abstracts in English likely resulted in the omission of relevant publications written in other languages. Publishing in the language spoken by a country's researchers and policy-makers is a good strategy to promote dissemination and uptake of research. This was primarily an issue for Mexico and Brazil, so it may have reduced the validity of our findings for those two countries. In addition, the use of the health systems building blocks to categorize publications thematically presents a limitation in that these categories are largely overlapping and inconsistent in their definitions . Also, the author-level analyses were based on the database that we received from the Alliance and rely on the accuracy of those data. Further, because articles are often published months or years after a project ends, it is likely that our analysis missed some Alliance-funded grant-related publications. Finally, potential transcription errors for author names with accents could have impeded identification of Alliance-funded investigators in this analysis, which is especially relevant for authors in Mexico and Brazil. Over the past two decades, the HPSR research environment has expanded considerably in the 11 target countries where the Alliance has been highly involved. Our analysis of HPSR publications by Alliance-funded investigators suggests that the Alliance played a key role in building research capacity and output in these countries.
PMC11694390
Addictions pose a great threat to public health, encompassing both substance use disorders (SUDs) and behavioral addictions. SUDs involve excessive and uncontrolled consumption of psychoactive chemicals such as alcohol or other psychoactive substances, affecting about 35 million people worldwide . The impact of SUDs is profound, with the 2017 Global Burden of Disease study ranking them as the second leading cause of disability among mental disorders, accounting for 31,052,000 (25%) years lived with disability (YLD) . The scope of addiction extends beyond SUDs to include behavioral addictions. According to official psychiatric diagnostic systems, only two non-chemical addictions are currently recognized (i.e., pathological gambling and gaming disorder) . However, many scholars have argued and found empirical support for various other non-chemical/behavioral addictions , including social media addiction , video game addiction , Internet addiction , exercise addiction , mobile phone addiction , shopping addiction , workaholism , and sex addiction . Many people are affected by behavioral addictions, with an estimated weighted average prevalence of 2.47% for internet gaming disorder (IGD) and 4.5% for pathological gambling disorder (PGD) . At a broad level, studies indicate that addiction represents a multifaceted challenge, impacting not only individual health but also society at large . This issue has detrimental effects on both mental and physical well-being, leading to significant negative outcomes . Unfortunately, psychologists and psychiatrists are less involved with addictions, both theoretically and therapeutically, than they are with other mental disorders . Several dispositional factors have been found to increase the likelihood of engaging in a variety of potentially addictive behaviors, both recreationally and at a problematic level , one of which is personality traits. In most studies examining associations between personality traits and addiction disorders, some traits are considered predisposing factors for addiction . Therefore, certain personality characteristics may represent independent risk factors for addiction. The Five-Factor Model (FFM) is a predominant framework used to conceptualize personality traits. This model consists of five overarching domains that capture fundamental aspects of human personality: extraversion (vs. introversion), agreeableness (vs. antagonism), conscientiousness (vs. disinhibition), neuroticism (vs. emotional stability), and openness to experience (vs. closedness) . FFM personality trait profiles associated with several clinical disorders have been examined, including alcohol use disorder (AUD), SUD , gambling disorder (GD) , Instagram addiction , smartphone addiction , social media addiction , online game addiction , and sex addiction . Findings from these studies indicate that the relationships with personality traits vary among different addictions, a factor that could hold significance for both theoretical understanding and practical applications . Several meta-analyses have examined the FFM in relation to alcohol and substance abuse, finding that conscientiousness and agreeableness are inversely associated with most addictions, whereas neuroticism tends to be positively associated with these addiction disorders .
There has been a lack of consistency in the results regarding the relationships of extraversion and openness with substance addictions. The FFM is widely adopted in the addiction literature; however, other research has identified alternative personality structures using a similar lexical methodology . It has been claimed and illustrated empirically that certain aspects of personality might be underrepresented in the FFM (for example, Honesty-Humility) . As an alternative to the FFM, Lee and Ashton presented the HEXACO model of personality to address some of the limitations of the FFM and to provide an expanded method of examining personality characteristics. This model was developed from lexical studies that involved self and observer ratings of personality-descriptive adjectives across various languages . These studies indicated a six-factor solution to describe the variation in personality. According to the HEXACO model, the six main dimensions are Honesty-Humility (H), Emotionality (E), Extraversion (X), Agreeableness (A), Conscientiousness (C), and Openness to Experience (O). While the extraversion, conscientiousness, and openness to experience dimensions in the HEXACO model largely align with their counterparts in the FFM , the HEXACO model introduces rotated variants of agreeableness and emotionality . The HEXACO model, for example, attributes traits like even temper, which reflects low neuroticism in the FFM, to agreeableness. Conversely, traits such as sentimentality, which are typically associated with agreeableness in the FFM, are assigned to emotionality. The most significant deviation from the FFM is the inclusion of honesty-humility as a sixth fundamental dimension of personality. Numerous studies have provided evidence supporting the validity of the honesty-humility factor [43–47]. This factor represents an individual's sincerity in interactions with others, the willingness to take advantage of others for personal gain, the desire for or motivation to acquire high status, as well as the individual's modesty . Honesty-humility has been examined in various contexts, including those related to gambling severity [49–51], SUD , the workplace , academic settings , and laboratory studies examining decision-making in social dilemmas . The research landscape exploring the relationship between the HEXACO model of personality, SUD, and behavioral addictions is evolving, with a growing number of studies emerging. Abbasi et al. investigated the relationship between personality traits, based on the HEXACO model, and cell phone addiction and found a significant positive relationship between cell phone addiction and emotionality, as well as significant inverse relationships between cell phone addiction and both extraversion and conscientiousness . Enayati found that mobile phone addiction had significant inverse relationships with honesty-humility, extraversion, conscientiousness, and openness to experience, and a positive correlation with emotionality . Azizi et al. studied the associations between internet addiction and the HEXACO personality traits and found that, among the HEXACO personality traits, only extraversion and openness to experience were significantly and positively related to internet addiction . Horwood and Anglim studied a sample of Australian adults and found moderate negative associations between problematic smartphone use and honesty-humility, agreeableness, conscientiousness, and openness to experience.
They also found a significant positive correlation between problematic smartphone use and emotionality . Inanloo et al. examined a model predicting online game addiction based on the HEXACO personality traits and the parent–child relationship, mediated by impulsiveness. They found that the direct path coefficients of conscientiousness, honesty-humility, extraversion, agreeableness, and openness to experience on online game addiction were not significant. Zafar et al. investigated the relationship between Facebook addiction and HEXACO personality traits among university students and found significant positive correlations between Facebook addiction and extraversion, emotionality, and openness to experience, while honesty-humility, agreeableness, and conscientiousness displayed significant negative correlations . Leslie and McGrath used the HEXACO model to compare personality traits of gamblers who played exclusively online, exclusively offline, and in mixed-mode contexts and found that mixed-mode gamblers reported lower honesty-humility scores and higher extraversion scores compared to exclusively online and offline gamblers . McGrath et al. investigated the relationships between personality traits, based on the HEXACO model of personality, and problem gambling severity in young adult gamblers. They found that honesty–humility, agreeableness, and conscientiousness were significantly and negatively associated with the scores on the Problem Gambling Severity Index (PGSI) . Kim et al. studied the associations between HEXACO traits and problem gambling severity in a community-recruited sample of gamblers. They found that scores on honesty–humility, conscientiousness, and openness to experience were significantly and inversely associated with gambling severity . Rash et al. examined HEXACO personality traits in relation to disordered engagement in three addictive behaviours: AUD, cannabis use disorder (CUD), and GD. Multinomial logistic regression analyses revealed lower levels of honesty-humility among individuals with AUD and GD, and higher levels of openness to experience among individuals with CUD compared to control participants . Taken together, the literature on the relationship between the HEXACO personality traits, SUD, and behavioral addictions consistently shows that honesty-humility [ 49 – 51 , 58 , 60 ], agreeableness , and conscientiousness are inversely associated with most addictions. In contrast, the trait of emotionality is positively associated . Extraversion and openness to experience have been shown to positively relate to behavioral addictions in some studies, however, in other studies, extraversion and openness to experience have been shown to be inversely related to addictive disorders. As research about the relationship between HEXACO personality traits, SUD, and behavioral addictions expands, it is necessary to synthesize and summarize these findings to draw conclusions. One rigorous method for aggregating numerical research findings is meta-analysis. Meta-analysis refers to a collection of statistical methods that are used to combine the results of independent experimental and correlational studies leading to an overall estimate or result . To date, no meta-analysis has been conducted investigating the relationship between HEXACO personality traits, SUD, and behavioral addiction. This gap in the literature necessitates a project that systematically reviews and synthesizes existing findings. Conducting this study is essential for four main reasons. 
Firstly, individual studies have assessed the association between HEXACO traits and various types of addiction, but their results have been inconsistent. Secondly, a systematic review and meta-analysis can provide a more robust and comprehensive understanding of these relationships by aggregating and analyzing data from multiple studies. Thirdly, knowing which HEXACO personality traits are associated with SUD and behavioral addictions could provide insight into potential mechanisms that contribute to addiction, which could further inform future research and intervention strategies. Fourthly, identifying relevant moderators of the relationship between the HEXACO model and addictions will help identify circumstances in which the personality traits in question may play an important role in developing addictions. This systematic review and meta-analysis aims to comprehensively examine the associations between HEXACO personality traits, SUD, and behavioral addictions. The main scope of the study is to include all studies investigating the term "HEXACO" and addiction disorders, including substance use disorders and behavioral addictions (e.g., gambling and gaming problems, social media addiction, and CSBD). The study includes diverse populations, covering both clinical and non-clinical samples across various age groups (above 18 years), genders, socio-economic backgrounds, and ethnic identities. By including studies from any sample type and geographical location published in English, other European languages, or Persian, we ensure a broad and inclusive scope. This systematic review involves a meta-analysis of all relevant studies and will report effect sizes converted into correlation coefficients to standardize the measurement of associations across studies. Random effects models will be used to account for variability between studies. Additionally, we will assess several moderating factors in the relationship between HEXACO personality traits, SUD, and behavioral addictions. Following the recommendation to include approximately 10 studies for each covariate when investigating potential moderators through meta-regression analyses , the number of variables entered into the meta-regression models will depend on the number of studies included in each specific analysis. The research team has selected, a priori and in prioritized order, the following moderators to test: (a) mean age, (b) gender, (c) ethnicity, (d) marital status, (e) occupational status, and (f) educational attainment. We aim to answer the following questions: (1) Is there an association between HEXACO personality traits and substance use, including alcohol, nicotine, narcotics, and cannabis? (2) Is there an association between HEXACO personality traits and behavioral addictions, including gambling and gaming problems, social media addiction, and CSBD? And (3) are the associations with HEXACO personality traits moderated by age, gender, ethnicity, marital status, occupational status, and educational attainment? This protocol has been registered within the International Prospective Register of Systematic Reviews and was reported according to the PRISMA for Protocols (PRISMA-P) 2015 statement [see Additional file 1].
For studies to be eligible for inclusion, they will have to meet the following criteria: (1) being an empirical study; (2) original articles, published as full papers; (3) published in English, other European languages, or Persian; (4) published in a peer-reviewed journal; (5) master's or doctoral dissertations, published as full dissertations; (6) studies investigating the association between HEXACO personality traits and addictive disorders, including illegal substances (e.g., all narcotics and cannabis as separate categories) and behavioural addictions (e.g., gambling and gaming problems, social media addiction, and CSBD); (7) research reporting Pearson's or Spearman's r correlation coefficients for the variables of interest, or any data that could be converted into a correlation coefficient, such as Cohen's d/f, a T value, or Fisher's Z; (8) studies with any population type (clinical or non-clinical), interventions, and participant characteristics; and (9) any year of publication. Studies will be excluded if they (1) are based on case studies or qualitative designs; (2) are articles not published in a peer-reviewed journal; (3) are grey literature, including conference papers, reports, newspaper articles, and unpublished dissertations (to ensure the inclusion of high-quality, rigorously vetted studies); (4) lack full-text access; (5) measure personality traits similar to the HEXACO model, but not the actual traits; (6) are not related to SUD or behavioral addiction; (7) do not report sufficient data for calculating the effect size (Pearson's or Spearman's r correlation coefficients of the variables of interest, or any data that could be converted into a correlation coefficient); or (8) are articles or dissertations that have been identically presented to two different journals or universities under different titles (only the most recent or most comprehensive version of a duplicate, i.e., the one with the highest number of observations, will be included to avoid redundancy). Literature searches will be conducted across various electronic literature databases, including APA PsycINFO (Ovid), MEDLINE (Ovid), ProQuest, Web of Science, PubMed, CINAHL, and Wiley Online Library. A literature search will also be conducted on Google Scholar, of which we will review the first 50 pages due to the large number of results (over 12,000 hits). We chose "HEXACO" as our one and only search term, after advice from a research librarian. Because the HEXACO model of personality is a relatively new construct, the number of published articles on it is limited. By searching with the single term "HEXACO", we assume that we are guaranteed to identify all articles on HEXACO; a more elaborate combination of search terms would necessarily limit the number of search results, and we could potentially miss relevant articles. The model's relative novelty also explains why "HEXACO" is not yet an established term in controlled vocabularies/subject headings. Including the more general subject headings "Personality Traits" or "Personality Tests" in the search strategy would return a very large number of articles not relevant to the research question of this review. No limits will be applied in the search strategy, for the same reason as the rationale for using only the single search term "HEXACO": the safest way to ensure that no relevant articles are missed is to do all exclusion of articles in the screening process.
This is possible because the single search term "HEXACO" with no limits applied yields a manageable number of results to screen, and this approach was recommended by a scientific librarian. The search field definition chosen in each database will be the default alternative, covering a minimum of titles and abstracts. As an example, our APA PsycINFO (Ovid) search strategy is as follows: hexaco.mp. [mp = title, abstract, heading word, table of contents, key concepts, original title, tests and measures, mesh word]. In addition to searching with the term "HEXACO" using the default search field definition in all the mentioned databases, we will employ additional methods in our literature search. We will use backward tracking (inspecting reference lists in relevant identified literature). Moreover, we will leverage artificial intelligence, specifically Keenious Plus , by transferring relevant articles into its platform to identify matching articles, helping us uncover any potentially missed publications. An initial screening of titles and abstracts of identified papers will be performed by two independent reviewers (FS and PH), who will assess each study for relevance according to the pre-specified eligibility criteria. Studies that cannot be conclusively excluded from the title and abstract screening will be taken forward to full-text screening, at which stage the full text will be obtained and a second screening process performed, again by two independent reviewers. This will result in a final set of papers to be included in the review. Discrepancies between the two reviewers at any stage will be resolved through discussion and, if required, referral to a third reviewer (EKE). The number of studies identified, included, and excluded at each stage will be reported using the PRISMA flow diagram, together with reasons for exclusion at the full-text stage. Moreover, to calculate interrater reliability and the degree of agreement between the initial choices made by the two independent raters, we will use Cohen's kappa statistic or percentage agreement . The pilot screening process, involving 20 studies, was conducted to ensure the consistency and reliability of our methods before full-scale screening. Reviewers (FS and PH) independently screened these articles for inclusion based on predefined inclusion criteria. There was a high level of agreement among the reviewers in the screening process, with a Cohen's kappa coefficient of 0.89, indicating substantial agreement. Differences primarily arose in the application of inclusion criteria at the initial stage; these were resolved through discussions among the reviewers, leading to minor modifications to the inclusion criteria for clearer guidance. Covidence systematic review software (69) will be used to manage references throughout the review process. Covidence's systematic approach ensures an organized and efficient review, enhancing both reliability and collaboration between reviewers. This comprehensive software will be employed at all stages, including screening titles and abstracts, reviewing full texts, and extracting data. Initially, all relevant articles identified from the database searches will be imported into Covidence. The software will then automatically remove duplicates, after which two independent reviewers will commence the screening process.
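The agreement statistic used for the pilot screening, Cohen's kappa, corrects raw percentage agreement for chance. A minimal sketch with toy include/exclude decisions (not the pilot's actual data):

```python
# Sketch of interrater agreement between two screening reviewers (toy labels).
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["include", "exclude", "include", "include", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "include", "exclude", "exclude", "exclude"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
raw = sum(a == b for a, b in zip(reviewer_1, reviewer_2)) / len(reviewer_1)
print(f"Cohen's kappa = {kappa:.2f}, raw agreement = {raw:.0%}")
```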
The Covidence Systematic Review software allows us to incorporate personalized headings, subheadings, single-choice fields, multiple-choice fields, and tables. A meticulous data extraction sheet will be developed based on our study objectives. This sheet will be made available within the Covidence platform and accessible to the reviewers responsible for data extraction. Its development will be guided by the outcomes of the full-text screening stage, following templates provided by experts in systematic reviewing within the platform. Two independent reviewers (FS and PH) will conduct data extraction in this platform for all included studies. In instances where multiple publications arise from a single study, we will consolidate all sources and extract the data into a unified form. The data items to be extracted encompass various elements, including study design, age and gender of participants, country of study, sample size ( n ), setting, and methods of assessing personality traits, SUD, and behavioral addictions. Additionally, relevant findings, including effect sizes converted to correlation coefficients ( r ), will be extracted. If correlation coefficients are not provided directly, we will also extract the sample size, standard deviation (SD), and means for both the addiction and control groups to facilitate conversion to correlation coefficients. The list of variables to be extracted is shown in an additional file [see Additional file 2]. The pilot test for data extraction, involving seven studies, was conducted to assess the feasibility of the data extraction process and ensure consistency among reviewers. Reviewers (FS and PH) independently extracted data from these studies using a predefined data extraction form. There was a high level of agreement between the reviewers, with a Cohen’s kappa coefficient of 0.85, indicating almost perfect agreement. Discrepancies were resolved through discussion among the reviewers, resulting in minor modifications to the data extraction form to clarify ambiguous items. This pilot confirmed the feasibility and robustness of our data extraction process for the upcoming comprehensive meta-analysis. Assessing the risk of bias in included studies is important for determining the validity of the results and the interpretation of findings. The Newcastle–Ottawa Scale (NOS) for longitudinal and cohort studies, and an adapted version of the NOS for cross-sectional studies, will be used to assess the risk of bias and quality of the studies. The NOS provides predefined criteria for assessing bias/quality through a checklist consisting of three main categories. A maximum score of 10 stars can be obtained, where a higher score indicates higher quality/less bias. The first category, “selection”, relates to the representativeness of the sample, sample size, comparability between respondents and non-respondents, and ascertainment of the exposure; it provides a maximum of five stars. The second category, “comparability”, concerns whether confounding factors are controlled for; it provides a maximum of two stars. The third category, “outcome”, covers the assessment of the outcome and the statistical tests; the maximum score in this category is three stars. In the current study, studies with five stars or more will be considered to have moderate to good quality. This study will report effect sizes converted into correlation coefficients to standardize the measurement of associations across studies.
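As an illustration of the scoring logic (a hypothetical example, not an actual appraisal from this review), the sketch below sums the stars awarded in the three NOS categories and applies the quality threshold described above.

```r
# Minimal sketch (hypothetical ratings): total NOS stars and quality label.
stars <- c(selection = 4, comparability = 1, outcome = 2)  # category scores
total <- sum(stars)                                        # out of 10 stars
quality <- if (total >= 5) "moderate to good" else "low"
cat(sprintf("NOS total: %d/10 stars -> %s quality\n", as.integer(total), quality))
```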
We plan to combine data from studies with different designs in a single meta-analysis, as long as they provide comparable effect size estimates. This approach allows us to synthesize a broad range of evidence, enhancing the generalizability of our findings. We will use random effects models to pool effect sizes rather than fixed effects models due to the anticipated variations in sample characteristics, methods, and measures across the included studies. Random effects models account for between-study heterogeneity and provide more conservative and generalizable estimates when there is underlying true variance in effect sizes. Specifically, we will use the DerSimonian-Laird method for the random effects model, a widely used meta-analytic approach that estimates the between-study variance and weights the studies accordingly. This approach is particularly suited to our research given the diverse nature of the addictions and personality dimensions being investigated. To assess heterogeneity among studies, Cochran’s Q test and the I² index will be employed. We will interpret the I² value as an indication of the proportion of total variation in estimated effects that is attributable to heterogeneity rather than to chance, with an I² value greater than 50% indicating substantial heterogeneity. As the correlation coefficient is not normally distributed, we will apply Fisher’s Z-transformation, Z = ½ · ln((1 + ρ)/(1 − ρ)), in the meta-analytic calculations before pooling the results of studies. To investigate the impact of publication bias, which can occur when the likelihood of a study being published is influenced by the nature or direction of its results, we will construct funnel plots and conduct the regression-based Egger’s test and non-parametric trim-and-fill analysis. These analyses will help identify and adjust for asymmetry in the effect size distribution that may suggest the presence of publication bias. Subgroup analysis and meta-regression will be conducted to explore potential sources of heterogeneity. Subgroup analysis will be performed according to age, gender, ethnicity, marital status, occupational status, and educational attainment. The specific criteria for defining these subgroups will be determined after a comprehensive review of the included studies, ensuring clarity and appropriateness for each variable. This approach aims to enhance our understanding of how these variables may differentially impact the association between HEXACO personality traits and addiction-related outcomes. Sensitivity analysis will be performed using studies based on representative samples to assess how the exclusion of studies of potentially lower methodological rigor or with specific population characteristics affects the robustness of our findings. This analysis will help ensure that the conclusions drawn from our meta-analysis are not unduly influenced by a subset of the included studies. Data will be analyzed with Comprehensive Meta-Analysis (CMA) software, version 4.
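For illustration, the following minimal sketch in base R (with hypothetical study data, not results of this review) walks through the calculations described above: Fisher’s Z-transformation of per-study correlations, Cochran’s Q, the I² index, the DerSimonian-Laird estimate of between-study variance, and the back-transformed pooled correlation.

```r
# Minimal sketch (hypothetical studies): Fisher's Z-transformation and
# DerSimonian-Laird random-effects pooling of correlation coefficients.
r <- c(0.25, 0.31, 0.18, 0.40)       # per-study correlations (placeholders)
n <- c(120, 85, 240, 60)             # per-study sample sizes (placeholders)

z <- 0.5 * log((1 + r) / (1 - r))    # Fisher's Z; sampling variance 1/(n - 3)
v <- 1 / (n - 3)
w <- 1 / v                           # fixed-effect (inverse-variance) weights

z_fe <- sum(w * z) / sum(w)          # fixed-effect pooled Z
Q    <- sum(w * (z - z_fe)^2)        # Cochran's Q
df   <- length(r) - 1
I2   <- max(0, (Q - df) / Q) * 100   # I^2: % of variation due to heterogeneity

C    <- sum(w) - sum(w^2) / sum(w)   # DerSimonian-Laird scaling constant
tau2 <- max(0, (Q - df) / C)         # between-study variance estimate

w_re <- 1 / (v + tau2)               # random-effects weights
z_re <- sum(w_re * z) / sum(w_re)
r_pooled <- tanh(z_re)               # back-transform Z to the r metric
cat(sprintf("Q = %.2f, I2 = %.1f%%, tau2 = %.4f, pooled r = %.3f\n",
            Q, I2, tau2, r_pooled))
```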
To the best of our knowledge, this systematic review and meta-analysis will be the first to systematically assess the association between the HEXACO personality traits and addictive disorders, including illegal substances (e.g., all narcotics in one category and cannabis in another) and behavioral addictions (e.g., gambling and gaming problems, social media addiction, and CSBD), within all types of sample populations. The findings of this systematic review and meta-analysis have several important implications for both research and clinical practice in the field of addiction. From a research perspective, this study can improve theoretical models of addiction by incorporating the HEXACO framework and revealing research gaps. This comprehensive approach may provide a more nuanced understanding of the complex interplay between personality traits and addiction. In terms of clinical applications, this knowledge can aid healthcare providers in creating more effective screening tools to guide subsequent treatment for those struggling with addiction. Identifying personality variables that serve as risk factors for addiction may contribute to a deeper understanding of addiction and also aid its prevention and treatment, as it highlights key personality traits that should be addressed in such interventions. Understanding which personality traits predispose individuals to these disorders can inform the development of personalized treatment plans, making interventions more effective by addressing specific personality-related vulnerabilities. For instance, individuals with high levels of certain traits may benefit from tailored therapeutic approaches such as cognitive-behavioral therapy or motivational interviewing. Additionally, the identification of personality traits that serve as risk factors for addiction can aid in creating preventive strategies, with educational programs and community interventions designed to target these traits and reduce the incidence of addiction by fostering resilience and promoting healthier coping mechanisms. Given the limited number of studies on the association between HEXACO personality traits and SUD, we suggest that future research should focus on exploring HEXACO traits in relation to various types of SUD, including alcohol, cannabis, and other narcotics. Longitudinal studies that can clarify the directionality of the associations between personality traits and various addictions, as well as studies examining the HEXACO personality traits as moderators of addiction treatment effects, would be of special interest. Additionally, more research is needed on other types of behavioral addictions, such as gaming problems, CSBD, and social media addiction, and how these relate to the HEXACO traits. Researchers should also aim to provide more detailed information on demographic factors such as gender, age, and geographical location in their studies to enhance the understanding of these associations and improve the generalizability of findings. Moreover, future studies should consider conducting research on different sample types, such as adolescents, elderly populations, individuals with co-occurring mental health disorders, and diverse cultural backgrounds, to explore how personality traits contribute to addiction risk across various populations. The present study has several strengths.
As the first systematic review and meta-analysis in this field, our study will fill a significant gap in the literature by providing a comprehensive overview of the associations between HEXACO personality traits, SUD, and behavioral addictions. It will be conducted in line with the PRISMA guidelines. We will use the Covidence Systematic Review software to manage references throughout the review process, which can increase reliability and ease collaboration between reviewers. Searches will be conducted across several databases, and we will utilize additional methods in our literature search, including artificial intelligence via Keenious Plus. To ensure reliability regarding quality assessment and effect size data, the included studies will be coded independently by two authors, with disagreements resolved by discussion or referral to a third reviewer. Moreover, a pilot screening process was conducted to ensure consistency and reliability, and a pilot test of data extraction was carried out to assess feasibility and refine the data extraction form. The results of these pilot tests showed a high level of inter-reviewer agreement. While this study has several strengths, the results of this systematic review should be interpreted with caution due to several potential limitations. One significant limitation is the scarcity of existing research specifically focused on the associations between the HEXACO model, SUD, and behavioral addictions. The HEXACO model is relatively new compared with other personality frameworks, such as the FFM, and the number of studies investigating its relationship with addictive disorders may therefore be limited. This scarcity can constrain the pool of eligible studies for inclusion in a systematic review/meta-analysis, potentially affecting the robustness and generalizability of the findings. Additionally, our initial search indicated that there are fewer studies on HEXACO personality traits and SUD than on HEXACO and behavioral addictions, which may affect the comprehensiveness of our results. Furthermore, studies that have not reported correlation coefficients, or coefficients convertible to correlation coefficients, cannot be included in this meta-analysis, which may lead to the exclusion of relevant research. Moreover, not all studies may report key variables such as age, gender, ethnicity, and marital status, which are considered potential moderators that can help explain variations in effect sizes. It is also important to note that grey literature was not incorporated into the searches for this study. The inclusion of grey literature could have provided a more comprehensive view of the topic by incorporating unpublished research and findings that might not be accessible through conventional academic publishing channels. Despite these limitations, this systematic review and meta-analysis will help clarify the role of HEXACO personality traits in addiction across the studies conducted to date. The results will have implications for understanding the role of personality traits in addiction. Identifying specific HEXACO traits linked to SUD and behavioral addictions will improve screening, prevention, and treatment strategies tailored to individual personality profiles. Additional file 1. PRISMA-P 2015 Checklist. Additional file 2. Extraction Sheet.
PMC11694445
M2-polarized tumor-associated macrophages (TAMs), recruited and driven by tumor-derived inflammatory cytokines and immunosuppressive metabolites, are predominant in many cancers, including gastric cancer (GC), where they promote tumor progression and contribute to chemotherapy resistance [1–3]. TAM depletion has been shown to enhance immunotherapy efficacy. Strategies such as CSF1/CSF1R inhibition or chimeric antigen receptor (CAR) T cells targeting specific TAM receptors can reduce TAM density, thereby improving chemotherapy sensitivity and antitumor effects [1, 4–8]. However, TAM depletion or recruitment blockade, either alone or in combination, has not yet yielded positive clinical outcomes. This limitation may arise from the complexity of the tumor microenvironment (TME) and the heterogeneity of macrophage subsets, as well as the dual role of TAMs, whereby M1 TAMs exert cytotoxicity and phagocytosis while M2 TAMs support tumor progression. Therefore, selectively targeting M2 TAMs or inhibiting M2 polarization to reshape the immunosuppressive TME, while preserving the antitumor functions of M1 TAMs, represents a potentially more effective therapeutic strategy. Exosomes, as key mediators of intercellular communication, play a pivotal role in shaping the TME. Increasing evidence highlights that exosomes derived from GC cells significantly contribute to M2 macrophage polarization. For instance, GC cell-derived exosomal miR-92b-5p, mediated by PLXNC1, promotes M2 polarization by inhibiting SOCS7-STAT3 interactions. GC cell-derived exosomal circATP8A1 induces M2 polarization via the miR-1-3p/STAT6 axis. Similarly, exosomal ELFN1-AS1 derived from GC cells facilitates macrophage recruitment and M2 polarization by regulating glycolysis through PKM in a HIF-1α-dependent manner. GC cell-derived exosomal miR-541-5p drives M2 polarization by mediating the DUSP3/JAK2/STAT3 pathway. GC cell-derived exosomal lncRNA HCG18 promotes macrophage M2 polarization by downregulating miR-875-3p to enhance KLF4 expression in macrophages. GC cell-derived exosomal HMGB1 induces macrophage M2 polarization by inhibiting p50 transcriptional activity and inactivating the NF-κB pathway. Moreover, GC cell-derived exosome-mediated M2 macrophage polarization contributes significantly to tumor progression and varying degrees of drug resistance [10–14]. Therefore, regulating the expression of specific cargo in tumor-derived exosomes to inhibit M2 polarization or promote M1-type transformation may be an effective and promising multi-target therapeutic strategy for reshaping the immunosuppressive tumor microenvironment and enhancing antitumor efficacy. Oncogene activation in tumor cells reshapes the tumor microenvironment and affects tumor progression and therapeutic response by promoting the secretion of cytokines and regulating the release of extracellular vesicles [15–17]. Therefore, identifying targets within tumor cells that simultaneously inhibit tumor progression and selectively suppress or reprogram M2-polarized TAMs offers a more promising therapeutic strategy. This study utilized proprietary expression profile chip data from clinical GC tissues, transcriptomic data from public databases, relevant clinical information, and single-cell sequencing data from GC tissues to identify key genes that are highly expressed in GC cells and strongly associated with poor prognosis and elevated M2 macrophage infiltration.
Subsequently, we clarified the role of these genes in mediating tumor-derived exosome-driven M2 macrophage polarization through a co-culture system of human monocytes and GC cells with gene silencing or overexpression, combined with exosome intervention assays. Additionally, small RNA (sRNA) sequencing of tumor-derived exosomes identified oncogene-regulated miRNAs driving M2 polarization and elucidated their specific functions. Protein array analysis of gene-silenced GC cells further revealed the mechanisms by which oncogenes regulate exosomal miRNA expression. These findings provide important insights into the roles of oncogenes in GC progression and in mediating extracellular vesicle secretion, highlighting novel and promising therapeutic targets for both targeted and immunotherapy approaches in GC. GES-1 cells (BioChannel Biotech) were maintained in DMEM with high glucose. Human GC cell lines MKN45, AGS, and HGC27 (Pricella Biotech) and N87 (Genecarer) were cultured in RPMI 1640 medium (Gibco). THP1 cells (Ubigene) were grown in RPMI 1640 supplemented with 0.05 mM 2-mercaptoethanol (Sigma). All cells were incubated at 37 °C with 5% CO₂ in medium containing 10% fetal bovine serum (FBS, Gibco) and 1% penicillin-streptomycin solution (Beyotime) to ensure optimal growth conditions. Clinical specimens were obtained from patients diagnosed with GC who underwent surgical resection at The Second Hospital of Lanzhou University. Fresh tissue samples for qRT-PCR and western blotting analyses were promptly frozen in liquid nitrogen. Tissue samples designated for immunofluorescence were fixed in 4% formaldehyde (Yuanye Biotech) to ensure optimal preservation for subsequent analysis. Gene expression profile analysis of 16 pairs of GC and para-carcinoma tissues was performed using the GeneChip™ Human Genome U133 Plus 2.0 microarray (Affymetrix, Santa Clara, USA). Chip scanning and data analysis were conducted by GeneChem Co., Ltd (Shanghai, China). Transcriptomic and clinical data from The Cancer Genome Atlas (TCGA) Stomach Adenocarcinoma (STAD) cohort, available in the GDC Data Portal ( https://portal.gdc.cancer.gov/ ), were analyzed for differential gene expression, weighted gene co-expression network analysis (WGCNA) of M2 macrophage-related genes, immune cell infiltration estimation, univariate and multivariate Cox analysis, Kaplan-Meier (KM) survival analysis, correlation analysis of immune checkpoint expression, clinical characteristic evaluation, time-dependent ROC curve analysis, and Gene Set Enrichment Analysis (GSEA) to identify enriched signaling pathways between high and low gene expression groups. The Tumor Immune Estimation Resource (TIMER) database ( http://timer.cistrome.org/ ) was used for combined survival analysis of gene expression and M2 macrophage infiltration in GC patients. Single-cell sequencing data from GC tissues in the Gene Expression Omnibus (GEO) database ( https://www.ncbi.nlm.nih.gov/geo/ ) were used for cell clustering and gene expression analysis across different cell types. Transcriptome data from GC tissues in the GEO database were used for differential gene expression analysis. Transcription factors (TFs) regulating the miRNAs were predicted using TransmiR v2.0 ( http://www.cuilab.cn/transmir ), and JASPAR ( http://jaspar.genereg.net ) was employed to identify TF binding sites within the promoter regions.
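As a schematic of the survival analyses listed above (a sketch on simulated data, not the study's code; the actual analyses used the TCGA-STAD cohort), univariate Cox regression and a Kaplan-Meier comparison by expression group can be run in R with the survival package:

```r
# Minimal sketch (simulated data): univariate Cox regression and
# Kaplan-Meier analysis stratified by gene-expression group.
library(survival)

set.seed(42)
df <- data.frame(
  time   = rexp(100, rate = 0.02),   # follow-up time (placeholder units)
  status = rbinom(100, 1, 0.6),      # 1 = death event, 0 = censored
  expr   = rnorm(100)                # normalized gene expression
)
df$group <- ifelse(df$expr > median(df$expr), "high", "low")

cox <- coxph(Surv(time, status) ~ expr, data = df)     # univariate Cox model
summary(cox)$coefficients                              # HR, z, p-value

fit <- survfit(Surv(time, status) ~ group, data = df)  # KM curves by group
survdiff(Surv(time, status) ~ group, data = df)        # log-rank test
```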
miRNA target gene prediction was performed using miRDB ( http://mirdb.org/ ), miRWalk ( http://mirwalk.umm.uni-heidelberg.de/ ), and miRTarBase ( https://mirtarbase.cuhk.edu.cn/ ). KEGG pathway enrichment analysis of target genes was conducted using the DAVID database ( https://david.ncifcrf.gov/ ). TCGA and GEO data were analyzed and visualized using R software (v. 4.2.1), utilizing R packages such as limma, clusterProfiler, pathview, GSEABase, WGCNA, timeROC, survival, survminer, ComplexHeatmap, pheatmap, ggplot2, ggExtra, ggpubr, and ggClusterNet. “CIBERSORT” and “Immunedeconv” were used to estimate the relative abundance of tumor-infiltrating immune cells (TIICs) in TCGA-STAD samples from normalized expression data. The SERPINE1 lentiviral vector and control lentiviral vector were constructed and synthesized by VectorBuilder Biotech (Guangzhou, China), with lentivirus particle concentrations of approximately 6 × 10⁸ TU/ml. The targeting sequences were as follows: sh SERPINE1 #1, 5′-GTGCCTGGTAGAAACTATTTC-3′; sh SERPINE1 #2, 5′-AGACCAACAAGTTCAACTATA-3′; sh SERPINE1 #3, 5′-TCTCTGCCCTCACCAACATTC-3′; and scramble shRNA (shNC) as a non-targeting negative control (NC), 5′-CCTAAGGTTAAGTCGCCCTCG-3′. Lentiviral particles at a multiplicity of infection (MOI) of 18, along with 5 µg/ml polybrene (VectorBuilder Biotech, Guangzhou, China), were introduced into 24-well plates containing cells seeded at a density of 6 × 10⁴ per well. Stable knockdown cell lines were generated by selection in complete medium supplemented with 2 µg/ml puromycin (VectorBuilder Biotech, Guangzhou, China) for two weeks. The SERPINE1 overexpression plasmid and negative control plasmid were constructed and synthesized by VectorBuilder Biotech (Guangzhou, China). The pcDNA3.1-EGFP-STAT3 plasmid and negative control plasmid were also obtained from VectorBuilder Biotech (Guangzhou, China). The pCMV3-C-Myc-SOCS7 plasmid was constructed by Sino Biological Inc. (Beijing). The transfection complex was prepared by diluting 10 µl Lipo2000 and 4.0 µg plasmid in 250 µl Opti-MEM (Gibco) each, followed by gentle mixing. 3 × 10⁵ cells were seeded in 6-well plates, incubated with 1.5 ml Opti-MEM and 500 µl transfection complex for 6 h, and then cultured in complete medium for an additional 48 h. The antagomiR negative control (NC, 5’-CAGUACUUUUGUGUAGUACAA-3’) and antagomiR-let-7g-5p (5’-AACUGUACAAACUACUACCUCA-3’) were synthesized by GenePharma (Shanghai) and dissolved in 125 µl DEPC-treated water to prepare a 20 µM stock solution. THP1 cells (4 × 10⁴/well) were seeded in 24-well plates, incubated with the transfection complex and 400 µl Opti-MEM for 6 h, followed by replacement with PMA-containing medium and culture for 24 h. Tissue sections were heated at 58 °C for 2 h, then deparaffinized in xylene and rehydrated through a graded ethanol series. Cells were washed, fixed with 4% paraformaldehyde, and permeabilized using 0.2% Triton X-100 (Solarbio). Antigen retrieval was performed using sodium citrate buffer (pH 6.0, 98 °C), followed by blocking with goat serum for 1 h.
Sections were incubated with PAI-1 (rabbit, 1:200, Immunoway), CD163 (mouse, 1:200, Immunoway), CD206 (mouse, 1:200, Proteintech), F4/80 (rabbit, 1:200, Bioss), iNOS (rabbit, 1:200, Bioss), Arg1 (rabbit, 1:200, Proteintech), and STAT3 (rabbit, 1:200, Bioss) antibodies overnight at 4 °C, reactivated, stained with Cy3-conjugated goat anti-rabbit IgG (Abcam) and Alexa Fluor 488-conjugated goat anti-mouse IgG (Abcam) for 30 min, counterstained with DAPI, and imaged using fluorescence microscopy (IX51, Olympus) for ImageJ analysis. THP-1 cells were differentiated into macrophages using 150 ng/mL phorbol 12-myristate 13-acetate (PMA, Sigma) for 24 h and subsequently co-cultured with cancer-derived exosomes or GC cells in 6-well plates with 0.4-µm membranes for 72 h. Harvested macrophages were converted into single-cell suspensions, stained with Elab Fluor 488 anti-human CD68 (mouse, 1:20, ElabScience) and APC anti-human CD206 (mouse, 1:20, ElabScience) antibodies, and analyzed for CD68+CD206+ populations by flow cytometry (Accuri C6, BD). Total RNA was extracted using TRIzol reagent (Invitrogen) and reverse transcribed into cDNA using the Hifair III 1st Strand cDNA Synthesis Kit (Yeasen). Quantitative PCR (qPCR) was performed on a Real-Time PCR System using Hieff UNICON Universal Blue QPCR SYBR Green Master Mix (Yeasen). Total miRNA was isolated with the miRNeasy Mini Kit (Qiagen) and reverse transcribed into first-strand cDNA using the Mir-X miRNA First Strand Synthesis Kit (Takara). qPCR was conducted using the Mir-X miRNA qRT-PCR TB Green Kit (Takara). Relative quantification (2^−ΔΔCT) was normalized to GAPDH and U6 snRNA. Primer sequences are provided in the Supplementary Materials. Tissues, cells, and exosomes were lysed using RIPA buffer (Solarbio) supplemented with 1% PMSF (Sigma), and the lysates were centrifuged at 12,000 × g for 15 min at 4 °C. Protein concentrations were quantified via a BCA assay (Solarbio). Proteins were then separated by SDS-PAGE and transferred onto PVDF membranes (Millipore). After blocking with 5% fat-free milk, membranes were incubated overnight with primary antibodies (Supplementary Materials). Chemiluminescent substrates (Affinity) were used for detection, and blots were visualized using the SH-Compact523 Chemiluminescence Gel Imaging System (Shenhua). Cell viability was assessed using a CCK8 kit (PUMOKE), and absorbance at 450 nm was measured with a microplate reader (iMark, Bio-Rad). Cell proliferation was evaluated using EdU staining, following the manufacturer’s instructions (Solarbio). A colony formation assay was conducted using 0.3% soft agar. In each well of a 6-well plate, 400–800 cells were seeded and cultured for two weeks. Colonies were then stained with 0.5% crystal violet. 2 × 10⁵ GC cells silencing or overexpressing SERPINE1 were seeded into T75 culture flasks and cultured to approximately 80% confluence. The medium was then replaced with serum-free medium (Umibio) and incubated for an additional 48 h. Subsequently, equal volumes of conditioned medium were collected for exosome isolation by ultracentrifugation (XPN-100, 32Ti rotor, Beckman), and exosomes were re-suspended in 200 µl of cold 1 × PBS. Exosome concentration and size were analyzed via BCA protein assay (Solarbio) and nanoparticle tracking analysis (ZetaView PMX110, Germany), while their morphology and size were confirmed by TEM. Protein markers were detected using western blotting.
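To illustrate the relative-quantification step described above (hypothetical Ct values, not measured data), the sketch below computes fold change by the 2^−ΔΔCT method against a reference gene such as GAPDH:

```r
# Minimal sketch (hypothetical Ct values): 2^-ddCt relative quantification.
ct_target_treated <- 24.1; ct_ref_treated <- 18.0   # treated sample
ct_target_control <- 26.3; ct_ref_control <- 18.2   # control sample

dct_treated <- ct_target_treated - ct_ref_treated   # normalize to reference
dct_control <- ct_target_control - ct_ref_control
ddct        <- dct_treated - dct_control
fold_change <- 2^(-ddct)                            # expression vs control
cat(sprintf("Fold change vs control: %.2f\n", fold_change))
```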
Dil-labeled exosomes (Yeasen) internalized by macrophages were visualized using confocal laser scanning microscopy (CLSM; SP8, Leica). Transwell inserts (8 μm, Corning) were employed to assess cell migration and invasion. 500 µl complete medium was added to the lower chamber of a 24-well plate, and 100 µl of cell suspension (approximately 2 × 10⁴ cells) was placed in the upper chamber for 48 h. Invasion assays were conducted by coating the upper chamber membranes with 100 µl Matrigel Matrix (Corning), followed by the same steps as the migration assay. Invading cells were methanol-fixed, stained with crystal violet, and imaged using an IX51 microscope (Olympus). Exosome solution was diluted 1:10 with 1 × PBS, and 10 µl was applied onto a piece of parafilm, followed by placement of a 200-mesh Formvar-carbon copper grid for 20 min. The grid was fixed in 2.5% glutaraldehyde for 5 min, washed in deionized water, stained with 4% uranyl acetate for 10 min, stained with methylcellulose-UA on ice for 10 min, and observed by Hitachi-7500 TEM. sRNA sequencing was performed by RiboBio Biotech (Guangzhou). Total RNA was isolated from six exosome samples (three from the SERPINE1 knockdown group and three from the control group) using the miRNeasy Micro Kit (QIAGEN). RNA quality and integrity were verified with an ND-1000 Spectrophotometer (NanoDrop Technologies) and a Bioanalyzer 2100 (Agilent). A sequencing library was constructed from 1 µg of total RNA per sample using the NEBNext Multiplex Small RNA Library Prep Set for Illumina (NEB), and sequencing was conducted on the Illumina HiSeq2500 platform. Differential expression of exosomal miRNAs was evaluated using reads per million (RPM) and DESeq2 (v1.26.0). Protein microarray analysis was conducted using the CSP100 Plus Microarray (HWayen), which immobilizes highly specific antibodies covering 16 signaling pathways. Each antibody had six technical replicates, scanned using an Agilent SureScan Dx Microarray Scanner, and image intensities were analyzed with GenePix Pro v6.0 software (Axon). Specific pathogen-free (SPF)-grade BALB/c-nu mice (male, 5 weeks old) were obtained from Chengdu Yaokang Biotech Co. Ltd and kept under SPF conditions. Twelve mice were randomly divided into groups injected with MKN45 or AGS cells transfected with scrambled shRNA or sh SERPINE1 #3 ( n = 3/group). Each mouse was subcutaneously injected with 5 × 10⁶ cells mixed with 50% Matrigel (Corning). Tumor volumes were measured weekly. After 42 days, mice were sacrificed, and tumors were excised, weighed, and divided for western blotting and immunofluorescence analyses. Macrophages induced by PMA were exposed to exosomes isolated from GC cells and transfected with pCMV3-C-Myc-SOCS7 plasmids as before, with or without antagomiR-let-7g-5p. Co-IP was conducted using the Pierce Co-IP Kit (Thermo Scientific). Cell lysates were pre-cleared with Protein A/G beads to remove non-specific proteins, with 20% reserved as input control. The remaining lysates were incubated overnight at 4 °C with anti-STAT3 (rabbit, 1:50, Cell Signaling), anti-Myc (rabbit, 1:200, Beyotime), or IgG (rabbit, 1:50, Abcam), followed by Protein A/G bead capture. After washing to remove unbound proteins, complexes were eluted, denatured, and analyzed via western blotting. FISH analysis was performed using the miRNA FISH Kit (GenePharma). A Cy3-labeled hsa-let-7g-5p probe (5′-Cy3-AACTGTACAAACTACTACCTCA-3′) was synthesized and provided by GenePharma. Briefly, paraffin sections
(5 μm) were incubated at 60 °C for 30 min, deparaffinized with xylene, and rehydrated through graded ethanol. Proteinase K digestion (20 min at 37 °C) and denaturation (8 min at 78 °C) followed. Cells were fixed with 4% formaldehyde for 15 min at room temperature, washed with PBS, and hybridized with denatured probes at 37 °C for 12 h. Nuclei were counterstained with DAPI (GenePharma), and images were captured using an Olympus BX51 fluorescence microscope. ChIP assays were performed using a ChIP Kit (Beyotime) in accordance with established protocols. MKN45 cells were transfected with pcDNA3.1-EGFP-STAT3 plasmids as before. Cells at 24 h post-transfection were crosslinked with 1% formaldehyde at 37 °C for 10 min, followed by quenching with 125 mM glycine at room temperature for 5 min. Cells were subsequently collected, lysed, and subjected to sonication to fragment the DNA into 200–750 bp segments, confirmed by agarose gel electrophoresis. A 10% aliquot of each chromatin complex was reserved as an input control. IP was performed using anti-pSTAT3 (rabbit, 1:50, Cell Signaling Technology) or rabbit IgG (1:50, Abcam) antibodies, with Protein A/G Agarose/Salmon Sperm DNA to capture immune complexes associated with the let-7g-5p promoter. After sequential washes, qPCR was employed to quantify the immunoprecipitated DNA, with ΔΔCt values calculated according to previous literature. The specific primers used are provided in the Supplementary Materials. Potential STAT3-binding sites on the let-7g-5p promoter were identified using JASPAR ( http://jaspar.genereg.net/ ). The promoter region of let-7g-5p was synthesized and inserted into pRP[Pro]-hRluc/Puro-Luciferase reporter plasmids, including negative control, wild-type, and mutant-type constructs (VectorBuilder), which were transfected into MKN45 cells. Luciferase activity was evaluated using the Dual-Luciferase Reporter Assay System (Promega). Relative luciferase activity was calculated following the method described in the previous literature. Data analysis and visualization were conducted using SPSS 26.0 (IBM, Chicago, IL, USA) and GraphPad Prism v9 (GraphPad Software, San Diego, CA, USA). Categorical variables were analyzed with a chi-squared test, and correlations were determined using Spearman’s correlation test. Nonparametric data were analyzed using the Mann-Whitney U test for two-variable comparisons and the Kruskal-Wallis test for multiple variables. Normally distributed continuous data were analyzed using Student’s t-tests or paired t-tests for two-group comparisons and one-way ANOVA with Tukey’s post hoc test for multiple comparisons. All experiments were independently replicated at least three times. Statistical significance was set at p < 0.05 and shown as * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001, ns (not significant) p > 0.05. Human mRNA microarray analysis of 16 pairs of GC and para-carcinoma tissues identified 664 upregulated and 581 downregulated mRNAs. Similarly, differential mRNA analysis of the TCGA-STAD cohort revealed 1927 upregulated and 703 downregulated mRNAs across 375 GC and 32 para-carcinoma tissues, with 387 mRNAs co-upregulated. The CIBERSORT-estimated immune cell infiltration distribution and the 2630 differentially expressed mRNAs in the TCGA-STAD cohort were used for WGCNA. A scale-free network was constructed at the optimal soft threshold (β = 4), at which R² exceeded 0.9, generating seven modules, with the 176 mRNAs in the yellow module identified as M2 macrophage-related genes.
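As a sketch of the soft-threshold selection underlying the scale-free network above (simulated expression matrix; the real analysis used TCGA-STAD data and arrived at β = 4), the WGCNA package's pickSoftThreshold can be applied as follows:

```r
# Minimal sketch (simulated data): choosing the WGCNA soft-thresholding
# power at which the scale-free topology fit R^2 exceeds 0.9.
library(WGCNA)

set.seed(1)
datExpr <- matrix(rnorm(50 * 300), nrow = 50)  # placeholder: 50 samples x 300 genes

sft <- pickSoftThreshold(datExpr, powerVector = 1:20, RsquaredCut = 0.9)
print(sft$powerEstimate)    # lowest power meeting the R^2 cut (NA if none)
print(head(sft$fitIndices)) # per-power fit statistics
```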
Subsequently, a total of 29 mRNAs were identified as being associated with M2 macrophages and prognosis, and their expression was validated in both the 16 pairs of GC and para-carcinoma tissues and the TCGA-STAD cohort. Univariate and multivariate Cox regression analysis of the 29 mRNAs in the TCGA-STAD cohort showed that SPARC and SERPINE1 were independent prognostic factors ( p < 0.05). Kaplan-Meier survival curves indicated that high SPARC , SERPINE1 , and COL1A2 expression correlated with shorter overall survival (OS) in the TCGA-STAD cohort. The TIMER database indicated shorter OS in GC patients with both high SERPINE1 expression and high M2 macrophage infiltration. Kaplan-Meier survival analysis of 380 GC patients undergoing surgery alone from merged GEO datasets and of patients from GSE62254 and GSE22377 further confirmed that high SERPINE1 expression is associated with shorter OS. The StromalScore and ImmuneScore estimated by the R package “estimate” were significantly increased in GC tissues with high SERPINE1 expression. The “Immunedeconv” algorithm determined the proportions of 10 cell types in each GC tissue of the TCGA-STAD cohort and showed that SERPINE1 expression was significantly positively correlated with macrophages. The CIBERSORT deconvolution algorithm determined the proportions of 22 types of immune-infiltrating cells in each GC tissue of the TCGA-STAD cohort and showed a higher proportion of M2 macrophage infiltration in the high- SERPINE1 expression group and a significant positive correlation between SERPINE1 expression and M2 macrophages. Moreover, SERPINE1 expression positively correlated with the expression of 19 immune checkpoints, displaying immunosuppressive features. Immunofluorescence assays of 32 GC tissues showed a higher CD163-positive cell density in the group with high SERPINE1 -positive cell density and a significant positive correlation between CD163- and SERPINE1 -positive cell densities. Fig. 1 Screening of genes associated with M2 macrophages and prognosis of GC. Volcano plots of differential mRNA expression in 16 GC patients ( A ) and the TCGA-STAD cohort ( B ); red dots indicate upregulated genes, and green dots indicate downregulated genes. ( C ) Venn diagram of upregulated mRNAs in 16 GC patients and the TCGA-STAD cohort. ( D ) WGCNA cluster dendrogram and module assignment using a dynamic tree-cutting algorithm. ( E ) Correlation between module genes and immune cell infiltration. The abscissa represents different types of immune cell infiltration and the ordinate represents different modules. Each rectangle displays the Pearson correlation coefficient. ( F ) Venn diagram of the upregulated mRNAs and M2-related yellow-module mRNAs. ( G and H ) Heatmaps of M2-related module-mRNA expression in 16 GC patients and the TCGA-STAD cohort. ( I and J ) Forest plots of univariate and multivariate Cox regression analysis of the M2-related module mRNAs. Kaplan-Meier cumulative survival curves for the combined analysis of SPARC ( K ) or SERPINE1 ( L ) expression and M2 macrophage infiltration in GC. ( M ) Differential analysis of immune stromal components between high- and low- SERPINE1 expression groups. ( N ) Correlation analysis of SERPINE1 expression and immune cells. ( O ) Differential analysis of immune cell infiltration between high- and low- SERPINE1 expression groups. ( P ) Correlation analysis of SERPINE1 expression and immune cell infiltration. ( Q ) Correlation analysis of SERPINE1 expression with immune checkpoint expression.
( R ) Immunofluorescence analysis of CD163 and SERPINE1 expression in 32 pairs of GC and non-GC tissues. ( S ) Difference in CD163-positive cell density between high and low SERPINE1 -positive cell density groups. ( T ) Correlation analysis of CD163-positive and SERPINE1 -positive cell densities in 32 GC tissues. Single-cell sequencing results from the GSE134520 and GSE167297 datasets revealed that SERPINE1 was highly expressed in GC cells. To further investigate the effect of SERPINE1 expression on macrophage M2 polarization, SERPINE1 silencing and overexpression plasmids were constructed for stable lentiviral transfection and transient transfection, respectively. qRT-PCR and western blotting verified that SERPINE1 expression was remarkably downregulated in the silenced group and upregulated in the overexpression group. A Transwell co-culture system further showed that SERPINE1 overexpression promoted macrophage M2 polarization, and this effect was reversed by GW4869 (an exosome inhibitor). Additionally, immunofluorescence staining with F4/80 (a macrophage marker) and iNOS (an M1 polarization marker)/Arg1 (an M2 polarization marker) was used to investigate the impact of SERPINE1 on M2 macrophage infiltration in xenograft tumors, revealing decreased F4/80+Arg1+ cells (M2 TAMs) and increased F4/80+iNOS+ cells (M1 TAMs) in xenograft tumors of GC cells with stably silenced SERPINE1 . Fig. 2 High SERPINE1 expression in GC cells promotes macrophage M2 polarization. tSNE visualization of nine single-cell clusters partitioned by unsupervised cluster analysis, SERPINE1 expression in each single cell, and SERPINE1 expression abundance in different single-cell clusters in the GSE134520 ( A – C ) and GSE167297 ( D – F ) datasets. ( G ) Flow cytometry analysis of the proportion of CD68+CD206+ macrophages in a Transwell co-culture system, with MKN45 and AGS cells overexpressing (oe_ SERPINE1 ) or silencing SERPINE1 (shRNA#3 or sh_ SERPINE1 #3) in the upper chamber and PMA-treated THP1 cells in the lower chamber. ( H ) Immunofluorescence staining of xenograft tumor tissues and comparison of the proportions of M1 and M2 macrophage infiltration; green indicates F4/80, and red indicates iNOS or Arg1 expression. SERPINE1 mRNA expression was significantly upregulated in GC tissues, as confirmed by the GSE118916 dataset, the merged GSE33335 and GSE54129 datasets, and the TCGA-STAD cohort. qRT-PCR analysis revealed higher SERPINE1 expression in AGS and MKN45 cells ( p < 0.05). Higher levels of SERPINE1 mRNA and protein expression were confirmed in GC tissues from 33 patients using qRT-PCR, immunofluorescence, and western blotting analysis. Higher SERPINE1 expression was observed in GC patients with higher T, N, and G stages ( p < 0.05) and in deceased GC patients ( p < 0.05). Univariate and multivariate Cox regression analysis of the TCGA-STAD cohort identified SERPINE1 mRNA level as an independent predictor. ROC curve analysis estimated the predictive value of SERPINE1 mRNA level in the TCGA-STAD cohort; the AUC values for 1-, 3-, and 5-year OS in patients with GC were 0.612, 0.664, and 0.735, respectively. Correlation analysis showed that SERPINE1 mRNA expression correlated with T stage (Table 1 ). Fig. 3 Differential expression and prognosis analysis of SERPINE1 . ( A – D ) Differential expression of SERPINE1 mRNA in the merged GSE33335/GSE54129 ( A ), GSE118916 ( B ), and TCGA-STAD ( C and D ) cohorts. ( E ) Differential SERPINE1 mRNA expression in GC and GES-1 cells.
qRT-PCR ( F ), immunofluorescence ( G ), and western blotting ( H ) analysis of SERPINE1 mRNA and protein expression in GC and non-GC tissues. ( I ) Differential expression of SERPINE1 in GC patients with different clinical stages and survival statuses in the TCGA-STAD cohort and GSE84437 dataset. ( J – K ) Forest plots of univariate and multivariate Cox regression analysis of GC prognosis. ( L ) Time-dependent ROC curves for OS at different time points to assess the predictive ability of SERPINE1 mRNA expression.

Table 1 Demographic and clinicopathological variables of GC patients

Pathological feature            n    Low expression   High expression   P-value
Total                           33   16               17
Gender                                                                  0.934568
  Male                          27   13               14
  Female                        6    3                3
Age                                                                     0.737139
  <65                           26   13               13
  ≥65                           7    3                4
T*                                                                      0.027868
  T1-T2                         4    4                0
  T3-T4                         29   12               17
N                                                                       0.575942
  No                            5    3                2
  Yes                           28   13               15
M                                                                       0.324534
  M0                            32   16               16
  M1                            1    0                1
Stage                                                                   0.118989
  I–II                          14   9                5
  III–IV                        19   7                12
Lauren classification                                                   0.174235
  Intestinal                    6    3                3
  Diffuse                       17   8                9
  Mixed                         10   5                5
Histological differentiation                                            0.170656
  Well differentiated           2    2                0
  Moderately differentiated     17   7                10
  Poorly differentiated         14   7                7

To elucidate the role of SERPINE1 in GC progression, its impact on GC cell proliferation was analyzed through both in vitro and in vivo studies. Immunofluorescence further confirmed the cellular localization and expression of SERPINE1 protein in silenced and overexpressing cells; the protein was mainly localized in the cytoplasm, and its expression was consistent with the results of qRT-PCR and western blotting. SERPINE1 knockdown led to a notable decrease in the proliferation of GC cells, as determined by CCK8, EdU, and colony formation assays, whereas overexpression significantly enhanced GC cell proliferation. A reduced growth rate and lower tumor weight were observed in xenograft tumor models subcutaneously injected with GC cells with silenced SERPINE1 . Fig. 4 SERPINE1 promotes GC cell proliferation in vitro and in vivo. ( A ) Immunofluorescence analysis of SERPINE1 protein cellular localization and expression in GC cells with SERPINE1 silencing and overexpression. ( B – G ) Cell proliferation assays for GC cells with SERPINE1 silencing (sh_ SERPINE1 #3) and overexpression (oe_ SERPINE1 ): CCK8 ( B and C ), EdU ( D – F ), and colony formation assay ( G ). ( H and I ) Nude mice were observed 42 days after subcutaneous injection of MKN45/AGS cells with either silenced (sh SERPINE1 #3) or non-silenced SERPINE1 (shNC). ( H ) Growth curves of xenograft tumor volumes. ( I ) Comparison of tumor weights between sh SERPINE1 #3 and shNC groups. Exosomes were enriched by ultracentrifugation from equal volumes of serum-free, exosome-depleted culture medium derived from GC cells silencing or overexpressing SERPINE1 at the same initial cell densities and used in subsequent experiments to determine the effects of cancer-derived exosomes on M2 polarization. TEM showed typical “saucer-shaped” particles approximately 100 nm in size. NTA analysis revealed homogeneous particle sizes with a mean diameter of 137.6 nm. Exosome concentrations were quantified using the BCA assay with a BSA standard curve, and the results indicated that concentrations across all groups ranged from 66 to 69 µg/mL, showing consistent levels and high quality (Supplementary Table 1 ). Western blotting analysis revealed the presence of the exosomal markers CD63, CD81, and TSG101 and the absence of calnexin. CLSM showed that exosomes were successfully internalized into PMA-induced THP1 cells.
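To illustrate the BCA quantification step above (hypothetical absorbance readings, not the study's measurements), protein concentration can be interpolated from a BSA standard curve by linear regression:

```r
# Minimal sketch (hypothetical readings): estimating exosomal protein
# concentration from a BSA standard curve via linear regression.
bsa <- data.frame(conc = c(0, 25, 50, 100, 200, 400),      # ug/mL standards
                  a562 = c(0.05, 0.11, 0.18, 0.32, 0.60, 1.15))
fit <- lm(a562 ~ conc, data = bsa)                         # standard curve

sample_a562 <- 0.24                                        # unknown sample
b <- coef(fit)                                             # intercept, slope
conc_est <- (sample_a562 - b[1]) / b[2]                    # invert the line
cat(sprintf("Estimated protein: %.1f ug/mL\n", conc_est))
```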
A higher proportion of CD206+ and CD68+CD206+ macrophages was observed by immunofluorescence and flow cytometry in PMA-induced THP1 cells ingesting exosomes derived from GC cells overexpressing SERPINE1 . qRT-PCR analysis of classical M1 and M2 markers revealed decreased M1 markers and increased M2 markers in macrophages ingesting exosomes derived from GC cells overexpressing SERPINE1 . A Transwell co-culture system showed that macrophages ingesting exosomes derived from GC cells overexpressing SERPINE1 promoted GC cell invasion and migration. As presented above, exosomes derived from GC cells overexpressing SERPINE1 significantly enhance M2 polarization, resulting in greater pro-migratory and pro-invasive potential. Fig. 5 SERPINE1 -mediated gastric cancer-derived exosomes facilitate the polarization of THP1 cells into M2 macrophages. ( A ) Schematic representation of the extraction and identification of exosomes and the induction of macrophage polarization. Transmission electron microscopy ( B ), nanoparticle tracking analysis ( C ), and western blotting ( D ) were used to identify the morphology, particle size, and markers of exosomes. ( E ) Confocal laser scanning microscopy detected Dil-labeled exosomes (red) internalized by DAPI-labeled macrophages (blue). ( F – G ) Immunofluorescence analysis of the proportion of CD206+ cells in THP1 cells treated with exosomes. ( H – I ) Flow cytometry analysis of the proportion of CD68+CD206+ cells in THP1 cells treated with exosomes. ( J – K ) qRT-PCR analysis of M1 markers (iNOS and TNF-α) and M2 markers (TGF-β, IL-10, and Arg-1) in THP1 cells treated with exosomes. ( L – N ) Transwell migration and invasion assays of GC cells (upper chamber) co-cultured with macrophages (lower chamber) that had ingested exosomes. sRNA-Seq analysis of exosomal miRNA profiles from MKN45 cells stably silencing SERPINE1 compared with normal MKN45 cells identified 14 differentially expressed miRNAs with |log2(fold change)| > 1.2 and p < 0.05, comprising 9 upregulated and 5 downregulated miRNAs. Exosomal let-7g-5p was the most statistically significant downregulated miRNA ( p = 0.002). From the intersection of miRDB, miRWalk, and miRTarBase predictions, 78 potential targets of let-7g-5p were identified. Signal transducer and activator of transcription 3 (STAT3), a member of the STAT family, plays a key role in M2 polarization and is activated through phosphorylation, dimerization, and nuclear translocation. Protein interaction analysis of the 78 targets and STAT3 using the STRING database revealed that 10 target genes interact with STAT3. KEGG pathway enrichment analysis of the 78 potential targets showed that 5 proteins interacting with STAT3, including SOCS7, IFNLR1, IL13, BCL2L1, and CDKN1A, were enriched in the Janus kinase (JAK)-STAT signaling pathway. It has been reported that STAT3 phosphorylation and nuclear translocation can be negatively regulated by suppressor of cytokine signaling 7 (SOCS7). Thus, exosomal let-7g-5p may bind to the 3′ untranslated region (UTR) of SOCS7 to suppress SOCS7 protein synthesis through mRNA degradation or translational repression, relieving SOCS7-mediated dephosphorylation and inhibition of STAT3 phosphorylation and leading to STAT3 activation in macrophages. Flow cytometry analysis demonstrated that antagomir-let-7g-5p reversed the M2 polarization induced by exosomes derived from GC cells overexpressing SERPINE1 .
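For clarity, the thresholding step above can be expressed as a short filter over a DESeq2-style results table (the rows here are hypothetical placeholders, not the study's sequencing results):

```r
# Minimal sketch (hypothetical table): selecting differentially expressed
# exosomal miRNAs with |log2 fold change| > 1.2 and p < 0.05.
res <- data.frame(
  miRNA          = c("let-7g-5p", "miR-26b-5p", "miR-1291", "miR-x-3p"),
  log2FoldChange = c(-1.8, 1.4, 1.3, -0.6),
  pvalue         = c(0.002, 0.010, 0.030, 0.200)
)
de <- subset(res, abs(log2FoldChange) > 1.2 & pvalue < 0.05)
print(de)   # miRNAs passing both thresholds
```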
However, it remained unclear whether SOCS7 interacts with STAT3 to regulate M2 polarization. In this study, decreased SOCS7 protein levels and increased STAT3 phosphorylation were observed in macrophages ingesting exosomes derived from GC cells overexpressing SERPINE1 , and antagomir-let-7g-5p reversed this effect. It thus appears that let-7g-5p negatively regulates SOCS7 protein expression and promotes STAT3 phosphorylation. Subsequently, a Co-IP experiment demonstrated that SOCS7 and STAT3 co-precipitated in macrophages that had ingested GC cell-derived exosomes, further confirming that SOCS7 physically interacts with STAT3; this interaction was enhanced by antagomir-let-7g-5p. Finally, western blotting analysis demonstrated a significant elevation in SOCS7 levels following SERPINE1 silencing in vivo. These findings indicate that let-7g-5p negatively regulates SOCS7 protein levels and consequently decreases the interaction of SOCS7 with STAT3, leading to STAT3 hyperphosphorylation that facilitates M2 polarization. Fig. 6 SERPINE1 -mediated GC-derived exosomal let-7g-5p facilitates macrophage M2 polarization through STAT3 hyperphosphorylation resulting from inhibition of SOCS7 interactions with STAT3. ( A ) Differential miRNA analysis of exosomes derived from MKN45 cells with stably silenced SERPINE1 and normal MKN45 cells using sRNA-Seq. N, normal group; sh, stably silenced SERPINE1 . ( B ) Venn diagram of target genes of let-7g-5p predicted by miRDB, miRWalk, and miRTarBase. ( C ) Network of target genes that interact with STAT3. ( D ) KEGG pathway analysis of the 78 target genes of let-7g-5p using DAVID. ( E ) Schematic representation: exosomal let-7g-5p ingested by macrophages inhibits the SOCS7 interaction with STAT3, resulting in STAT3 hyperphosphorylation. ( F ) Flow cytometric assay of the impact of let-7g-5p on M2 polarization induced by exosomes derived from GC cells. ( G ) Western blotting analysis of the levels of SOCS7 protein and STAT3 phosphorylation in macrophages treated with exosomes and antagomir-let-7g-5p. ( H and I ) Endogenous Co-IP assay for SOCS7 and STAT3 in macrophages ingesting exosomes derived from normal MKN45 cells. ( J ) Western blotting analysis of SOCS7 protein levels in xenograft tumors. To determine how SERPINE1 regulates exosomal let-7g-5p expression, GSEA, protein microarray, and TF prediction were used sequentially. According to gene set enrichment analysis (GSEA), SERPINE1 expression was significantly correlated with activation of the JAK-STAT pathway. Protein array analysis identified 34 phosphorylated proteins that were downregulated in MKN45 cells with stably silenced SERPINE1 . Among these, STAT3, STAT1, and JUN were screened out by intersecting the 34 downregulated phosphorylated proteins with the 107 TFs targeting let-7g-5p predicted by the TransmiR v2.0 database. STAT3 serves as a central hub for multiple oncogenic signaling pathways and oncogenes, contributing to GC progression and chemotherapy resistance. KEGG pathway enrichment analysis showed that JAK2 and STAT3 were simultaneously enriched in the JAK-STAT pathway. JAK2 and STAT3 were phosphorylated to a lesser extent in the SERPINE1 -silenced group based on the fluorescent intensity of the protein array. Therefore, it is highly likely that SERPINE1 activates JAK2-STAT3 signaling to transcriptionally regulate let-7g-5p. Additionally, western blotting analysis confirmed that pJAK2 and pSTAT3 (Tyr705) levels decreased in GC cells silencing SERPINE1 and increased in GC cells overexpressing SERPINE1 .
This suggests that high SERPINE1 expression activates the JAK2-STAT3 signaling pathway. A JAK inhibitor (Fedratinib, MedChemExpress) blocked the activation of JAK2/STAT3 stimulated by high SERPINE1 expression. Western blotting analysis revealed significantly decreased phosphorylation levels of JAK2 and STAT3 in xenograft tumors silencing SERPINE1 , providing further evidence of the influence of SERPINE1 on the activation of the JAK2-STAT3 signaling pathway in vivo. FISH analysis of let-7g-5p expression showed higher expression in GC cells overexpressing SERPINE1 and lower expression in GC cells silencing SERPINE1 . qRT-PCR also revealed higher let-7g-5p expression in equal volumes of exosomes derived from GC cells overexpressing SERPINE1 , which was partially reversed by the JAK inhibitor. Consistent results were observed in FISH analysis of xenograft tumor tissues, where SERPINE1 -silenced xenografts showed reduced let-7g-5p expression. Overall, SERPINE1 contributed to GC cell-derived exosomal let-7g-5p expression by activating JAK2/STAT3. However, whether STAT3 directly regulates let-7g-5p transcription remained unknown. According to the JASPAR database, STAT3 can bind directly to the promoter of let-7g-5p. Subsequent ChIP-qPCR analysis indicated robust binding of STAT3 to the let-7g-5p promoter. In addition, dual-luciferase reporter gene assays revealed that STAT3 enhanced the activity of the wild-type let-7g-5p promoter. Fig. 7 SERPINE1 promotes exosomal let-7g-5p expression through the JAK2/STAT3 pathway. ( A ) GSEA was conducted for SERPINE1 co-expressed genes using GSEA software (version 4.1.0). ( B ) Heatmap of 35 phosphorylation sites on 34 STAT3 upstream proteins, determined by the median fluorescent intensity of the protein array normalized using Grubbs’ algorithm. ( C ) Venn diagram of the 34 phosphorylated proteins downregulated in the silenced SERPINE1 group and the 107 TFs targeting let-7g-5p. ( D ) Bubble plot combined with a Sankey diagram of enriched KEGG pathways for 32 of the 34 STAT3 upstream proteins. ( E ) Statistical analysis of normalized phospho- and nonphospho-fluorescent protein spots in the protein arrays. Western blotting analysis of total protein and phosphorylation levels of JAK2/STAT3 in GC cells silencing SERPINE1 ( F ) or overexpressing SERPINE1 with or without a JAK inhibitor ( G ). ( H ) Western blotting analysis of JAK2/STAT3 and SOCS7 in xenograft tumors. ( I ) Representative FISH images and comparison of let-7g-5p expression in GC cells. ( J ) qRT-PCR analysis of exosomal let-7g-5p expression in GC cells. ( K ) Representative FISH images and comparison of let-7g-5p expression in xenograft tissues. ( L ) STAT3-binding motif and sites in the let-7g-5p promoter region predicted using the JASPAR database. ( M ) ChIP-qPCR assay demonstrating that STAT3 interacts with the let-7g-5p promoter. ( N ) Dual-luciferase reporter gene assay for the let-7g-5p promoter region. ( O ) Luciferase activity of wt- and mut-let-7g-5p promoters in the presence of vector or oeSTAT3. Mut, mutant type; WT, wild type; Vector, negative control plasmid; oeSTAT3, STAT3 overexpression plasmid. Oncogene activation in tumor cells plays a crucial role in driving the transfer of tumor-derived exosomes to macrophages, which induces M2 polarization and contributes to tumor progression and therapy resistance [14–16].
In this study, transcriptomic and single-cell sequencing analyses of GC tissues identified SERPINE1 as a gene highly expressed in GC cells and closely associated with poor prognosis and elevated M2 macrophage infiltration. Subsequent experiments revealed that highly expressed SERPINE1 promotes GC growth and upregulates let-7g-5p transcription in GC cells through activation of the JAK2/STAT3 signaling pathway. Furthermore, SERPINE1 -mediated transfer of exosomal let-7g-5p disrupts the SOCS7-STAT3 interaction in macrophages, resulting in STAT3 hyperactivation and driving macrophage M2 polarization. Serine protease inhibitor family E member 1 ( SERPINE1 ), encoding plasminogen activator inhibitor-1 (PAI-1, a 45-kDa glycoprotein), is highly expressed in various tumors and serves as a cancer-promoting factor by facilitating tumor cell proliferation, migration, invasion, and angiogenesis in GC [24–26]. Recent studies have demonstrated that SERPINE1 is significantly overexpressed in GC and holds considerable potential as a prognostic marker [27–29]. In this study, SERPINE1 was found to be significantly upregulated in GC through high-throughput mRNA microarray analysis of 16 paired GC and adjacent tissues combined with differential mRNA expression analysis of the TCGA-STAD cohort, consistent with previous reports. Further correlation analysis of clinical characteristics, immune cell infiltration, and prognosis revealed that high SERPINE1 expression was strongly associated with higher M2 macrophage infiltration, advanced clinical stage, and poor prognosis in GC. These findings suggest that SERPINE1 may serve as a potential marker for GC subtyping, immunophenotyping, and prognosis prediction. Additionally, this study revealed that silencing SERPINE1 significantly inhibits GC growth both in vitro and in vivo, aligning with previous findings. Collectively, these results highlight SERPINE1 as an oncogene and suggest that targeting it may provide a novel therapeutic strategy to impede GC progression. SERPINE1 /PAI-1 has garnered increasing attention for its association with M2 macrophage infiltration and immunotherapeutic response in GC. However, the specific functions and mechanisms of SERPINE1 within the GC microenvironment remain unclear. Although reports on SERPINE1 promoting M2 polarization in macrophages are limited, its role and mechanism in driving this process are being increasingly recognized. A study on esophageal squamous cell carcinoma demonstrated that PAI-1 derived from cancer-associated fibroblasts (CAFs) promotes macrophage migration by activating Akt and Erk1/2 through interaction with low-density lipoprotein receptor-related protein 1 (LRP1), an endocytic receptor on macrophage surfaces. Similarly, research on fibrosarcoma and lung cancer cells showed that cancer cell-derived PAI-1 stimulates macrophage migration via its LRP1-binding domain and induces M2 polarization by activating the p38 MAPK/NF-κB signaling pathway and the IL6/STAT3 autocrine loop through its C-terminal urokinase plasminogen activator (uPA)-binding domain. Consequently, it is clear that PAI-1, derived from cancer cells or CAFs in a paracrine manner, plays a crucial role in the regulation of macrophage recruitment and M2 polarization. In this study, single-cell sequencing analysis revealed that SERPINE1 was primarily expressed in GC cells. It was therefore hypothesized that SERPINE1 promotes macrophage M2 polarization in GC through a paracrine secretion mechanism.
Supporting this hypothesis, SERPINE1 overexpression significantly enhanced M2 polarization in human monocytes co-cultured with GC cells. Further investigation using an exosome inhibitor, which blocks exosome production in GC cells, reversed the M2 polarization induced by SERPINE1 overexpression, indicating that, beyond the paracrine pathway, SERPINE1 also promotes macrophage M2 polarization through tumor-derived exosomes. To further validate this finding, exosomes were isolated from GC cells with either silenced or overexpressed SERPINE1 and applied to human monocytes; exosomes derived from SERPINE1-overexpressing GC cells significantly promoted M2 polarization, while those from SERPINE1-silenced GC cells exhibited the opposite effect. This finding not only confirms that SERPINE1 overexpression in GC cells promotes M2 macrophage polarization but also provides a novel perspective beyond previously recognized paracrine mechanisms, highlighting that SERPINE1 mediates the transfer of tumor-derived exosomes to enhance M2 polarization. Exosomes, as pivotal mediators of intercellular communication within the TME, play an essential role in mediating cellular interactions. Tumor-derived exosomes transfer proteins, lipids, and miRNAs to macrophages, reprogramming their gene expression and metabolic pathways and driving macrophages from the anti-tumor M1 phenotype toward the pro-tumor M2 phenotype, thereby promoting tumor growth and progression [33-35]. In this study, exosomal sRNA sequencing analysis revealed that let-7 g-5p was the predominant miRNA carried within SERPINE1-mediated exosomes. Subsequent exploration showed that inhibition of let-7 g-5p significantly reversed macrophage M2 polarization induced by GC cell-derived exosomes, including those from SERPINE1-overexpressing cells. This finding aligns with previous studies showing that tumor-derived exosomes reprogram immune cells by delivering oncogenic factors, such as miRNAs, which influence macrophage behavior and promote immune evasion and tumor growth. However, research on the function of let-7 g-5p remains limited, and its regulatory mechanisms in macrophage M2 polarization represent an important question that warrants further investigation. STAT3 plays a pivotal role in M2 polarization through its phosphorylation, dimerization, and nuclear translocation. In this study, SOCS7 was identified as an interacting protein of STAT3 and a target of let-7 g-5p. SOCS7 is known to negatively regulate STAT3 phosphorylation and nuclear translocation. Thus, exosomal let-7 g-5p is speculated to bind to the 3'UTR of SOCS7 mRNA, promoting its degradation or inhibiting its translation, thereby relieving SOCS7-mediated inhibition of STAT3 phosphorylation and leading to excessive STAT3 activation and M2 polarization in macrophages. In this study, SOCS7 protein levels decreased, whereas STAT3 phosphorylation increased, in macrophages exposed to exosomes derived from SERPINE1-overexpressing cells, and this effect was reversed by antagomir-let-7 g-5p. Subsequent protein interaction assays further confirmed the physical interaction between SOCS7 and STAT3 in macrophages exposed to GC cell-derived exosomes, and let-7 g-5p inhibition enhanced this interaction. This alteration in SOCS7 expression was also validated in vivo.
The results suggest that SERPINE1-mediated, cancer-derived exosomal let-7 g-5p downregulates SOCS7 protein levels, reducing its interaction with STAT3, which leads to STAT3 hyperphosphorylation and promotes M2 polarization. Additionally, previous studies have shown that STAT5 phosphorylation inhibits macrophage M1 polarization and promotes the shift toward the M2 phenotype. SOCS7 has been reported to interact with STAT5, inhibiting its phosphorylation and nuclear translocation. Therefore, it is possible that exosomal let-7 g-5p also promotes M2 polarization by reducing the inhibitory effect of SOCS7 on STAT5. Moreover, MAP3K1 has been identified as a predicted target of let-7 g-5p and is essential for activating the NF-κB and p38/JNK signaling pathways, which are critical for macrophage M1 polarization. Inhibition of MAP3K1 has been shown to promote the transition from M1 to M2 macrophages. Thus, exosomal let-7 g-5p may target MAP3K1 to disrupt M1 polarization-related signaling and promote the conversion of M1 macrophages to the M2 phenotype. These findings offer valuable insights into the role of exosomal let-7 g-5p in shaping an immunosuppressive tumor microenvironment and provide a critical direction for future research. However, this may represent only a small part of SERPINE1's role in facilitating macrophage M2 polarization by mediating the transfer of exosomal miRNAs. Among the exosomal miRNAs downregulated in SERPINE1-silenced GC cells, miR-365a-5p has been reported to promote macrophage M2 polarization by inhibiting the TLR2/MyD88/NF-κB signaling pathway in osteoarthritis. Similarly, miR-106b-3p decreased significantly in ferroptotic cardiomyocyte-derived exosomes, a decrease that promoted macrophage M1 polarization by activating the Wnt signaling pathway. These findings suggest that, in addition to let-7 g-5p, SERPINE1 silencing may inhibit M2 polarization and promote M1 polarization by downregulating miR-365a-5p and miR-106b-3p. Conversely, among the upregulated miRNAs, miR-26b-5p, miR-671-3p, miR-152-3p, miR-1246, miR-1290, miR-346, and miR-1291 were predicted to target HMGA2, a known activator of STAT3 transcription and a driver of macrophage recruitment. Notably, a separate study showed that downregulation of miR-1291 by osteosarcoma-derived exosomal ELFN1-AS1 promotes macrophage M2 polarization via upregulation of CREB1. These observations highlight a potential mechanism through which SERPINE1 silencing reduces M2 macrophage infiltration by inhibiting HMGA2 and upregulating miR-1291. Additionally, miR-671-3p inhibition has been shown to promote macrophage M2 polarization via the KLF12/AKT/c-Myc signaling pathway in pancreatic cancer; thus, its upregulation following SERPINE1 silencing may further disrupt M2 polarization. Collectively, these findings suggest that SERPINE1 silencing modulates macrophage polarization and recruitment through multiple miRNA-mediated signaling pathways, providing critical insights into its regulatory role in M2 polarization. A further critical question concerned how SERPINE1 regulates let-7 g-5p expression. Based on transcription factor prediction for let-7 g-5p and protein microarray analysis in SERPINE1-silenced GC cells, we hypothesized that SERPINE1 transcriptionally regulates let-7 g-5p expression by activating JAK2 to promote STAT3 phosphorylation and nuclear translocation.
Our results confirmed that silencing SERPINE1 inhibited both JAK2 and STAT3 phosphorylation in vivo and in vitro. Additionally, subsequent cell experiments revealed that SERPINE1 overexpression elevated exosomal let-7 g-5p levels, an effect counteracted by a JAK inhibitor. Further analysis revealed that STAT3 could directly bind to the let-7 g-5p promoter, enhancing its transcriptional activity. These findings suggest that SERPINE1 overexpression promotes let-7 g-5p transcription by activating the JAK2/STAT3 signaling pathway, thereby elevating exosomal let-7 g-5p levels. Given that JAK2/STAT3 is frequently activated in various malignancies, including GC, where it drives cell proliferation, migration, and invasion and shapes the immunosuppressive microenvironment, the discovery that SERPINE1 activates the JAK2/STAT3 pathway not only reveals the regulatory mechanism of exosomal let-7 g-5p but also provides a potential explanation for the role of SERPINE1 in promoting GC cell proliferation. This finding adds a new layer of understanding by showing that SERPINE1 regulates and mediates the transfer of exosomal miRNAs through the JAK2/STAT3 signaling axis to favor tumor progression. However, whether SERPINE1 activates JAK2 through cytoplasmic protein interactions or via autocrine signaling through LRP1 and the C-terminal uPA-binding domain on the GC cell surface remains unclear and warrants further study. In conclusion, this study demonstrates the dual role of SERPINE1 in promoting GC cell proliferation and driving TAM M2 polarization through both autocrine signaling and exosome-mediated communication. As shown in Fig. 8, SERPINE1 transcriptionally regulates exosomal let-7 g-5p levels in GC cells through the JAK2/STAT3 signaling pathway. By transferring exosomal let-7 g-5p to macrophages, SERPINE1 disrupts the SOCS7-STAT3 interaction, thereby lifting SOCS7-mediated suppression of STAT3 phosphorylation and leading to STAT3 hyperactivation and subsequent macrophage M2 polarization. This study establishes SERPINE1 as a key regulator driving macrophage M2 polarization and immune suppression through exosome-mediated miRNA transfer. Targeting SERPINE1, whether by inhibiting JAK2/STAT3 signaling in GC cells, blocking exosomal let-7 g-5p translocation, or disrupting STAT3 activation in macrophages, presents a novel therapeutic strategy to reprogram the immunosuppressive microenvironment, potentially transforming GC treatment through enhanced anti-tumor immunity.

Fig. 8 Graphical summary of the molecular mechanism by which SERPINE1-mediated gastric cancer-derived exosomes promote macrophage M2 polarization.
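As a side note on the TCGA-based prognostic association reported above, the sketch below illustrates the kind of median-split survival comparison commonly used for such claims. It runs on synthetic data; the lifelines package and all variable names are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Synthetic stand-in for a TCGA-STAD-style cohort: expression level,
# follow-up time in months, and an event flag (1 = death observed).
df = pd.DataFrame({
    "SERPINE1": rng.lognormal(2.0, 0.8, 200),
    "os_months": rng.exponential(30, 200),
    "os_event": rng.integers(0, 2, 200),
})

high = df["SERPINE1"] > df["SERPINE1"].median()   # median split into groups
res = logrank_test(
    df.loc[high, "os_months"], df.loc[~high, "os_months"],
    event_observed_A=df.loc[high, "os_event"],
    event_observed_B=df.loc[~high, "os_event"],
)
print(f"log-rank p = {res.p_value:.3f}")

km = KaplanMeierFitter()
km.fit(df.loc[high, "os_months"], df.loc[high, "os_event"],
       label="SERPINE1-high")                     # curve object for plotting
```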
| Review | biomedical | en | 0.999996 |
PMC11694455 |
Intimate partner violence (IPV) constitutes the most common form of violence against women and is defined as any behavior by a current or former intimate partner that causes physical, sexual, or psychological harm, including acts of physical aggression, sexual coercion, psychological abuse, and controlling behaviors. According to the EU-wide survey on violence against women, 43% of women in the EU who had a (current or previous) partner reported experiences of psychological IPV and 22% reported physical and/or sexual IPV since the age of 15. In Germany, the lifetime prevalence of IPV among women aged 14 years and older was 57.6%. Among the subtypes, psychological IPV was most prevalent at 53.6%, while 15.2% of women reported physical IPV and 18.6% reported sexual IPV, and all forms regularly coincided. In the first year postpartum, the prevalence of IPV worldwide ranges from 2% in Sweden to 58% in Iran. In addition to women often facing substantial physical and mental health challenges during the postpartum period, one German study suggests that the birth of a child can also be a catalyst for IPV. Women who had experienced IPV reported worse health across a number of health domains in the immediate period after childbirth [8-11] and across the lifespan, highlighting the importance of providing support to postpartum women experiencing IPV. Although any kind of support can improve the recovery and safety of women experiencing IPV [14-17], in Germany more than two thirds of women suffering from physical or sexual IPV do not seek formal help. Even in the case of informal help (e.g., family and friends), the proportion of women who seek support is alarmingly low. Consequently, approximately four out of ten affected German women do not tell anyone about the violence. Based on Andersen's Behavioral Model of Access to Health Care and a model from Liang et al., the process of help-seeking starts with problem recognition, which is influenced by social and cognitive factors, followed by a decision to act and, finally, the selection of a source of help. The cross-sectional study INVITE (INtimate partner VIolence Treatment prEferences), on which the current study is based, integrated these two existing models of help-seeking into its theoretical framework and adjusted them to relevant aspects of motherhood. This seems important because postpartum women in particular may face specific barriers that make it difficult to seek help, such as fear of being separated from their children, e.g., by losing custody [18, 22-25], or a focus on preserving family unity [25-27]. At the same time, studies have shown that becoming a mother can motivate affected women to seek help [28-31]. Therefore, it is particularly important to investigate women's preferences for services in the postpartum period to best support them in seeking help and to prevent intergenerational impacts of IPV on families.

Fig. 1 Predictors of Help-Seeking Preferences: Theoretical Framework of the INVITE Study. Note. Theoretical framework of the INVITE study, an adapted version of Andersen's Behavioral Model of Access to Health Care and a model from Liang et al. From "Preferences and barriers to counseling for and treatment of intimate partner violence, depression, anxiety, and posttraumatic stress disorder among postpartum women: study protocol of the cross-sectional study INVITE." by Seefeld, L., Mojahed, A., Thiel, F., Schellong, J., and Garthus-Niegel, S., 2022, Frontiers in Psychiatry, 13. CC BY 4.0.
Previous research has shown that women's decision to seek help is facilitated by offering a variety of services that meet their needs. Services can be classified into domains depending on the service provider, setting, and type of help provided. There is informal support (e.g., family and friends) and there are formal support services, which comprise counseling services (e.g., family counseling centers), specific domestic violence services (e.g., women's shelters), criminal justice services, and medical services (e.g., emergency room, general practitioner). During the perinatal period, women have frequent interactions with healthcare services, and previous research suggests that this may offer an opportunity to help women affected by IPV. However, services often used by women in the perinatal period, such as gynecologists and midwives, have so far not been investigated as support services for women experiencing IPV. Additionally, most previous studies have investigated affected women's satisfaction with the services received and have neglected affected women who have not yet sought help. Ensuring accessibility and tailored services is essential for these women. They may be unaware of the support available, so exploring their preferences and offering appropriate services may enable them to seek help when they need it. Taking these preferences into account promotes prevention, early identification, accessibility, and empowerment of women. Therefore, it is particularly important to include women who have not yet sought or received help in studies investigating service preferences. Research so far suggests that women choose different services according to their needs. For instance, those who need emotional support tend to seek informal help, while those who need immediate medical assistance are more likely to seek formal help [39-43]. Further, prior studies showed that the type of violence influenced the type of service approached. Associations were found between psychological violence and informal help [39-44]. While some studies revealed that experiences of physical IPV led more frequently to seeking formal help, another study found that participants were more likely to seek informal help for physical IPV. Experiences of sexual IPV were associated with increased help-seeking in the medical and psychosocial service domains and decreased informal help-seeking. Combined physical and sexual IPV was associated with increased help-seeking regarding psychosocial services, including community and IPV-specific services. These heterogeneous findings may be due to the fact that different studies assessed different service domains and did not always cover all types and combinations of IPV. The extent to which different types of IPV may affect preferences for specific service domains, and for the settings in which services may be provided, needs to be investigated in more detail. Services can be provided directly (e.g., in person or via video conference) or indirectly (e.g., via chat, e-mail, or mobile phone app), with the latter meaning that communication can be delayed in time. The number of service provision modes is constantly growing. Prior studies indicated that some women experience indirect service provision as more accessible, while others prefer direct care. Women affected by IPV who preferred indirect modes reported that seeking help this way was easier than in person because of anonymity and better accessibility at any time.
Indirect modes were perceived as a safe and confidential tool for initiating discussions about IPV with health professionals, assisting women in enhancing their safety and exploring help-seeking options. This seems especially beneficial for women with social anxiety, when transportation is an issue, or for women whose partners control their physical whereabouts. However, there are still challenges with indirect modes of service provision. There is not always access to a stable internet connection or an electronic device, and some clients struggle with security risks (e.g., device security, children or abusers overhearing conversations) or technological skills. In general, there is little research on preferences for different modes of service provision among non-affected and affected women in the postpartum period and in the case of different types of IPV. The aim of this study was to provide new insights into the preferences for counseling and treatment services, along with the mode of service provision, among postpartum women in Germany (non-)affected by IPV. The study extends previous research by examining a wide range of services for women in the postpartum period, specialized in IPV or more general, in different settings and with different modalities. It also fills a knowledge gap regarding the preferences of women who have not yet received help, especially in the postpartum period, differentiated by type of IPV. For this reason, we decided in favor of an exploratory data analysis. Our objectives were to explore whether and how women with experiences of psychological, physical, and/or sexual IPV differ from women without experiences of IPV in their ratings of preferences for counseling and treatment services (in total and in specific service domains) and for modes of service provision (in total and in specific provision domains). This study was based on data from the cross-sectional study INVITE, which examines the preferences for and barriers to counseling and treatment services of women in the postpartum period. The aim was to investigate women's (postpartum) health and other factors that support access to appropriate services, especially in cases of mental health problems or exposure to IPV. For this purpose, women were contacted three to four months postpartum for a standardized telephone interview lasting approximately one hour. All participants provided written informed consent to participate. Recruitment for the study took place from November 2020 to October 2023 at various maternity hospitals and freestanding birth centers in the Dresden area, Germany, specifically at birth information events, home visits for new parents by the youth welfare office, midwife (antenatal) appointments, and on the maternity ward. Women who were younger than 16 years or who did not have sufficient language skills in German or English to take part in the study were excluded. Monetary compensation for study participation was 20 €. The current study is based on Version 3 of the quality-assured data files of the INVITE study. At the time of data extraction, 4,164 of the 9,372 women approached gave consent to participate in the study, and 3,547 completed the interview. To obtain comparable results, women less than six weeks or more than six months postpartum at the time of the interview were excluded (n = 21). Additionally, n = 19 women were excluded due to missing data in all variables of interest.
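A compact sketch of these sample-construction rules follows, using pandas; the column names and toy values are illustrative assumptions, not the INVITE study's actual variable names.

```python
import pandas as pd

# Toy interview records; weeks_postpartum and the three IPV columns are
# hypothetical stand-ins for the study's variables of interest.
interviews = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "weeks_postpartum": [5, 14, 30, 16],
    "psych_ipv": [0, 1, 0, None],
    "phys_ipv": [0, 0, 0, None],
    "sex_ipv": [0, 1, 0, None],
})

# Keep interviews conducted between six weeks and six months (~26 weeks)
# postpartum, and drop cases missing ALL variables of interest.
in_window = interviews["weeks_postpartum"].between(6, 26)
ipv_cols = ["psych_ipv", "phys_ipv", "sex_ipv"]
has_any_data = interviews[ipv_cols].notna().any(axis=1)

analysis_sample = interviews[in_window & has_any_data]
print(analysis_sample["id"].tolist())  # -> [2]; cases 1, 3, 4 are excluded
```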
The retention rate of the study population is presented in Fig. 2. Fig. 2 Study Population and Retention Rate. Note. The final sample size was based on recruitment between November 2020 and April 2023. a Due to consent being withdrawn or because the woman could not be reached. b Women less than six weeks or more than six months postpartum at the time of the interview. Women were divided into five groups: non-affected women, women affected by psychological IPV, women affected by physical IPV, women affected by sexual IPV, and women affected by both physical and sexual IPV. Previous studies have shown that there is a large overlap between psychological IPV and physical and/or sexual IPV. Therefore, women affected by both psychological and physical IPV as well as women affected by both psychological and sexual IPV were included in the physical and/or sexual IPV groups. The independent variables were assessed using the WHO Violence Against Women Instrument (WHO-VAWI), with behavior-specific items related to psychological (four items), physical (six items), and sexual IPV (three items). Respondents were categorized as exposed to IPV if they answered any question affirmatively. For each question, respondents were asked whether they had experienced the specific act during the past year and/or earlier in life. The instrument was translated into German and validated by the INVITE research team, following suggestions from the WHO. In this sample, the internal consistency of the total score was good (Cronbach's α = 0.82). The internal consistency of the subscales was acceptable for psychological IPV (McDonald's ω = 0.69) and good for physical IPV (ω = 0.82) and sexual IPV (ω = 0.73). The first main dependent variable was the total score of preferences for counseling and treatment services, assessed with a self-generated questionnaire. These services were of varying specialization within the German support system and ranged from options tailored specifically to IPV, such as women's shelters, to more general services for all women, such as general practitioners. Women were asked to indicate on a four-point response scale ("not at all" to "definitely") how likely they would be to pick a specific service if they are or were affected by any type of IPV. Women who were not affected were asked to imagine the respective situation and answer the questions as if they were affected (i.e., hypothetically). The questionnaire consists of 19 items, which form a total score ranging between 19 and 76. The total score is used to determine how likely women generally are to seek help from any kind of service. Alongside this composite variable, subscales of services were computed and considered as distinct dependent variables. To this end, an exploratory principal axis factor analysis (PFA) with varimax rotation was conducted to determine the factor structure of these dependent variables (see Appendix 1). Examination of Kaiser's criterion and Bartlett's test justified retaining three factors with eigenvalues exceeding 1, which accounted for 41.27% of the total variance. Three items ("family member, friend, or colleague", "woman in the same situation", and "religious institutions") were excluded due to factor loadings below 0.30. The following subscales were derived: psychosocial services (e.g., women's shelter or self-help groups), medical services (e.g., gynecologist or emergency room), and midwives (midwives and family midwives).
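Stepping back to the exposure coding and five-group classification described at the start of this passage, a minimal sketch follows. The "any affirmative item counts as exposure" rule and the merging of psychological IPV into the physical/sexual groups follow the text; the function and argument names are illustrative assumptions.

```python
def ipv_group(psych_items, phys_items, sex_items):
    """Assign one of the five analysis groups from WHO-VAWI item responses
    (1 = experienced the act, 0 = not). Any affirmative answer counts as
    exposure; psychological IPV co-occurring with physical and/or sexual
    IPV is absorbed into the physical/sexual groups, as described above."""
    psych = any(psych_items)   # 4 psychological items
    phys = any(phys_items)     # 6 physical items
    sex = any(sex_items)       # 3 sexual items
    if phys and sex:
        return "sexual and physical"
    if phys:
        return "physical"
    if sex:
        return "sexual"
    if psych:
        return "psychological"
    return "non-affected"

print(ipv_group([1, 0, 0, 0], [0] * 6, [0, 1, 0]))  # -> "sexual"
print(ipv_group([1, 0, 0, 0], [0] * 6, [0] * 3))    # -> "psychological"
```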
Internal consistency was good (α = 0.83) for the service preferences questionnaire, as well as for the subscales of psychosocial (ω = 0.80) and medical services (ω = 0.79). The inter-item correlation for the midwives subscale was high (0.64). Finally, participants were asked whether they would contact the police or any legal service provider if they experienced IPV. Here, the answer options were "yes", "no", and "I don't know". The second main dependent variable was the total score of preferences for modes of service provision for IPV, assessed with a self-generated questionnaire. The questionnaire consisted of seven items, and participants could answer on a four-point response scale ("not at all" to "definitely"), yielding a minimum total score of seven and a maximum of 28, whether the different options would meet their needs when using IPV services. A high total score indicates that women would seek help across all modes in case of IPV. Subscales for different modes of service provision were calculated with an exploratory PFA with varimax rotation (see Appendix 2) and considered as separate dependent variables to investigate whether there are differences in preferences for certain provision modes. Two factors explaining a total of 44.77% of the variance were revealed: direct modes (i.e., in person, telephone call, and video conference) and indirect modes (i.e., app or online platform, chat, or e-mail). In comparison to the direct modes, the indirect modes can be time-delayed and are less identifiable. Internal consistency was acceptable (α = 0.72) for the mode of service provision preferences questionnaire and good for the subscales of direct modes (ω = 0.70) and indirect modes of service provision (ω = 0.70). In accordance with the assessment of counseling and treatment service preferences, participants not currently affected by IPV were asked to imagine that they were suffering from IPV. Based on their importance in the literature on postpartum help-seeking behavior [34, 57-59], the following variables were included as potential covariates: duration of residence in Germany, categorized as "born in Germany" and number of years since migration to Germany (< 5 years, 5-10 years, and > 10 years); net monthly household income (< 1,250 €, 1,250 €-2,249 €, 2,250 €-2,999 €, 3,000 €-3,999 €, 4,000 €-4,999 €, and > 5,000 €); and total number of children. Additionally, the time of occurrence of IPV (experiences in lifetime versus within the last 12 months) was included as a potential covariate. For the following data analyses, IBM SPSS Statistics (Version 28.0.0.0) was used. First, the data set and all relevant variables were checked for outliers and extreme values. Univariate outliers were identified using boxplots displaying the interquartile range (IQR); values more than three times the IQR beyond the quartiles were considered extreme univariate outliers. Multivariate outliers were identified using the Mahalanobis distance, based on robust estimations of the mean and covariance matrix. Assumptions for all analyses were tested. The main sociodemographic characteristics, potential confounding variables, predictors, and outcome variables were examined descriptively. Missing values of less than 20% in psychometric scales were replaced by the woman's mean value. If a woman completed less than 80% of the items of a given questionnaire, the score for that questionnaire was treated as missing.
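Before continuing with how such partially missing cases were handled across questionnaires, the scoring-plus-imputation rule just described can be sketched as follows. This is an illustrative implementation; the handling of the exact 20%/80% boundary is an assumption, as the text does not specify it.

```python
import numpy as np

def scale_score(responses, max_missing=0.20):
    """Total score for a 1-4 coded scale (e.g., the 19-item service-preference
    questionnaire). Person-mean imputation when fewer than 20% of items are
    missing; the score is set to missing when less than 80% was completed.
    Treatment of exactly 20% missing is an assumption (imputed here)."""
    x = np.asarray(responses, dtype=float)
    frac_missing = np.isnan(x).mean()
    if frac_missing > max_missing:                 # completed < 80% of items
        return np.nan
    x = np.where(np.isnan(x), np.nanmean(x), x)    # replace with person mean
    return float(x.sum())

items = [3, 4, np.nan, 2, 3, 3, 2, 4, 3, 3, 2, 1, 3, 4, 2, 3, 3, 2, 4]
print(scale_score(items))  # one missing item imputed with the mean of the rest
```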
However, responses from the same participant on other questionnaires were included in the subsequent analyses. Spearman correlation analyses were performed to identify relevant covariates. Variables which correlated significantly with a main dependent variable (total score of counseling and treatment service or total score of mode of service provision preferences) were included in the multivariate analyses as covariates. Two one-way analyses of covariance (ANCOVA) were performed to investigate the differences between the total scores of service preferences as well as the mode of service provision preferences between all groups including identified covariates. As there was heterogeneity of covariates across groups for number of children and maternal age, main effects and interaction statistics were reported based on Wilks' lambda test statistic, following the recommendation of Ateş et al. . Additionally, analyses were performed with and without these heterogeneous covariates to control for this unfulfilled assumption. As no difference in significance level was found, covariates were maintained. In the case of identified outliers, main analyses were reported without outliers, and sensitivity analyses were performed to control for their influence. Significant differences in these sensitivity analyses were reported. The ANCOVA for service preferences was calculated with the interaction term “maternal age*IPV group” to address the unmet statistical requirements of homogeneity of regression slopes. To reveal the differences between the groups and their relation to the subscales of service preferences as well as the subscales of preferences in modes of service provision, two one-way multiple analyses of covariance (MANCOVA) and post hoc tests were applied. Due to multiple testing, Bonferroni correction was applied. As there was no normal distribution of residuals, all analyses were performed with bootstrapping using the bias-corrected and accelerated (BCa) method with 95% confidence intervals (CIs). Analyses were calculated with 1,000 iterations. In ANCOVAs and MANCOVAs SPSS uses listwise deletion as the standard method to handle missing data. This means that SPSS removes entire cases from the analysis if there are any missing values in any of the variables involved in that analysis. Therefore, n varied slightly between analyses. Data collection and management were facilitated by using “Research Electronic Data Capture” (REDCap), a secure, web-based application for data capture as part of research studies, hosted at the ‘Koordinierungszentrum für Klinische Studien’ at the Faculty of Medicine of the Technische Universität Dresden . Mothers were on average 32.95 years old ( SD = 4.63). Most of them were born in Germany (91.5%), had a partner (97.8%), and attained more than ten years of education (73.9%). Detailed sociodemographic characteristics of the sample are presented in Table 1 . 
Table 1 Sociodemographic characteristics of the sample

Continuous characteristics: M ± SD (range)
  Maternal age (years): 32.95 ± 4.63 (16.78-53.96)
  Age of child (weeks): 13.23 ± 2.73 (6.00-25.57)

Categorical characteristics (n = 3,507): n (%)
  Duration of residence in Germany (time since migration)
    Born in Germany: 3,210 (91.5)
    < 5 years: 87 (2.5)
    5-10 years: 103 (2.9)
    > 10 years: 109 (3.1)
  Partnership status
    Partner: 3,432 (97.8)
    No partner: 77 (2.2)
  Education
    ≤ 10 years: 914 (26.0)
    > 10 years: 2,592 (73.9)*
  Net income (per month and household)
    < 1,250 €: 84 (2.4)
    1,250 €-2,249 €: 356 (10.2)
    2,250 €-2,999 €: 440 (12.6)
    3,000 €-3,999 €: 895 (25.6)
    4,000 €-4,999 €: 883 (25.3)
    > 5,000 €: 837 (24.0)*
  Number of children
    1: 1,843 (52.6)
    2: 1,275 (36.4)
    3: 298 (8.5)
    4: 67 (1.9)
    ≥ 5: 22 (0.6)
  IPV groups
    Without: 1,813 (51.9)
    Psychological: 941 (26.9)
    Physical: 278 (7.9)
    Sexual: 297 (8.5)
    Sexual and physical: 172 (4.9)

Note. n varied slightly due to missing values. IPV = intimate partner violence. * Valid percentages do not add up to 100% due to rounding.

In this sample, 48.2% of women disclosed instances of IPV. Specifically, 45.2% of women had encountered psychological IPV, with 13.5% experiencing it within the past 12 months. Additionally, 12.8% reported incidents of physical IPV, with 1.9% being affected in the last 12 months. Moreover, 13.3% of women were affected by sexual IPV, with 1.3% experiencing it within the past 12 months. As shown in Fig. 3, most women affected by sexual and/or physical IPV also reported experiences of psychological IPV, for which the highest lifetime prevalence was found. Fig. 3 Overlap in Lifetime Prevalence for Different Types of IPV. Note. IPV = intimate partner violence. Proportions relative to the total sample of affected women (n = 1,688). Table 2 shows the descriptive statistics of preferences for all service and mode of service provision items across all groups. Among the various services, on average women tended to prefer the items "family member, friend, or colleague" and "woman in the same situation" the most and "religious institutions" the least. "In person" was the most popular modality of service provision, followed by "telephone call" and "video conference". Women rated "apps or online platforms without guidance from an expert" as the least popular on average.
Table 2 Descriptive statistics of preferences for individual counseling and treatment services and for modes of service provision. Values are M (SD) on a four-point scale (1 = not at all, 2 = rather no, 3 = rather yes, 4 = definitely); columns: total sample (n = 3,507) | affected women (n = 1,688) | non-affected women (n = 1,812).

Counseling and treatment service preferences (likeliness of picking a service)
  Family member, friend, or colleague: 3.3 (0.8) | 3.3 (0.8) | 3.4 (0.7)
  Woman in the same situation: 3.2 (0.7) | 3.2 (0.7) | 3.2 (0.7)
  Women's shelter: 3.1 (0.8) | 3.0 (0.8) | 3.1 (0.8)
  Intervention center: 3.0 (0.7) | 3.0 (0.7) | 3.1 (0.7)
  Emergency room: 3.0 (0.9) | 2.9 (0.9) | 3.0 (0.8)
  Help hotline for violence against women or sexual abuse: 2.9 (0.8) | 2.9 (0.7) | 3.0 (0.8)
  Life and family counseling center: 2.9 (0.7) | 2.9 (0.8) | 2.9 (0.7)
  Family midwife: 2.9 (0.9) | 2.9 (0.8) | 2.9 (0.8)
  Counseling service for victims of crime: 2.9 (0.8) | 2.8 (0.8) | 2.9 (0.7)
  Midwife: 2.8 (0.9) | 2.8 (0.9) | 2.9 (0.9)
  Gynecologist: 2.8 (0.9) | 2.8 (0.9) | 2.8 (0.9)
  Outpatient clinic for psychiatry or psychosomatic medicine: 2.8 (0.8) | 2.8 (0.9) | 2.7 (0.8)
  General practitioner: 2.7 (0.9) | 2.6 (0.9) | 2.7 (0.9)
  Self-help group: 2.6 (0.9) | 2.6 (0.9) | 2.7 (0.9)
  Psychosocial crisis service of the health department: 2.5 (0.8) | 2.5 (0.8) | 2.5 (0.8)
  Telephone counseling: 2.5 (0.8) | 2.5 (0.8) | 2.5 (0.8)
  Social pedagogical family assistance: 2.5 (0.8) | 2.4 (0.8) | 2.5 (0.8)
  Pediatrician: 2.3 (1.0) | 2.3 (1.0) | 2.3 (0.9)
  Religious institutions: 1.7 (0.8) | 1.6 (0.8) | 1.8 (0.9)

Mode of service provision preferences
  In person: 3.7 (0.5) | 3.7 (0.6) | 3.7 (0.5)
  Telephone call: 2.7 (0.8) | 2.7 (0.8) | 2.7 (0.7)
  Video conference: 2.6 (0.8) | 2.6 (0.8) | 2.7 (0.8)
  App or online platform with guidance from an expert: 1.9 (0.8) | 1.9 (0.8) | 2.0 (0.8)
  Chat: 1.8 (0.8) | 1.9 (0.8) | 1.8 (0.7)
  E-mail: 1.8 (0.8) | 1.8 (0.8) | 1.8 (0.8)
  App or online platform without guidance from an expert: 1.5 (0.6) | 1.5 (0.6) | 1.5 (0.6)

Note. n varied slightly due to missing values.

In addition to these services, women were asked if they would contact the police or any other law enforcement agency. In this sample, 2,522 women (71.9%) affirmed that they would contact the police, while 728 women (20.7%) answered "don't know" and 259 women (7.4%) answered "no". Spearman's rank correlation was computed to assess the relationships between covariates and outcome variables. A higher score of counseling and treatment service preferences was statistically significantly associated with higher maternal age (r = 0.053, p = 0.001). A higher score of mode of service provision preferences was statistically significantly associated with a shorter duration of residence in Germany (r = −0.044, p = 0.001) and a lower number of children (r = −0.043, p = 0.05). Therefore, maternal age was included in the following analysis for service preferences (for the total score as well as for every service subscale). Likewise, duration of residence in Germany and number of children were included in the analysis for mode of service provision preferences. Detailed correlations can be found in Appendix 3. After adjusting for maternal age and its interaction term due to the unmet homogeneity-of-slopes requirement (maternal age*IPV group), IPV groups differed significantly in their total scores for service preferences [F = 3.161, p = 0.013, partial η² = 0.004].
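For readers who want to reproduce this kind of adjusted comparison outside SPSS, here is a minimal sketch using statsmodels and SciPy on synthetic data, before turning to the post hoc comparisons. The variable names, the degrees of freedom in the η² example, and the bootstrap target are illustrative assumptions, not the study's exact pipeline (which also used MANCOVA and Bonferroni-corrected post hoc tests).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for the INVITE variables (names are illustrative).
df = pd.DataFrame({
    "pref_total": rng.normal(50, 8, 500),
    "group": rng.choice(["none", "psych", "phys", "sex", "phys_sex"], 500),
    "age": rng.normal(33, 4.6, 500),
})

# One-way ANCOVA with maternal age as covariate, including the group x age
# interaction used in the text when regression slopes are not homogeneous.
model = smf.ols("pref_total ~ C(group) * age", data=df).fit()
print(anova_lm(model, typ=3))

def partial_eta_sq(f, df_effect, df_error):
    """Recover partial eta squared from a reported F test."""
    return f * df_effect / (f * df_effect + df_error)

# df values below are assumed (4 group df, error df near the sample size).
print(round(partial_eta_sq(3.161, 4, 3400), 3))  # ~0.004, as reported above

# BCa bootstrap CI (1,000 resamples) for one group's mean preference score.
phys = df.loc[df.group == "phys", "pref_total"].to_numpy()
res = stats.bootstrap((phys,), np.mean, n_resamples=1000,
                      confidence_level=0.95, method="BCa", random_state=0)
print(res.confidence_interval)
```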
Bonferroni-corrected post hoc tests revealed a lower total score for counseling and treatment service preferences for women affected by any type of IPV compared to women without experiences of IPV, indicating that they preferred all services less than non-affected women. Furthermore, the total score for preferred services of women affected by physical IPV or by sexual IPV was significantly lower than that of women affected by psychological IPV (see Table 3). Neither maternal age as a covariate [F = 1.931, p = 0.165] nor its interaction term [F = 2.741, p = 0.098] showed statistical significance.

Table 3 Group comparisons for counseling and treatment service preferences. Values are mean differences M(I−J) with p values and 95% BCa CIs [LL, UL], for the total score and the three subscales.

Without vs. psychological:
  Services total: 2.06* (p = .015) [0.53, 3.73]; psychosocial: 0.57** (.004) [0.22, 0.96]; medical: 0.27** (.016) [0.06, 0.48]; midwives: 0.20** (.002) [0.08, 0.33]
Without vs. physical:
  Services total: 4.42** (.007) [1.51, 7.94]; psychosocial: 1.44*** (<.001) [0.84, 2.06]; medical: 0.39* (.032) [0.02, 0.74]; midwives: 0.43*** (<.001) [0.23, 0.63]
Without vs. sexual:
  Services total: 5.32* (.023) [1.10, 10.37]; psychosocial: 1.35*** (<.001) [0.76, 1.95]; medical: 0.28 (.092) [−0.05, 0.63]; midwives: 0.47*** (<.001) [0.26, 0.66]
Without vs. sexual and physical:
  Services total: 6.28* (.040) [0.81, 12.71]; psychosocial: 1.25** (.008) [0.39, 2.13]; medical: 0.32 (.170) [−0.12, 0.80]; midwives: 0.38*** (<.001) [0.14, 0.61]
Psychological vs. physical:
  Services total: 2.37** (.008) [0.79, 4.19]; psychosocial: 0.87* (.011) [0.23, 1.56]; medical: 0.12 (.540) [−0.28, 0.48]; midwives: 0.22* (.039) [0.02, 0.43]
Psychological vs. sexual:
  Services total: 3.26* (.046) [0.33, 6.68]; psychosocial: 0.77* (.014) [0.12, 1.44]; medical: 0.01 (.943) [−0.35, 0.37]; midwives: 0.26* (.018) [0.03, 0.47]
Psychological vs. sexual and physical:
  Services total: 4.22 (.065) [0.18, 9.04]; psychosocial: 0.68 (.148) [−0.21, 1.62]; medical: 0.05 (.830) [−0.45, 0.55]; midwives: 0.18 (.160) [−0.09, 0.43]
Physical vs. sexual:
  Services total: 0.90 (.342) [−0.95, 2.99]; psychosocial: −0.10 (.808) [−0.95, 0.64]; medical: −0.10 (.642) [−0.58, 0.35]; midwives: 0.04 (.760) [−0.24, 0.29]
Physical vs. sexual and physical:
  Services total: 1.86 (.238) [−1.12, 5.30]; psychosocial: −0.19 (.709) [−1.28, 0.82]; medical: −0.07 (.836) [−0.60, 0.49]; midwives: −0.05 (.736) [−0.34, 0.22]
Sexual vs. sexual and physical:
  Services total: 0.96 (.321) [−0.93, 3.21]; psychosocial: −0.10 (.859) [−1.04, 0.95]; medical: 0.04 (.862) [−0.46, 0.59]; midwives: −0.09 (.570) [−0.37, 0.21]

Note. IPV = intimate partner violence. Bootstrap results are based on 1,000 bootstrap samples; BCa CI = bias-corrected and accelerated bootstrap interval, LL = lower limit, UL = upper limit. Calculations without outliers and with maternal age as covariate. * p < .05, ** p < .01, *** p < .001.

Considering group comparisons for the three subscales of service preferences, 15 multivariate outliers were identified and excluded from further calculations. After adjusting for maternal age, a one-way MANCOVA with bootstrapping revealed differences between groups across the subscales [F = 5.806, p < 0.001, partial η² = 0.007, Wilks' Λ = 0.980]. Moreover, maternal age as a covariate was significant [F = 12.029, p < 0.001, partial η² = 0.010, Wilks' Λ = 0.990]. Post hoc univariate ANOVAs were conducted for the three subscales and indicated statistically significant differences between IPV groups in every subscale, namely preferences for psychosocial services [F = 11.105, p < 0.001, partial η² = 0.013], medical services [F = 2.665, p = 0.031, partial η² = 0.003], and midwives [F = 10.755, p < 0.001, partial η² = 0.012]. Bonferroni-corrected post hoc analyses with bootstrapping revealed that psychosocial services were less preferred by women affected by any type of IPV compared to non-affected women. Furthermore, women affected by physical IPV and women affected by sexual IPV reported that they would be less likely to use psychosocial services than women affected by psychological IPV.
Detailed information can be found in Table 3 . Medical services were rated significantly less likely to be used by women affected by psychological IPV and women affected by physical IPV than by non-affected women. In this subscale, higher maternal age was additionally associated with significantly higher ratings for medical service preferences [ F = 26.678, p < 0.001, partial η 2 = 0.008]. As shown in Table 3 , midwives were significantly less likely to be used by women affected by any type of IPV compared to non-affected women, and women affected by physical or sexual IPV rated midwives less likely to be used than women affected by psychological IPV. According to Cohen, the effect sizes were small for the subscales of psychosocial service preferences (partial η 2 = 0.013) and preferences for midwives (partial η 2 = 0.012) and very small for the subscale of medical service preferences (partial η 2 = 0.003). Considering mode of service provision preferences, one univariate extreme outlier was found and excluded from the calculations. IPV groups did not differ significantly in their preferences for modes of service provision in the total score [ F = 1.290, p = 0.272, partial η 2 = 0.001] after adjusting for number of children and duration of residence in Germany. Both covariates, however, showed statistical significance: a longer duration of residence in Germany [ F = 9.800, p = 0.002, partial η 2 = 0.003] and a higher number of children [ F = 11.132, p = 0.001, partial η 2 = 0.003] were associated with lower scores in preferences for mode of service provision. Considering group comparisons for the two subscales of mode of service provision preferences, 24 outliers were found and excluded. Groups differed on the subscales as revealed with a one-way MANCOVA [ F = 2.569, p = 0.009, partial η 2 = 0.003, Wilk’s Λ = 0.994)]. Duration of residence in Germany [ F = 22.588, p < 0.001, partial η 2 = 0.013, Wilk’s Λ = 0.987] and number of children [ F = 7.526, p < 0.001, partial η 2 = 0.004, Wilk’s Λ = 0.996)] were also statistically significant in this model. Post hoc univariate ANOVAs showed a statistically significant difference between IPV groups for the subscale of direct modes of service provision [ F = 4.577, p = 0.001, partial η 2 = 0.005] but not for the subscale of indirect modes of service provision [ F = 0.259, p = 0.904, partial η 2 < 0.001]. The covariate number of children showed statistical significance for the subscale of direct modes of service provision [ F = 14.222, p < 0.001, partial η 2 = 0.004] and the covariate duration of residence in Germany for the subscale of indirect modes of service provision [ F = 32.533, p < 0.001, partial η 2 = 0.009]. Higher values of the covariates were associated with lower scores for preferences in the dependent variables. Bonferroni-corrected post hoc tests with bootstrapping for the subscale direct modes of service provision revealed that women affected by physical and/or sexual IPV rated direct modes less favorably than women without experiences of IPV (see Table 4 ). Likewise, women affected by physical IPV and/or sexual IPV rated direct modes less favorably than women affected by psychological IPV. 
Table 4 Group comparisons for the subscale of direct modes of service provision preferences. Values are mean differences M(I−J) with p values and 95% BCa CIs [LL, UL].

  Without vs. psychological: 0.00 (p = .903) [−0.04, 0.04]
  Without vs. physical: 0.10** (.005) [0.03, 0.16]
  Without vs. sexual: 0.07* (.022) [0.01, 0.13]
  Without vs. sexual and physical: 0.10* (.017) [0.02, 0.19]
  Psychological vs. physical: 0.09* (.010) [0.02, 0.17]
  Psychological vs. sexual: 0.07* (.032) [0.00, 0.13]
  Psychological vs. sexual and physical: 0.10* (.019) [0.01, 0.19]
  Physical vs. sexual: −0.03 (.474) [−0.12, 0.06]
  Physical vs. sexual and physical: 0.01 (.898) [−0.10, 0.12]
  Sexual vs. sexual and physical: 0.04 (.494) [−0.07, 0.14]

Note. IPV = intimate partner violence. Bootstrap results are based on 1,000 bootstrap samples; BCa CI = bias-corrected and accelerated bootstrap interval, LL = lower limit, UL = upper limit. Calculations without outliers and with duration of residence in Germany and number of children as covariates. * p < .05, ** p < .01.

The aim of this study was to provide new insights into the preferences for counseling and treatment services for IPV and the preferred modes of service provision among women in the postpartum period. It was investigated whether and how postpartum women who had experienced different types of IPV and postpartum women who had not experienced IPV differ in their preferences. The key findings were as follows:

1) Descriptive data revealed that people from the women's social environment and direct modes of service provision were the most preferred types of support among all women.
2) Counseling and treatment services overall were rated as less likely to be used by women who had experienced any type of IPV compared to non-affected women (hypothetically).
3) In the specific service domains, women affected by any type of IPV were less likely to use psychosocial services and to seek help from midwives compared to non-affected women (hypothetically). Medical services were rated as less likely to be used by women affected by psychological and/or physical IPV than by non-affected women (hypothetically).
4) There were no group differences in the preferences for using services via different modes in total or in the indirect modes of service provision.
5) Direct modes were rated as less likely to be used by women affected by physical and/or sexual IPV than by non-affected women (hypothetically).

Descriptive findings indicate that, among the array of services available, conversing with a "family member, friend, or colleague" emerged as the most preferred option on average across all women. A recent survey also showed that women affected by IPV were most likely to seek help from their family or friends compared to formal support services. Recourse to informal support frequently marks the initial stride towards seeking help and exerts a notable influence on the determination to pursue formal help [7, 63-66]. Subsequently, women's shelters emerged as the preeminent service of choice among respondents. This preference may be attributed to the fact that the responsibility of women's shelters in cases of IPV is clearly evident, and women may be able to anticipate the type of support they can expect from this service. In Germany, women's shelter organizations form a national network and offer a wide range of services. Also, thanks to a variety of public campaigns, women's shelters in Germany have become more widely known. The emergency room was descriptively rated as the most preferred medical service among all respondents.
This shows that women would be likely to seek help to treat current physical injuries but not necessarily with the intent to do something about the IPV itself. This is consistent with previous findings and indicates the important role of well-trained professionals in medical settings to screen for IPV and provide appropriate support . Group comparisons revealed that women who have actually experienced IPV were less likely to use counseling and treatment services if needed than non-affected women would hypothetically do. This seems to be in line with the often reported low numbers of help seeking in case of IPV . Our findings indicate a trend wherein preferences, both across the total score and the subscales, exhibit a decline corresponding to the type of IPV: Women affected by physical and/or sexual IPV consistently assign the lowest ratings, followed by those experiencing psychological IPV, and subsequently by non-affected women. The decision to seek help is described as a process, which includes three stages: problem recognition, the decision to seek help, and last, the selection of a help provider . In the INVITE study, non-affected women were asked to hypothetically put themselves in the position of an affected woman. This implies that non-affected women could evaluate services without having experienced IPV and did not have to go through the first two stages of the help-seeking process . Women who have not experienced IPV may struggle to empathize with the profound impact it can have. In addition to potentially being dependent on their abuser, low ratings in preferences for services may be caused by the psychological consequences of IPV, such as internal changes impacting their self-esteem, self-confidence, and self-efficacy as well as depression or feelings of stigma, shame and fear, which impede help-seeking . The majority of affected women in our sample reported experiences with IPV more than 12 months ago. The fact that these women nevertheless would be less likely to seek help may underline the seriousness of exposure to IPV and its long-lasting effects . Even if women affected by psychological IPV reported higher ratings than women affected by physical or sexual IPV, they were still significantly less likely to use counseling and treatment services than non-affected women. This is an important finding, because psychological IPV is often not recognized as abuse by the women themselves and may support previous results about the seriousness of psychological IPV in particular . Furthermore, it is possible that women affected by IPV already had negative experiences with certain services and were therefore reluctant to use them again, because the first experience with psychosocial or medical service providers has a crucial impact on women’s future help-seeking decisions . Finally, previous research outlined the opportunity of service provision by perinatal medical care, such as midwives . In our descriptive analyses, midwives were identified as comparatively prominent, with an average score of almost 3, indicating a certain level of approval. In previous research, affected women reported that they wished to talk about their experiences of IPV with their midwives but they did not because they thought the midwives could not help them . Strengthening midwives’ skills and strategies to raise the issue of IPV may reduce women’s barriers to talk about their experiences . 
Investing in training midwives to identify IPV could prove beneficial due to their frequent home visits, as well as their interactions with partners, providing opportunities for IPV detection. Moreover, the oftentimes established trust between women and midwives further enhances the potential for effective intervention. It could also be of advantage to collocate IPV services with health care settings to reduce transportation and time barriers and improve safety and comfort by providing plausible “other reasons” for visiting a facility . Although there is an increasing number of digital offers, women in our sample preferred personal contact. This result is consistent with previous findings . When comparing affected and non-affected women, we found lower preference ratings by women affected by physical and/or sexual IPV compared to non-affected women in the subscale for direct modes. However, there were no significant group differences in their preferences for modes of service provision in total and in their preferences for indirect modes of service provision. The fact that there was no difference in the subscale of indirect modes may stem from the overall lack of popularity of all indirect modes within the entire sample. However, it is conceivable that non-affected women may underestimate the barriers or the overarching circumstances. Consequently, it seems reasonable that women experiencing physical or sexual IPV may prefer indirect modes to direct modes of service provision due to potential feelings of shame. In addition, a heightened fear of their partner discovering their actions may also contribute to this preference. A former study revealed that women found it easier to disclose IPV via a computer than in personal contact . Hence, online tools may be an opportunity to reduce barriers in help-seeking. A strength of this study is the large number of participants. By conducting structured interviews via telephone, we reduced participant burden (e.g., no traveling required) and increased anonymity (compared to in person interviews), making it easier to share sensitive experiences [ 86 – 88 ]. The assessment of IPV using the WHO-VAWI allows a valid differentiation of various types of IPV . Further, this is one of very few studies in Germany that quantitatively examine service preferences in the case of IPV based on a theoretical framework . This theoretical framework established a connection between predisposing variables (such as age or duration of residence in Germany), past and present stress exposure (such as IPV), and the preference for various support services . We compared non-affected women and women with experiences of different types of IPV and considered several potential covariates as well as a wide range of services offered by different providers in specific settings and provision modes. By focusing our investigation on women in the postpartum period, we have provided IPV prevalence rates for this special population and deepened our comprehension of their distinct requirements and preferences. Moreover, this has enabled us to incorporate specialized services, such as midwifery care, aimed at addressing the unique needs of women within the postpartum subgroup. Despite these important contributions, this study also has some limitations. Most of the women in our study sample were born in Germany, had an above-average level of education and income, and lived in Dresden and the surrounding area. Thus, caution should be taken in generalizing our results to other populations. 
Furthermore, it is important to note that the exclusion of women lacking sufficient proficiency in German or English may have resulted in recent immigrants being underrepresented in our sample. Regarding our assessment methods, it is possible that the true prevalence rates of IPV are higher than those reported in our study: a substantial share of IPV is assumed to go unreported due to stigma and shame, which could also have been the case in a telephone interview. In addition, economic IPV was not separately assessed in this study. To assess women's preferences with regard to available services within the German support system in Dresden, self-generated questionnaires were used. This may reduce the generalizability of our findings to other healthcare systems. Despite the wide range of services integrated in our service questionnaire, some services (e.g., police or legal services) were still missing. Moreover, the study was conducted partly during the COVID-19 pandemic. Compared to many other countries worldwide, the infection protection measures implemented in Germany did not have the direct negative consequences that were partly expected, such as increased IPV rates [89-91]. However, services had to adapt their operations significantly. Counseling services were partially shifted to telephone or digital formats to maintain access to support. Occupancy rates in some shelters had to be reduced, and strict hygiene and distancing measures were enforced. Therefore, it cannot be ruled out that these factors influenced preferences regarding specific support services. Regarding statistical analyses, not all assumptions were fully met for every analysis. Although additional measures were taken and robust procedures were used whenever possible, it cannot be excluded that violated assumptions may have affected the results. Finally, IPV against men and violence against children were not assessed in this study; however, this should not diminish their significance. There is a need for studies that investigate the decision-making process itself to better understand how to assist women affected by IPV in their decision to start looking for counseling and treatment services. Our examination so far has been limited to theoretical assessments of utilization or preferences. The subsequent phase entails delineating the sequential stages leading up to the decision to seek assistance or to abstain from doing so. This step is crucial for strategically intervening at pivotal points in the help-seeking process. In addition, it is necessary to examine the unmet needs of women that may have led to the lower preference ratings in our study. It is further important to examine the extent to which the severity and duration of IPV, as well as being affected by more than one type of IPV, influence preferences. Beyond whether or not women are affected by IPV, other variables can impact service preferences and need to be assessed: for instance, past and present stress exposure, past and present health, and enabling resources such as social support, knowledge of existing services, or health beliefs. In our study, awareness of the existence of services was not assessed. As a recently published German study suggests that support services are less well known among women affected by IPV compared to those who are not, future research should investigate whether knowledge about services may influence preferences.
Subsequent surveys should also ask in detail about women’s preferences regarding additional service settings, such as the police, lawyers, or judiciaries, and add research on the reasons why or why not women may seek help from these services. Moreover, as immigrant women are also at risk of IPV and may face additional difficulties in navigating support systems and accessing help, resulting in distinct preferences for the provision of support services , further research is warranted to address the needs of these women as well. Lastly, future studies should investigate why the potential benefits of online interventions were not reflected in women's overall preferences for service provision. To this end, it is important to know under which circumstances women could envision using such services and which concerns may make them reluctant to do so. The results are not only important on a scientific level but also of sociopolitical relevance. First, there is a need for public prevention programs to raise further awareness of IPV and its different forms, its serious impact, and how to support affected women, since acquaintances from the women's social environment were rated as the most preferred types of support. Second, the most preferred psychosocial service were women’s shelters. Women's shelters in Germany are very unevenly distributed and there is a nationwide shortage of more than 14,000 shelter places according to the Council of Europe . It would thus be of paramount importance to initially expand the capacity of women's shelters and concurrently broaden the array of services and provide support not only to women and children who need shelter but also to affected women with other needs, such as counseling, advocacy, or support in legal proceedings. Furthermore, there is a need for better implementation of IPV services in medical settings, as women may access services through healthcare settings . When providing direct or indirect services, professionals should be particularly sensitive to the fact that affected women seem to feel more uncomfortable in direct contact than non-affected women and take measures to reduce potential stigma-related or other fears women might have. Moreover, there is a need for dissemination of knowledge about the availability and potential of online interventions, because they could be of great benefit to affected women. This study addressed knowledge gaps related to preferences for counseling and treatment services and the mode of service provision among postpartum women when being exposed to IPV. The results showed that being affected by any type of IPV was associated with lower scores in counseling and treatment service preferences compared to non-affected women. This indicates that there still seem to be too many barriers for many affected women to actually make use of services. Studies examining barriers faced by affected women have identified instrumental barriers such as transportation issues, lack of access to technology, and lengthy waitlists for services. Our findings underscore the importance of addressing these barriers. Further research should investigate which of these potential barriers lead to these lower preferences. Overall, knowledge about the different types of IPV as well as support options should be further disseminated. Efforts should be made to increase awareness of IPV and its impact on mothers and children as well as to improve the availability of service systems such as women’s shelters. 
IPV screening needs to be implemented in medical settings, and health professionals should be trained in detecting IPV. As affected women had a lower preference for a range of services, there is a critical need for health professionals to be competent in providing an effective first-line response through empathetic listening, validating concerns, helping to identify the abuse, and outlining options for support and safety. Additionally, online interventions need to be implemented more effectively to make them better known and more widely available.
| Study | other | en | 0.999998 | PMC11694463 |
Migraine is one of the most common neurological disorders worldwide, with a recently reported global prevalence of 14–15%. Migraine, particularly migraine without aura, is associated with cervical artery dissection (CeAD). Some genetic correlations exist between migraine and CeAD, and it is assumed that vascular fragility may underlie both conditions. Calcitonin gene-related peptide (CGRP) is a multifunctional neuropeptide targeted for the treatment of migraine and has various effects, including vasodilation. Given the effects and expression pattern of CGRP, the side effects and off-target effects of anti-CGRP monoclonal antibodies (CGRP mAbs) have recently attracted interest. One concern is that CGRP mAbs may increase the risk of cerebrovascular disease. CGRP mAbs might likewise affect CeAD, one of the cerebrovascular disorders; however, relevant reports are lacking. We present a case of vertebral artery dissection that developed during CGRP mAb treatment for migraine without aura; to the best of our knowledge, there have been no previous reports linking CGRP mAbs and CeAD. We used the Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) database, a publicly available database containing spontaneous adverse event reports submitted to the FDA, to investigate the number of cerebral and cervical artery dissection events reported as adverse effects (AEs) of CGRP mAbs.

A 39-year-old woman had been treated with galcanezumab since June 2021 for migraine without aura. She was a non-smoker and had no vascular risk factors such as hypertension, hyperlipidemia, or diabetes. Her family history did not include any cardiovascular or cerebrovascular diseases. After the 16th dose of galcanezumab, the patient developed neck pain on the left side followed by severe headaches that differed from her usual migraine headaches. This unusual headache was unilateral, non-pulsatile, and worsened with physical movement; its severity was 9–10/10 on a numerical rating scale. The patient had no traumatic or triggering events prior to the onset of headache and did not report nausea, photophobia, or phonophobia. She visited her physician after two weeks of persistent neck pain and headaches. Brain magnetic resonance imaging (MRI) revealed left vertebral artery stenosis, and the patient was referred to our department for further evaluation and treatment. Head and neck magnetic resonance angiography (MRA) showed a 15 mm-long vertebral artery dissection distal to the left V2 segment. She had no abnormal physical findings, including sensory disturbance or ataxia, and did not report vomiting or vertigo. She had no family history suggestive of connective tissue disorders such as Marfan syndrome. Blood tests showed no specific abnormalities in the blood count or coagulation systems, nor any indications of vasculitis. We did not administer anti-platelet therapy and followed up with pain control. Considering the potential effects on the blood vessels, we did not resume galcanezumab and initiated amitriptyline to prevent migraine attacks; for the same reason, we switched from a triptan to lasmiditan for the acute treatment of migraine. Her neck pain and headache were relieved, and MRA conducted two months later suggested complete resolution of the dissection.

Fig. 1 MRA findings.
(A) Neck MRA revealed stenosis of the V2 segment of the left vertebral artery (white arrow); (B) axial MRA revealed a "double lumen" of the left vertebral artery (white arrow); (C, D) neck and axial MRA performed two months later revealed improvement of the left vertebral artery dissection. MRA, magnetic resonance angiography.

The FAERS data were downloaded from the FDA website on June 19, 2023 ( https://fis.fda.gov/extensions/FPD-QDE-FAERS/FPD-QDE-FAERS.html ). We reviewed the publicly available FAERS database from the first quarter of 2012 through the fourth quarter of 2023, removing duplicate reports (those with the same CASE ID number), to search for reports of CeAD as AEs of CGRP mAbs (galcanezumab, fremanezumab, erenumab, and eptinezumab). Adverse events in the FAERS are registered based on the Medical Dictionary for Regulatory Activities (MedDRA) developed by the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use; MedDRA version 26.0 was used to identify adverse event names. We extracted and analyzed the Preferred Terms (PTs) related to CeAD from the High-Level Term (HLT) "central nervous system aneurysms and dissections" (Supplementary Table 1 [see Additional file 1]). However, the PTs do not distinguish between intra- and extracranial artery dissection; therefore, we analyzed AEs encompassing both cerebral artery dissection and CeAD. To evaluate whether any signal was attributable to the CGRP mAbs themselves rather than to migraine, we also searched for sumatriptan, a widely prescribed drug for migraine, as a comparator.

A total of 13,290,393 AE reports were submitted to the FAERS, including 20,946 reports on galcanezumab, 69,906 reports on CGRP mAbs, and 33,462 reports on sumatriptan. Six cases of cerebral artery dissection and CeAD AEs were reported with galcanezumab and 10 cases with CGRP mAbs (Table 1). According to the disproportionality analysis, the reporting odds ratios (RORs) for galcanezumab and CGRP mAbs in the FAERS for cerebral artery dissection and CeAD were elevated at 14.0 (95% confidence interval [CI]: 6.22–31.4) and 7.06 (95% CI: 3.75–13.3), respectively. In contrast, the ROR for sumatriptan was not significantly elevated (2.87; 95% CI: 0.71–11.5).

Table 1 Number of patients reported with cerebral artery dissection and CeAD as adverse effects, with RORs (95% CI), in the FAERS database (total n = 13,290,393)

| Integrated PTs (number of total reports) | Galcanezumab (n = 20,946) | CGRP mAbs (n = 69,906) | Sumatriptan (n = 33,462) |
| PTs of cerebral artery dissection and CeAD (278) | n = 6; ROR 14.0 (6.22–31.4) | n = 10; ROR 7.06 (3.75–13.3) | n = 2; ROR 2.87 (0.71–11.5) |

Data are reported as frequencies along with the ROR and 95% CI. The total sample size was 13,290,393. CeAD, cervical artery dissection; ROR, reporting odds ratio; CI, confidence interval; FAERS, Food and Drug Administration Adverse Event Reporting System; PT, preferred term; CGRP, calcitonin gene-related peptide.

Migraine and CeAD have been suggested to be associated. Many studies have reported an association between migraine and ischemic stroke (IS); patients with any type of migraine were reported to have a 2.04 (95% CI: 1.72–2.43) times higher risk of IS, with a particularly elevated risk of 3.65 (95% CI: 2.21–6.04) in women under 45 years old. The association between migraine and CeAD has been suspected to be one of the factors increasing IS in patients with migraine.
A systematic review found that patients with migraine had a 1.74 times higher risk of developing CeAD. Metso et al. reported that patients with IS and CeAD had a higher frequency of migraine without aura compared to patients with IS from other causes. Recent genetic analyses have also reported an association between migraine and CeAD: a genetic correlation study of pairwise traits identified ADAMTSL4/ECM1, PLCE1, and MRVI1 as new candidate genes implicated in susceptibility to both migraine and CeAD. Migraine can, in rare instances, lead to mild ischemic cerebrovascular deficits with a relatively benign prognosis. Although our case also had a benign prognosis, the absence of ischemic infarction suggests that this particular scenario may not apply here.

CGRP is an essential multifunctional neuropeptide discovered in 1982 as one of the first examples of alternative RNA processing. Since then, a series of studies has revealed the role of CGRP in the cranial sensory nerves associated with migraine, and multiple components of CGRP transmission are targeted as migraine therapies. CGRP is one of the most potent vasodilators in humans, increasing cerebral, cardiac, and renal blood flow. CGRP is released endogenously in response to ischemia and has been suggested to play a role in preconditioning and protection against reperfusion injury of the brain and various organs. CGRP mAbs inhibit these effects, thereby potentially increasing the risk of cardiovascular events. Mulder et al. reported that administering a gepant, a small-molecule CGRP receptor antagonist, to mice and inducing artificial vascular occlusion resulted in a significantly higher incidence and extent of cerebral infarction compared with vehicle. Currently, no clinical evidence suggests an increased risk of cerebrovascular events associated with CGRP mAbs; however, the European Headache Federation guidelines recommend cautious use of CGRP mAbs in patients with high cerebrovascular risk. Although there is no specific hypothesis regarding an association between CGRP mAbs and CeAD, the mechanisms of CGRP action suggest that these antibodies may affect CeAD as well as cerebral infarction; there is, however, no evidence of a strong correlation between CeAD and CGRP.

We investigated the number of cerebral artery dissection and CeAD events reported as AEs of CGRP mAbs using the FAERS database, which was utilized in a previous study to report the adverse event profile of CGRP mAbs, including cases of coronary artery dissection (n = 5). The RORs for galcanezumab and for CGRP mAbs as a class, compared with all other drugs in the FAERS, were significantly elevated for cerebral artery dissection and CeAD (a numerical sketch of this calculation is given below). However, cerebral artery dissection and CeAD are more likely to occur in patients with migraine; these results may therefore reflect the characteristics of the migraine patient population. In contrast, sumatriptan did not show a significantly elevated ROR, suggesting that CGRP mAbs themselves may increase the risk of developing cerebral artery dissection and CeAD. Further high-quality evidence is necessary to determine whether CGRP mAbs are associated with cerebral artery dissection and CeAD. Several limitations should be noted. First, our patient developed vertebral artery dissection long after the initiation of galcanezumab, so the association between the two is unclear. Second, the FAERS database is a voluntary reporting system and includes various biases, such as reporting heterogeneity, population background, and disease prevalence.
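To make the disproportionality analysis concrete, the following minimal sketch (written in Python for illustration; it is not the authors' code) reproduces the galcanezumab ROR from the counts in Table 1. The ROR is the odds of the target AE among reports for the drug of interest, divided by the corresponding odds among all other reports, with the CI computed on the log scale:

```python
import numpy as np

def ror_ci(a, b, c, d, z=1.96):
    """Reporting odds ratio with a 95% CI from a 2x2 disproportionality table.
    a: target-AE reports for the drug; b: all other reports for the drug;
    c: target-AE reports for all other drugs; d: all other reports."""
    ror = (a / b) / (c / d)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(ROR)
    lo, hi = np.exp(np.log(ror) + np.array([-z, z]) * se_log)
    return ror, lo, hi

# Counts from Table 1: 6 of 20,946 galcanezumab reports vs. 272 of the
# remaining 13,269,447 reports mention cerebral artery dissection or CeAD.
print(ror_ci(6, 20_946 - 6, 272, 13_290_393 - 20_946 - 272))
# -> approximately (14.0, 6.2, 31.4), matching the reported 14.0 (6.22-31.4)
```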
Given these biases, results from the FAERS database analysis do not necessarily represent a causal relationship. However, our report is the first to examine the relationship between CGRP mAbs and CeAD. We propose conducting a large-scale safety investigation of CGRP mAbs in relation to vascular events, as well as in vivo studies to evaluate their effects on the vasculature and explore potential connections.

In summary, we describe a case of vertebral artery dissection in a patient with migraine who had received CGRP mAb treatment for more than one year. It is unclear whether the vertebral artery dissection and the use of a CGRP mAb are causally related. To the best of our knowledge, however, our case report is the first focusing on CeAD and CGRP mAbs. Considering the characteristics of CGRP and the results of the FAERS database analysis, the potential for CGRP mAbs to be related to CeAD cannot be ruled out. Further case series and studies are required to validate the association between CGRP mAbs and CeAD. Below is the link to the electronic supplementary material. Supplementary Table 1
| Study | biomedical | en | 0.999996 | PMC11694472 |
CONSORT guidelines recommend reporting relative and absolute measures of effect for binary outcomes in randomised controlled trials (RCTs), along with confidence intervals. In tandem, the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) recommend adjusting for covariates when estimating treatment effects to improve statistical efficiency. However, there are several choices to be made regarding which relative measure of treatment effect to report and how to adjust for covariates, which we consider in detail below.

When reporting relative measures of effect, the odds ratio is the most commonly reported summary measure [4–6]. Whilst a covariate-adjusted odds ratio can be readily estimated using logistic regression, some have argued that the odds ratio is not an intuitive summary measure, and the magnitude of the intervention effect can be overstated if odds ratios are misunderstood. Furthermore, the odds ratio is non-collapsible, meaning that adjusting for covariates changes the meaning of the estimate: the unadjusted odds ratio targets the marginal estimand, while the adjusted odds ratio targets the conditional estimand. In comparison, relative risks are easier to interpret, target the same estimand with and without covariate adjustment, and are less likely to be misinterpreted, meaning they can facilitate the translation of findings into practice. Estimation of unadjusted and/or adjusted relative risks may therefore be of interest in many trials with binary primary outcomes.

Although more nuanced for binary outcomes, covariate adjustment is recommended for covariates used in any restricted randomisation, and adjustment for other prognostic covariates can also increase statistical precision. However, there is also evidence that adjustment for non-prognostic or weakly prognostic covariates can reduce precision [15–19]. Guidelines recommend that the covariates chosen for adjusted analysis or restricted randomisation should be predictive of the outcome and be pre-specified [3, 20–22]; retrospective selection of covariates using data-driven approaches (e.g. testing for imbalance using statistical tests) should be avoided.

Estimating an adjusted relative risk or risk difference can be more challenging than estimating an adjusted odds ratio, which can be implemented straightforwardly using logistic regression. For example, conventional estimators of a binomial regression model with a log or identity link can fail to converge, particularly in settings with low outcome event rates. Other lesser-known approaches are available, but they are not without limitations: modified Poisson can yield fitted probabilities greater than one [26–28], and substitution approaches can underestimate standard errors. More recently, an approach known as marginal standardisation, also known as G-computation or potential outcomes modelling, has gained some traction, but its implementation can be challenging for trial statisticians, although new software packages can facilitate it. Despite guidance suggesting there is value in reporting covariate-adjusted relative risks and risk differences, reviews of RCTs with binary outcomes show many trials still do not follow this guidance [9, 33–37].
Firstly, binary outcomes are one of the most common outcome types, with the frequency of two-arm superiority RCTs having a binary primary outcome reported to range between 28 and 72%. Among trials with a binary primary outcome, about 70% report an odds ratio, with far fewer reporting relative risks or risk differences. Secondly, covariate adjustment is performed in only about a third of trials, possibly slightly less for binary outcomes and even less for relative risks or risk differences. Thirdly, whilst most covariates are chosen by pre-specification based on their prognostic relationship with outcomes, most RCTs that use restricted randomisation do not adjust for all of the covariates used in the restriction. Thus, whilst there is a growing appreciation for reporting adjusted relative risks and risk differences, there are undoubtedly barriers to implementation.

None of the previous reviews have assessed which approaches are being used to estimate covariate-adjusted relative risks or risk differences, so it is unclear how often different methods are applied in practice. There is also limited empirical evidence on the impact of covariate adjustment on statistical precision in practice: for example, if investigators are choosing non-prognostic covariates for adjustment, statistical precision may decrease.

This review was conducted with the primary aim of determining what methods are used to estimate unadjusted and adjusted relative risks and risk differences in two-arm parallel RCTs. We additionally sought to determine whether the use of covariate adjustment results in increased statistical precision in practice. Using individually randomised trials published in a selection of high-impact journals, with a binary primary outcome, that report relative risks and risk differences, the objectives were to:

Objective 1: Identify the methods used to estimate unadjusted relative risks and risk differences (point estimates and confidence intervals) and estimate the proportion of RCTs that use each method.

Objective 2: Estimate the proportion of RCTs that report a covariate-adjusted relative risk or risk difference (along with confidence intervals, standard errors, and p-values) within a primary, secondary, or exploratory analysis, along with the proportion that adjust for covariates used in any restricted randomisation.

Objective 3: Identify the methods used to estimate covariate-adjusted relative risks and risk differences (point estimates and confidence intervals) and estimate the proportion of RCTs that use each method.

Objective 4: Describe how covariates are chosen for adjustment, including pre-specification of clinically important variables or data-driven approaches.

Objective 5: Compare the unadjusted and adjusted point estimates, standard errors, and p-values.

Since this review focuses on RCT methodology, it was not eligible for registration on PROSPERO. However, the protocol was developed using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and is available in Additional file 1 (Table A1). We included two-arm, individually randomised trials published in four selected high-ranking journals that publish across diverse clinical fields: the Journal of the American Medical Association (JAMA), the New England Journal of Medicine (NEJM), the Lancet, and the BMJ.
These journals were chosen because adherence to CONSORT recommendations should be higher within these journals, and therefore the reporting of relative risks and risk differences as summary measures for a binary primary outcome should be more frequent than in lower-impact journals. The search dates were limited to the period from January 1, 2018, to March 11, 2023, to align with our sample size justification (see below). The review is organised into two parts: the first part, referred to as the overall review sample, addresses the first two objectives (estimation of unadjusted effects and the proportion of RCTs that report covariate-adjusted effects), and the second part, referred to as the "nested review," addresses the other three objectives (estimation of adjusted effects).

Inclusion criteria:

i. RCTs (superiority and non-inferiority) published in either JAMA, NEJM, the Lancet, or the BMJ between January 1, 2018, and March 11, 2023, with a binary primary outcome, reporting an (unadjusted or covariate-adjusted) relative risk or risk difference.

ii. For the nested review, the sub-sample of RCTs identified from the overall review sample reporting a covariate-adjusted relative risk or risk difference.

Exclusion criteria:

i. All non-human, vaccine, or drug safety trials.

ii. Cluster or cross-over randomised studies, pilot, feasibility, phase I or II trials, trials with more than two arms, and factorial trials.

iii. RCTs that are not primary reports but secondary publications (e.g. follow-up reports), and publications primarily reporting health economic analyses.

iv. RCTs that only report proportions (i.e. no treatment effect), odds ratios, or other summary measures of effect that are not relative risks or risk differences.

v. RCTs that evaluate co-primary or multiple primary outcomes, including studies that concurrently conducted two or more trials and reported the outcomes in one article.

vi. RCTs with non-binary primary outcomes (e.g. continuous, rate, ordinal, count, or time-to-event outcomes).

vii. Duplicate publications, abstracts, ongoing studies, conference proceedings, research letters, commentaries, editorials, or review articles.

Additional exclusion criterion for the nested review sample:

i. RCTs that do not report any values of the point estimate, confidence interval, or p-value for the covariate-adjusted relative risk or risk difference within the main report for the primary, secondary, or exploratory analysis.

We determined the sample size as a balance between what would be feasible and what would allow us to estimate the various quantitative objectives with reasonable precision. For the overall review sample, we aimed to identify about 300 studies that met our inclusion criteria (i.e. two-arm RCTs with a binary primary outcome reporting relative risks or risk differences as a summary measure). Assuming approximately 30% of RCTs implement covariate adjustment, a sample size of 300 would allow us to estimate the percentage of studies implementing covariate adjustment with a 95% CI of approximately 25% to 35% (objective 2). A sample size of 300 would also allow us to estimate the proportion of trials using one of the more common approaches to estimate unadjusted effects (e.g. modified Poisson) with similar precision.
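As a quick check of the stated precision, the Wilson interval for 30% of 300 can be reproduced in a couple of lines (a minimal sketch in Python; the review itself used R for its analyses):

```python
from statsmodels.stats.proportion import proportion_confint

# Wilson 95% CI if 90 of 300 trials (30%) implement covariate adjustment
lo, hi = proportion_confint(count=90, nobs=300, alpha=0.05, method="wilson")
print(f"{lo:.0%} to {hi:.0%}")  # ~25% to 35%, as stated above
```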
Furthermore, identifying 100 studies from the initial sample of 300 that implement covariate adjustment would provide a sample of reasonable size to allow us to summarise the methods used to estimate covariate-adjusted relative risks and risk differences and how covariates are selected for adjustment (objectives 3 and 4), and to quantitatively compare the unadjusted and covariate-adjusted results (objective 5). To identify about 300 studies meeting these broad inclusion criteria, we anticipated needing to screen about 900 RCT abstracts, since about one-third of RCTs report a binary primary outcome and report either a relative risk or risk difference. It was estimated, via a scoping search, that approximately 245 RCTs are published each year in the four target journals. Thus, to achieve a target of about 900 RCTs, it was estimated that the search should cover the period from 2018 to 2023, roughly identifying 980 RCTs.

A combination of relevant keywords and Boolean operators was used to search Ovid Medline, with search strings developed using recommended filters (see Table A2 for details of the search strategy). The records identified were imported into a bibliographic referencing software programme. One reviewer (JT) independently screened the titles and abstracts for inclusion in the overall review sample using the Covidence systematic review software. Those RCTs identified as meeting the eligibility criteria for the overall review sample were then further screened (again by one reviewer, JT) for inclusion in the nested review sample, as outlined below.

The initial process involved screening the titles and abstracts of the identified RCTs (anticipated to be around 980 reports) to identify whether relative risks or risk differences were reported as a summary measure for the primary outcome. Our working definition of the primary outcome used the following hierarchy: first and foremost, records were searched for any clear definition of a primary outcome in the manuscript; when the primary outcome was not clearly specified, we chose the outcome used for the sample size calculation; and where that was not clear, the first outcome reported in the abstract. For those RCTs meeting the eligibility criterion of having a single binary primary outcome, all full texts were retrieved to ascertain whether a relative risk or risk difference was reported; those that met all these criteria were included in the overall review sample. For those RCTs that met the eligibility criteria of the overall review sample, all full texts (anticipated to be around 300 reports) were retrieved to ascertain whether a covariate-adjusted relative risk or risk difference was reported for the primary outcome (either for the primary analysis or for a secondary or sensitivity analysis).

The following information was extracted from each study:

- Information about the authors, year of publication, the journal, whether the trial was a multi-centre or single-centre study, and the sample size (total number of participants randomised).

- The type of unadjusted summary measure reported (i.e. relative risk and/or risk difference) for the primary binary outcome.

- The analysis method used to obtain the estimate of the relative risk and/or risk difference and its corresponding confidence interval.

- Whether a restricted method of randomisation was used and, where one was used, how many covariates were included in the restriction.
- Whether a covariate-adjusted relative risk or risk difference was reported and, if so, whether it was reported for the primary, secondary, or exploratory analysis.

A more detailed extraction was then undertaken using the full texts of the anticipated 100 trials that met the eligibility criteria for the nested review sample (i.e. two-arm RCTs with a binary primary outcome that report a covariate-adjusted relative risk or risk difference). The additional information extracted was:

- When reported, the method used to estimate the covariate-adjusted relative risk or risk difference.

- Whether the covariate-adjusted analysis was fully or partially adjusted for covariates used in the randomisation, and whether the analyses were adjusted for any additional covariates (and if so, how many).

- When reported, how covariates were chosen for adjustment, including selection based on pre-specification of clinically important variables, use in the randomisation, or data-driven approaches.

- The values of the point estimate and measures of uncertainty (confidence interval, standard error, and p-value) for the unadjusted and adjusted relative risk or risk difference.

Where more than one covariate-adjusted relative risk or risk difference was reported, we extracted this information for the most comprehensively adjusted model. The methods for estimating a covariate-adjusted relative risk or risk difference were categorised as follows (a brief illustrative sketch of the most common estimators is given at the end of this subsection):

- Binomial model (or log-binomial): generalised linear model with a binomial distribution and a log link (for estimation of the relative risk) or an identity link (for estimation of the risk difference).

- Modified Poisson: generalised linear model with a Poisson distribution, robust standard errors, and a log link (relative risk) or an identity link (risk difference).

- Marginal standardisation: generalised linear model with a binomial distribution and logit link; outcomes are then predicted and averaged across groups, with the contrast formed on the log scale (relative risk) or the identity scale (risk difference).

- Linear model: linear model with a Gaussian distribution, robust standard errors, and an identity link (for estimation of the risk difference only).

These categorisations were used because initial scoping work suggested these were the more commonly used approaches (see below for the assumptions made when categorising). Approaches that did not clearly align with one of these were classified as 'other'. In addition, more detailed information was extracted (verbatim text of the approach used) and reported in supplementary tables. When these data were not clearly reported in the main report, the supplementary materials (e.g. protocols and statistical analysis plans) were also accessed for further details. We used these same classifications for the estimation of unadjusted effects, but note that an unadjusted point estimate is simply the ratio or difference of proportions, so this trivial detail was not expected to be reported; where confidence intervals were reported, we sought to identify the method primarily used to derive them. Data were extracted using a standardised data extraction proforma developed in AirTable. The proforma underwent iterative review and feasibility testing by the project team to address discrepancies and ensure consistency (see Table A3).
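To make the categories above concrete, the following minimal sketch (in Python with simulated data; this is illustrative only and not code from any included trial, which used a variety of software) shows the two estimators most relevant to adjusted relative risks and risk differences, modified Poisson and marginal standardisation:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated two-arm trial with one prognostic covariate (illustrative only)
rng = np.random.default_rng(2024)
n = 1000
df = pd.DataFrame({"trt": rng.integers(0, 2, n), "x": rng.normal(size=n)})
risk = 1 / (1 + np.exp(-(-1.0 + 0.4 * df["trt"] + 0.8 * df["x"])))
df["y"] = rng.binomial(1, risk)

# Modified Poisson: Poisson family, log link, robust (sandwich) SEs -> adjusted RR
mp = smf.glm("y ~ trt + x", df, family=sm.families.Poisson()).fit(cov_type="HC1")
rr_modified_poisson = np.exp(mp.params["trt"])

# Marginal standardisation (G-computation): fit a logistic model, then average
# the predicted risks with every participant set to trt = 1 and to trt = 0
logit = smf.glm("y ~ trt + x", df, family=sm.families.Binomial()).fit()
p1 = logit.predict(df.assign(trt=1)).mean()
p0 = logit.predict(df.assign(trt=0)).mean()
rr_standardised, rd_standardised = p1 / p0, p1 - p0
```

In practice, confidence intervals for the standardised contrasts would be obtained via the delta method or bootstrapping, approaches that several included trials reported.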
In a pilot/training phase, two reviewers (JT and LM) independently performed the data extraction for the first 21 studies and then resolved all discrepancies by discussion. Subsequently, the remaining data were extracted by one reviewer (JT), with consultation with the wider team in cases of uncertainty.

Several assumptions were made about the methods used to estimate the relative risk or risk difference when the reporting was not clear or comprehensive (for the categorisations in the main tables; verbatim text is provided in the supplementary tables). When it was reported that a generalised linear model was used, but without information on the link or distribution, it was assumed that this was the log-binomial model for the relative risk and the binomial-identity model for the risk difference. Likewise, generalised linear mixed models and generalised estimating equations were categorised as the log-binomial model (for estimation of the relative risk) or the binomial-identity model (for estimation of the risk difference). When authors proposed an alternative approach to handle non-convergence (e.g. modified Poisson as a fallback for the log-binomial model), we report the method that was actually implemented and reported. When authors reported using "G-computation", logistic regression with delta-method or bootstrapped standard errors, or other descriptions of standardisation (such as the marginal average of predicted values from a logistic regression), these were classified as marginal standardisation. Modified Poisson implemented via generalised estimating equations was categorised as modified Poisson. The approach was classified as unclear when the authors reported using mixed-effects modelling with an exchangeable correlation but gave no further detail; similarly, when authors reported the use of logistic regression to estimate a relative risk or risk difference without any other detail, this was categorised as unclear. When authors reported using an approach that provides only a statistical test, such as the Cochran-Mantel-Haenszel test, a chi-square or Fisher's exact test, a Wald or likelihood ratio approximation test, or a Z-test, without any clear information on how the confidence intervals were estimated, this was also categorised as unclear.

The numbers of studies identified, screened, and excluded were reported, and the findings are presented in a PRISMA flowchart. A descriptive summary of the trials included in the review is provided, using numbers (and percentages) for categorical data and medians (and interquartile ranges) for continuous data. The reliability between the data extractions performed in duplicate is reported using percentage agreement and the unweighted Kappa statistic, which measures the agreement between two raters classifying categorical items. When describing numbers and percentages related to the key objectives, such as the proportions of analyses using the different candidate approaches, numbers and percentages were reported, with 95% confidence intervals around the percentages computed using the Wilson score method. To compare precision between the unadjusted and adjusted effects (where few trials were anticipated to report standard errors), we estimated the standard error by subtracting the lower confidence limit from the upper confidence limit and dividing by 3.92 (working on the log scale for relative risks). All descriptive, exploratory analyses and data visualisations were performed using R Statistical Software version 4.4.1. Kappa statistics were estimated using the IRR package.
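The back-calculation of standard errors from reported confidence intervals, described above, can be expressed as a one-line helper (a sketch with purely illustrative inputs; 3.92 is twice the 1.96 normal quantile):

```python
import numpy as np

def se_from_ci(lower, upper, log_scale=False):
    """Approximate SE from a reported 95% CI: (upper - lower) / (2 * 1.96).
    Use log_scale=True for ratio measures such as the relative risk."""
    if log_scale:
        lower, upper = np.log(lower), np.log(upper)
    return (upper - lower) / 3.92

se_rd = se_from_ci(-0.05, 0.03)                     # a risk difference CI
se_log_rr = se_from_ci(0.70, 1.10, log_scale=True)  # a relative risk CI
```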
The 95% CIs were computed using the Wilson score confidence intervals in the gt_summary R package.

Of the 3113 records identified and screened for relevance at the title and abstract stage, three were excluded as duplicates and 1338 were excluded on the abstract screen. Then, 1963 full-text articles were screened against the eligibility criteria, and 308 studies met the criteria for the overall review sample (see the flow chart for the reasons studies were excluded, most commonly (n = 191) not reporting a relative risk or risk difference). These 308 studies were RCTs that reported an unadjusted or covariate-adjusted relative risk, risk difference, or both. Of these 308 studies, 131 reported either a covariate-adjusted relative risk, risk difference, or both and were eligible for inclusion in the nested review sample. None of the included studies reported on more than one RCT per trial report.

Fig. 1 PRISMA flow diagram of the selection process for eligible studies. The overall review sample includes RCTs published between January 1, 2018, and March 11, 2023, in selected high-impact journals, with a primary binary outcome, that report either a relative risk (RR) or risk difference (RD) (unadjusted or adjusted) as a summary measure; the numbers reported for unadjusted RR and RD are not mutually exclusive with the reporting of adjusted RR and RD. The nested review sample is a sub-sample of RCTs identified from the overall review sample that report a covariate-adjusted RR or RD for the primary, secondary, or exploratory analysis; the numbers reported for adjusted RR and RD are not mutually exclusive. RR, relative risk; RD, risk difference.

The percentage agreement between the two independent data extractions for the 21 studies extracted in duplicate was 92.8%, and the unweighted Kappa was 0.9 (95% CI: 0.80–0.99). Most discrepancies were due to a lack of clarity in the descriptions provided by the authors regarding the methods used to derive summary measures for the unadjusted analysis (Table A4); in many cases, it was unclear how the confidence intervals were derived. Other discrepancies arose from a lack of clarity about the approach when both the relative risk and risk difference were reported, and it was also commonly unclear whether a reported p-value related to the relative risk or the risk difference.

Of the 308 RCTs in the overall review sample, most were published in JAMA (108/308, 35%) or NEJM (108/308, 35%), with fewer in the Lancet (75/308, 24%) or BMJ (17/308, 6%) (Table 1). The studies were all published between 2018 and 2023. Most (293/308, 95%) were multi-centre studies, with median sample sizes in the control and intervention arms of 357 (IQR: 158–717) and 367 (IQR: 176–729), respectively. Use of a restricted method of randomisation was common (213/308, 69%), with a median of 2 (IQR: 1–3) covariates used in the randomisation. Of those that used restricted randomisation (n = 213), about half (107, 50%; 95% CI: 43–57%) adjusted for all covariates used in the restriction; an additional 18 (8.5%; 95% CI: 5.2–13%) partially adjusted for some of the covariates; in about a fifth (45, 21%; 95% CI: 16–27%) the analyses did not adjust for these covariates; and in the remainder (43, 20%; 95% CI: 15–26%) this was unclear. Of the 308 RCTs, around half (150/308, 49%; 95% CI: 43–54%) reported a covariate-adjusted relative risk or risk difference.
Table 1 Characteristics of studies included in the overall review sample (trials that reported either an unadjusted or adjusted RR or RD; N = 308)

| Characteristic | Value |
| Journal: JAMA, N (%) | 108/308 (35%) |
| Journal: NEJM, N (%) | 108/308 (35%) |
| Journal: Lancet, N (%) | 75/308 (24%) |
| Journal: BMJ, N (%) | 17/308 (6%) |
| Publication year: 2018, N (%) | 60/308 (19%) |
| Publication year: 2019, N (%) | 71/308 (23%) |
| Publication year: 2020, N (%) | 53/308 (17%) |
| Publication year: 2021, N (%) | 54/308 (18%) |
| Publication year: 2022, N (%) | 58/308 (19%) |
| Publication year: 2023, N (%) | 12/308 (4%) |
| Multi-centre, N (%) | 293/308 (95%) |
| Participants randomised, intervention arm, median [IQR] (a) | 367 [176–729] |
| Participants randomised, control arm, median [IQR] (a) | 357 [158–717] |
| Used restricted randomisation, N (%) | 213/308 (69%) |
| Number of randomisation covariates, median [IQR] | 2 [1–3] |
| Complete adjustment for randomisation covariates, N (%, 95% CI) | 107/213, 50% (43–57%) |
| Partial adjustment for randomisation covariates, N (%, 95% CI) | 18/213, 8.5% (5.2–13%) |
| No adjustment for randomisation covariates, N (%, 95% CI) | 45/213, 21% (16–27%) |
| Unclear adjustment for randomisation covariates, N (%, 95% CI) | 43/213, 20% (15–26%) |
| Report adjusted RR or RD (b), N (%, 95% CI) | 150/308, 49% (43–54%) |

RR relative risk, RD risk difference, N number, IQR interquartile range, CI confidence interval, JAMA Journal of the American Medical Association, NEJM New England Journal of Medicine, BMJ British Medical Journal. (a) Number of participants randomised. (b) Report of an adjusted or unadjusted RR or RD at any point in the report.

Of the 308 studies, 95 reported an unadjusted relative risk (Table 2). In the majority (n = 65, 68%; 95% CI: 58–77%) of these reports, the method used to estimate the confidence intervals for the unadjusted relative risk was unclear. Of the 30 RCTs that reported an unadjusted relative risk and where the reporting of the confidence interval method was clear, the most common method was the log-binomial model, used in 21 (70%; 95% CI: 50–85%) studies, followed by modified Poisson, used in 7 (23%; 95% CI: 11–43%) studies; in only one (3%; 95% CI: 0–19%) study was marginal standardisation reported to be used. See Table A5 for a granular description of the methods identified. Of the 95 studies that reported an unadjusted relative risk, almost all reported point estimates and upper and lower confidence intervals; however, p-values for the unadjusted relative risk were reported in most but not all studies (77, 81%), and no studies reported standard errors of effects.

Of the 308 studies, 194 RCTs reported an unadjusted risk difference. In the majority (n = 139, 72%; 95% CI: 65–78%) of these 194 reports, the method used to estimate the confidence intervals for the unadjusted risk difference was unclear (Table 2). Of the 55 RCTs that reported an unadjusted risk difference and where the reporting of the confidence interval method was clear, we did not identify a universally more common approach: the binomial model was used in 9 (16%; 95% CI: 8–29%) studies, the linear model in 6 (11%; 95% CI: 4.5–23%) studies, marginal standardisation in 4 (7%; 95% CI: 2–18%) studies, and modified Poisson in 2 (4%; 95% CI: 1–14%) studies. Moreover, many other, different approaches were used: in 34 (62%; 95% CI: 48–74%) of these studies, the approach was classified as 'other'. See Table A6 for a granular description of the methods identified. Again, of the 194 RCTs that reported an unadjusted risk difference, most reported point estimates and upper and lower confidence intervals.
However, p-values for the unadjusted risk difference were reported in only 127/194 (65%) of studies, and no studies reported standard errors of estimated effects.

Table 2 Methods used for estimating unadjusted and adjusted relative risks and risk differences, and completeness of reporting (N = 308 trials; 137 reported an unadjusted or adjusted RR and 244 an unadjusted or adjusted RD)

| | Unadjusted RR (N = 95) | Adjusted RR (N = 82) | Unadjusted RD (N = 194) | Adjusted RD (N = 92) |
| Method used, n/N (%, 95% CI): | | | | |
| Unclear (a) | 65/95, 68% (58–77%) | 17/82, 21% (13–31%) | 139/194, 72% (65–78%) | 36/92, 39% (29–50%) |
| Of those that report a clear approach: | | | | |
| Binomial model (b,c) | 21/30, 70% (50–85%) | 42/65, 65% (52–76%) | 9/55, 16% (8–29%) | 27/56, 48% (35–62%) |
| Marginal standardisation | 1/30, 3% (<1–19%) | 2/65, 3% (<1–12%) | 4/55, 7% (2–18%) | 12/56, 21% (12–35%) |
| Modified Poisson (d) | 7/30, 23% (11–43%) | 19/65, 29% (19–42%) | 2/55, 4% (<1–14%) | 4/56, 7% (2–18%) |
| Linear model | 0/30 (0%) | 0/65 (0%) | 6/55, 11% (5–23%) | 6/56, 11% (4–23%) |
| Other | 1/30, 3% (<1–19%) | 2/65, 3% (<1–12%) | 34/55, 62% (48–74%) | 7/56, 13% (6–25%) |
| Completeness of reporting, n/N (%): | | | | |
| Point estimate | 95/95 (100%) | 82/82 (100%) | 194/194 (100%) | 91/92 (99%) |
| LCI | 95/95 (100%) | 82/82 (100%) | 184/194 (95%) | 88/92 (96%) |
| UCI | 94/95 (99%) | 81/82 (99%) | 179/194 (92%) | 87/92 (95%) |
| P-value (d) | 77/95 (81%) | 66/82 (80%) | 127/194 (65%) | 53/92 (58%) |
| SE | 0 (0%) | 0 (0%) | 0 (0%) | 0 (0%) |

RR relative risk, RD risk difference, CI confidence interval, N number, LCI lower confidence limit, UCI upper confidence limit, SE standard error. (a) Unclear represents situations where the information was unclear, not reported, unable to be determined, or missing; see text for details. (b) Binomial model with log link for RR and identity link for RD. (c) In one RCT, the RR and 95% CI were estimated via the log-binomial model and the p-value from the Cochran-Mantel-Haenszel test. (d) In one study, the authors only reported adjusted p-values estimated using modified Poisson; adjustment for centre as a fixed or random effect was assumed to constitute a covariate-adjusted effect. For the RD, modified Poisson is used with the identity link.

Of the 308 studies, 82 reported an adjusted relative risk (Table 3). In about a quarter of these reports (17, 21%; 95% CI: 13–31%), the method used was unclear. When the reporting was clear, the log-binomial model was the most common method, used in 42 (65%; 95% CI: 52–76%) studies, followed by modified Poisson, used in 19 (29%; 95% CI: 19–42%), and marginal standardisation, used in two (3%; 95% CI: <1–12%) studies. See Table A5 for a granular description of the methods identified. All 82 studies reported point estimates and lower confidence limits, and 81 (99%) reported upper confidence limits; however, p-values for the adjusted relative risk were reported less frequently (66 studies, 80%), and no studies reported standard errors.

Of the 308 studies, 92 reported an adjusted risk difference. The method used was unclear in a considerable proportion (36, 39%; 95% CI: 29–50%). When the reporting was clear, the binomial model was the most common method, used in 27 (48%; 95% CI: 35–62%) studies, followed by marginal standardisation in 12 (21%; 95% CI: 12–35%) studies, the linear model in 6 (11%; 95% CI: 4–23%) studies, and modified Poisson in 4 (7%; 95% CI: 2–18%) studies.
A small proportion of methods (7, 13%; 95% CI: 6–25%) were classified as other. Most studies (91, 99%) reported point estimates, with upper confidence limits reported in 88 (96%) and lower confidence limits in 87 (95%); however, p-values for the adjusted risk difference were reported less frequently (53 studies, 58%), and no studies reported standard errors. See Table A6 for a granular description of the methods identified.

Of the studies included in the nested review sample, 82 reported a covariate-adjusted relative risk and 92 reported a covariate-adjusted risk difference (Table 3). Of the 82 studies that reported an adjusted relative risk, the adjusted relative risk was reported as the primary analysis in 67 (82%; 95% CI: 71–89%) studies, as a secondary analysis in 5 (6%; 95% CI: 2–14%), and as an exploratory or sensitivity analysis in 10 (12%; 95% CI: 6–22%). Similarly, of the 92 studies that reported a covariate-adjusted risk difference, this was the primary analysis in 74 (80%; 95% CI: 71–88%) studies, a secondary analysis in 6 (7%; 95% CI: 3–14%), and an exploratory or sensitivity analysis in 12 (13%; 95% CI: 7–22%).

Table 3 Current practice related to covariate-adjusted analysis in the nested review sample (N = 131 trials reporting an adjusted RR or RD)

| | Adjusted RR (N = 82) | Adjusted RD (N = 92) |
| Covariate-adjusted analysis reported for, n/N (%, 95% CI): | | |
| Primary analysis | 67/82, 82% (71–89%) | 74/92, 80% (71–88%) |
| Secondary analysis | 5/82, 6% (2–14%) | 6/92, 7% (3–14%) |
| Sensitivity or exploratory analysis | 10/82, 12% (6–22%) | 12/92, 13% (7–22%) |
| Unclear | 0/82 (0%) | 0/92 (0%) |
| Justification for covariate adjustment, n/N (%, 95% CI): | | |
| Pre-specification (a) | 73/82, 89% (80–95%) | 84/92, 91% (83–96%) |
| Data-driven, post hoc | 4/82, 5% (2–13%) | 2/92, 2% (<1–8%) |
| Unclear (b) | 5/82, 6% (2–14%) | 6/92, 7% (3–14%) |
| Additional covariates included in adjustment (c), n/N (%, 95% CI) | 34/82, 41% (31–53%) | 33/92, 36% (26–47%) |
| Number of additional covariates, median [IQR] | 3 [2–5] | 3 [2–4] |

RR relative risk, RD risk difference, IQR interquartile range, CI confidence interval, N number. (a) Pre-specification includes covariates used in the randomisation or of perceived prognostic importance; data-driven approaches include post hoc analysis due to lack of balance or statistical significance. (b) Unclear represents situations where the information was unclear, not reported, unable to be determined, or missing; see text for details. (c) Additional covariates refer to adjustment in addition to covariates used in the randomisation.

For the 82 studies that reported an adjusted relative risk, the covariates used in the adjustment were pre-specified (including randomisation covariates) in the vast majority (73, 89%; 95% CI: 80–95%). Covariate adjustment resulted from data-driven post hoc analyses in only a few (4, 5%; 95% CI: 2–13%) studies, and the approach was unclear in 5 (6%; 95% CI: 2–14%). In addition, 34 (41%; 95% CI: 31–53%) studies reported adjusting for covariates other than those included in a restricted randomisation; the median number of additional covariates (i.e. beyond those adjusted for because they were used in the randomisation) in these studies was 3 (IQR: 2–5). Of the 92 studies that reported an adjusted risk difference, a restricted method of randomisation was used in 85 (92%; 95% CI: 84–97%) studies.
For these studies, the reported rationale for choosing the covariates was mostly pre-specification, in 84 (91%; 95% CI: 83–96%), rather than data-driven approaches, in 2 (2%; 95% CI: <1–8.4%), or unclear approaches, in 6 (7%; 95% CI: 3–14%). In addition, 33 (36%; 95% CI: 26–47%) studies reported adjusting for covariates other than those included in a restricted randomisation; the median number of additional covariates in these studies was 3 (IQR: 2–4).

Of the 82 studies that reported a covariate-adjusted relative risk, 41 reported both an unadjusted and an adjusted relative risk (Fig. 2). The adjusted and unadjusted point estimates were mostly similar within each study, with occasional larger differences in both directions (sometimes the adjusted relative risk was larger than the unadjusted relative risk, and vice versa). A similar pattern was observed for standard errors and p-values: these were mostly similar between the adjusted and unadjusted analyses, but there were sometimes larger differences, in both directions, so that the standard error from an adjusted analysis could be either smaller or larger than that from the unadjusted analysis.

Fig. 2 Comparison of adjusted to unadjusted relative risks. Footnotes: RR, relative risk; CIs, confidence intervals. If a point falls below the line of equality, the value is smaller under the adjusted analysis than under the unadjusted analysis (as anticipated per theory); if a point is above the line of equality, the value is larger under the adjusted analysis than under the unadjusted analysis.

Of the 92 studies that reported a covariate-adjusted risk difference, 42 reported both an unadjusted and an adjusted risk difference. The reader is referred to Fig. 3; the findings are similar to those for the relative risk.

Fig. 3 Comparison of adjusted to unadjusted risk differences. Footnotes: RD, risk difference; CIs, confidence intervals. If a point falls below the line of equality, the value is smaller under the adjusted analysis than under the unadjusted analysis (as anticipated per theory); if a point is above the line of equality, the value is larger under the adjusted analysis than under the unadjusted analysis.

Only around half of the RCTs in this review (trials published in high-impact journals with a binary primary outcome that use a relative risk or risk difference to summarise the impact of treatment) reported a covariate-adjusted relative risk or risk difference. This is despite the fact that substantially more of the included RCTs used a restricted randomisation procedure, and despite the increase in statistical precision that has been reported to arise when adjusting for prognostic covariates. However, in our pairwise comparison of adjusted versus unadjusted relative risks and risk differences, we did not find evidence that covariate adjustment universally improves statistical precision. Those trials that do report covariate-adjusted relative risks or risk differences mostly pre-specify the covariates for adjustment and adjust for covariates included in any restricted randomisation. When it comes to reporting unadjusted relative risks and risk differences, most trials do not clearly report the approach used (notably, something like Fisher's exact test only provides a p-value and not a confidence interval); of those that do report the approach, most use a binomial model (with a log or identity link, respectively) or modified Poisson.
When it comes to reporting adjusted relative risks and risk differences, a significant number (in the region of 40% for adjusted risk differences) still do not report the approach used. Of those that do, the binomial model is again a common approach; modified Poisson is commonly used to estimate an adjusted relative risk, whilst marginal standardisation is commonly used to estimate an adjusted risk difference. For the relatively small number of trials that reported both an adjusted and an unadjusted relative risk, or both an adjusted and an unadjusted risk difference, we observed that statistical precision did not always increase with adjustment: increases and decreases occurred roughly equally often.

A number of reviews have examined methods for estimating summary measures for binary outcomes over the last decade, but none focused on relative risks and risk differences [33–37]. These reviews have identified that most trials adjust for covariates used during randomisation. However, only about half of the trials included in this review reported a covariate-adjusted relative risk or risk difference, whereas 70% of them used a restricted randomisation. Our findings mostly concur with those of other reviews, which have identified that the practice of covariate adjustment is mostly guided by the covariates included in restricted randomisation, but that a significant proportion of trials do not report a covariate-adjusted treatment effect for their primary analysis.

This review included only two-arm RCTs published in one of four high-impact journals with a binary primary outcome reporting either a relative risk or risk difference. We focused on high-impact journals as these journals are likely to contain more information about the methods used, the focus of our review. However, this very likely means our findings are not representative of all RCTs. Only trials that specifically reported either a relative risk or risk difference were included, as the primary focus was on the methods used to estimate these summary measures; this means the review cannot make inferences about, for example, the overall proportion of trials that report a covariate-adjusted treatment effect, since trials reporting only a covariate-adjusted odds ratio were not captured. There are also more nuanced details that we did not report on: for example, some trials reported using an alternative approach when the primary method did not converge, and we did not elicit very nuanced details of the approach used, such as the method of optimisation or the method used to estimate standard errors. Studies evaluating co-primary outcomes were not considered since they raise complex issues relating to type I and type II error control in the sample size calculation and final analysis. Double extraction was performed with an independent reviewer for approximately 10% of the papers, which showed good agreement (93%) on the core items of interest; however, we did not perform independent, duplicate data extraction for the entire sample. Furthermore, due to time and resource constraints, study authors were not contacted to resolve issues around unclear reporting. Likewise, we did not compare the pre-specified covariates reported in the main report with those in the statistical analysis plan, and it is therefore assumed these had not changed.
Moreover, whilst one of our objectives was to compare the estimated unadjusted and adjusted relative risks and risk differences, to identify whether adjustment increases statistical precision in practice, only a small sample of studies reported both metrics, so our findings here are limited accordingly.

There are valid reasons for estimating covariate-adjusted relative risks and risk differences when reporting findings from randomised trials, and methods are available to facilitate the estimation of these summary metrics; yet even in high-impact journals, there is both a lack of clarity around the approaches used for estimation and a lack of uptake of the available methods. This systematic review is one of the most comprehensive reviews cataloguing the methods currently used to estimate relative risks and risk differences. Other reviews have identified that about one-third of RCTs report covariate-adjusted treatment effects (by the nature of those reviews, these treatment effects will often be odds ratios), whereas we identified that about half of RCTs with a binary outcome that report either a relative risk or risk difference report a covariate-adjusted treatment effect. One possible explanation for this difference is that trials reporting a relative risk or risk difference are more conscientious about following reporting guidance. The lower uptake of adjustment might in part be explained by the more complicated methods that can be required to implement covariate adjustment when estimating relative risks and risk differences (e.g. marginal standardisation or modified Poisson). Yet we identified that some methods that might be considered more novel (such as modified Poisson and marginal standardisation) are used quite frequently. We also identified a significant lack of clarity in the reporting of how unadjusted relative risks and risk differences are estimated. Numerous methods are currently being used in practice to estimate relative risks and risk differences, indicating that the availability of methods is perhaps no longer a barrier; rather, what is lacking is guidance on which methods are best. The reporting of methods for estimating unadjusted relative risks or risk differences is still inadequate in places, and the reporting of unadjusted and adjusted treatment effects remains low, although the rationale for covariate adjustment was commonly reported.

What is already known?

➢ There are numerous methods available for estimating relative risks and risk differences.

➢ Covariate adjustment is recommended to improve precision and power.

What is new?

➢ Around half of trials published in top high-impact journals with a binary primary outcome that report a relative risk or risk difference do not report a covariate-adjusted relative risk or risk difference.

➢ Those that do report a covariate-adjusted relative risk or risk difference mostly choose their covariates in advance and adjust for covariates used in the randomisation.

➢ When reporting unadjusted relative risks and risk differences, most trials do not provide adequate information on the method used.

➢ When reporting covariate-adjusted relative risks and risk differences, more trials report details of the approach used, but there is still a significant lack of reporting.
➢ Of those trials that do clearly report how they estimate covariate-adjusted relative risks and risk differences, common approaches include the binomial model, modified Poisson and marginal standardisation (a sketch of marginal standardisation follows below).

Supplementary Material 1: Table A2: Summary of the search strategy and papers identified via Ovid Medline. Table A3: Data dictionary codebook for the review of binary outcomes. Table A4: Discrepancies and consensus agreement. Table A5: Detailed description of methods for estimating relative risks from the overall and nested sample. Table A6: Detailed description of methods for estimating risk differences from the overall and nested sample.
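As a companion to the key point above, the following sketch illustrates marginal standardisation (g-computation) for a covariate-adjusted risk difference: fit a logistic model, predict each participant's risk under each arm, and average. Again, the data and covariate are hypothetical, and a real analysis would add a delta-method or bootstrap confidence interval (omitted here).

```python
# Minimal sketch: adjusted risk difference via marginal standardisation
# (g-computation) from a covariate-adjusted logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({"treat": rng.integers(0, 2, n),
                   "age": rng.normal(60, 10, n)})
risk = 1 / (1 + np.exp(-(-1.0 + 0.5 * df["treat"] + 0.02 * (df["age"] - 60))))
df["y"] = rng.binomial(1, risk)       # simulated binary outcome

fit = smf.glm("y ~ treat + age", data=df,
              family=sm.families.Binomial()).fit()

# Standardise: predict everyone's risk under each arm, then average.
risk_treated = fit.predict(df.assign(treat=1)).mean()
risk_control = fit.predict(df.assign(treat=0)).mean()
print(f"adjusted RD = {risk_treated - risk_control:.3f}")
```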
PMC11694505
Polyethylene terephthalate (PET), an important thermoplastic polyester material, is derived from fossil resources. 1,2 PET has excellent properties such as good tensile strength, chemical resistance, transparency, processability, and thermal stability, and is therefore widely used in food packaging, electrical and electronic appliances, machinery and equipment, automotive parts, films and sheets, and so on. 3 In recent years, with continuous progress in PET production technology and continuous growth in market demand, PET production capacity has shown a steady upward trend. According to statistics, global bottle-grade PET production capacity grew from 27 million tonnes in 2014 to 34.87 million tonnes in 2022, and it is expected that by 2050, 12 billion tonnes of plastic will have been discarded into landfills and the natural environment. However, owing to the excellent chemical stability of PET, it is difficult to degrade in the natural environment, and a large amount of waste PET decomposes into microplastics in the soil and flows into lakes and oceans, posing a serious hazard to people's living environment and health. 4,5 In addition, virgin PET is a petroleum-based material, yet the PET recycling rate is not high, resulting in a waste of fossil resources. 6,7 To save fossil resources and protect the environment, there is an urgent need to develop PET recycling processes. 8–10 Today, the treatment methods for waste PET are mainly divided into physical recycling and chemical recycling. 11–13 Physical recycling is simple and inexpensive, but it cannot completely remove the impurities in PET, which affects the quality of recycled products. Chemical recycling converts waste PET into monomers, oligomers, and other chemical substances through chemical reactions, which are then re-polymerized or otherwise used; this can realize the complete recycling and reuse of waste PET with high resource utilization and environmental friendliness. 14–16 Among these routes, alcoholysis is considered an ideal method for PET degradation owing to its mild reaction conditions, low solvent volatilization, and high product purity. Alcoholysis using ethylene glycol as the solvent is called glycolysis. 17,18 The main product of glycolysis is bis(2-hydroxyethyl) terephthalate (BHET), which can be re-polymerized into PET. 19 The reaction rate of glycolysis without a catalyst is very low. Metal salts, 20 metal oxides, deep eutectic solvents (DES) 8,21–24 and ionic liquids 25–28 have been developed to catalyze the degradation of PET. Although these catalysts have good catalytic effects, they also suffer from harsh reaction conditions, low monomer yield, and difficult catalyst separation. Currently, biomass catalysts are being developed as a green alternative to traditional metal catalysts. Isti Yunita et al. extracted calcium oxide (CaO) from ostrich eggshells and seafood-shell biomass and used it to catalyze the glycolysis of post-consumer PET bottles, examining its catalytic activity. The study showed that CaO derived from ostrich eggshell by-products has the advantages of low cost, environmental friendliness, and high product yields. 29 To the best of our knowledge, there is no report on the production of metal oxides from sunflower seed shells as catalysts for PET glycolysis. In this paper, SMS-750 was prepared from sunflower seed husk and applied to PET glycolysis.
In order to better understand the catalytic parameters, the main factors (alcoholysis temperature, ethylene glycol dosage, catalyst dosage and alcoholysis time) were first screened by one-way experiments, and the best reaction conditions were then determined by optimization using response surface experiments. In addition, the alcoholysis products and catalysts were characterized and analyzed. The development of biomass catalysts for the catalytic degradation of waste PET will help promote the green transformation of the plastics industry. By reducing the environmental pollution and waste of resources caused by waste plastics, the sustainable development of the plastics industry and the goal of “double carbon” can be achieved. Waste sunflower seed husk, agricultural waste material; waste polyester, 0.3 × 0.3 cm, mineral water bottle; ethylene glycol, anhydrous ethanol, analytically pure, Tianjin Beichen Founder's Reagent; double-distilled water, 0.5% NaOH solution, homemade in the laboratory. Fourier transform infrared spectrometer (FT-IR), TENSOR II, Bruker Technologies, Germany; proton nuclear magnetic resonance spectrometer (1H-NMR), AVANCE, Bruker Technologies, Germany; thermogravimetric analyser (TG), HCT-1, Hengjiu Scientific Instrument Factory, Beijing, China; scanning electron microscope (SEM), Apreo 2C, Thermo Fisher Scientific; X-ray energy-dispersive spectroscopy (EDS), XFlash 6130, Oxford Instruments, UK. Firstly, the edible sunflower seeds were repeatedly washed with double-distilled water to remove surface impurities. To further remove the residual lipid components in the sunflower seed hulls, the hulls were soaked in 0.5% NaOH solution for 2 h, washed, and then dried at 80 °C for 24 h. Subsequently, the dried sunflower seed hulls were crushed in a pulverizer into homogeneous particles and screened with a standard 100-mesh sieve. The sieved sunflower seed husk powder was placed in a crucible and roasted in a muffle furnace preheated to the specified temperature for 2 h. After cooling to room temperature, the calcined solid powder was finely ground in a mortar to obtain a more homogeneous particle size distribution. Finally, to ensure the consistency and suitability of the resulting catalyst size, the ground samples were sieved using a 200-mesh standard sieve; the solid powder obtained was SMS-750. Waste PET bottle flakes (3.0 g) were weighed into a three-necked flask, the prepared sunflower seed husk-based catalyst and ethylene glycol were added in a given ratio, and the mixture was stirred and heated to the set temperature and reacted for a set time to obtain crude BHET. When the reaction was finished, the reaction solution was transferred to a beaker and distilled water was added to 60 ml while still hot; the temperature in the beaker was 90 °C when the first filtration was carried out using a filter flask, with the aim of filtering out the undepolymerized PET bottle flakes. The filtrate from the first filtration was then filtered a second time after adding distilled water to 90 ml and controlling the temperature at 65 °C, to separate the trimers. To further remove the dimer, distilled water was added to the filtrate from the second filtration to 120 ml, the temperature was controlled at 45 °C, and the mixture was filtered to obtain the BHET solution. The filtrate was cooled at 5 °C for 12 h. White needle-like crystals appeared in the beaker.
Finally, after two filtrations, the white needle-like crystals were placed in a vacuum desiccator at 80 °C for 12 h to obtain the product BHET. The BHET yield is calculated by eqn (1):

Yield_BHET (%) = [(m_BHET/M_BHET) / (m_PET0/M_PET)] × 100 (1)

where m_BHET is the weight of BHET crystals collected in grams, M_BHET is the molecular weight of BHET (254 g mol−1), m_PET0 is the initial weight of PET in grams and M_PET is the molecular weight of the PET repeat unit (192 g mol−1). XRD was used to analyze the crystal structure of the catalysts under the following conditions: voltage 40 kV, current 40 mA, scanning speed 10° min−1, test range 5°–90°, step size 0.02°. SEM was used to characterize the surface morphology of the catalysts: a small amount of the sample was adhered to conductive adhesive and sputter-coated with gold, with an accelerating voltage of 30 kV, a resolution of 1.4 nm, and a maximum magnification of about 100 000×. The functional groups of the catalyst and the degradation products were analyzed by FTIR under the following conditions: the samples were mixed and ground with KBr, then dried and pressed into tablets, with a scanning range of 400–4000 cm−1. The proportions of the different types of hydrogen atoms in the degradation products were quantitatively determined by 1H-NMR through analysis of the peak areas; a sample of about 5 mg was taken and completely dissolved in CDCl3. The thermal stability of the catalyst material was characterized by TG, where the sample was heated from 20 °C to 800 °C at a rate of 10 °C min−1 under a N2 atmosphere. Based on one-way parallel tests, the response surface experiment was designed with Design Expert 13.0 software using the alcoholysis temperature, alcoholysis time, EG dosage, and SMS-750 dosage as independent variables and the yield of the degradation product BHET as the response value; the tests were carried out under the conditions of the corresponding independent variables, and the response values obtained were substituted into the software for response surface analysis and discussion of the results. In order to determine the variation of the composition of sunflower seed husk with different roasting temperatures, a certain amount of sunflower seed husk powder was taken for thermogravimetric analysis, and the results are shown in Fig. 3. It can be seen that below 330 °C the weight loss is about 57.61%, mainly caused by the loss of water and of the organic components in the sunflower seed husk; the mass loss in the range 330–600 °C, amounting to 29.51%, is mainly attributed to further pyrolysis of the organic components remaining in the residual char; as the temperature increases further, the weight loss is almost zero, indicating that the composition of the sunflower seed husk remains constant after roasting above 600 °C. Based on the analysis of the effect of different calcination temperatures on the degradation yield, 750 °C was selected as the best calcination temperature for the catalyst, and the catalyst prepared at 750 °C was named SMS-750, on which further research was conducted and discussed. The appearance morphology and elemental composition of SMS-750 were analyzed, and it can be seen from Fig.
4(a) that SMS-750 consists mostly of randomly stacked, irregular brick-like particles with smooth surfaces and a small amount of microporous structure. According to the EDS results in Fig. 4(b–g), the mass fractions of the elements K, O, C, Ca, Mg, and Na in SMS-750 were 39.28%, 31.37%, 15.67%, 10.76%, 1.84%, and 1.08%, respectively, and the elements were uniformly distributed on the catalyst surface. The associated XRD diffractogram in Fig. 5 depicts the crystalline structure of SMS-750. The presence of a crystalline phase is indicated by the multiple diffraction peaks across the 2θ range, and the sharpness of the peaks indicates that the sample is well crystallized. The results show that the major compounds in SMS-750 are CaO, MgO, CaCO3, (Mg0.03Ca0.97)(CO3), K2Ca(CO3) and Ca(OH)2. It can be seen that SMS-750 is highly loaded with oxides and carbonates of K, Ca, and Mg, which provide very strong basic sites; this is highly consistent with the results of the EDS analysis. The infrared spectra are shown in Fig. 6. The broad absorption peak at 3031.34 cm−1 is due to the –OH stretching vibration of water absorbed on the catalyst surface. 30 The sharp band at 3694.77 cm−1 is associated with the formation of basic groups attached to Ca atoms. 31 The characteristic peaks at 1415.67 cm−1, 1023.15 cm−1, and 888.06 cm−1 correspond to the C=O stretching vibration, C–O stretching vibration and C–O bending vibration of CO3^2−, respectively, which may be due to the absorption of CO2 from the air onto the surface of the metal oxides to form metal carbonates, thus suggesting the presence of Ca, Mg and K oxides in the catalyst. 32–36 The absorption peaks of the K–O and Ca–O stretching vibrations are at 612.68 cm−1. 37,38 The FT-IR results are in agreement with the EDS and XRD data. The IR spectral characterization of the degradation products is shown in Fig. 7. The characteristic absorption peak at 3269.4 cm−1 corresponds to the –OH stretching vibration in the hydroxyethyl group; the infrared absorption peaks at 2951.1 cm−1 and 2867.3 cm−1 are the symmetric stretching vibration peaks of –CH2; the strong absorption peak at 1711.63 cm−1 is related to the stretching vibration of C=O; the characteristic absorption peak near 1406.3 cm−1 is the vibrational absorption peak of the benzene ring backbone. The absorption peaks at 1260.2 cm−1 and 1118.2 cm−1 are the stretching vibration absorption peaks of C–O, and the in-plane bending vibration of the benzene ring is at 888.4 cm−1. In summary, the IR spectrum of the product is in agreement with that of BHET. In order to further determine the structure of the degradation product, the alcoholysis product obtained by SMS-750 catalysis was characterized by 1H-NMR, and the results are shown in Fig. 8.
The 1H NMR spectrum (400 MHz, CDCl3) shows δ 8.12 (s, 4H), 4.50 (d, J = 5.8 Hz, 4H), 4.06–3.91 (m, 4H), and 2.07 (s, 2H). The ratios of the numbers of hydrogen atoms were calculated from the peak areas, and the hydrogen-atom ratio of the PET degradation product matched that of the monomeric BHET molecule; combined with the analysis of the FT-IR spectra, it can be confirmed that the PET degradation product is BHET. The main influencing factors in the SMS-750 catalyzed PET alcoholysis process are catalyst dosage, alcoholysis time, alcoholysis temperature, and solvent dosage. These four factors were selected as the main parameters for the one-way experiments, as shown in Fig. 9. It can be seen that when the catalyst dosage is 1%, the alcoholysis time 4 h, the alcoholysis temperature 190 °C and the solvent dosage 14 ml, the BHET yield reaches a relatively high value, which provides the central values of the coded levels for the response surface design below. According to the Box–Behnken design principle, a 4-factor, 3-level response surface analysis experiment was designed using the alcoholysis temperature (A), alcoholysis time (B), EG dosage (C), and SMS-750 dosage (D) as the independent variables and the yield of the degradation product BHET (Y) as the response value; the levels and codes of the experimental factors are shown in Table 1. A total of 29 sets of trials were run for the response surface design, details of which are given in the ESI, Table S1.† Multiple regression fitting of the experimental data using Design-Expert software gave the quadratic regression model of eqn (2). The fitted regression equation was analyzed by ANOVA and tested for significance, and the results are shown in Table 2. As shown in Table 2, the model P = 0.0092 < 0.01, indicating that the model has good overall significance, high confidence, and accurate simulation. According to the magnitude of the F-values, the significance of the four factors in the model follows the order: EG dosage > SMS-750 dosage > alcoholysis time > alcoholysis temperature. The lack-of-fit term of the model, P = 0.0610 > 0.05, indicates that the lack of fit of the response values is not significant and that the model reflects the relationship between the independent variables and the response values well. The correlation coefficient of the model R2 = 0.7902, the adjusted coefficient of determination R2(Adj.) = 0.5805, and the coefficient of variation C.V. = 1.99% < 10% indicate a sufficiently strong signal and an acceptable model. In conclusion, the simulation is accurate and reliable for optimizing and analyzing the predicted test conditions for the SMS-750 catalyzed glycolysis of PET. In order to visualize the effect of the interactions among the four factors, namely alcoholysis temperature (A), alcoholysis time (B), EG dosage (C), and SMS-750 dosage (D), on the BHET yield (Y), three-dimensional response surfaces and plane contour plots of the relationships between the factors and the response values were plotted using Design Expert. The slope of the surface in the three-dimensional plot reflects the influence of a factor on the response value (the steeper the slope, the greater the influence), while the strength of the interaction between factors is reflected by the shape of the contour lines. Combined with Fig.
10(a1) and 10(a2), it can be seen that the alcoholysis temperature and alcoholysis time have a more significant effect on the model and that there is a good interaction between them. From Fig. 10(b1) and 10(b2), the EG dosage has a greater effect on the BHET yield, which first increases and then decreases, in line with the results of the one-way test. From Fig. 10(c1) and 10(c2), the effect of the SMS-750 dosage on the BHET yield is larger than that of the alcoholysis temperature, with a steeper slope. From Fig. 10(d1) and 10(d2), the effect of the EG dosage on the BHET yield is larger than that of the alcoholysis time, and the contour lines are elliptical, indicating that the interaction between the alcoholysis time and the EG dosage is significant. From Fig. 10(e1) and 10(e2), the effect of the SMS-750 dosage on the BHET yield is larger, first increasing and then decreasing, consistent with the results of the one-way test. From Fig. 10(f2), the contour lines of the EG dosage and SMS-750 dosage are approximately circular, indicating that the interaction between the two is not significant. After the response surface analysis and the prediction of the regression model, the optimal reaction conditions for the alcoholysis of waste PET with SMS-750 were: alcoholysis time 4.9 h, alcoholysis temperature 185 °C, SMS-750 dosage 0.89%, and EG dosage 14.6 ml; the simulated prediction of the BHET yield under these conditions was 79.82%. Validation experiments were carried out under the above optimal conditions, and the final BHET yield obtained was 79.57%, close to the predicted value, indicating that the model is reliable. The reaction was carried out at an alcoholysis temperature of 185 °C, an alcoholysis time of 4.9 h, a catalyst dosage of 0.89%, and an ethylene glycol dosage of 14.6 ml. After the reaction, the catalyst was separated, washed, and dried while still hot, and reused several times to study its service life; the results are shown in Fig. 11. From Fig. 11, it can be seen that there was a slight decrease in the yield of BHET, mainly because the prepared SMS-750 particles are small in diameter and some catalyst was lost by dissolution in the reaction solution. The yield of BHET was still above 70% after three reuses. Therefore, the biomass catalyst prepared in this study has high catalytic activity and stability and can be reused many times. Based on the experimental results, Fig. 12 proposes a possible reaction mechanism for the catalytic alcoholysis of PET by calcium-magnesium oxide: (I) calcium-magnesium oxide is a solid base whose active basic sites activate the hydrogen on the hydroxyl group of ethylene glycol, making the oxygen of the hydroxyl group negatively charged and thus nucleophilic, so that it more readily attacks the electropositive carbonyl carbon atoms of PET in a nucleophilic addition reaction; the catalyst then forms a six-membered-ring transition-state structure with the carbonyl group of the PET chain segment and the hydroxyl group of ethylene glycol.
(II) At the same time, electron transfer occurs to exchange the glycol fragments in PET; an elimination reaction takes place, the chemical bonds of the PET molecule are broken, and PET depolymerizes. (III) As the reaction proceeds, the degree of polymerization of PET decreases until, finally, the entire PET molecule is completely degraded to the BHET monomer, while a certain amount of ethylene glycol is also generated. A biomass-based catalyst was prepared from waste sunflower seed husk and applied to the catalytic alcoholysis of waste PET. The effects of reaction temperature, reaction time, catalyst dosage, and glycol dosage on the alcoholysis reaction were investigated. Based on the one-factor experiments, a four-factor, three-level response surface experiment was designed according to the Box–Behnken experimental design principle, using the yield of BHET as the response value, and the optimal process conditions were determined: an alcoholysis temperature of 185 °C, a catalyst dosage of 0.89%, a reaction time of 4.9 h, and a glycol dosage of 14.6 ml, at which the actual yield of BHET was 79.57%, close to the value predicted by the simulation. In conclusion, the degradation of waste PET by the sunflower seed husk-based catalyst in an environmentally friendly and efficient way is of great significance: it reduces pollution, promotes resource recycling, and contributes to sustainable development. All data supporting the findings of this study are available within the paper and its ESI.† Guoliang Shen and Tiejun Xu conceived, planned, and supervised the experiments. Linlin Zhao and Haichen Wang were responsible for the main part of the experiment, including the preparation of the catalyst and its application in PET degradation. Ruiyang Wen and Sijin Jiang were responsible for the characterization and analysis of the catalyst. Linlin Zhao and Xiaocui Wang wrote the first manuscript, while all authors reviewed the manuscript. The authors declare that they have no conflicts of interest.
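As a worked check on the yield bookkeeping of eqn (1), the short sketch below back-calculates the mass of BHET crystals implied by the reported optimum (79.57% yield from a 3.0 g PET charge). The helper function is illustrative only; the molar masses are the values quoted with eqn (1).

```python
# Minimal helper for eqn (1): BHET yield from the mass of recovered
# crystals and the initial PET charge, on a per-mole basis.
M_BHET = 254.0  # g/mol, BHET (as quoted in the text)
M_PET = 192.0   # g/mol, PET repeat unit (as quoted in the text)

def bhet_yield_percent(m_bhet_g: float, m_pet0_g: float) -> float:
    """Yield (%) = (moles of BHET) / (moles of PET repeat units) * 100."""
    return 100.0 * (m_bhet_g / M_BHET) / (m_pet0_g / M_PET)

m_pet0 = 3.0                                  # g of PET flakes charged
m_bhet = 0.7957 * (m_pet0 / M_PET) * M_BHET   # crystal mass at 79.57% yield
print(f"{m_bhet:.2f} g of BHET -> {bhet_yield_percent(m_bhet, m_pet0):.2f}%")
# ~3.16 g of crystals corresponds to the reported 79.57% yield.
```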
PMC11694718
With the rise in population, there has been a significant increase in global energy consumption. Hydrogen is considered one of the most promising energy carriers. 1 In order to provide humanity with cutting-edge facilities, research and technology must provide solutions to the day-by-day increase in energy demand. 2 Most of the energy generated comes from fossil fuels, which are non-renewable and take a long time to replenish or return to their starting form. 3 As the world's energy needs increase, innovative strategies and research on sustainable energy are continually updated. 4 Although hydrogen has a few obstacles to overcome, it is undoubtedly an outstanding alternative to fossil fuels like coal, oil and natural gas. 5 Because of environmental contamination and the use of fossil fuels, there is an increasing need for clean and economical energy sources, and renewable and electrified energy sources are being studied extensively. 6,7 Hydrogen presents itself as an exceptional gas for use as an energy source, but the most challenging aspect at present is storing it. 8 For cars, computers, and mobile devices, hydrogen is a promising energy source that has the potential to displace non-renewable petroleum derivatives. Burning hydrogen can lower carbon dioxide emissions and is environmentally friendly, efficient and sustainable. 9 Numerous materials have been investigated for the storage of hydrogen, such as complex hydrides, nanomaterials, and graphene-based materials. A high hydrogen storage capacity is needed for practical use in order to make it a viable alternative to fossil fuels. 10 Researchers are working on a variety of perovskite materials that have exceptional hydrogen storage capacity. Hydrogen storage materials are made up of specific metals and compounds, including a special class of nanostructured hydrides composed of microscopic particles. 11 Certain parameters must be met by a material used for energy storage, including high volumetric and gravimetric ratios, good kinetics, noteworthy mechanical qualities, and the capacity to release hydrogen under normal conditions. 12,13 ABH3 perovskite is a hydride perovskite with a structure in which B is a light element, such as carbon (C), oxygen (O), or nitrogen (N), that replaces one of the O-atoms in the BO6 octahedra. When lighter elements are substituted for oxygen, hydrogen forms more bonding sites, resulting in high storage capacity. 14,15 These materials fall into different categories, as the elements of groups 1 and 2 of the periodic table can occupy the A and B sites. 16,17 The second class of perovskite-type hydrides is produced by combining monovalent alkali or divalent metals at A and B. This category includes SrPdH3, LiCuH3, LiFeH3, MgFeH3, CaNiH3, MgCoH3, CaCoH3 and KCuH3. 18,19 Due to their high gravimetric densities, light-metal hydrides are among the most promising materials for on-board hydrogen storage. 20 Metal hydrides are examples of functional compounds that aid in the absorption of hydrogen. 21 Hydrogen is stored in vast amounts in the intermetallic phases of various metals through chemical bonding. 22 NaMgH3 and Na0.9K0.1MgH3 have been synthesized experimentally using a high-energy ball milling method, which revealed that adding different concentrations of K on the Na site enhanced the dehydriding kinetics and increased the amount of hydrogen desorbed. 23 Song et al. 24 found that the gravimetric hydrogen storage capacities of the NaMnH3, KMnH3, and RbMnH3 compounds are 3.74, 3.12, and 2.11 wt%.
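Capacities of this kind follow directly from the formula masses, since for an ABH3 unit the stored hydrogen is simply the weight fraction of the three H atoms (the quantity behind the gravimetric ratio of eqn (3) below). A minimal sketch using standard atomic masses reproduces both the Mn-series values just quoted and the NaXH3 values reported later:

```python
# Minimal sketch: gravimetric hydrogen capacity of an ABH3 hydride as
# the weight percent of hydrogen per formula unit. Atomic masses are
# standard values in g/mol; the compounds follow the text.
ATOMIC_MASS = {"H": 1.008, "Na": 22.990, "K": 39.098, "Rb": 85.468,
               "Be": 9.012, "Mg": 24.305, "Ca": 40.078, "Sr": 87.62,
               "Mn": 54.938}

def h_wt_percent(a: str, b: str, n_h: int = 3) -> float:
    m_h = n_h * ATOMIC_MASS["H"]
    return 100.0 * m_h / (ATOMIC_MASS[a] + ATOMIC_MASS[b] + m_h)

for a, b in [("Na", "Mn"), ("K", "Mn"), ("Rb", "Mn"),
             ("Na", "Be"), ("Na", "Mg"), ("Na", "Ca"), ("Na", "Sr")]:
    print(f"{a}{b}H3: {h_wt_percent(a, b):.2f} wt%")
# Output closely matches the quoted 3.74, 3.12 and 2.11 wt% for the Mn
# series and the ~8.6, 6.0, 4.6 and 2.6 wt% reported for NaXH3.
```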
Li et al. 25 studied the electronic structure of NaMgH3 and reported that it behaves as a metal, while Bouhadda et al., 21 Fornari et al. 26 and Vajeeston et al. 27 found theoretically that NaMgH3 must be a semiconductor. In this paper we report the electronic behavior of NaMgH3 using a different exchange-correlation potential in order to treat the band structure accurately. The need for clean and renewable energy sources has become a serious challenge. In this regard, thermoelectric materials and hydrogen are very popular as potentially broad, effective, and sustainable energy sources. By solving the heating problems linked to storing hydrogen, thermoelectric properties can help make hydrogen a successful and sustainable energy source in the long term. It has also been found that thermoelectric devices can be used as solid-state hydrogen storage tanks. 28 In contrast to compressed gas or liquid hydrogen, perovskite hydrides of the type NaXH3 (X = Be, Mg, Ca and Sr) are considered safer options for storing hydrogen. By examining the characteristics of these materials, this study provides important information that could improve the efficiency and safety of future hydrogen storage technologies. For hydrogen storage applications we calculated the gravimetric ratio and the formation energy. Furthermore, other important properties, namely the structural, electronic, mechanical and thermoelectric characteristics of the NaXH3 (X = Be, Mg, Ca, Sr) hydride perovskites, are also calculated and studied in this article. Density functional theory (DFT) and the full-potential linearized augmented plane wave plus local orbitals (FP-LAPW+lo) approach implemented in the WIEN2k code are used in the first-principles calculations of NaXH3 (X = Be, Mg, Ca, and Sr). 29,30 We used WC-GGA as an accurate exchange-correlation potential in our calculations. 31 To ascertain these properties, WC-GGA is employed in conjunction with the modified Becke-Johnson (mBJ) approach. 32 mBJ is one of the best density functional approaches (error of about 2%) for treating the electronic band structure of perovskites and various semiconductor materials. 33 We used a finer 7 × 7 × 7 k-mesh and a denser 10 × 10 × 10 k-mesh to determine the structural and electronic properties. To obtain the elastic constants, the energy-strain approach included in the WIEN2k package was applied. 34 By subjecting the cubic lattice to deformation, only three independent elastic constants, C11, C12 and C44, need to be calculated. Thermoelectric characteristics including the figure of merit (ZT), power factor (PF), electronic thermal conductivity (κ), electrical conductivity (σ), and Seebeck coefficient (S) were investigated using WC-GGA+mBJ through the semi-classical BoltzTraP package. 35 The different possible geometries were designed and visualized using VESTA; with the assistance of the VESTA code we obtained the polyhedral structures of these perovskite materials. 36 The NaXH3 (X = Be, Mg, Ca, Sr) hydrides have the Pm3̄m (221) space group. The unit cells of these compounds were constructed using fractional coordinates. The Na-atom is found at the corner position (0, 0, 0), the X metal atom is located at the body-center position (0.5, 0.5, 0.5), and the H-atoms occupy the face-center positions (0.5, 0, 0), (0, 0.5, 0) and (0, 0, 0.5) of the octahedral sites. The structural arrangement is presented in Fig. 1. Fig. 1 shows that the Na-atoms (red spheres) are cations which occupy the corners of the cube.
The X-atom (blue sphere) (X = Be, Mg, Ca, Sr) also serves as a cation but is positioned at the body center. The faces of the cube are occupied by hydrogen atoms (yellow spheres) serving as anions. The structural stability of perovskite materials can be understood using energy-volume curves. The E–V curves for NaXH3 (X = Be, Mg, Ca, Sr) are shown in Fig. 2(a)–(d). The E–V plot for any material provides very valuable information about its mechanical structure and dynamic stability. Achieving a ground state is the first step in establishing the stability of any physical compound, and it can be found by plotting energy as a function of volume. 37 Forces were first calculated using the WC-GGA functional, and the system was relaxed until the forces acting on the atoms were negligible. The optimized lattice constants were obtained using the Birch–Murnaghan equation of state, 38 eqn (1):

E(V) = E0 + (9V0B0/16){[(V0/V)^(2/3) − 1]^3 B′ + [(V0/V)^(2/3) − 1]^2 [6 − 4(V0/V)^(2/3)]} (1)

where B0, B′, V0 and E(V) are the bulk modulus, the first pressure derivative of B, the unit cell volume and the energy at the equilibrium ground state, respectively. Fig. 2 shows the evolution of the total energy as a function of the unit cell volume of the cubic perovskite-type NaXH3. The equilibrium lattice parameters, computed from the structural optimization using the Birch–Murnaghan equation of state, are listed in Table 1. The E–V plot is generated by varying the lattice constant and finding the minimum-energy point (the lowest point of the curve, marked by the arrow), which shows where the structure is dynamically stable. The E–V plot provides important details regarding the properties and structural stability of the NaXH3 (X = Be, Mg, Ca, Sr) materials. Table 1 shows that the calculated lattice constants are in agreement with other reported values. Previous studies 20–22 on the same type of hydrides show that these materials may be used for hydrogen storage applications. When a substance absorbs hydrogen, its stability may be ascertained using the E–V curve: a material is said to be capable of efficiently storing hydrogen if its energy drops markedly as its volume increases. This connection guarantees that the material stays stable and effective for storage while demonstrating how well it can withstand the changes that occur during hydrogen absorption, which is why such materials could be employed in hydrogen storage applications. To compute the thermodynamic stability of NaXH3 (X = Be, Mg, Ca, Sr), their formation energies ΔHf are determined using eqn (2): 39

ΔHf(NaXH3) = Etotal(NaXH3) − Es(Na) − Es(X) − (3/2)Es(H2) (2)

where the individual ground-state energies Es(Na), Es(X) and Es(H2), as well as Etotal(NaXH3), the energy of the whole compound, are used. The stability formalisms for the formation energies of the NaXH3 perovskite materials are shown in Fig. 3. The figure shows that all of the materials have a negative calculated formation energy (yellow bars), indicating that they are thermodynamically stable. The negative formation energies of NaXH3 are also listed in Table 1. NaBeH3 has the lowest formation energy (−0.285) among all the materials under consideration and, along with NaCaH3 and NaMgH3, is the most thermodynamically stable. The potential of the NaXH3 (X = Be, Mg, Ca, Sr) hydride perovskites for hydrogen storage applications has been assessed by calculating their gravimetric storage capacity using eqn (3); the amount of hydrogen deposited is represented by the gravimetric ratio.
The formula used to assess the gravimetric ratio cwt% is given by eqn (3): 40,41

cwt% = [(H/M × M_H) / (M_HOST + H/M × M_H)] × 100% (3)

where M_H represents the mass of the hydrogen atoms, M_HOST represents the mass of the host material, and H/M is the ratio of hydrogen atoms to host-material atoms. Fig. 3 presents the gravimetric capacities of the perovskite hydrides NaXH3 (X = Be, Mg, Ca, Sr) in weight percent: NaBeH3 can hold the most at 8.6% and NaMgH3 can hold 6.0%, while NaCaH3 can store 4.5% and NaSrH3 only 2.6%. Therefore, NaBeH3, NaMgH3, and NaCaH3 have good gravimetric ratios. The energy-volume curve demonstrates a material's stability by displaying how its energy varies with volume. Lower energy states correspond with lower formation energies, making the material better suited for hydrogen storage. The lowest formation energy of the NaBeH3 compound allows hydrogen to enter and exit more easily compared with the other compounds, achieving a hydrogen gravimetric capacity of 8.6%. A high formation energy can make hydrogen storage less efficient. These principles work together to determine the best materials for successful hydrogen storage applications. 42,43 Examining the electronic structure of compounds allows us to better comprehend their solid state, and several major characteristics of materials can be better understood through such studies. Both the band structure and the total density of electronic states are important in the study of solids. The band structures of the NaXH3 (X = Be, Mg, Ca, Sr) cubic phases have been calculated along the high-symmetry directions of the first Brillouin zone with the WC-GGA and WC-GGA+mBJ exchange-potential methods. Fig. 4(a–d) display the band structures of NaXH3. We notice that the minima of the conduction band and the maxima of the valence band are not situated at the same symmetry point; therefore, these materials show an indirect band gap transition with both correlation potentials. NaBeH3 has a Γ–R gap while NaMgH3, NaCaH3 and NaSrH3 have X–M gaps, which confirms the semiconducting nature of our materials. The calculated band gaps for all the considered materials are given in Table 2. Using the WC-GGA approximation, the Eg values for the NaBeH3, NaCaH3 and NaSrH3 compounds are found to be 0.71 eV, 1.28 eV and 1.68 eV, respectively, while NaMgH3 shows metallic behavior with WC-GGA. The WC-GGA+mBJ approach, however, changes the predicted character of NaMgH3 from metallic to semiconducting, with a band gap value of 1.14 eV. Similarly, the improved band gaps of NaBeH3, NaCaH3 and NaSrH3 are 2.79 eV, 2.90 eV and 3.44 eV, respectively. The electronic band gap has a major impact on the capacity to store hydrogen. To permit ideal electron transport between the material and hydrogen from the valence to the conduction band, a modest band gap that increases the binding energy is required, as in our computed data for NaBeH3, explaining its better hydrogen uptake of up to 8.6%. Additionally, such a band gap enhances electrical conductivity, which promotes the passage of hydrogen, and it helps predict phase stability during the absorption and desorption of hydrogen. The density of states (DOS) is used to scrutinize how the electronic band structure is affected by atomic exchange and relaxation through the distribution of energy levels. By examining the DOS, one can discern whether a material exhibits metallic or semiconducting behaviour. The partial density of states (P-DOS) gives insight into the role of the electron orbitals in the conduction and valence bands.
The DOS for each material were examined using WC-GGA+mBJ. The total density of states (T-DOS) and P-DOS for the hydride perovskite NaXH3 materials are displayed in Fig. 5–8. Strong hybridization between hydrogen, sodium and the X atom (X = Be, Mg, Ca, Sr) is found, as expected. This interaction of hydrogen with the other atoms increases the stability of the compounds, as shown in the E–V plots. Fig. 5(a) shows the T-DOS of NaBeH3 (red line) and reveals the primary influence of the individual atoms (Na, Be and H) in both the valence band (VB) and conduction band (CB). The P-DOS of NaBeH3 in Fig. 5(b) shows that the VB and CB can be attributed to the s and p-orbitals of the Be-atom, while the VB can also be attributed to the s-orbital of the H-atom. Fig. 6(a) represents the T-DOS of NaCaH3. The CB can be attributed to the dominant peak of the Na-atom (blue peak); the T-DOS is depicted by the red peaks. Fig. 6(b) shows the P-DOS of NaCaH3, where the s-orbital of the H-atom and the p-orbital of the Ca-atom have the major influence on the VB (violet and magenta peaks), while in the CB the s-orbital of the Na-atom plays the vital role. The T-DOS of NaMgH3 is presented in Fig. 7(a), where the significant contributions of the H-atom in the VB and the Na-atom in the CB are shown by the maroon and blue peaks, respectively. The P-DOS of NaMgH3 is presented in Fig. 7(b), where the s-orbital of the H-atom makes the major contribution to the VB, and the CB can be attributed to the s and p-orbitals of the Na-atom. Fig. 8(a) shows the T-DOS of NaSrH3, with the major contribution of the H-atom to the VB (maroon peak) and of the Na-atom to the CB (blue peak), while the red peak indicates the T-DOS. Fig. 8(b) shows that the P-DOS of NaSrH3 has significant contributions from the s-orbital of the H-atom and the p-orbital of the Sr-atom in the VB, while the CB is contributed by the p-orbitals of the Na and Sr-atoms. The DOS is significant for hydrogen storage because it indicates the availability of electronic states suitable for hydrogen binding. A high DOS near the Fermi level suggests that a material has more binding sites, which boosts its capacity to absorb hydrogen; in the present work NaBeH3 has the highest DOS. Furthermore, the DOS structure affects the energy levels associated with hydrogen interactions, which in turn affects the binding energies. An optimal DOS promotes efficient charge transfer, which stabilizes the hydride phases during absorption and desorption. Overall, a favorable DOS contributes to improved reaction kinetics and stability, making it important for efficient hydrogen storage. 44 Thermoelectric properties play a crucial role in heat transfer, and such properties have diverse applications in storing energy effectively and providing solutions to various problems. Thermoelectric materials are especially valuable in thermoelectric devices as they can convert waste heat into usable electrical energy; by employing thermoelectric devices to harness and utilize wasted engine heat, significant cost savings can be achieved. The use of thermoelectric materials therefore holds promise in addressing society's energy challenges. Examples of thermoelectric properties include the Seebeck coefficient, electrical conductivity and thermal conductivity. The thermopower measures the voltage developed between a hotter junction and a colder one. The Seebeck coefficient varies with temperature, decreasing with rising temperature within the range of about 100 μV K−1 to +1000 μV K−1.
Thermoelectric applications require substances with high Seebeck values. The Seebeck coefficient reflects a material's ability to convert a temperature gradient into a thermoelectric voltage. The Seebeck coefficients of NaBeH3, NaCaH3 and NaSrH3 at 800 K are presented in Fig. 9(a). The Seebeck coefficient values for NaBeH3, NaMgH3, NaCaH3 and NaSrH3 are given in Tables 3 and 4 for p-type and n-type doping, respectively. From Tables 3 and 4 and Fig. 9(a), we see that NaSrH3 has the largest value at 800 K in the n-type region. Electrical conductivity describes how well a material conducts electric current and is determined as the ratio of the current density to the strength of the electric field. A thermoelectric material should possess high electrical conductivity so that the influence of Joule heating is kept to a minimum. Materials are categorized according to how well they conduct electricity; in contrast to insulators, conductors have a relatively high electrical conductivity. In a conductor the Fermi level lies near the conduction band, whereas in a semiconductor it is positioned between the valence band and the conduction band; in n-type semiconductors the Fermi level is nearer to the conduction band, while in p-type semiconductors it is nearer to the valence band. Temperature has a significant impact on electrical conductivity: it is zero at absolute zero but increases exponentially as the temperature rises. The calculated electrical conductivities of NaBeH3, NaMgH3, NaCaH3 and NaSrH3 at 800 K are shown in Fig. 9(b) and listed in Tables 3 and 4. The greatest electrical conductivity is found for NaBeH3 at 800 K in the n-type region. Thermal conductivity describes how readily a material conducts heat: good conductors, usually metals, let heat pass through them easily, whereas insulating materials such as Styrofoam or rock wool do not, and are therefore used in applications that need to stay cold or to block heat flow. The calculated thermal conductivities of NaBeH3, NaMgH3, NaCaH3 and NaSrH3 at 800 K are shown in Fig. 9(c) and presented in Tables 3 and 4. The term "figure of merit" is a general concept used in various fields to quantify and compare the performance of a system or device; in the study of thermoelectric materials it measures how well a material converts heat into electricity. It is calculated from eqn (4):

ZT = S²σT/κ (4)

where T is the temperature, κ is the material's thermal conductivity, σ is its electrical conductivity and S is its Seebeck coefficient. Put simply, a higher ZT value indicates superior thermoelectric performance, useful in devices that run on electricity for heating or cooling. Fig. 9(d) illustrates how well NaBeH3, NaMgH3, NaCaH3 and NaSrH3 perform at varying temperatures. Tables 3 and 4 list the observed values for NaXH3 (X = Be, Ca, Mg, Sr) at 800 K. NaSrH3 has the maximum value in the n-type region, 0.998.
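Eqn (4) is a one-line computation once S, σ and κ are known at a given temperature. The sketch below illustrates it with placeholder inputs; the numbers are illustrative only and are not the BoltzTraP output for any of the NaXH3 compounds.

```python
# Minimal sketch of the thermoelectric figure of merit, eqn (4):
# ZT = S^2 * sigma * T / kappa, evaluated in SI units.
def figure_of_merit(S_V_per_K: float, sigma_S_per_m: float,
                    kappa_W_per_m_K: float, T_K: float) -> float:
    return S_V_per_K**2 * sigma_S_per_m * T_K / kappa_W_per_m_K

# Illustrative values: S = 250 uV/K, sigma = 5e4 S/m,
# kappa = 2.5 W/(m K) at T = 800 K gives ZT = 1.0.
print(figure_of_merit(250e-6, 5e4, 2.5, 800.0))
```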
To quantify the thermoelectric performance of a material, one of the best parameters to consider is the power factor (PF), PF = σS², where S represents the Seebeck coefficient and σ the electrical conductivity of the material. The calculated power factors of NaBeH3, NaMgH3, NaCaH3 and NaSrH3 at 800 K are shown in Fig. 9(e) and presented in Tables 3 and 4. Fig. 9(e) further shows that n-type NaSrH3 has the largest PF value, making it favorable for thermoelectric applications. Among NaBeH3, NaCaH3, NaMgH3 and NaSrH3, NaSrH3 is the best candidate for thermoelectric applications because it has the highest PF and a high ZT under n-type doping. To obtain a high ZT value, a material must have a large Seebeck coefficient, high electrical conductivity, and low thermal conductivity. It is evident from Tables 3 and 4 that the figure of merit is close to unity; therefore, all of these materials could be used in thermoelectric power generators. 45 The thermoelectric behavior of materials, including conductivity, energy band gaps, power factor and figure of merit, is essential for promoting the absorption and desorption of hydrogen. Favorable thermoelectric characteristics may also help with heat management during the hydrogen cycle, which would boost storage efficiency further. Furthermore, improving the interplay of electrical conductivity, thermal conductivity and the Seebeck coefficient can result in sophisticated materials that excel at both hydrogen storage and thermoelectric conversion. This integration promotes the development of sustainable energy solutions by increasing energy efficiency in hydrogen-based systems. The elastic behavior of a lattice is governed by parameters such as the strain-dependent matrix of second-order elastic constants (Cij), the equilibrium volume and the crystal energy. The elastic stiffness tensor for the NaXH3 (X = Be, Mg, Ca, Sr) compounds, which display the symmetry features of the Pm3̄m space group, is composed of three separate components, denoted in Voigt notation as C11, C12, and C44. 46,47 The calculated values of C11, C12 and C44 for the NaXH3 (X = Be, Mg, Ca, Sr) perovskites are listed in Table 5. It is evident from the table that our calculated elastic constants fulfil the Born stability criteria, commonly known as the mechanical stability conditions. 48,49 The sign of Cauchy's pressure (Cp = C12 − C44) can be used to evaluate whether a material exhibits ductile or brittle behavior. The data presented in Table 5 indicate that NaXH3 (X = Be, Mg, Ca, Sr) display brittle behaviour. The bulk modulus (B), shear modulus (G), Young's modulus (E), Poisson's ratio (ν) and Pugh's ratio (B/G) characterize the mechanical properties of the materials; they were calculated from the elastic constants (Cij) using the equations given in ref. 50 and 51 and are shown in Table 6. The bulk modulus of a material indicates its resistance to volume change under pressure, and the shear modulus indicates its resistance to shape change under shear stress; the expansion and deformation of materials are thus governed by their bulk and shear moduli. Young's modulus is a measure of a material's stiffness, given by the ratio of the applied stress to the resulting strain.
Table 6 shows that, in comparison with the other materials, NaBeH3 has the largest bulk modulus B. The next column of results in the table gives the shear modulus G; the compound with the greatest value of G was also found to be NaBeH3. The values of B and G were used to calculate the Young's modulus E, and in our calculations the high Young's modulus of NaBeH3 indicates that it is stiffer than the other materials. 52,53 Materials that can deform considerably without breaking have a B/G ratio of more than 1.75, which indicates ductile behaviour. As Poisson's ratio approaches 0.5, a material becomes increasingly difficult to compress, and at ν = 0.5 it is almost incompressible. Our estimated values of ν range from 0.05 to 0.06, which, compared with materials that change shape easily, suggests that NaSrH3 is hard to compress. Table 6 also provides the obtained anisotropy factors (A) for cubic NaXH3. The values of A deviate significantly from unity, indicating that these cubic NaXH3 materials are anisotropic. Micro-hardness (H) is a measure of a material's resistance to localized compression. According to the results in Table 6, NaBeH3 has a higher micro-hardness than the other materials, meaning that it is more resistant to compression by small indenters. 54 In summary, we considered the structural properties, formation energies, hydrogen storage, electronic, thermoelectric and elastic properties of the NaXH3 (X = Be, Ca, Mg, and Sr) cubic hydride perovskites. The formation energies and elastic stability criteria confirm their stability. The band gaps of the materials are increased by applying the mBJ correction. For NaXH3 (X = Be, Mg, Ca, and Sr), the computed hydrogen storage capacities are 8.6, 6.0, 4.6, and 2.6 wt%, respectively; therefore, NaBeH3, NaMgH3, and NaCaH3 are considered potential materials for hydrogen storage applications. For NaSrH3, the power factor at negative chemical potentials is about 25–30% higher than at positive chemical potentials, which indicates that n-type doping is more efficient and that the material could be suitable for thermoelectric applications. The authors confirm that the data supporting the findings of this study are available within the article. There are no conflicts to declare.
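To make the cubic stability check and the derived moduli concrete, the sketch below implements the Born criteria for a cubic lattice together with the standard Voigt-average moduli; the input constants are placeholders, not the Table 5 entries.

```python
# Minimal sketch: Born mechanical-stability criteria for a cubic
# crystal and Voigt-average moduli from C11, C12 and C44 (in GPa).
def born_stable_cubic(c11: float, c12: float, c44: float) -> bool:
    return (c11 - c12 > 0) and (c11 + 2 * c12 > 0) and (c44 > 0)

def voigt_moduli(c11: float, c12: float, c44: float):
    B = (c11 + 2 * c12) / 3                    # bulk modulus
    G = (c11 - c12 + 3 * c44) / 5              # shear modulus (Voigt)
    E = 9 * B * G / (3 * B + G)                # Young's modulus
    nu = (3 * B - 2 * G) / (2 * (3 * B + G))   # Poisson's ratio
    return B, G, E, nu

c11, c12, c44 = 80.0, 10.0, 30.0               # illustrative values only
print(born_stable_cubic(c11, c12, c44))        # True
B, G, E, nu = voigt_moduli(c11, c12, c44)
print(f"B={B:.1f}  G={G:.1f}  E={E:.1f}  nu={nu:.3f}  B/G={B/G:.2f}")
```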
PMC11694720
Biodegradable polyesters such as poly(ε-caprolactone) (PCL) are of great interest due to their biocompatibility and biodegradability. 1 PCL can be prepared by ring-opening polymerization (ROP) of ε-caprolactone (CL) using an alcohol (R-OH) 2–4 [or diol (HO-R-OH)] 5–9 as an initiator. The alkyl substituent becomes a terminal group, leading to α-hydroxyl-ω-alkyl-PCL (HO-PCL-R). The same approach also applies to other monomers such as δ-valerolactone (δ-VL), 2 glycolide (GA), 4 and l-lactide (l-LA). 4,10,11 The homopolymer of CL is often used in biomedicine due to its hydrophobic character, for instance in biodegradable stents 12 in soft tissues and in drug delivery. 13 There is growing academic and industrial interest in biodegradable antimicrobial oligomeric polyesters, but few studies have been carried out on oligomers based on PCL. 14 In recent years, however, efforts have been made to isolate monodisperse species, including those of PCL. Stepwise synthesis methods (linear and exponential growth) have been very successful in producing such oligomers, but they suffer from reduced separation for higher-molecular-weight oligomers and a decreased statistical yield of any one oligomer length at a higher degree of polymerization (DP). Furthermore, their multi-step nature is inefficient for preparing large libraries of discrete oligomers and can be challenging. For such purposes, separation methods such as flash column chromatography (FCC) are used to isolate individual oligomers from disperse materials. 2,15–19 Terpenes are the most abundant family of natural organic products and are the main constituents of essential oils. They have a skeleton built from isoprene units, often bearing oxygen functionalities, and they are produced and secreted by specialized plant tissues. 20 Terpenes may present various organic functional groups, such as alcohols, ketones, ethers, esters, and aldehydes, 21 and they have biological activities related to their functional groups, arrangements, and structures. 22 These natural hydrocarbons are made up of five-carbon isoprene units with different configurations and various degrees of unsaturation, oxidation, functionalization, and ring formation. 23,24 For example, nerol and geraniol are aliphatic monoterpenes (C10H18O) produced by the combination of two isoprene units (C5H8) and bearing an alcohol functional group, while farnesol (C15H26O) is a sesquiterpene alcohol with three isoprene units. Nerol is extracted from Damask rose (Rosa damascena), Lavandula stoechas, Lavandula multifida, and lemongrass (Cymbopogon citratus). 25 β-Citronellol is part of several volatile oils in plants of the genus Cymbopogon (such as Cymbopogon nardus). 26–28 Geraniol is part of various volatile oils and a principal product of Cymbopogon martinii (66.2–76.9%), 29 Pelargonium graveolens (21.08%), 30 R. damascena (18.7–21.2%), Rosa centifolia (7.4–11.3%), 31 and Cymbopogon nardus (22.77%). 32 This phytoconstituent has been reported to have biological and pharmacological properties, such as antibacterial activity. 33,34 The mechanism of this activity is based on its lipophilic character and is explained by its ability to adhere to the membrane lipids of the microbial cell; this makes the membrane more permeable and allows the compound to interact with the organism's components, bind essential intracellular sites, and thus destroy its structures. 33,35 These terpene compounds exhibit good antimicrobial activity against Staphylococcus aureus and Escherichia coli. 23,36–39
One of the main drawbacks of PCL is its lack of functional groups with antimicrobial activity, which is why efforts have been made to synthesize biodegradable antimicrobial PCL. 40–54 To improve the antibacterial behavior of PCL, natural compounds have been incorporated as additives, such as resveratrol, 55 polyhexamethylene guanidine derivatives, 56 and essential oils such as cinnamaldehyde and allyl isothiocyanate. 57 Jummes et al. synthesized PCL nanoparticles entrapping palmarosa essential oil and its major compound, geraniol, which showed antimicrobial activity against E. coli and S. aureus. 58 PCL is a biodegradable polyester whose oligomers can be recognized as a carbon source or degraded by the enzymes of different microorganisms. Thus, we have been examining the fusion of two different types of chemical species: biodegradable polyesters, such as PCL, and organic molecules with biological properties, such as terpenes. Some of these terpenes, such as nerol, geraniol, β-citronellol, and farnesol, have an alcohol functional group and can act as initiators in the ROP of CL. We examined the role of terpenic molecules as terminal groups in monodisperse oligoesters derived from PCL, as well as the effect of the end-group on the physical properties of the monodisperse oligomer. We also examined whether a monodisperse oligomer with a terpenic end-group can retain the biological properties of its terpene precursor, and whether such oligomers can act as a "trojan horse" against bacteria. We report the synthesis, isolation, characterization, and antimicrobial evaluation of monodisperse oligomeric species derived from PCL and functionalized with terpene alcohols that have antibacterial activity (nerol, geraniol, β-citronellol, and farnesol). The terpenes were inserted as end-groups by ROP of CL. Monodisperse oligomeric species from monomer to trimer were isolated by FCC and analyzed by a range of characterization techniques to examine their chemical nature and the effect of DP on the physical properties of the oligomers. Additionally, we compared the differences between the terpene farnesol (C15) and aliphatic 1-pentadecanol (C15) as terminal groups of monodisperse PCL. Microbiological tests were also carried out using the Gram-positive bacterium S. aureus and the Gram-negative Pseudomonas aeruginosa. All reagents, ε-caprolactone (CL), nerol, geraniol, β-citronellol, farnesol, 1-pentadecanol, ammonium decamolybdate [(NH4)8(Mo10O34)] and deuterated chloroform (CDCl3), were purchased from Sigma Aldrich Co. (St Louis, MO, USA) and used as received. Thin-layer chromatography (TLC) was performed on precoated silica gel plates using the Seebach staining reagent. Flash column chromatography (FCC) was conducted using 230–400 mesh silica gel, with toluene and ethyl acetate as the mobile phase. For the antibacterial assays, strains of Staphylococcus aureus #6538 and Pseudomonas aeruginosa #13388 from the American Type Culture Collection (ATCC) were grown in Mueller Hinton broth (BD Bioxon). Nuclear magnetic resonance (NMR) spectroscopy: solution-state 1H and 13C spectra were recorded at room temperature or above on a 500 MHz Bruker Avance III HD instrument, using CDCl3 as the solvent. Chemical shifts are reported as δ in parts per million (ppm) and referenced to the chemical shift of the residual solvent (13C at δ 77.16 and 1H at δ 7.26 for CDCl3).
FT-IR spectra were recorded on a PerkinElmer Spectrum 100 FTIR spectrophotometer with an attenuated total reflectance (ATR) accessory. Size exclusion chromatography (SEC): all polyester samples were dissolved in THF (5 mg/5 mL) by heating at 37 °C for one hour and filtered through a 0.45 μm Acrodisc®. The SEC instrument (Agilent) was equipped with a refractive index detector. Measurements were performed on a single PLgel 5 μm Mixed-D column (Agilent) at a flow rate of 1.0 mL min−1 with HPLC-grade THF. Polystyrene standards (Polymer Laboratories) were used for calibration. Differential scanning calorimetry (DSC): thermograms were recorded on two instruments, the first a DSC Q200 V24.11 Build 124 instrument with an intracooler at −30 °C, and the second a similar instrument with an intracooler at −90 °C. For the oligomers, three scans were obtained: two heating scans (25 to 170 °C and −30 to 170 °C) with one cooling scan (170 to −30 °C) between them. For the monodisperse species, three scans were obtained: two heating scans (25 to 80 °C and −85 to 75 °C) with one cooling scan (80 to −85 °C) between them. The heating/cooling rate was 10 °C min−1 under a nitrogen purge. The glass transition temperature (Tg) is given as an inflection point, the melting points (Tm) are given as the minimum of the endothermic transition, and the data presented are taken from the second heating scan. Electrospray ionization quadrupole time-of-flight mass spectrometry (ESI/MS-QTOF) was performed in positive ionization mode on a Waters Synapt G1 instrument. The carrier gas was nitrogen at a flow rate of 1.5 mL min−1. The injected sample volume was 250 μL. The injector temperature was held at 120 °C and the transfer line at 300 °C. The mass spectrometer was operated at 2 V ionization energy, and the spectra were recorded in scan mode over the range 50–1200 m/z. The spectra were visualized in MassLynx V4.1 (Waters), and the ions produced were inspected. Gas chromatography-mass spectrometry (GC-MS) was performed using an Agilent model 6850 gas chromatograph coupled to a 5973N single-quadrupole mass spectrometer (Agilent Technologies, Palo Alto, CA, USA). Chromatographic separation was performed on an Agilent HP-5 capillary column (30 m × 20 mm × 0.25 μm). The carrier gas was helium at a flow rate of 1 mL min−1. The injected sample volume was 1 μL with a split ratio of 1:50. The oven temperature started at 50 °C for 1 minute and then increased to 250 °C at 10 °C min−1. The injector temperature was held at 150 °C and the transfer line at 250 °C. The mass spectrometer was operated at 70 eV ionization energy, and the spectra were recorded in scan mode over the range 100–280 m/z. Polarized optical microscopy (POM): micrographs were obtained using a Nikon ECLIPSE E200 optical microscope, and photographs were taken with an iPhone 13 mini. All images were collected at a magnification of 40×. Thermal decomposition characteristics of the monodisperse species were determined by thermogravimetric analysis (TGA), conducted on a METTLER TOLEDO TGA/DSC 2 (STARe SW 13.00) over the temperature range 35–595 °C at a heating rate of 10 °C min−1 under a nitrogen flow of 40 mL min−1. Polymerization was performed in a previously dried 25 mL round-bottom flask.
ε-Caprolactone (CL), ammonium heptamolybdate tetrahydrate (NH4)6[Mo7O24]·4H2O (Hep, 1.21 × 10−3 mmol, 1.5 mg), and an initiator were charged and heated in an aluminum block at 150 °C for 1.5 h. In situ thermal decomposition of ammonium heptamolybdate (NH4)6[Mo7O24] yields ammonium decamolybdate (NH4)8[Mo10O34] in the solid state. 59 The oligo(CLs) obtained were analyzed without further purification (yield = 96–99%). The initiators were: geraniol (C10) (10 mmol, 1.54 g), nerol (C10) (10 mmol, 1.54 g), β-citronellol (C10) (10 mmol, 1.56 g), farnesol (C15) (10 mmol, 2.22 g), and 1-pentadecanol (10 mmol, 2.28 g) [CL/initiator = 1].

Isolation of oligomers by flash column chromatography: 600 mg of oligo(CL) (DPtheo = 1) was dissolved in the minimum volume of toluene (5 mL) and loaded onto a silica gel column with toluene as the mobile phase. The fraction of ethyl acetate was increased gradually (toluene/ethyl acetate = 90/10, 80/20, and 75/25). All fractions were analyzed by thin-layer chromatography (TLC mobile phase: toluene/ethyl acetate = 80/20), visualizing the spots with the Seebach staining reagent. The fractions were collected in test tubes, the solvent was evaporated on a rotary evaporator, and the resulting oil (or solid) was dried overnight under vacuum. Fractions: monomer C10C-CL1 [226.0 mg, wt% = 34.27%, mol% = 11.83% (mol% C10OH = 41.20%, mol% CL = 58.8%)], dimer C10C-CL2 [181.2 mg, wt% = 27.48%, mol% = 20.48% (mol% C10OH = 29.68%, mol% CL = 70.32%)], and trimer C10C-CL3 [71.1 mg, wt% = 10.78%, mol% = 67.69% (mol% C10OH = 22.89%, mol% CL = 77.11%)].

NMR data at room temperature: 1H NMR (500 MHz, CDCl3, ppm). C10C-CL1: δ 5.08 (t, 1H, [k, =CH–], C10), 4.10 (quintet, 2H, [f, –CH2–O–], C10), 3.65 (t, 2H, [a, –CH2–OH], CL1), 2.31 (t, 2H, [d, –CH2–CO–], CL1), 1.98 (quintet, 2H, [j, –CH2–CH=], C10), 1.68 (quintet, 3H, [l, CH3–], C10), 1.65 (quintet, 2H, [i, –CH2–CH2–], C10), 1.64 (quintet, 2H, [b, –CH2–CH2–], CL), 1.60 (quintet, 3H, [n, CH3–], C10), 1.40 (quintet, 2H, [g, –CH2–(CH)–], C10), 1.33 (quintet, 2H, [c, –CH2–CH2–], CL1), 1.19 (quintet, 1H, [h, –CH–(CH3)–CH2–], C10), 0.91 (quintet, 1H, [m, CH3–(CH)–], C10). FT-IR: C10C-CL1.

The quantitative in vitro antibacterial activity of the monodisperse oligomeric species was evaluated by broth microdilution according to the CLSI M07 method (methods for dilution antimicrobial susceptibility tests for bacteria that grow aerobically) against inocula of Gram-positive S. aureus and Gram-negative P. aeruginosa. For this purpose, different amounts of each sample (4, 8, 16, 32, 64, 128, and 256 μg mL−1) were suspended in dimethyl sulfoxide (DMSO) to improve oligomer dispersion. Ciprofloxacin was used as the reference antimicrobial agent. Inocula of each microorganism were grown in MH broth for 16 h at 37 °C and adjusted to 1 × 10^5 colony-forming units per mL (CFU mL−1).
An equal volume (500 μL) of the bacteria/oligomer suspension was placed in a sterile Eppendorf tube and stirred at 37 °C for 24 h. After this interaction, an aliquot (50 μL) was plated on MH agar, and the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) were determined by comparison with a positive control (bacterial growth).

Oligomers were synthesized by ROP of CL using ammonium decamolybdate (NH4)8[Mo10O34] as a catalyst and alcohols (ROH) as initiators: prenol, nerol, geraniol, β-citronellol, farnesol, and 1-pentadecanol (Scheme 1). To examine the effect of the terminal group on the properties of the oligo(CLs), the feed molar ratio of CL to ROH was set to 1. The main idea was to obtain a blend of different types of oligomers, such as monomers, dimers, and trimers, after the polymerization reaction. Table 1 shows the results for the oligo(CLs) synthesized after a reaction time of 1.5 h at 150 °C. These oligo(CLs) were successfully obtained with high conversion (96 to 99%). The experimental number-average molecular weights (Mn) were obtained by nuclear magnetic resonance (NMR) spectroscopy and size exclusion chromatography (SEC). The Mn(NMR) values were similar to those of Mn(calcd). On the other hand, Mn(SEC) was between 600 and 800 g mol−1, higher than Mn(calcd), and the Mn(NMR)/Mn(SEC) ratio was between 0.3 and 0.5. In previous studies, Mn(SEC) was found to overestimate the real Mn value, 59–61 and Mn(NMR) is usually more accurate than Mn(SEC) for oligomers. This effect is attributed to the difference in hydrodynamic radius between the polystyrene standards and the PCL samples. The experimental DP detected by NMR was similar to the feed molar ratio of CL/ROH, which indicates control of the DP. All oligo(CLs) exhibited high end-group contents (wt% = 56–66%).

Fig. 1 shows the 1H NMR spectrum of the oligo(CL) synthesized using nerol as the initiator (C10N-PCL). The spectrum showed characteristic peaks for the methylene attached to the hydroxyl group [a, –CH2OH, δ 3.63] and the methyl end-groups [k, –CH3, δ 1.75]. The repetitive units are attributed to the methylenes of the main chain of the polymer [e, –CH2–O–, δ 4.05 and d, –CH2–(C=O)–O–, δ 2.30], and the vinyl groups of the nerol [g′, j, –CH=C–, δ 5.34, 5.08] were clearly visible. Furthermore, signals corresponding to unreacted nerol were observed ([f, –CH2–, δ 4.08] and [g, –CH=C–, δ 5.43]). There was no evidence of oxidation of the olefinic groups (to epoxide or secondary alcohol). Another oligomer, obtained with farnesol as the initiator, showed the same pattern of PCL peaks except for additional signals assigned to the longer olefinic end-group.

Thermal properties such as the glass transition temperature (Tg), crystallization temperature (Tc), and melting temperature (Tm) were studied by differential scanning calorimetry (DSC) to examine the effect of olefinic and alkyl groups on the oligo(CLs) (Table 1). The oligo(CLs) produced using geraniol, β-citronellol, and farnesol as initiators did not show any transition in the range of −85 to 75 °C, which suggests that these samples were amorphous. This effect is due to the branched methyl and vinyl carbons, which favor steric hindrance and rigidity, inducing an amorphous domain in the oligomer chains.
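As a quick numeric check of the Mn comparison above, Mn(calcd) for a feed ratio of 1 is simply M(ROH) + 114.14 g/mol; dividing by the reported Mn(SEC) range reproduces ratios broadly consistent with the stated 0.3–0.5 (a sketch; the per-sample SEC values are not listed, so the 600–800 g/mol bounds are used for all initiators):

```python
# Check the reported Mn/Mn(SEC) ratio of ~0.3-0.5 for CL/ROH = 1 (a sketch).
M_CL = 114.14
initiators = {"nerol": 154.25, "geraniol": 154.25, "beta-citronellol": 156.27,
              "farnesol": 222.37, "1-pentadecanol": 228.42}

for name, m_roh in initiators.items():
    mn_calcd = m_roh + 1 * M_CL  # DP(feed) = 1
    lo, hi = mn_calcd / 800, mn_calcd / 600  # reported Mn(SEC) bounds, g/mol
    print(f"{name}: Mn(calcd) = {mn_calcd:.0f} g/mol, ratio = {lo:.2f}-{hi:.2f}")
# e.g. nerol: Mn(calcd) = 268 g/mol, ratio = 0.34-0.45
```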
In the case of the oligo(CL) with the olefinic nerol as the initiator (C10N-PCL, Tc = −48 °C, and Tm = −31, −5 °C), an unusual result was observed in comparison with the rest of the oligo(CLs) with olefinic terminal groups. Nerol and geraniol are geometric isomers with Z and E configurations, respectively. It is likely that the branched methyl near the alcohol in nerol is sterically less exposed than in geraniol. Consequently, when nerol becomes a terminal group in the oligo(CL), it causes less steric hindrance in the blend of oligo(CL) chains, which leads to a semicrystalline domain at low temperature (Tm = −31, −5 °C). In contrast, for an oligo(CL) with an alkyl terminal group (initiated with 1-pentadecanol, C15-1P-PCL), there was an intense endothermic transition with Tm values of 8, 16, 27, and 31 °C. Comparison between C15-1P-PCL (alkyl end-group) and C15F-PCL (olefinic end-group) showed different morphologies: semicrystalline and mainly amorphous, respectively. It is evident that an alkyl terminal group such as pentadecyl (C15, C15-1P-PCL), with its anti conformation, favors crystallization of the oligo(CL). On the other hand, the olefinic end-group (C15, C15F-PCL), with sp2 carbons and branched methyls, promotes amorphous domains.

The main purpose of the synthesis in this study was that the species examined could serve as precursors of monodisperse oligomers (monomers, dimers, and trimers). The monodisperse species were gradually isolated from the crude reaction mixture, guided by thin-layer chromatography (TLC). For this method, a mixture of toluene/ethyl acetate (Experimental section) was effective in separating the spots, and flash column chromatography (FCC) was then used to isolate the monodisperse oligomeric species derived from the oligo(CLs) (Table 2). In the case of the oligo(CL) functionalized with the olefinic farnesyl end-group (C15F-PCL; Table 1), a family of three different monodisperse oligomers was isolated after FCC: monomer (C15F-CL1), dimer (C15F-CL2), and trimer (C15F-CL3). Different characterization techniques were used to establish the chemical nature of C15F-CL1. Fig. 3a shows the 1H NMR spectrum of C15F-CL1, which exhibits characteristic peaks of both terminal groups, such as the methylene adjacent to the hydroxyl group [a, –CH2–OH, δ 3.64] and the methines of the vinyl groups in the farnesyl group [g, j, m, –(CH3)C=CH–, δ 5.33, 5.09]. The signals attributed to two of the methines appear as an overlap of two triplets (j and m), and the relative ratio of j and m with respect to the third methine (g) is 2:1. A ratio of 2:2 was obtained for the two signals assigned to the methylenes attached to the hydroxyl group (a) and the ester terminal group [f, –CH2–O–(C=O)–, δ 4.58]. Thus, the integral values of both end-groups confirm C15F-CL1 as a monomeric species. Additionally, the 1H NMR spectrum of the monodisperse species C15F-CL1 lacks the characteristic signal of the ester group [e, –CH2–O–(C=O)–, δ 4.05] present in the repetitive unit of a typical oligomer such as C15F-PCL.
On the other hand, as shown in Fig. 3b, the Fourier-transform infrared (FTIR) spectrum of C15F-CL1 contained a band at 3454 cm−1 attributed to the hydroxyl group, a band characteristic of the ester carbonyl (C=O) at 1728 cm−1, another stretching vibration at 1160 cm−1 corresponding to the ester group [–(C=O)–O–], and a band at 955 cm−1 attributable to the olefinic (C=CH) group vibration. The NMR and FTIR results suggest that C15F-CL1 is a monodisperse species. To confirm this, mass spectrometry analysis was carried out. Fig. 4 (top, left) shows the electrospray ionization quadrupole time-of-flight mass spectrometry (ESI/MS-QTOF) spectrum of the monodisperse monomer C15F-CL1 (Table 2) in positive mode after doping with sodium (Na+). The results confirmed the expected molecular weight: compared with the simulated spectrum, the difference was less than 0.30 g mol−1, indicating the presence of this monodisperse species. Other monodisperse monomers, initiated by nerol (C10N-CL1) and β-citronellol (C10C-CL1) and isolated using FCC, were analyzed by electron impact (EI) mass spectrometry. In the mass spectrum in Fig. 5a, the molecular ion of C10N-CL1 at m/z 268 was observed, together with a peak at m/z 250 corresponding to [M − 18] from loss of water, as well as a peak at m/z 222 resulting from allylic cleavage of the m/z 250 ion. The structure rationalized for the peak at m/z 207 is formed by double-bond isomerization, resulting in increased conjugation. Similarly, the EI mass spectrum confirmed the monodisperse nature of C10C-CL1. The peak at m/z 252 corresponds to the dehydration product, and the peak at m/z 207 is formed by dehydrogenation, which increases the conjugation of the system, together with partial loss of the alkyl chain on the caprolactone side. Finally, the peak at m/z 155 corresponds to citronellol minus one proton.
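The EI-MS assignments above can be checked with simple mass arithmetic: ring-opening adds one intact CL unit (114.14 u) to the terpene alcohol, and dehydration removes 18.02 u. A sketch using average masses (EI peaks are nominal/monoisotopic, so this is only a rough consistency check):

```python
# Rough mass check for the EI-MS assignments (average masses; a sketch).
M_CL, M_H2O, M_H = 114.14, 18.02, 1.008

species = {
    "C10N-CL1 (nerol + 1 CL)": 154.25 + M_CL,
    "C10C-CL1 (citronellol + 1 CL)": 156.27 + M_CL,
}
for name, m in species.items():
    print(f"{name}: M = {m:.1f}, [M - H2O] = {m - M_H2O:.1f}")
# C10N-CL1: M = 268.4, [M - H2O] = 250.4   (observed m/z 268 and 250)
# C10C-CL1: M = 270.4, [M - H2O] = 252.4   (observed dehydration peak m/z 252)

print(f"citronellol - H: {156.27 - M_H:.1f}")  # 155.3 (observed m/z 155)
```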
Another interesting species in the DP sequence is the dimer derived from β-citronellol and two CL units, called C10C-CL2. Fig. 6 shows the 1H NMR spectrum of C10C-CL2 isolated from the oligomer synthesized with β-citronellol (C10C-PCL). The characteristic peaks show the presence of the dimeric species, such as the methylene adjacent to the hydroxyl group [a, –CH2–OH, δ 3.64], a signal for the vinyl group of the terpene [k, =CH–, δ 5.08], and a d signal composed of two triplets at the same chemical shift, indicating two α-methylenes in the dimer. Additionally, a new signal e is observed, indicating the presence of a second monomeric unit of CL; this signal is absent in the 1H NMR spectrum of the monodisperse monomer C15F-CL1 discussed above. The methylene signals a and d [–CH2–(C=O)–O–, δ 4.10] adjacent to the ester group had a relative ratio of 2:4, which is evidence of a dimeric species.

It is well known in organic and polymer chemistry that aliphatic and olefinic groups have different properties. Therefore, aliphatic and olefinic groups were compared as terminal groups in monodisperse species. The main idea was to contrast aliphatic and olefinic terminal groups with the same number of carbons (C15). Using a procedure similar to that described previously (Experimental section), a family of monodisperse species derived from CL and functionalized with an aliphatic pentadecyl (C15) terminal group was prepared and isolated (C15-1P-CL1, C15-1P-CL2, and C15-1P-CL3, Table 2). Fig. 7 shows the 1H NMR spectra of the monomer (C15-1P-CL1), dimer (C15-1P-CL2), and trimer (C15-1P-CL3) functionalized with 1-pentadecyl. There were characteristic peaks of the two terminal groups: the methyl group of the C15 moiety [i, CH3, δ 0.89] and the methylene adjacent to the hydroxyl group [a, CH2, δ 3.66]. The relative ratio of terminal groups i to a was 3:2. Additionally, the ratio between signal a and the methylenes close to the ester groups [f and e, CH2, δ 4.10] in the monomer, dimer, and trimer was 2:2, 2:4, and 2:6, respectively, confirming the monodisperse oligomers. The 13C NMR spectrum also confirmed both terminal groups of C15-1P-CL1: α-hydroxyl (δ 62.7 ppm, a) and ω-methyl (δ 22.6 ppm, n). Fig. 4 (top, right) shows the ESI/MS-QTOF spectrum used to validate the chemical nature of C15-1P-CL1 by another technique; the theoretical and experimental signals correspond to the monomer functionalized with a pentadecyl terminal group doped with sodium (Na+). Three different monodisperse species (monomer, dimer, and trimer) with an olefinic farnesyl end-group were also derived, isolated, and characterized; the monomer was illustrated in Fig. 3.

Table 2 shows the thermal properties of the monodisperse species. No melting point (Tm) was observed for any of the oligomeric monodisperse species derived from CL and olefinic terpenes (no. 1–12), indicating that these samples are amorphous, with a translucent liquid appearance and noticeable viscosity. However, a glass transition temperature (Tg) was observed across the entire family of oligoesters with olefinic end-groups, with a characteristic pattern from monomer to trimer. In the case of the oligomers with a C10 (geraniol) terminal group, Tg decreased from −38 °C (monomer) to −51 °C (trimer), indicating that the olefinic and methyl branches of the terminal group induce a disruption of order in the monodisperse oligo(CL) chain. This effect increased with the DP, with increasing DP producing an amorphous domain in combination with an olefinic end-group. The same effect was observed for the rest of the family of monodisperse oligo(CLs) with olefinic terminal groups (Table 2, no. 1–12). Monodisperse species with an aliphatic pentadecyl (C15) end-group formed a distinct family that exhibited a melting temperature (Tm): the monomer (C15-1P-CL1), dimer (C15-1P-CL2), and trimer (C15-1P-CL3) had Tm values of 34 to 43 °C. Thus, olefinic and aliphatic terminal groups had opposite effects on the PCL monodisperse oligomers. To illustrate both effects, Fig. 10 shows the DSC thermograms of the two families with different end-groups (aliphatic C15-1P-CL and olefinic C15F-CL). An amorphous domain was induced by the olefinic farnesyl end-group, and a semicrystalline domain was favored by the aliphatic pentadecyl terminal group. Both terminal groups produced monodisperse oligomers (monomer, dimer, and trimer) with the same physical state as their parent alcohols or initiators [farnesol (C15H26O, liquid) and 1-pentadecanol (C15H32O, solid)]. These results highlight the importance of the alcohols (R-OH) used as initiators and indicate that the hybridization (sp2, olefinic vs. sp3, aliphatic) and the substitution pattern (methyl-branched, olefinic vs. linear, aliphatic) are the key to the physical properties of the monodisperse monomer, dimer, and trimer species.
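The Tg and Tm conventions stated in the experimental section (Tg as an inflection point, Tm as the minimum of the endotherm, both read from the second heating scan) can be sketched numerically. The thermogram below is a hypothetical stand-in, not measured data:

```python
import numpy as np

# Sketch of extracting Tg and Tm from a DSC second-heating scan, following the
# conventions stated in the experimental section (not the authors' software).
# Hypothetical thermogram: a glass-transition step near -45 degC plus a
# melting endotherm (downward peak) near 38 degC.
temperature = np.linspace(-85, 75, 1601)
heat_flow = (-0.2 * np.tanh((temperature + 45.0) / 4.0)
             - 1.5 * np.exp(-((temperature - 38.0) / 3.0) ** 2))

# Tm: temperature at the minimum of the endothermic transition
tm = temperature[np.argmin(heat_flow)]

# Tg: inflection point of the step, i.e. the extremum of dQ/dT; in practice
# the search is restricted to a window away from the melting peak.
dq_dt = np.gradient(heat_flow, temperature)
window = temperature < 0
tg = temperature[window][np.argmin(dq_dt[window])]

print(f"Tg ~ {tg:.0f} degC, Tm ~ {tm:.0f} degC")  # Tg ~ -45 degC, Tm ~ 38 degC
```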
To illustrate the physical properties of the two monodisperse monomers with different terminal groups (C15F-CL1 and C15-1P-CL1), Fig. 11 shows POM micrographs obtained under different conditions. C15F-CL1 appears as an amorphous liquid, whereas C15-1P-CL1 exhibits a spherulite, revealing a semicrystalline domain. To examine thermal stability, the initiators were characterized by thermogravimetric analysis (TGA). The thermograms show differences in thermal stability between farnesol and 1-pentadecanol, with very different thermal decomposition temperatures (Td) of 296 and 250 °C, respectively. However, the monodisperse monomers functionalized with farnesyl (C15F-CL1) and pentadecyl (C15-1P-CL1) end-groups showed an increase in Td relative to their initiators. Thus, the addition of a caprolactone unit provides thermal stability toward pyrolysis.

Terpenes such as β-citronellol, geraniol, nerol, and farnesol have been studied for their antibacterial activities. 34,39,64–70 One of the goals of this work was to preserve the biological properties of the terpenes when they become terminal groups in monodisperse oligomeric species. First, a conventional antibiotic, ciprofloxacin, and a monodisperse oligomer, C10C-CL1, were compared using antibiograms. There were dramatic differences in antibacterial activity and effective concentrations between ciprofloxacin and C10C-CL1. C10C-CL1 inhibited the growth of S. aureus (Gram-positive), but the inhibition decreased at low concentration. The minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) were studied using the NCCLS method 71 and two types of bacteria: S. aureus (Gram-positive) and P. aeruginosa (Gram-negative) (Table 3). All the alcohols previously used as initiators in the polymerizations (β-citronellol, nerol, geraniol, farnesol, and 1-pentadecanol) showed antibacterial activity, as reported previously. 34,39,64–70 In the case of S. aureus, an MIC of 32 μg mL−1 was obtained for β-citronellol and farnesol; the same value was found for geraniol, nerol, and 1-pentadecanol against P. aeruginosa. The antimicrobial activity of terpenes against Gram-positive bacteria is influenced by their lipophilicity and hydrophobicity, as well as by the presence of hydroxyl groups. 70,72 The bactericidal behavior of the oligomers can be tuned through the incorporated terpene alcohols, which modulate the membrane stability of S. aureus or potentiate the loss of bacterial membrane integrity. 73 According to Lopez-Romero et al., 74 the bactericidal mechanisms of essential oil components such as β-citronellol include surface-charge alteration and K+ leakage, which promote the disruption of S. aureus membranes.

As shown in Table 3, the monodisperse oligomers functionalized with terpenes had antibacterial activity and preserved properties similar to those of their precursors. The initiator chain length (C10 or C15) and the degree of unsaturation appear largely independent of the bactericidal response, which is mainly attributable to the identity of the terpene acting as the terminal group. However, a higher bactericidal response was observed for specific species, including C10N-CL1 and C10C-CL2–3 against S. aureus, and C15-1P-CL1 against P. aeruginosa. This behavior indicates that the size of the monodisperse oligomers (mainly the monomers) plays a crucial role in the antibacterial properties, associated with the availability of the terminal terpene chain, which can induce ionic imbalance across the cell membrane and interfere with glycan synthesis, resulting in cell death. 75,76
For example, for series 1, the results with S. aureus showed that farnesol had half the MIC (32 μg mL−1) and MBC (64 μg mL−1) values of the monomer (C15F-CL1), dimer (C15F-CL2), and trimer (C15F-CL3). This indicates a decreased activity of the oligomers, probably due to the repetitive CL unit. In the case of P. aeruginosa, however, farnesol and its oligomers exhibited an MIC and MBC of 64 μg mL−1 and 128 μg mL−1, respectively; C15F-CL1, C15F-CL2, and C15F-CL3 exhibited the same antibacterial activity (MIC and MBC) against both S. aureus and P. aeruginosa. On the other hand, comparing the two geometric isomers as terminal groups in terms of MBC, that is, derivatives of nerol [Z isomer or cis, (C10N-CLx)] vs. geraniol [E isomer or trans, (C10G-CLx)] (Table 3), the effect on P. aeruginosa was negligible. With S. aureus, however, the case was dramatically different: the C10N-CLx monodisperse species showed a clear pattern of antibacterial activity (monomer > dimer > trimer), whereas the C10G-CLx species showed no significant effect. This result suggests that a Z (cis) isomeric end-group in C10N-CLx can induce a significant disruption, probably of the cell membrane of S. aureus, and that this effect is directly proportional to the weight percent (wt%) of nerol as a terminal group, from trimer (31%) to monomer (57%). The exploration of organic molecules with Z (cis) isomers as end-groups as a significant factor against S. aureus will be addressed in a future contribution.

Although the monodisperse species (Table 3) demonstrated antibacterial activity, there was a significant gap relative to the conventional antibiotic ciprofloxacin, which showed 512 times higher activity in terms of MIC for S. aureus; this is attributed to its direct inhibition of DNA gyrase and prevention of bacterial DNA replication. It is important to note that the 1-pentadecanol oligomers demonstrated only slight bactericidal behavior against S. aureus. However, the MIC value of C15-1P-CL1, with its pentadecyl end-group, against P. aeruginosa was 16 μg mL−1. The key point was the difference with P. aeruginosa, for which the MIC and MBC were low compared with those for S. aureus. These results suggest that an aliphatic terminal group tends to act more strongly as an antibiotic against a Gram-negative bacterium such as P. aeruginosa than against the Gram-positive bacterium S. aureus. Thus, the bacterial membrane composition (particularly the peptidoglycan) plays a key role in the oligomer interaction.

It is well known that molecules such as terpenes and aromatic compounds have been explored for their antifungal 77,78 and antibacterial 79 activity, for example, nerol, 77 citral, 78 trans-anethole 79 and estragole. 79 In these cases, the hydrophobicity inherent to the terpenes plays a significant role: their affinity for, and accumulation in, the cell membrane produces a loss of membrane integrity. 79 This phenomenon was detected as an increase in extracellular conductivity and extracellular pH, 77,78 indicating rapid leakage of ions; complementarily, scanning electron microscopy (SEM) revealed severe effects on the cell wall and cytoplasmic membrane. 79 In the case of the monodisperse oligomers (Table 3), the mechanism of antibacterial activity probably involves significant membrane disruption; however, further studies will be carried out in our laboratory to validate this damage.
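A compact sketch of how MIC and MBC are read out from the two-fold broth-microdilution series described in the experimental section; the growth observations below are hypothetical, chosen so the output reproduces the MIC = 32 and MBC = 64 μg mL−1 pattern reported for farnesol against S. aureus:

```python
# Reading MIC/MBC from a two-fold dilution series (hypothetical data; a sketch).
concentrations = [4, 8, 16, 32, 64, 128, 256]  # ug/mL, as used above

# visible growth in broth after 24 h (for MIC) and colonies on MH agar (for MBC)
growth_in_broth = {4: True, 8: True, 16: True, 32: False,
                   64: False, 128: False, 256: False}
colonies_on_agar = {4: True, 8: True, 16: True, 32: True,
                    64: False, 128: False, 256: False}

def lowest_inhibitory(conc, observed_growth):
    """Lowest concentration at which no growth was observed."""
    inhibitory = [c for c in conc if not observed_growth[c]]
    return min(inhibitory) if inhibitory else None

mic = lowest_inhibitory(concentrations, growth_in_broth)
mbc = lowest_inhibitory(concentrations, colonies_on_agar)
print(f"MIC = {mic} ug/mL, MBC = {mbc} ug/mL")  # MIC = 32, MBC = 64
```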
In this work, a series of oligomers derived from PCL, called oligo(CLs), were synthesized by ring-opening polymerization (ROP) of ε-caprolactone (CL) using terpenes as initiators. The oligo(CLs) had specific terminal groups derived from the terpenes. Using flash column chromatography (FCC), a family of fifteen monodisperse species (monomers, dimers, and trimers) was isolated. The thermal properties of the monodisperse species derived from olefinic terpenes included a single glass transition temperature (Tg), which decreased from the monomer to the trimer as the amorphous domain increased. In contrast, the monodisperse oligomers derived from aliphatic pentadecyl exhibited a semicrystalline domain with a characteristic melting temperature (Tm). There were remarkable differences in physical properties between the monodisperse oligomeric species with farnesyl and pentadecyl end-groups, which have the same number of carbons (C15) but different functionality: farnesyl produced liquid oligomers, while pentadecyl produced semicrystalline powders. This is the first report of the use of a family of terpenes to functionalize monodisperse oligomeric species. Owing to the antibacterial properties of the terpenes used as initiators, the monodisperse species showed antibacterial activity, especially against Gram-positive S. aureus. These monodisperse species could lead to new antibiotic compounds with potential applications. Studies on the mechanisms of action are currently underway in our laboratory.

The data supporting this article have been included as part of the ESI.† María Guadalupe Ortiz-Aldaco: investigation, validation, formal analysis, writing – original draft. Miriam Estévez: funding acquisition. Beatriz Liliana España-Sánchez: biological properties, supervision. José Bonilla-Cruz: thermal properties. Eloy Rodríguez-deLeón: mass spectrometry. José E. Báez: conceptualization, supervision, formal analysis, writing – original draft, writing – review, funding acquisition. There are no conflicts to declare.
| Study | biomedical | en | 0.999997 |
PMC11694954 |
The advent of endoscopic endonasal transsphenoidal surgery revolutionized the treatment of sellar and parasellar tumors, offering a minimally invasive approach with enhanced visualization and reduced morbidity compared to traditional techniques . Despite its numerous advantages, this surgery harbors inherent risks, particularly concerning olfactory function, owing to the obligatory resection of nasal structures . The olfactory neuroepithelium, located in the superior portion of the nasal vault, is integral to the sense of smell , and its proximity to the surgical field makes it susceptible to iatrogenic injury during such procedures. Preservation of the olfactory strip is crucial to maintaining olfactory function during surgical procedures . Studies have documented the impact of endoscopic endonasal surgery on olfactory outcomes , prompting a shift towards the adoption of minimally invasive strategies aimed at preserving olfaction [6–8]. Despite efforts by several groups to minimize the impact on olfactory function through various techniques, the precise effects of structural changes within the nasal cavity on postoperative olfaction remain largely unexplored. While a cadaveric study suggested that posterior septectomy influences sinonasal quality of life, this finding requires clinical validation . Therefore, we quantified the extent of posterior septectomy in patients and correlated it with changes in olfactory function between before and after surgery.

This retrospective investigation assessed the outcomes of patients who underwent endoscopic endonasal transsphenoidal surgery from March 2010 through December 2022. The study was approved by the Institutional Review Board of Seoul Saint Mary's Hospital , which waived the need for informed consent from patients given the retrospective study design. Sinonasal functionality was evaluated using a suite of tests: the Connecticut Chemosensory Clinical Research Center (CCCRC) test , the Cross-Cultural Smell Identification Test (CCSIT) , the Sino-Nasal Outcome Test-22 (SNOT-22) , and a visual analog scale (VAS), administered before surgery and 6 months after the procedure. Changes in sinonasal outcomes were determined by subtracting the preoperative scores from the postoperative ones, with CCCRC outcomes averaged across both sides. The research team consisted of two independent neurosurgeons and two rhinology experts, and the study focused exclusively on pituitary adenomas to minimize the variability associated with specific surgical techniques. All interventions used a bilateral transnasal approach, incorporating a modified nasoseptal rescue flap, and involved partial resection of the posterior nasal septum, including the perpendicular plate of the ethmoid bone, the vomer, and the anterior wall of the sphenoid sinus .

The resected septal area was quantified using three-dimensional reconstruction of CT scans of the bony nasal septum with Mimics Base software (ver. 22, Materialise, Leuven, Belgium). Following a methodology similar to that of Kim et al., the measurement process began with identifying the axial image showing the most prominent septal defect . Boundary delineation utilized the differential brightness between subcutaneous tissue and cartilage, which typically show a mean difference of 50 Hounsfield units. In the axial view, the margin along the defect was marked with dots to create an initial boundary. When this marked area was viewed in the sagittal reconstructed image , it initially included the sphenoid sinus cavity.
To accurately measure only the surgically resected septal area, the sphenoid sinus cavity was excluded by referring to the patient's preoperative CT. The posterior boundary for measurements was defined by the preoperative anterior wall of the sphenoid sinus, as inferred from the bone contour remaining after surgery. This step was critical because the standard surgical technique typically involves resecting approximately 1–2 cm of the posterior septum measured from the anterior sphenoid wall . The anterior margin of the septectomy area was further delineated along the midline of the septal air-soft tissue interface. The final three-dimensionally reconstructed area was verified against the sagittal image to ensure accurate representation of the septectomy site. The total septectomy area was calculated by averaging the measurements from both sagittal surfaces of the reconstructed area, providing a more reliable estimate of the actual resection size. Statistical comparisons were made using Student's t-test, with all analyses performed in IBM SPSS Statistics (ver. 24.0, SPSS, Chicago, IL, USA). A p-value of less than 0.05 was considered statistically significant.

Table 1 summarizes the baseline preoperative characteristics of the 295 patients. A nasoseptal flap was used to repair cerebrospinal fluid leaks in 7.8% of cases, which is within the reported range . Baseline values of the olfactory function tests and questionnaire scores were also within the normal ranges [10–12, 18]. Table 2 shows a significant correlation between the extent of the resected posterior septal area and the reduction in CCSIT scores. The CCCRC results showed a similar trend but did not reach statistical significance. Regarding subjective symptoms, patients who underwent more extensive posterior septectomies reported significantly greater discomfort, quantified by elevated SNOT-22 scores. Notably, a larger decrease in olfactory function, assessed with the VAS, was significantly associated with an increased area of resected posterior septum.

Olfactory function is an essential postsurgical sinonasal outcome, and its significance is often underestimated . Postoperative olfactory dysfunction significantly reduces a patient's quality of life, manifesting as an inability to detect spoiled foods, gas leaks, or smoke, along with reduced enjoyment of culinary activities . The olfactory neuroepithelium, which houses the sensory receptors crucial for olfaction, is located in the upper nasal vault along the cribriform plate, superior turbinate, and superior septum . In the transnasal transsphenoidal approach, the posterior segment of the septum is resected to ensure sufficient access to the sellar region . Despite concerted efforts to conserve this critical area [6–8], the integrity of the neuroepithelium is often compromised while reaching the sellar floor via transsphenoidal approaches, leading to potential olfactory impairment. Previous studies have documented marked declines in both CCCRC and CCSIT scores following endoscopic endonasal transsphenoidal procedures . Similarly, Tam et al. observed a significant reduction in University of Pennsylvania Smell Identification Test scores after surgery, irrespective of the use of a septal flap . Despite these insights, no quantitative analysis has explored the direct relationship between specific anatomical changes and the extent of postoperative olfactory dysfunction.
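A minimal sketch of the outcome analysis described above. The dataframe columns are hypothetical stand-ins, and Pearson correlation is our illustrative choice for the reported association between resected area and score changes (the article does not name the correlation statistic):

```python
import pandas as pd
from scipy import stats

# Hypothetical patient-level table; column names are illustrative stand-ins.
df = pd.DataFrame({
    "septectomy_area_mm2": [210, 315, 180, 420, 260],
    "ccsit_pre":  [10, 11, 9, 12, 10],
    "ccsit_post": [9, 9, 9, 8, 9],
})

# Change score: postoperative minus preoperative, as defined in the methods
df["ccsit_change"] = df["ccsit_post"] - df["ccsit_pre"]

# Association between the resected area and the olfactory change score
r, p = stats.pearsonr(df["septectomy_area_mm2"], df["ccsit_change"])
print(f"r = {r:.2f}, p = {p:.3f}")  # negative r: larger resection, larger decline
```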
Therefore, we examined whether extensive manipulation of the posterosuperior nasal septum exacerbates olfactory dysfunction. Using advanced image-reconstruction techniques, we precisely quantified the resected septal area. This revealed a correlation between the magnitude of septal resection and both the severity of olfactory complaints and the deterioration of objective olfactory assessments. The dimensions of the posterosuperior septal defect are crucial for regulating airflow within the nasal cavity . The surgical removal of anatomical structures induces turbulent airflow, leading to nasal dryness and crust formation . Airflow turbulence in the uppermost nasal cavity impedes the stable retention of odorant molecules within the olfactory cleft, thereby curtailing their interaction with the sensory epithelium . Therefore, we hypothesized that the extent of posterior septectomy would directly affect postoperative olfactory outcomes. Our analysis validated this hypothesis, demonstrating that a larger resected area of the posterior septum is associated with a greater reduction in olfactory function after surgery. The observed correlation between extensive posterior septectomy and olfactory dysfunction may be attributable to significant changes in airflow dynamics around the olfactory cleft, a consequence of the integral role of the nasal septum in maintaining laminar airflow and, by extension, the equilibrium of the nasal cavity . Such disturbances are exacerbated by an increase in airflow velocity and the resultant dryness of the olfactory mucosa, which collectively impede the efficient interaction between sensory receptors and odorants that is crucial for olfactory perception . Further research should elucidate the precise mechanisms underlying this association.

In this study, we confirmed that a single factor, the degree of posterior septectomy, plays an important role in postoperative olfactory function. Nevertheless, this study had several limitations. First, the size and characteristics of the tumor inherently influence both the duration and intricacy of the surgical procedure, which in turn may affect postoperative olfactory outcomes. To mitigate this problem, we standardized the surgical technique and controlled variables by limiting the analysis to patients diagnosed with pituitary adenoma who underwent surgery. Second, the involvement of two neurosurgeons introduces potential variability in sinonasal outcomes due to differences in individual surgical expertise and approach. Another consideration is the influence of preoperative olfactory function, which can vary with patient age, potentially confounding the assessment of olfactory outcomes. The age factor is particularly relevant given that a case-control study of 60 pediatric patients showed no significant long-term differences in Sniffin' Sticks test results between patients with and without a history of endoscopic endonasal skull base surgery .

In the context of endoscopic endonasal skull base surgery, optimizing surgical visibility and instrument access while minimizing deformation of the sinonasal anatomy is paramount . This consideration is vital when selecting surgical strategies to mitigate postoperative olfactory impairment while maintaining the objectives of the surgical intervention. The preservation of olfactory function may be achieved through conservative septal resection techniques that prioritize maintaining septal structural integrity.
| Review | biomedical | en | 0.999997 |
PMC11694959 |
In March 2020, the World Health Organization declared a global pandemic of COVID-19, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) , prompting extensive mathematical modelling efforts to understand transmission dynamics and assess the global spread of the disease . Understanding the risk of multiple reinfections is crucial in light of waning immunity and the emergence of new variants, which could enhance SARS-CoV-2 spread through previously infected individuals. Prior research has laid the foundation for understanding SARS-CoV-2 reinfection dynamics. Wangari et al . examined reinfection transmission mechanisms using a compartmental model . Another study developed a model to validate a test-negative study design, which revealed that protection against reinfection was higher when the primary infection was caused by the Alpha variant compared to the Beta variant . Pulliam et al .'s catalytic model represents another pivotal development. Using line list data of all observed infections in South Africa, the model assumed that the reinfection hazard was proportional to the seven-day moving average of detected cases, with a constant hazard coefficient. By comparing projected with observed reinfections during the projection period, it assessed potential changes in the reinfection hazard coefficient. The study identified an increase in reinfection hazard during the Omicron wave in November 2021, providing the first epidemiological evidence of the Omicron variant's increased reinfection risk compared to previous variants . The noticeable increase in reinfections during South Africa's Omicron wave prompts the investigation of the risk of multiple reinfections (three or more infections). Our study generalises the model developed by Pulliam et al . to detect increases in the risk of multiple reinfections in South Africa. The original model findings, complemented by the findings for third infections from the extended model presented here, have been applied to South African data and published in the National Institute for Communicable Diseases (NICD) monthly report on SARS-CoV-2 Reinfection Trends in South Africa . Our study contributes to the understanding of practical immune dynamics of SARS-CoV-2, potentially informing vaccination policies and the identification of emerging immune-escape variants of SARS-CoV-2.

The dataset used in this study is a time series of the daily counts of primary infections, second infections, third infections and fourth infections of SARS-CoV-2 in South Africa from 4 March 2020 to 29 November 2022. This dataset, as detailed in Pulliam et al ., is accessible on Zenodo . The observed infections in the dataset were obtained from a national dataset containing all positive tests in South Africa, detected by either polymerase chain reaction (PCR) or rapid antigen tests (RATs). Reporting of positive tests by laboratories was mandatory, although RATs are known to have been underreported. In the dataset, deterministic and probabilistic linkage methods were used to identify repeated tests of the same person. Positive tests of an individual occurring at least 90 days after the most recent positive test of the previously observed infection were assumed to represent new infections; this delay is introduced to distinguish reinfection from prolonged viral shedding . The specimen receipt date was used as the reference date in the analysis. In this study, we focus on the number of third infections ( n = 3).
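The 90-day rule can be sketched as a small routine that collapses one person's positive-test dates into distinct infection episodes; this is a hedged illustration of the stated definition, not the deterministic/probabilistic linkage pipeline used to build the dataset:

```python
from datetime import date, timedelta

MIN_GAP = timedelta(days=90)  # minimum spacing between distinct infections

def infection_episodes(positive_test_dates):
    """Collapse one person's positive-test dates into infection episodes:
    a new episode starts at least 90 days after the most recent positive
    test of the previous episode."""
    episodes = []
    for d in sorted(positive_test_dates):
        if episodes and d - episodes[-1][-1] < MIN_GAP:
            episodes[-1].append(d)  # same episode (e.g. repeat testing)
        else:
            episodes.append([d])    # a new (re)infection
    return episodes

tests = [date(2020, 7, 1), date(2020, 7, 10), date(2021, 1, 5), date(2021, 12, 20)]
print(len(infection_episodes(tests)))  # 3 -> primary, second and third infection
```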
The adapted model calculates the number of expected n th infections on day x from prior ( n −1) th infections reported at least 90 days before day x without subsequent detection. Building on the original catalytic model, with t the last date of testing positive for the ( n −1) th infection, the cumulative hazard through day x for an individual is calculated as:

H(t, x) = \lambda_{n-1} \sum_{i = t + 90}^{x} \hat{I}_i^{tot}

where \hat{I}_i^{tot} is the 7-day moving average of total infections reported on day i and \lambda_{n-1} is a fitted coefficient describing the n th infection hazard experienced by individuals with n −1 prior infections. From this, the probability of an n th infection by day x , given a previous ( n −1) th reported infection on day t , is:

p_n(t, x) = 1 - e^{-H(t, x)}

The expected number of n th infections, Y_{n,x} , reported by day x is calculated by summing over the possible dates of the ( n −1) th infection:

Y_{n,x} = \sum_{t = 0}^{x} I_{n-1,t} \, p_n(t, x)

where I_{n-1,t} is the number of ( n −1) th infections reported on day t . The expected number of n th infections on day x is then:

D_x = Y_{n,x} - Y_{n,x-1}

We used this model to assess third infection risk in South Africa from March 2020 to November 2022 and performed simulation-based validation under a broad range of scenarios to evaluate the performance of the method. The model assumes that the number of reinfections follows a negative binomial distribution with mean D_x . In Pulliam et al., two parameters, the hazard coefficient ( \lambda_1 ) and the inverse of the negative binomial dispersion parameter ( \kappa_1 ), were fitted to the data up to 28 February 2021. These fits were projected forward, and by comparing projections to observations an increase in the risk of a second infection was detected during the Omicron wave . When considering the risk of third infections ( n = 3), the generalised model's parameters did not converge over the same fitting period due to the low number of third infections observed before the Omicron wave . To overcome this issue, the fitting period was extended to 31 January 2022 and an additional parameter ( \lambda_2' ) was introduced to account for the increased risk of a second infection with the Omicron variant . The probability of having a third infection reported by day x , given a positive test for a second infection on day t , can then be calculated as:

p_2(t, x) = 1 - e^{-\lambda_{t,2} \sum_{i = t + 90}^{x} \hat{I}_i'}

where \lambda_{t,2} = \lambda_2 if i \le t_1 and \lambda_{t,2} = \lambda_2' if i > t_1 , with t_1 = 31 October 2021.

Model parameters were fitted using Markov chain Monte Carlo (MCMC). We used 10,000 iterations and four chains, discarding the first 1,500 iterations of each chain as burn-in. We specified uninformative prior distributions over the ranges 1.2 × 10^−9 to 1.75 × 10^−7 for \lambda_{n-1} and \lambda_{n-1}' , and 0.001 to 2 for \kappa_{n-1} (the selected values are similar to the ranges chosen in ). Convergence of the parameters was assessed using the Gelman-Rubin diagnostic with the `gelman.diag` function from the coda package in R . The Gelman-Rubin diagnostic compares the within-chain and between-chain variance to evaluate the chains, as this indicates whether the initial value has been "forgotten". A value of less than 1.1 indicates a small difference between the variances and, therefore, convergence .
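The expected-infection calculation above translates directly into array operations. The following is a hedged Python sketch of the model equations (the published implementation is in R, and all names here are ours):

```python
import numpy as np

def expected_nth_infections(I_prev, I_tot_ma7, lam, delay=90):
    """Expected daily nth infections D_x from the catalytic model.

    I_prev[t]   : observed (n-1)th infections reported on day t
    I_tot_ma7[i]: 7-day moving average of total infections on day i
    lam         : hazard coefficient lambda_{n-1}
    """
    T = len(I_prev)
    csum = np.concatenate([[0.0], np.cumsum(I_tot_ma7)])  # prefix sums
    Y = np.zeros(T)  # cumulative expected nth infections by day x
    for x in range(T):
        t = np.arange(T)
        # cumulative hazard H(t, x) = lam * sum_{i = t+90}^{x} I_tot_ma7[i]
        lo = np.minimum(t + delay, x + 1)   # empty sum if t + 90 > x
        H = lam * (csum[x + 1] - csum[lo])
        p = 1.0 - np.exp(-H)                # p_n(t, x)
        Y[x] = np.sum(I_prev[: x + 1] * p[: x + 1])
    return np.diff(Y, prepend=0.0)          # D_x = Y_{n,x} - Y_{n,x-1}
```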
The projected n th infections were calculated from the joint posterior distribution of the chains fitted during the MCMC procedure, with 2,000 equally spaced samples drawn from the four chains (after discarding burn-in). For each model parameter combination drawn from the posterior distribution, 100 stochastic simulations were run to calculate the number of expected third infections for each day up to 29 November 2022. From these realisations, two projection intervals were calculated: the middle 95% of the expected daily n th infections, and the middle 95% of the 7-day moving average of expected n th infections.

In , we conducted simulation-based validation to assess the performance of the original catalytic model when introducing changes in the risk of second infections under different scenarios, and concluded that the model is robust to several important aspects of the observation process that are not directly accounted for in the model. Here, we assessed the model by performing sensitivity analyses of its suitability for assessing third infection risk under different increases in the risk of second and third infections. To validate the n th infection method proposed in this study, we considered a simulated dataset of primary infections. The simulated dataset was extended from : the seven-day moving average of infections from South African data (available from ) was increased by a factor of 5 and subjected to negative binomial sampling with a shape parameter of 1/ κ , where κ ≈ 0.27 was the median of the posterior sample in Pulliam et al. Since the method was shown to be robust to different observation probabilities for primary infections and reinfections , we considered fixed primary, second, and third infection observation probabilities (0.2, 0.5 and 0.35, respectively) for this analysis.

We tested the performance of the model on simulated data under different data-generation scenarios by varying the difference in reinfection risk (both second and third infection risk) between a pre-Omicron-like period and a later Omicron-like period. This approach determined whether the model could accurately 1) fit the model parameters for third infections ( λ 2 , λ 2 ′ and κ 2 ), and 2) detect simulated changes in the risk of third infection. Similar to , we generated time series of the numbers of observed second and third infections from the simulated time series of primary infections by drawing a binomial random variable based on the observation probabilities. Using the observed k th infections, the number of ( k +1) th infections was calculated using a unique hazard coefficient for each k > 1. In this simulation-based validation, we calculated time series of observed primary infections ( k = 1), observed second infections ( k = 2) and observed third infections ( k = 3). More information on how the simulated dataset was derived is available in the supplementary material.

The hazard coefficients for second and third infections were modified using two scale parameters to represent different increases in reinfection risk over time: the first increase, introduced by scale parameter σ 1 , represented the Omicron wave with no subsequent increase in reinfection risk ( σ 1 varied between 1 and 3, with σ 2 = 1). Then, a second increase in reinfection risk, introduced by scale parameter σ 2 , was simulated after 31 March 2022 ( σ 1 = 2.8 and σ 2 varied between 1 and 3).
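A hedged sketch of the projection step: for each posterior draw, simulate negative binomial daily counts around the model mean and take the middle 95% across all realisations. The `expected_nth_infections` helper is the sketch given earlier, and the negative binomial parameterisation (size = 1/κ, consistent with the shape parameter of 1/κ stated above) is our reading of the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def projection_interval(posterior_draws, I_prev, I_tot_ma7, n_sim=100):
    """95% projection interval for daily nth infections.
    posterior_draws: iterable of (lam, kappa) pairs from the joint posterior.
    """
    sims = []
    for lam, kappa in posterior_draws:
        mu = expected_nth_infections(I_prev, I_tot_ma7, lam)  # model mean D_x
        size = 1.0 / kappa  # negative binomial size; kappa is its inverse
        p = size / (size + np.maximum(mu, 1e-12))  # mean of NB(size, p) = mu
        for _ in range(n_sim):
            sims.append(rng.negative_binomial(size, p))
    sims = np.array(sims)
    return np.percentile(sims, [2.5, 97.5], axis=0)  # middle 95% per day
```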
This second increase was introduced to assess the method's ability to detect further increases in the risk of reinfection after the Omicron wave. We measured the Gelman-Rubin convergence diagnostics for κ 2 , λ 2 and λ 2 ′ for the various scale parameter combinations. We also measured the proportion of days on which observed third infections were above the upper bound of the 95% projection interval, as well as the timing of the first cluster of five consecutive days of observed third infections falling above the projection interval, denoted D first , which can be used to detect real-time increases in the risk of third infections. In simulations with no increase in third infection risk after 31 March 2022 (i.e., when σ 2 = 1), the existence of D first indicates a false positive detection of an increase in the risk of third infections, and therefore we measured the specificity of the model for each value of the scale parameter σ 1 as:

\text{specificity} = 1 - \frac{\text{number of runs where } D_{\text{first}} \text{ exists and all parameters converged}}{\text{number of runs where all parameters converged}}

The model and MCMC fitting procedure were implemented in the R statistical programming language [version 4.3.1 ]. The code and simulated data for the generalised model are available on GitHub at https://github.com/SACEMA/reinfectionsBelinda .

Fig 1 depicts the number of primary, second, and third infections reported in South Africa from 4 March 2020 to 29 November 2022, along with the number of people eligible for each category. The model was fitted to South African data of reported third infections through the Omicron period up to 31 January 2022, and the parameters ( λ 2 , λ 2 ′ and κ 2 ) converged well, with Gelman-Rubin diagnostic values falling below 1.1. The convergence diagnostic for λ 2 ′ was slightly higher (around 1.05) than those for λ 2 and κ 2 . Fig 3 shows the 95% projection intervals of expected third infections (both the 7-day moving average and the daily third infections) and the observed third infections when the model was fitted to South African data through the first Omicron wave and used to project third infections. From May to November 2022, the number of observed third infections (red solid line) reaches the lower edge of the 95% 7-day moving average projection interval of third infections (red band), indicating a potentially decreased risk of third infections. No further increase in the risk of a third infection was detected after the first Omicron wave.

After running the model fitting and projection 20 times for each value of σ 1 with σ 2 fixed at 1, the negative binomial dispersion parameter ( κ 2 ) mostly converged when σ 1 > 1.6. The proportion of runs in which κ 2 converged increased as σ 1 increased, due to the increased numbers of third infections. The third infection hazard coefficient for the period before the first Omicron wave ( λ 2 ) converged in more than 75% of the runs for each value of σ 1 , whereas the third infection hazard coefficient for the period after the first Omicron wave ( λ 2 ′ ) converged in all runs . The specificity (the proportion of runs in which no increase in third infection risk was detected when there was none in the generated data, σ 2 = 1) was 0.74 or higher for all values of σ 1 (S2 Table in S1 File ).
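The detection rule and the specificity definition above can be sketched as follows (a hedged illustration; variable names are ours):

```python
import numpy as np

def d_first(observed, upper_bound, run_length=5):
    """Index of the first day starting a run of `run_length` consecutive
    days with observations above the projection interval; None if absent."""
    above = np.asarray(observed) > np.asarray(upper_bound)
    streak = 0
    for i, flag in enumerate(above):
        streak = streak + 1 if flag else 0
        if streak == run_length:
            return i - run_length + 1
    return None

def specificity(runs):
    """runs: list of (converged: bool, d_first value or None) per simulation."""
    converged = [d for ok, d in runs if ok]
    false_positives = sum(d is not None for d in converged)
    return 1 - false_positives / len(converged)

runs = [(True, None), (True, None), (True, 42), (False, None)]
print(round(specificity(runs), 3))  # 0.667: one false detection in 3 converged runs
```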
The proportion of observed third infections above the 95% projection interval remained below 2.5%, except for one run with σ 1 = 3, which resulted in 5% of third infections above the projection interval. When fixing σ 1 = 2.8 and varying σ 2 over values of 1.2, 1.5 and 2, the median of D first from the runs where all the parameters ( κ 2 , λ 2 and λ 2 ′ ) converged decreased from 26 days to 7 days as σ 2 increased from 1.2 to 2 , and D first did not exist in most cases where σ 2 = 1 (specificity of 0.89). The proportion of points above the projection interval was 0.01 when σ 2 = 1 and gradually increased to 0.45 when σ 2 = 2 .

In this study, the method used to detect changes in the risk of reinfection was successfully generalised to detect changes in the risk of multiple reinfections and validated for third infections in South Africa. The output of the method for third infections was used by the NICD in its monthly report for monitoring third infection trends and will continue to contribute to surveillance efforts, particularly with the emergence of potential SARS-CoV-2 variants exhibiting immune escape. Applying the generalised model to data up to 29 November 2022 revealed no additional increase in the risk of third infection beyond the increase observed in the first Omicron wave relative to previous variants. With the extended method, we demonstrated that we would have detected increases in third infection risk during the fifth wave if such an increase had existed.

We performed a simulation-based validation of the method, in which simulated data on third infections with SARS-CoV-2 were fitted and projected. The model is robust to changes in the risk of third infections when an additional parameter is fitted to represent the second and third infection hazard coefficients during waves in which the reinfection risk is higher. When the increase in the second and third infection risk in the simulated validation data was low, the negative binomial dispersion parameter did not converge in some runs. This is due to an insufficient number of simulated third infections to properly inform the parameter, whereas a higher increase in the risk of second and third infection generated more data to inform the dispersion parameter. The specificity, which assesses the method's ability to avoid false positive detections of increases in third infection risk during the projection period, was generally high for most scale values representing the initial rise in third infection risk (the first Omicron wave). This suggests that the model effectively distinguishes increases in the risk of reinfection from random fluctuations or noise in the data. When an additional increase in the third infection risk is introduced (after the additional hazard coefficient parameter is introduced), the method detects the simulated increase in the risk of third infection even for the smallest increase we investigated ( σ 2 = 1.2). The proportion of points above the projection interval after the introduction of the additional increase in third infection risk was only 45% when the increase was 100% ( σ 2 = 2), which could be due to the low number of observed third infections after the fifth wave, likely driven by reduced testing. As we look towards applying this model to more complex scenarios, such as the risk of infections beyond the third infection, further validation is necessary.
Incorporating prior knowledge and additional parameters, such as introducing a third lambda parameter to account for changing reinfection risks, will be important in ensuring accuracy. Additionally, variations in vaccine coverage across different populations may significantly influence reinfection risks and should be considered in future model applications. While our model provides valuable insights, it is limited by its sensitivity to low counts of observed reinfections, as sufficient case counts are required for parameter convergence. Pandemic fatigue, which leads to less testing and consequently lower numbers of observed reinfections, limits the method’s applicability in real-life situations. With low numbers of observed multiple reinfections, the model is less likely to detect increases in the risks of multiple reinfections. The catalytic model was effectively generalised to detect increases in the risk of nth infections. The method was applied to the observed third infections in South Africa to detect increases in the risk of third infection, and simulation-based validation showed its robustness in detecting increases in the risk of third infections of different magnitudes. The generalised method could contribute to future detection of increases in the risk of nth infections by SARS-CoV-2 or other pathogens with similar reinfection dynamics.
|
Study
|
biomedical
|
en
| 0.999997 |
PMC11694960
|
Primary total hip arthroplasty (THA) remains the gold standard procedure for end-stage hip osteoarthritis in relieving pain and regaining joint function . The volume of THA procedures is projected to increase due to the higher demand for improved mobility and quality of life in a growing elderly population . In a recent study using data from the Centers for Medicare & Medicaid Services, Shichman et al. projected that the annual volume of primary THA in the United States will exceed 700,000 by 2040 and almost 2 million by 2060 . In 2010, the mean age of patients undergoing primary THA was 66, and the prevalence of THA in the United States among adults aged 50 years or older was 2.34% . While THA is among the five most common and fastest-growing procedures in the United States due to its high success rate and cost effectiveness , it is crucial to understand trends in complications and associated risk factors to optimize patient outcomes. According to Patel et al. , complications arise in 27.32% of primary THA cases, with postoperative anemia being the most common complication at 25.20% . Other common complications following THA are the development of postoperative delirium (POD) and postoperative cognitive dysfunction (POCD) . Per Kitsis et al. , total hip and knee arthroplasty patients experience POD with a median incidence of 14.8% and POCD with a median incidence of 19.3% at one week and 10% at three months . Patients who suffer from POD or POCD are at risk for poor outcomes including a prolonged hospital stay, increased mortality, and leaving the workforce prematurely . These postoperative complications, along with other patient-related factors, also influence whether a patient is discharged home or to a rehabilitation facility following THA [ 8 – 10 ]. While it is becoming more common to discharge patients directly home following total joint arthroplasty (TJA) , an evidence-based approach is needed to help identify and risk-stratify home-discharged patients who are at highest risk of postoperative complications. Home conditions, specifically the presence of home caregiver support, may be an important factor in determining discharge location as well as predicting patient outcomes following discharge. This retrospective study was designed to investigate whether living alone impacted discharge disposition (home versus non-home) following elective THA in a matched cohort sample. The authors hypothesized that living alone would increase non-home discharge. This study was a retrospective cohort analysis of data from a national de-identified database and was thus exempt from Institutional Review Board approval. The American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) 2021 database was queried, with THA defined by the Current Procedural Terminology code 27130. The ACS-NSQIP is a nationally validated program with about 700 voluntarily participating hospitals, in which case selection is performed systematically. The following criteria were used to generate a homogenous study sample. Only elective cases were included. Cases with bone fracture stated in the diagnosis were excluded.
The exclusion criteria also included 1) patients admitted to the hospital for longer than one day preceding surgery, 2) patients with preoperative documentation of end-stage renal disease, metastatic disease, sepsis, or bleeding diathesis, and 3) patients with American Society of Anesthesiologists Physical Status (ASA) classification 5 (moribund, not expected to survive without the procedure) . No modifications were made to the ACS-NSQIP classifications for these variables. These inclusion and exclusion criteria were used to generate a sample that could be generalized to the general population of patients undergoing elective THA. These criteria were decided by the researchers a priori . All cases received either spinal anesthesia or general anesthesia as the principal anesthetic technique. All cases were required to have documentation of home support (living alone versus with others at home). The primary independent variable was home support (living alone versus with others at home). Other independent variables included age, sex, body mass index (BMI), hypertension, insulin-dependent diabetes, current smoking, chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), chronic steroid use, functional status, fall within 6 months, dementia, ASA classification, and anesthetic technique (spinal versus general). The primary endpoint was discharge disposition (home versus non-home). For those discharged home, requiring home services was a major secondary endpoint. Secondary endpoints also included functional status at discharge, postoperative delirium, hospital length of stay, and 30-day event rates for unplanned resource utilization, wound complications, systemic complications, bleeding requiring transfusion, and mortality. Unplanned resource utilization included unplanned readmission and return to the operating room. Wound complications included superficial surgical site infection (SSI), deep incisional SSI, organ space SSI, and wound dehiscence. Systemic complications included cardiac arrest, myocardial infarction, stroke, reintubation, pneumonia, deep venous thrombosis, pulmonary embolism, bleeding, sepsis, septic shock, and acute kidney injury. R version 4.3.1 (R Core Team, Vienna, Austria) was used to perform statistical analysis . Patients living alone were identified, and those with others at home were matched using propensity scores computed from the following patient demographics and medical comorbidities: age, sex, BMI, hypertension, diabetes, smoking, COPD, CHF, chronic steroid use, functional status, ASA classification, anesthetic technique, fall history, and dementia. Matching was performed 1:1, without replacement, using propensity scores estimated by logistic regression. Cases with missing data were excluded from the analysis in order to promote the creation of balanced cohorts, with two exceptions: those with unknown functional status were assumed to be independent, and those with unknown fall history were assumed to have not fallen within 6 months, given the observed distribution of responses for these two variables. Propensity scoring has been utilized widely to reduce selection bias in cohort studies . Standardized mean differences were used to assess balance among matched pairs, with a standardized mean difference < 0.1 denoting adequate balance . A maximum propensity score difference between groups was not specified a priori . The associations between living alone and various endpoints were first examined by univariate analyses.
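The matching procedure described above (1:1 propensity scores from logistic regression, no replacement, balance judged by standardized mean differences) maps directly onto common R tooling. The authors do not name the package they used, so the following is an illustrative sketch using MatchIt with hypothetical variable names:

```r
# Minimal sketch: 1:1 nearest-neighbor propensity-score matching without
# replacement, propensity score from logistic regression, balance checked
# via standardized mean differences (|SMD| < 0.1 = adequate).
library(MatchIt)

m.out <- matchit(
  lives_alone ~ age + sex + bmi + hypertension + diabetes + smoker + copd +
    chf + steroid_use + functional_status + asa_class + anesthetic +
    fall_history + dementia,
  data     = thadata,        # hypothetical analysis data frame
  method   = "nearest",      # 1:1 nearest-neighbor matching
  distance = "glm",          # propensity score from logistic regression
  replace  = FALSE
)

summary(m.out)$sum.matched   # standardized mean differences after matching
matched <- match.data(m.out) # matched cohort for downstream analyses
```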
Categorical variables were assessed by Pearson’s chi-squared test without continuity correction, while hospital length of stay was tested by Student’s t-test. Additionally, chi-square methodology was used to calculate odds ratios (OR), 95% confidence intervals (CI), and fragility indices . The fragility index indicates how many changes in subject outcome classifications would alter the statistical significance of a hypothesis test. A greater fragility index suggests a more robust difference between groups . A multivariable analysis was performed on the primary outcome. Multiple logistic regression modeling was used, with adjustments made for age, sex, BMI, hypertension, diabetes, smoking, COPD, CHF, chronic steroid use, functional status, fall history, dementia, ASA classification, anesthetic technique, and home support when univariate testing showed significance at α ≤ 0.05. The selection of these variables was decided a priori and based on the availability of the data for analysis. As with matching, only complete cases were analyzed. Missing data were not imputed. Backwards stepwise selection guided by the Akaike information criterion was used for subsequent variable selection. Results for independent predictors were presented as adjusted odds ratios (AOR) and 95% confidence intervals (CI), and the c-statistic was used to assess model discrimination . Continuous variables and categorical variables were presented as mean (standard deviation) and frequency (%), respectively. All hypothesis tests were two-sided, with α ≤ 0.05 denoting a significant difference. Of 5677 THA patients that met inclusion criteria, 1716 (30.2%) were living alone at the time of surgery. After 1:1 propensity score matching of patient demographics and medical comorbidities, a total of 3248 patients were identified . Balance between cohorts was evidenced by standardized mean differences < 0.1 . For the sake of matching, those with unknown functional status were assumed to be independent, given that this was the case for 96.7% of subjects with a functional status response. Likewise, those with unknown fall history were assumed to have not fallen within 6 months, given that this was the case for 93.8% of subjects with a fall history response. The patient demographics are subsequently summarized without making such assumptions, reporting only on those with valid responses for functional status and fall history. The distribution of patient demographics and medical comorbidities in those living alone versus with others at home, both before and after matching, is presented in Table 1 . On univariate analysis, living alone was associated with non-home discharge , need for services in those returning home , and increased length of stay (2.05 vs. 1.72 days; mean difference, 0.34 [95% CI, 0.18 to 0.49]; P < .001). The fragility index for non-home discharge was robust (124). Living alone was not associated with functional status at discharge, postoperative delirium, or rates of unplanned resource utilization, wound complications, systemic complications, bleeding, or mortality ( Table 2 ).
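The fragility index described above can be computed by flipping outcome classifications one at a time until the hypothesis test crosses α = 0.05. A minimal illustrative sketch, not the authors' code, using the chi-squared test as the text specifies and hypothetical example counts:

```r
# Minimal sketch: fragility index for a 2x2 outcome. Events are added one at
# a time to the arm with fewer events (flipping a non-event to an event)
# until significance is lost; the count of flips is the fragility index.
fragility_index <- function(events, totals, alpha = 0.05) {
  flips <- 0
  repeat {
    tab <- rbind(events, totals - events)          # 2x2 table: event / non-event by group
    p   <- suppressWarnings(chisq.test(tab, correct = FALSE)$p.value)
    if (p > alpha) return(flips)                   # significance lost after `flips` changes
    low <- which.min(events)                       # sparser arm
    if (events[low] >= totals[low]) return(NA_integer_)  # safety guard
    events[low] <- events[low] + 1
    flips <- flips + 1
  }
}

fragility_index(events = c(200, 90), totals = c(1624, 1624))  # hypothetical counts
```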
On multivariable analysis, non-home discharge was independently predicted by age (AOR, 1.10 [95% CI, 1.07 to 1.13]; P < .001), BMI (AOR, 1.03 [95% CI, 1.01 to 1.05]; P < .001), hypertension (AOR, 1.21 [95% CI, 0.95 to 1.54]; P = .120), dependent functional status (AOR, 3.32 [95% CI, 2.06 to 5.30]; P < .001), fall within 6 months (AOR, 1.72 [95% CI, 1.22 to 2.40]; P = .001), dementia (AOR, 2.18 [95% CI, 1.29 to 3.62]; P = .003), ASA 3 or 4 classification (AOR, 1.97 [95% CI, 1.56 to 2.50]; P < .001), general anesthesia (AOR, 1.77 [95% CI, 1.44 to 2.18]; P < .001), and living alone (AOR, 2.84 [95% CI, 2.30 to 3.54]; P < .001). The model demonstrated good discrimination (model c-statistic, 0.734) and calibration (Hosmer-Lemeshow P = 0.416). This study determined that for patients undergoing primary THA, living alone was predictive of non-home discharge and increased hospital length of stay. Patients who were living alone were also more likely to require home services if discharged to home. In our study, patients living alone experienced no appreciable difference in functional status at discharge, postoperative delirium, rates of unplanned readmission or reoperation, wound or systemic complications, bleeding, or mortality when compared to patients with home support. To our knowledge, only one other study has specifically investigated the impact of living alone on postoperative outcomes . In that study, Fleischman et al. prospectively assessed the safety and efficacy of direct home discharge for patients living alone . However, their study included both primary total hip and knee arthroplasty patients from a single institution with direct home discharge as the standard of care regardless of home support status. Conversely, our study focused solely on primary THA using data from the ACS-NSQIP, a surgical outcomes database with contributions from hundreds of institutions throughout the United States. Despite the differences in study design, conclusions were concordant, with both the present study and that of Fleischman et al. demonstrating that living alone was associated with prolonged hospital stays and increased utilization of home health services. To date, there is a paucity of evidence to help determine what percentage of patients receiving primary THA are living independently or with home support. Based on 2021 data from the ACS-NSQIP, approximately 30% of patients receiving primary THA with reported home support status were found to be living alone. However, this metric may not be valid, as over half of these cases did not report home support status. Studies by Iwata et al. and Fleischman et al. showed approximately 20% of their study participants undergoing primary THA were reported to be living alone; however, these metrics were collected from single-institution databases and may not be generalizable to larger populations . Given that a significant number of patients receiving primary THA are living independently, there has been much research aimed at determining whether living and social conditions, such as caregiver support at home, affect patient outcomes postoperatively. Fleischman et al. showed no increase in complications or unplanned clinical events for patients living alone compared to those living with others. Additionally, there were no significant differences in functional outcomes or pain relief in patients living alone or living with others . However, prior systematic reviews have examined the impact of social determinants on patient-reported outcomes and adverse events following TJA .
Karimi et al. found that patients with more social deprivation had a higher proportion of non-home discharge and lower improvements from baseline patient-reported outcome measures following TJA . A systematic review and meta-analysis by Wylde et al. showed evidence that social support could be a prognostic factor for some patient-reported outcome measures following total joint replacement . Notably, both reviews reported appreciable limitations in their findings due to the methodological quality of available studies and to the inconsistent measurement of social support or deprivation. To address the complex and multidimensional nature of social support, future studies must specify the social factors they are measuring and utilize metrics that effectively capture social determinants. Numerous retrospective studies have investigated predictors of discharge destination following TJA [ 8 – 10 , 24 , 25 ]. Additionally, multiple authors are credited with developing and validating pre-operative risk assessment tools to predict discharge destination for patients undergoing TJA [ 26 – 29 ]. Interestingly, two out of four known discharge prediction tools utilize home caregiver status in their scoring . While our study focused solely on living alone as a predictive factor of discharge destination, other studies identified multiple predictors for non-home discharge including older age [ 8 – 10 , 24 , 25 ], female gender , certain comorbidities including obesity and pulmonary disease , postoperative functional status , and even patient expectation of discharge destination . The studies that included living alone in their analysis had findings consistent with our study . As with discharge destination, prior studies have also examined factors that influence hospital length of stay after TJA . Correspondingly, Fleischman et al. determined that living alone was associated with longer inpatient stays . Other retrospective studies found strong associations linking length of stay with age , diabetes , pre-existing cognitive impairment , discharge destination , and even day of surgery . These results suggest that a combination of patient and organizational factors act as drivers of hospital length of stay. Our findings suggest that patients living alone may require additional support postoperatively as well as longer inpatient stays to receive sufficient patient education and coordinate support services at home. Patients living alone may lack the physical, cognitive, and emotional support and interactions needed in the initial recovery phase following THA, necessitating a discharge destination that provides ongoing caregiver support. Fortunately, there is no evidence to date associating lack of home support with negative postoperative outcomes following THA. However, animal studies have shown that familial support reduces POCD, possibly via inhibition of neuroinflammation and activation of the lateral habenula-ventral tegmental area neural circuit after surgery [ 32 – 34 ]. These novel findings from animal studies suggest that the presence of home support may influence postoperative neuropsychological outcomes; however, there is a need for clinical studies to better understand these phenomena and their implications in patients living alone, whose social interactions are very limited. In this context, it is worth noting that there was a trend for patients living alone to have a higher incidence of POD (OR: 1.64 [0.98 to 2.76], P = 0.06).
Since the diagnosis of POCD requires neurocognitive assessment before and after surgery , there are no data on the incidence of POCD in this retrospective analysis. Given that patients undergoing THA have unique medical and social backgrounds and require varying levels of support in the postoperative period, it is crucial that healthcare providers risk-stratify patients early in the preoperative phase to ensure proper discharge planning and deliver the optimal level of care throughout recovery and rehabilitation. Our study emphasizes the importance of including home support in these risk assessments to obtain a more comprehensive view of our patients’ postoperative trajectories. The study is strengthened by the reporting of fragility indices for all outcomes, complementing the P value with an interpretable measure of the effect of patient classification on statistical significance . At a minimum, the classification of 142 discharge dispositions would need to be changed to the opposite class to alter the statistical significance of these findings. This study is further strengthened by propensity score matching of those living alone to those with others at home in order to control for selection bias . The cohort living alone before propensity score matching had an overrepresentation of elderly patients, females, non-diabetics, and recent falls. Matching produced balanced cohorts, as evidenced by post-matching univariate analyses (all P > .05) and a marked reduction in absolute standardized mean differences. There are several important limitations inherent to the study design. This was an observational study, and as such the defined endpoints and covariates were limited to those captured by the ACS-NSQIP database. Home support was not documented in 22486 of 28163 patients otherwise meeting inclusion criteria, reducing the power of this study and potentially introducing selection bias. This supports the use of propensity score matching as one method to reduce sampling bias, though matching is not without limitations . Additionally, the characteristics of individuals living with the supported patients are not reported in the database; factors like age, functional status, health literacy, and chronic illness are likely influential but to an unknown degree. The screening test for postoperative delirium is also not documented. Thus, it is a distinct possibility that the screening methodology has not been validated for the 93 patients with dementia (2.9% of the matched sample). The study design also did not allow for exclusion or stratification by intraoperative factors, such as transfusion requirements. This design was chosen to allow the application of our findings to the general population of patients undergoing elective hip arthroplasty. Instead, some perioperative complications, such as transfusion requirements, were studied as secondary endpoints in the analysis. Lastly, the results of this study cannot be generalized to patients meeting the a priori exclusion criteria. In summary, the present study demonstrates the unique challenges faced by patients living alone following THA. These patients are more likely to have longer hospital stays and require discharge to non-home rehabilitation facilities, emphasizing the need for comprehensive risk assessment and discharge planning to ensure adequate patient education, support, and coordination of postoperative care.
Future studies should consider including a wider range of surgical procedures to improve the generalizability of findings, further investigating the effects of home support and social interaction on postoperative outcomes including POD and POCD, and further characterizing the mechanisms underlying the associations between living alone and non-home discharge and prolonged hospital length of stay. These studies may ultimately identify approaches to better prepare patients who live alone for optimal recovery after surgery.
|
Study
|
biomedical
|
en
| 0.999997 |
PMC11694961
|
Young adults represent a significant proportion of new obsessive-compulsive disorder (OCD) diagnoses . Although previously thought to be a rare condition, OCD is now recognized as a prevalent mental health condition and a substantial contributor to the global burden of disease . The worldwide prevalence of OCD is estimated at 1–3% of the population . Within the U.S., one study estimated the lifetime and 12-month prevalence of OCD at 2.3% and 1.2%, respectively . Moreover, the prevalence of those meeting the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria for OCD does not capture the substantial subsyndromal variation in OCD symptoms, with approximately 28.2%, 12.9%, and 6.2% of U.S. adults reporting ≥1, ≥2, or ≥3 OCD symptoms for ≥2 weeks in their lifetime, respectively . OCD exemplifies the class of conditions categorized as “obsessive-compulsive and related disorders” (e.g., trichotillomania, hoarding) in the DSM-5 . Obsessions are perseverative thoughts, images, or urges that are unwanted, intrusive, and extremely anxiety-provoking . Compulsions are behaviors or mental acts that are repetitive and ritualistic, performed with the goal of transiently alleviating the anxiety or distress associated with obsessive thoughts . There is extensive evidence that OCD is often comorbid with mood, anxiety, and psychotic disorders . Eating disorders (EDs) have also long been known to be comorbid with OCD, with estimates of co-occurrence ranging from 35–44% in some reports . While OCD and substance use disorder (SUD) are typically considered clinically and nosologically separate, they exhibit common features in their phenomenology, which is often characterized by compulsive activities . Recent evidence shows that lifetime co-occurrence of substance dependence is higher among individuals with OCD . However, these studies are mostly among clinic- or community-based general adult populations and do not focus on the population most at risk for onset of OCD: young adults. Beyond increasing anxiety and depression rates among young adults, recent trends also show an increase in diagnoses and treatment for other mental health concerns such as OCD among college students . Despite evidence showing an overall decline in mental health, increasing OCD prevalence among young adult college students, and the comorbid risks associated with OCD broadly, there remains scant evidence specific to the co-occurrence of OCD with alcohol, tobacco, cannabis, and disordered eating risk among young adults . With the increasing prevalence of substance use among college students with mental health concerns , understanding comorbid disorders is important because comorbid psychiatric conditions and risk behaviors have an impact on condition prognosis, symptom exacerbation, and efficacy of treatments . The current study aims to fill this gap by providing empirical evidence to better understand the substance use and disordered eating risks associated with OCD conditions among college students. We additionally explored variation in these associations across sex/gender, given differences in the prevalence of OCD and substance use by sex/gender. Data were from the American College Health Association-National College Health Assessment III (ACHA-NCHA III), a web-based, national survey of U.S. college student health administered each semester .
Sponsored by the American College Health Association (ACHA), the ACHA-NCHA III is the third major iteration of the ACHA-NCHA and was first implemented in Fall 2019. Due to the disruption and unusual problems caused by the COVID-19 pandemic, the present study uses ACHA-NCHA III data collected from students enrolled in 216 colleges and universities between Fall 2021 and Fall 2022. Data pooled from these three periods were further restricted to undergraduate students of traditional age (18–24 years) . To be eligible for inclusion in the ACHA-NCHA III national dataset, participating institutions had to sample a census or random sample of students >18 years. Participants provided written informed consent before completing the survey, and a university Institutional Review Board approved the secondary analysis of the ACHA-NCHA data for this study . The ACHA-NCHA III data used in this study are available upon request made to the American College Health Association through their website ( https://www.acha.org/ncha/data-results/data-access-published-literature/ ) or email ( [email protected] ). The risk of use of three substances of interest was examined in this study: alcohol, tobacco, and cannabis. The ACHA-NCHA III uses the Alcohol, Smoking and Substance Involvement Screening Test (ASSIST) to measure substance-specific involvement risk among college students . ASSIST is an 8-item questionnaire that asks about lifetime use of substances and use of substances and associated problems over the last 3 months. Responses to these questions are used to calculate students’ ASSIST risk scores, ranging from 0–39. Scores are used to categorize students into low risk (0–3), moderate risk (4–26), or high risk (27–39) for tobacco and cannabis use. For alcohol, the categories were 0–10 (low risk), 11–26 (moderate risk), and 27–39 (high risk). Students who indicated no lifetime use of any of the three substances were coded as low risk. Given the study’s interest in examining determinants of risk, participants were collapsed into two groups: medium/high risk (moderate- or high-risk scores) and low risk. Two questions were used to assess eating disorder (ED) behavior or diagnoses among students. One question asked participants if, “within the last 12 months, an eating disorder/problem affected your academic performance?” A second question asked if they had “ever been diagnosed by a healthcare or mental health professional with eating disorders (for example: anorexia nervosa, bulimia nervosa, binge eating).” Responses to these two questions were combined into a categorical variable where 0 = no eating disorder behavior/diagnoses and 1 = experience of eating disorder behavior/diagnoses. Indication of diagnosis was obtained with a question (yes or no) asking whether students had “ever been diagnosed by a healthcare or mental health professional as having obsessive-compulsive and related conditions (for example: OCD, Body Dysmorphia, Hoarding, Trichotillomania, other body-focused repetitive behavior disorders).” Responses were coded such that 0 = no diagnosis and 1 = positive diagnosis for OCD-related conditions. ACHA used responses to three questions on sex at birth, gender identity, and whether students identify as transgender to create a sex/gender variable. When a student’s gender identity was consistent with their sex at birth and they selected no for transgender, sex/gender was coded as cisgender woman or cisgender man.
Students were categorized as transgender/gender non-conforming (TGNC) if they selected “yes” for transgender, “intersex” for sex at birth, or their sex at birth was not consistent with their gender identity. Students who skipped any of the three questions used to compute the variable were coded as missing. A full description of the methodology ACHA employed in computing the sex/gender variable is published elsewhere . Given the established relationship between stress, mental health factors, substance use, and OCD conditions , binary indications of past 12-month stress (moderate/high vs. no/low), previous diagnosis of anxiety (yes/no), and previous diagnosis of depression (yes/no) were included as covariates. Additional covariates included self-reported demographic information: age (in years); race/ethnicity (Non-Hispanic White [White], Non-Hispanic Black [Black], Hispanic/Latinx, Non-Hispanic Asian/Asian American [Asian], Non-Hispanic American Indian/Alaskan Native/Native Hawaiian/Pacific Islander [NHOPI], Middle Eastern/other Arab [MENA], Biracial/Multicultural (if more than one race/ethnicity selected), Other); parent highest education level (ranging from did not finish high school to doctoral/professional degree); and survey year . This study employed a cross-sectional design. Descriptive statistics were used to summarize the data. Bivariate associations between the primary outcomes (alcohol, tobacco, and cannabis risk, and eating disorder) and OCD conditions were evaluated using univariate logistic regression analyses. To adjust for student-institution clusters and obtain more efficient parameter estimates and better standard errors, multivariable regression models were constructed applying the generalized estimating equations (GEE) method. For each outcome, a GEE model with an exchangeable correlation structure and robust variance estimator was used to estimate the alcohol, cannabis, tobacco, and disordered eating risk among those with OCD conditions compared to those without. The models adjusted for study covariates including sex/gender. To examine whether risks differed by students’ sex/gender, we conducted stratified multivariable analyses for each sex/gender. The models stratified by sex/gender included the same set of covariates but excluded sex/gender. Additionally, predicted probabilities (average marginal effects) of medium/high substance use and eating disorder risks by sex/gender were estimated based on the adjusted regression models . Multicollinearity was assessed with the variance inflation factor (VIF), and all VIF values (≤1.90) were below the commonly recommended threshold of 2.50, indicating no multicollinearity concerns . Missing data on the study’s primary predictor, OCD condition, amounted to 0.8%, while missingness for other study variables ranged from 0.2% to 3.5%. Given the small amount of missing data relative to sample size, we employed listwise deletion . All analyses were completed using Stata version 18.0 software (StataCorp, LLC). As Table 1 shows, 92,757 undergraduate students between the ages of 18 and 24 (mean age 19.88 years, SD = 1.45) were included in this study. The majority of respondents were White (62%) and identified as cisgender female (63%), and about 6% of students in the sample identified as TGNC. About 6% of respondents indicated having OCD conditions. Among the study sample, 12%, 16%, and 19% of students were classified as having medium/high alcohol, tobacco, and cannabis risk, respectively ( Table 2 ).
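The adjusted analyses above were run in Stata 18; for readers working in R, the same GEE specification (logit link, exchangeable working correlation, robust variance, students clustered within institutions) can be sketched with the geepack package. This is a minimal illustration with hypothetical variable names, not the authors' code:

```r
# Minimal sketch: GEE logistic model with exchangeable working correlation
# and sandwich (robust) standard errors for institution-level clustering.
library(geepack)

dat <- dat[order(dat$school_id), ]   # geepack expects observations grouped by cluster

fit <- geeglm(
  highrisk_cannabis ~ ocd + age + sex_gender + race + parent_educ +
    stress + anxiety + depression + survey_year,
  id     = school_id,                # clustering variable (institution)
  data   = dat,
  family = binomial("logit"),
  corstr = "exchangeable"
)

exp(coef(fit))                       # adjusted odds ratios, as in Table 3
```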
Disordered eating behaviors were reported among 19% of students in the sample. Across sex/gender, 23%, 8%, and 36% of cisgender female, cisgender male, and TGNC students, respectively, reported eating disorders. Among students with OCD conditions, the prevalence of medium/high substance use and disordered eating risks was significantly higher ( Table 2 ). In the bivariate analyses examining the association of OCD conditions with medium/high substance use risk, students with OCD conditions were significantly more likely (p<0.001) to be classified as having medium/high alcohol, tobacco, and cannabis use risk ( Table 2 ). Students with OCD conditions also had higher odds of reporting disordered eating (p<0.001). Table 3 shows the adjusted associations between OCD conditions and substance use and disordered eating risk in the full sample and stratified by sex/gender. Overall, adjusting for covariates including stress, depression, and anxiety, having OCD conditions was associated with greater odds of medium/high tobacco (adjusted odds ratio [aOR] = 1.12, 95% confidence interval [95% CI] 1.05, 1.21), cannabis (aOR = 1.11, 95% CI 1.04, 1.18), and alcohol (aOR = 1.14, 95% CI 1.05, 1.24) risk. In the models stratified by sex/gender, for cisgender female students, OCD condition was associated with greater odds of medium/high tobacco (aOR = 1.12, 95% CI 1.03, 1.22), cannabis (aOR = 1.13, 95% CI 1.06, 1.23), and alcohol (aOR = 1.18, 95% CI 1.08, 1.29) risk. There were no statistically significant associations between OCD conditions and substance use risk among cisgender male students in the study. However, among TGNC students, OCD conditions were statistically significantly associated only with higher odds of medium/high tobacco risk (aOR = 1.24, 95% CI 1.05, 1.48). Similar to the findings for substance use risk, OCD condition was associated with greater risk of disordered eating (aOR = 2.28, 95% CI 2.13, 2.43) in the overall sample. In the models stratified by sex/gender, OCD condition was associated with greater risk of disordered eating among cisgender females (aOR = 2.30, 95% CI 2.14, 2.47), cisgender males (aOR = 2.34, 95% CI 1.93, 2.83), and TGNC students (aOR = 2.14, 95% CI 1.85, 2.47). As shown in Fig 1 , having OCD conditions was associated with a statistically significant increase in the predicted probability of tobacco (0.02, 0.02, 0.01; p = 0.001), cannabis (0.02, p = 0.001), and alcohol (0.02, 0.02, 0.01; p<0.002) risk among cisgender female, cisgender male, and TGNC students, respectively. For eating disorder, the predicted probabilities were 0.15, 0.09, and 0.17 (p<0.001) for cisgender female, male, and TGNC students, respectively . To our knowledge, this is the first study to examine substance use and disordered eating risks associated with OCD conditions among young adult college students in the U.S., as well as how these associations may vary among cisgender male, cisgender female, and TGNC students. Our findings show that overall, college students with OCD conditions showed a higher prevalence of medium/high alcohol, tobacco, and cannabis involvement and disordered eating risk compared to their counterparts without OCD conditions. Even when accounting for stress, depression, and anxiety, having an OCD condition was associated with greater likelihood of medium/high alcohol, tobacco, cannabis, and disordered eating risk; however, the associations varied by sex/gender.
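The predicted probabilities (average marginal effects) reported in Fig 1 can be reproduced from any fitted logistic model by counterfactual prediction. A minimal sketch, assuming a fitted model object `fit` and hypothetical column names (shown for a glm; GEE point estimates are handled analogously):

```r
# Minimal sketch: average marginal effect of OCD status on predicted risk.
# Set OCD to 1 for everyone, then to 0, predict on the response scale, and
# average the difference across the sample.
d1 <- d0 <- model_data
d1$ocd <- 1   # counterfactual: all students have OCD conditions
d0$ocd <- 0   # counterfactual: no students have OCD conditions

p1  <- mean(predict(fit, newdata = d1, type = "response"))
p0  <- mean(predict(fit, newdata = d0, type = "response"))
ame <- p1 - p0   # average change in predicted probability due to OCD conditions
```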
Previous studies have reported that, due to the similarities in their phenomenology and the increased likelihood that individuals with OCD will turn to substances to self-medicate and alleviate distress caused by obsessive thoughts and compulsive behaviors, there is a high level of comorbidity between OCD and substance use risk and disorder . In line with these studies, we found that OCD conditions were associated with elevated odds of medium/high alcohol, tobacco, and cannabis use risk. Although most of the previous studies were conducted among general adult populations and in international samples , our study extends current knowledge by demonstrating the association of OCD conditions with medium/high risk alcohol, tobacco, and cannabis use in a U.S. sample among a population subgroup (young adult college students, 18–24 years) that comprises a significant fraction of the U.S. population (38%) and represents a large proportion of new OCD diagnoses [ 27 – 29 ]. OCD and substance misuse share similar neurological and genetic influences . However, there is heterogeneity in their presentation due to the interaction between genetic and biological risk factors and a range of environmental influences . The elevated odds of alcohol, tobacco, and cannabis use among students with OCD may reflect these shared influences, as well as the unique risks posed by the college environment. College campuses are unique environments that foster high-risk substance use (e.g., polysubstance use) through factors like Greek life affiliations , academic stress , and social norms that encourage substance use . Given that OCD is a multifactorial condition influenced by environmental contexts, students with OCD may be particularly vulnerable to substance use disorders in such settings. Our findings underscore the importance of incorporating regular substance use screenings into the clinical care of college students with OCD. Furthermore, tailoring support systems to address both the anxiety and compulsions characteristic of OCD and the environmental triggers unique to college life is essential for promoting the well-being of these students. In line with studies indicating that sex/gender significantly influences the expression and impact of OCD , this study demonstrates that the risks and comorbidities associated with OCD among college students vary by sex/gender. Even after accounting for stress and other mental health symptoms, cisgender females with OCD had a higher risk of comorbidity related to medium/high alcohol, tobacco, and cannabis use as well as disordered eating. TGNC students with OCD conditions were at risk for both medium/high tobacco use and disordered eating, while cisgender males with OCD conditions showed an elevated risk only for disordered eating. Our results fill a crucial gap and advance the literature by providing sex/gender-stratified evidence from a U.S.-based population subgroup (traditional age college students) that is not currently represented in the literature. Earlier studies have reported mixed findings regarding gender, OCD symptomatology, and comorbid conditions . For example, one study found no difference in the prevalence of lifetime comorbidities between males and females , another reported an elevated risk of substance use disorder among males , and yet another found elevated risk of social phobia and eating disorders among females .
While no studies have specifically addressed OCD comorbidities among TGNC individuals, the mixed findings for cisgender males and females may stem from differences in study populations (children vs. adults vs. college students; community vs. clinical samples) or the geographical context (i.e., different countries) . Another possible explanation for this variability could be sex/gender differences in symptom onset and presentation . For example, males with OCD conditions are more likely to exhibit social anxiety disorder and tic disorders, while females are more prone to eating and impulse control disorders . Additionally, research has shown that cisgender women and TGNC individuals often experience heightened psychological distress related to social expectations, discrimination, and internalized stigma, which may exacerbate obsessive-compulsive behaviors and increase their risk of medium/high substance use and disordered eating . The observed sex/gender variability in risks and comorbidities associated with OCD among college students may also stem from cultural influences and socioenvironmental factors interacting with biological predispositions that shape risk behaviors . These findings emphasize that while all college students with OCD conditions are an at-risk group, cisgender female students face the highest risks and should be prioritized for tailored interventions and programs to support healthy symptom management and effective treatment planning. Moreover, the results highlight the significant sex/gender differences in the complex interplay between psychosocial stressors, OCD symptomatology, and maladaptive coping behaviors, underscoring the need for nuanced approaches to address these challenges. Among non-college populations, there is evidence showing a high comorbidity between eating disorders (EDs) and OCD, attributed to their shared genetic and psychological features . It is therefore not surprising that, among young adult college students with OCD conditions, EDs emerged as the only consistent comorbid risk. However, unlike previous studies that identified cisgender females with OCD as the primary group at elevated risk for EDs , our findings revealed that among college students, all sexes/genders are at greater risk, with predicted risk probabilities highest among TGNC students. This highlights the unique comorbid risks associated with OCD conditions among college students. First, it is important to note that the disordered eating variable examined in this study encompassed multiple disorders, including anorexia nervosa, bulimia nervosa, and binge eating. College students are particularly vulnerable to body dissatisfaction, weight concerns, and appearance-related stress, which are amplified by the college environment and life stage. Disordered eating behaviors, such as restrictive eating, binge eating, and purging, are prevalent in this population and are further exacerbated by stress related to academic performance, relationship challenges, and the transition to college life . Both OCD and EDs often involve ritualistic behaviors as well as intrusive thoughts, including those related to body image, weight, and food. The heightened risk of EDs among students with OCD conditions is especially concerning because college students are at a developmental stage marked by susceptibility to mental disorders , reluctance to seek professional help , and the formation of long-term behavioral patterns that influence their quality of life, chronic disease risks, and overall health.
The pervasive risk of EDs among cisgender males, cisgender females, and TGNC students with OCD conditions underscores the need for comprehensive mental health screening and tailored support systems within college settings. In summary, our findings show that college students with OCD conditions, a complex and multifaceted group of disorders with comorbidity risks varying by sex/gender, are an at-risk group for engaging in various risk behaviors. However, these results should be interpreted within the context of our study limitations. First, while this study employed one of the best national datasets available on college students’ health, which includes data on diverse racial and gender identities, it is not nationally representative of college students in the U.S. Additionally, the cross-sectional nature of the data precludes the ability to establish causality, and the limited confounder adjustment (e.g., no data on other mental health issues such as PTSD or sleep disorders) could influence study outcomes. It is also important to note that the OCD and ED measures examined in the study were based on self-reported, single-item measures regarding clinically diagnosed conditions. Although these measures are useful for screening purposes, they do not allow us to disentangle the specific eating disorder types (e.g., anorexia nervosa, binge eating) or obsessive-compulsive and related disorders (e.g., perseverative thoughts, repetitive behaviors, body dysmorphia, hoarding, trichotillomania) that may have been the primary drivers of the reported associations. Furthermore, although the sample included TGNC students, the relatively small size of this subgroup may have limited the study’s power to detect additional associations. Despite these limitations, this study’s contributions to the extant literature fill an important knowledge gap on the risks associated with OCD conditions among young adults in the U.S. This study represents a pioneering investigation of the complex interplay between OCD conditions, substance use, and disordered eating risks among U.S. young adult college students. Our findings reveal that students with OCD conditions have a higher prevalence of medium/high-risk alcohol, tobacco, and cannabis use and disordered eating compared to their counterparts without such conditions, even after adjusting for stress, depression, and anxiety. Although cisgender male, cisgender female, and TGNC students with OCD conditions were all at greater risk for eating disorders, TGNC students exhibited the highest predicted risk probability, while cisgender females had the greatest comorbidity risks. The study underscores the unique challenges posed by the college environment and life stage, in which substance use and disordered eating behaviors are amplified and may predispose young adults with OCD to developing substance use and eating disorders. The heightened risk for eating disorders emphasizes the significance of comprehensive mental health screening and support on college campuses. Our findings contribute valuable insights into the underexplored intersection of OCD, substance use, and disordered eating risks among college students, highlighting the imperative for future research to unravel the intricate sex/gender differences in the biopsychosocial mechanisms underlying these associations.
|
Study
|
biomedical
|
en
| 0.999998 |
PMC11694963
|
Obesity is a clinical condition characterized by increased body weight resulting from excessive fat accumulation. It is influenced by genetic, biological, environmental, and sociocultural factors [ 1 – 3 ]. This disorder is associated with the development of several chronic diseases, including metabolic syndrome, dyslipidemia, insulin resistance, cardiovascular diseases, type 2 diabetes mellitus, osteoarthritis, sleep apnea, infertility, cancer, and psychological problems . The global prevalence of this disease doubled between 1990 and 2022, and more than 1 billion people worldwide are now living with obesity ; in Brazil, about 20% of the adult population has obesity . This alarming increase in obesity prevalence is partially due to increased consumption of ultra-processed and calorie-rich foods combined with sedentarism . Very low-calorie diets (VLCDs, less than 800 kcal/day) have been proposed as a valid treatment option for grade II and III obesity, with or without comorbidities . This diet involves altering and optimizing energy metabolism, stimulating the production of ketone bodies by the liver through the breakdown of fat, inducing weight loss, and improving other parameters such as insulin sensitivity and glycemic control [ 10 – 12 ]. Owing to the multiple interactions of causal factors, treating obesity requires an interdisciplinary and integrated therapeutic approach focused on lifestyle changes. In this scenario, hospitalization emerges as an interesting therapeutic strategy, offering a multidisciplinary intervention that successfully achieves weight loss . This study aimed to evaluate the effectiveness of treating severe obesity (grades II and III) in hospitalized patients, using VLCD and clinical support to develop lifestyle changes. This retrospective cohort study was conducted using secondary data from the medical records of patients hospitalized in a controlled environment for weight loss from October 2016 to October 2022. It employed an interventional pre-post design comparing exposure variables in a secondary data analysis. The research was conducted at a Brazilian hospital specialized in obesity treatment. The sample size was convenience-based, initially comprising 1,151 individuals with severe obesity hospitalized for 3 and/or 6 months. The inclusion criteria were age over 12 years, obesity grade II or III upon admission, and at least 3 months of inpatient treatment. No patients with recent weight-loss surgery or gastric bypass were among those admitted for obesity treatment at the hospital during the period of this study. We excluded 295 patients without data from laboratory tests and/or bioimpedance. We included 856 patients in the analysis. Of these, 323 presented complete data at 3 and 6 months, 454 presented complete data at 3 months alone, and 79 presented complete data at 6 months alone. Therefore, the analysis included 777 patients with data available at 3 months and 402 with data available at 6 months. Data were imported directly from the electronic medical records of each patient into an Excel table and then converted directly into SPSS format using SPSS software ver. 29.0.1.0 (IBM Corporation, New York, USA) for statistical analysis. Data were accessed from the electronic medical records from July 31, 2023 for research purposes. The authors had no access to information that could identify individual participants during or after data collection.
The treatment involved a multidisciplinary approach with key components including low-calorie diets (LCDs) and VLCDs, physical activity, individual cognitive behavioral therapy (CBT), participation in educational groups aimed at lifestyle changes, and multidisciplinary clinical support. Patients received LCDs (800 to 1,200 kcal/day) or VLCDs (less than 800 kcal/day) during hospitalization. Both diets provided a higher percentage of protein (70 to 100 g/day, or 0.8 to 1.5 g/kg of ideal body weight/day) and a low carbohydrate content. For the VLCD, 50% of the total energy value came from carbohydrates, 20% from lipids, and 30% from proteins, while for the LCDs, 40% came from carbohydrates, 20% from lipids, and 40% from proteins. These diets were supplemented with vitamins, minerals, electrolytes, and essential fatty acids to ensure adequate nutrition, following Brazilian guidelines for treating obesity . As patients were receiving inpatient treatment, every meal was precisely prepared and administered as prescribed by each patient’s nutritionist, and acceptance was monitored in the dining room. There were weekly sixty-minute consultations with nutritionists for diet therapy guidance and assessment of difficulties in accepting food, complemented by monthly assessment using bioimpedance and biochemical tests. Patients engaged in lower-impact physical activities due to frequent arthropathies, including water aerobics exercises at least thrice a week, horizontal biking twice a week with a light load, and weight training thrice a week. The patients underwent individual one-hour CBT sessions with a psychologist twice weekly. Patients also participated in educational group activities, using active methodologies and experiences aimed at lifestyle changes, consisting of daily one-hour meetings . Daily clinical evaluations were conducted, with periodic consultations by endocrinologists, cardiologists, orthopedists, and psychiatrists. A team of physiotherapists, occupational therapists, and nurses assessed and monitored each patient’s condition, providing necessary interventions in their respective areas. All patients had their weight and height measured and underwent bioimpedance testing upon admission and at 3 and/or 6 months. Height was measured using a stadiometer (Tonelli Medical Devices, Brazil), and weight was measured using a bioelectrical impedance device. Body composition was assessed using a 3-frequency bioelectrical impedance device (5 kHz, 50 kHz, and 500 kHz; Ottoboni InBody 570), utilizing a tetrapolar system with eight points (tactile electrodes) to obtain 15 impedance measurements of each of the five body segments (right arm, left arm, trunk, right leg, and left leg). Obesity severity was assessed using body mass index (BMI): grade I, if BMI was between 30 and 34.9 kg/m²; grade II, between 35 and 39.9 kg/m²; and grade III, ≥40 kg/m². Measurements of gamma-glutamyl transferase (GGT), blood glucose, triglycerides, total cholesterol, and high-density lipoprotein cholesterol (HDL) were performed using the enzymatic colorimetric method; low-density lipoprotein cholesterol (LDL) was calculated using the Friedewald formula. Serum zinc levels were measured using the flame atomic absorption spectrometry method. Electrochemiluminescence was used to measure ferritin and basal insulin levels. Oxaloacetic (GOT) and pyruvic (GPT) transaminases were measured using the UV-kinetic method; creatine phosphokinase (CPK) by the UV method; glycated hemoglobin (HbA1c) by turbidimetric inhibition immunoassay; and C-reactive protein (CRP) by immunoturbidimetry.
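Two of the derived measures above are simple closed-form calculations: the BMI-based obesity grades and the Friedewald estimate of LDL cholesterol. A minimal sketch (assuming concentrations in mg/dL; the Friedewald formula is not valid when triglycerides exceed roughly 400 mg/dL):

```r
# Minimal sketch: obesity grading from BMI and Friedewald LDL estimation.
obesity_grade <- function(weight_kg, height_m) {
  bmi <- weight_kg / height_m^2
  # BMI < 30 falls outside the obesity grades and returns NA here
  cut(bmi, breaks = c(30, 35, 40, Inf), right = FALSE,
      labels = c("grade I", "grade II", "grade III"))
}

friedewald_ldl <- function(total_chol, hdl, triglycerides) {
  total_chol - hdl - triglycerides / 5   # all values in mg/dL
}

obesity_grade(130, 1.70)       # BMI ~45 -> "grade III"
friedewald_ldl(210, 45, 150)   # 135 mg/dL
```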
All tests were performed using an automated device and in the same laboratory. The bioimpedance parameters were described as median and interquartile interval. To compare values measured at admission with those observed after 3 and 6 months of hospitalization, the Wilcoxon test was used. The percentage change in each bioimpedance parameter was calculated by subtracting the value measured at admission from that obtained at 3 and 6 months of hospitalization, respectively. This difference was divided by the admission value and multiplied by 100. These percentage values were compared between men and women and between elderly (≥60 years) and non-elderly patients (<60 years) using the Mann-Whitney test. Laboratory measurements on admission and at 3 and 6 months of hospitalization were compared using the Wilcoxon test. Kaplan-Meier curves were used to compare the time to reach 20% weight loss and 35% fat mass reduction between male and female patients and between elderly and non-elderly patients. To evaluate the factors associated with success in reaching the median values of fat mass loss at 3 and 6 months of hospitalization, a multivariate logistic regression was performed: the dependent variable was reaching the median percentage of fat mass loss in each period of hospitalization (17% and 35% at 3 and 6 months of hospitalization, respectively). The independent variables were included in the model using the “forward conditional” method; the following initial dummy variables were used in the analysis: age ≥ 60 years, male sex, diabetes mellitus, current smoking, drinking habits, hypothyroidism, hepatic steatosis, sedentary lifestyle, and altered admission levels of glycated hemoglobin (> 5.6%), zinc (< 69.93 μg/dL), CRP (> 6 mg/L), and CPK (> 200 U/L). A significance level of 5% was adopted. This research was conducted following the principles of bioethics in accordance with resolution 466/2012 (CONEP/Brazil) and met aspects related to the Declaration of Helsinki. The Research Ethics Committee of the State University of Bahia (UNEB) approved the project with CAAE number 65578822.1.0000.0057. Informed consent could not be obtained due to the retrospective design of this analysis, as patients were no longer available to sign consent forms; the Research Ethics Committee approved this waiver. Table 1 presents the clinical and epidemiological data by length of hospitalization. Women were the majority (70%), with a median age of approximately 44 years, not differing significantly between the groups by length of hospitalization. Regarding lifestyle habits, smoking, drinking habits, and a sedentary lifestyle were prevalent and similar between the two groups. The prevalences of diabetes mellitus, hypertension, hypercholesterolemia, and hypertriglyceridemia were high and did not differ significantly between the groups by length of hospitalization. Hepatic steatosis, sleep apnea, and depression were prevalent in both hospitalization periods, not differing between them. Patients with 6 months of hospitalization had a higher percentage of the most severe obesity (grade III) than patients who remained hospitalized for 3 months (75.9% vs. 62.9%, respectively) ( Table 1 ). After 3 months of hospitalization, body weight, BMI, fat mass, fat percentage, skeletal muscle mass, basal metabolic rate, and waist-to-hip ratio (WHR) reduced significantly ( Table 2 ).
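The percentage-change calculation and the two nonparametric comparisons described in the methods above are straightforward; a minimal sketch with hypothetical column names:

```r
# Minimal sketch: percentage change from admission, paired within-patient
# comparison (Wilcoxon signed-rank), and between-group comparison of the
# percentage changes (Mann-Whitney, i.e., unpaired Wilcoxon).
pct_change <- function(admission, followup) {
  100 * (followup - admission) / admission
}

# Admission vs. 3-month values within patients
wilcox.test(df$fat_mass_adm, df$fat_mass_3m, paired = TRUE)

# Sex comparison of percentage changes (independent groups)
df$fat_pct_change <- pct_change(df$fat_mass_adm, df$fat_mass_3m)
wilcox.test(fat_pct_change ~ sex, data = df)
```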
Patients hospitalized for 6 months demonstrated an even greater reduction in all analyzed bioimpedance parameters, doubling the results from 3 months of hospitalization ( Table 2 ). After 3 months of hospitalization, men exhibited a higher percentage of reduction than women in the following bioimpedance parameters: weight, BMI, fat mass, body fat percentage, and WHR. However, for basal metabolic rate and skeletal muscle mass, men exhibited a lower reduction than women ( Table 3 ). Similarly, but to a greater extent, at 6 months of hospitalization, men exhibited higher percentages of loss in weight, BMI, fat mass, body fat percentage, and WHR. Conversely, women had higher rates of loss of skeletal muscle mass and basal metabolic rate ( Table 3 ). The Kaplan-Meier survival curves compare males and females on the time to reach reductions of 20% in weight and 35% in fat mass . Males reached these weight and fat mass reductions earlier than females. After 3 months of hospitalization, the elderly (≥ 60 years) demonstrated a lower percentage of loss in weight, BMI, fat mass, body fat percentage, and WHR than the non-elderly. The percentage of skeletal muscle mass loss, but not of basal metabolic rate, was higher in elderly than in non-elderly patients ( Table 4 ). Similarly, after 6 months of hospitalization, the elderly had a lower percentage of loss in weight, BMI, fat mass, body fat percentage, and WHR. Contrary to the observation after 3 months, elderly individuals had a higher percentage of loss of skeletal muscle mass and basal metabolic rate at 6 months of hospitalization ( Table 4 ). Three months of hospitalization yielded a significant reduction in the levels of fasting glucose, insulin, and glycated hemoglobin, as well as triglycerides, HDL, LDL, and total cholesterol. Regarding liver injury and inflammatory markers, CRP, ferritin, and GGT levels significantly reduced after 3 months of treatment. However, GPT increased slightly while GOT remained unchanged during this period of hospitalization ( Table 5 ). Similarly, but to a greater extent, after 6 months of hospitalization, almost all parameters evaluated reduced significantly, including fasting glucose, insulin, glycated hemoglobin, triglycerides, LDL, and total cholesterol. However, HDL levels remained unchanged. Regarding liver injury and inflammatory parameters, a significant reduction was observed in the values of GGT, GPT, CRP, and ferritin. Contrary to the observation at 3 months, GOT levels increased within the normal range ( Table 5 ). The Kaplan-Meier survival curves compare non-elderly and elderly patients on the time to reach reductions of 20% in weight and 35% in fat mass . Elderly patients reached these weight and fat mass reductions later than non-elderly patients. Table 6 presents the multivariate logistic regression of the factors, measured at admission, associated with success in reaching the median percentage of fat mass loss during hospitalization. After three months, male sex, drinking habits, and CPK above the normal range were associated with higher odds of reaching the median percentage of body fat mass loss. An inverse association was observed with elderly age, a C-reactive protein level above the normal range, and a zinc level below the normal range ( Table 6 ). At six months of hospitalization, some of these predictors lost significance, with male sex, hepatic steatosis, and elderly age remaining significant.
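The Kaplan-Meier comparisons described above treat attainment of a 20% weight reduction (or 35% fat mass reduction) as the event of interest; a minimal sketch with the survival package and hypothetical column names:

```r
# Minimal sketch: time to a 20% weight reduction, compared between elderly
# and non-elderly patients. `time_to_20pct` is days of hospitalization until
# the event (or censoring); `reached` is the 1/0 event indicator.
library(survival)

km <- survfit(Surv(time_to_20pct, reached) ~ elderly, data = df)
plot(km, xlab = "Days of hospitalization",
     ylab = "Proportion not yet at -20% weight")

survdiff(Surv(time_to_20pct, reached) ~ elderly, data = df)  # log-rank test
```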
Similar to the results at 3 months, elderly people had lower odds of reaching the median percentage of body fat mass loss at 6 months, while men had higher odds of reaching this loss. Alcohol consumption and altered glycated hemoglobin, CPK, C-reactive protein, and zinc levels were no longer significant predictors of body fat mass loss at 6 months (Table 6).

Most hospitalized patients were women, reported drinking habits, and had a sedentary lifestyle. The most prevalent comorbidities were hypertension, diabetes, hypercholesterolemia, hypertriglyceridemia, hypothyroidism, coronary artery disease, sleep apnea, hepatic steatosis, and self-reported depression. These data did not differ between the 3- and 6-month hospitalization groups. The group that underwent 6 months of treatment had a higher percentage of patients with grade III obesity, which partly explains the longer hospital stay in this group. In a study of patients with grade III obesity who underwent outpatient dietary treatment in Rio de Janeiro, Brazil, most participants were women (76.3%) and adults (average age of 45 years), similar to this study. However, that study reported a higher prevalence of hypertension (80%) and diabetes (46%) and a lower prevalence of drinking habits (37%), sleep apnea (31%), and dyslipidemia (16%) compared with the patient profile of this study. This study is based on a mixed Brazilian population of predominantly African (50.5%) and European (42.4%) ancestry, with a smaller Native American component (5.8%).

Both groups revealed significant changes in body composition parameters throughout each period, with greater improvement in these variables after 6 months of hospitalization. After 3 months, we observed a significant decrease in weight, BMI, fat mass, body fat percentage, skeletal muscle mass, basal metabolic rate, and WHR. These results indicate a positive response to short-term treatment, suggesting the effectiveness of hospitalization in promoting weight loss and improving body composition. This finding aligns with a study that investigated the effects of short-term multidisciplinary interventions in patients with obesity and demonstrated significant reductions in fat mass, percentage of body fat, and BMI. Skeletal muscle mass and basal metabolic rate also decreased significantly, but at a magnitude approximately six times smaller than the fat mass loss (1.6 kg vs. 10.9 kg, respectively), even at six months of treatment. Patients hospitalized for 6 months demonstrated greater changes in body composition measured by bioimpedance. The magnitude of these changes, especially in the loss of weight (men: 23.6%; women: 20.4%) and fat mass (men: 45.3%; women: 31.3%), suggests that 6 months of hospitalization doubles the reduction in these parameters, with an approximately eight times smaller loss of skeletal muscle mass.

In both periods, men exhibited higher percentages of loss in weight, fat mass, percentage of body fat, and WHR, in addition to a smaller loss of skeletal muscle mass, compared with women. These results corroborate previous findings demonstrating that, over short intervention periods, men experience a more pronounced response in weight reduction with greater preservation of skeletal muscle mass. In addition to the differences in bioimpedance parameters previously discussed, the unequal reduction in WHR between men and women was notable in both hospitalization periods: men demonstrated a significantly greater reduction (almost double) in WHR.
This observation is consistent with previous findings that men have a more pronounced loss of visceral fat and, consequently, a greater reduction in WHR. A reduced WHR is associated with cardiovascular benefits and fewer health-related adverse events. Epidemiological studies highlight that a lower WHR reduces the risk of cardiovascular diseases, hypertension, and type 2 diabetes. This association lies in the direct relationship between WHR and visceral fat and, consequently, with the insulin resistance and inflammation caused by excess visceral fat, factors that play crucial roles in increasing cardiovascular risk. Despite the different levels of reduction in WHR, both sexes improved this measure, suggesting that both men and women experience the cardiovascular benefits resulting from these reductions.

Regarding elderly and non-elderly patients, a reduction in weight, BMI, fat mass, body fat percentage, and WHR was observed in both periods. However, non-elderly patients presented better results than elderly patients. A previous study also evaluated the outcomes of treating severe obesity in an inpatient setting in elderly and non-elderly patients. Their results are consistent with ours: although both groups obtained significant results, non-elderly patients exhibited greater reductions in BMI, weight, and waist circumference than elderly patients. The aging process naturally modifies body composition, which can lead to an increase of 2 to 5% in total fat mass each decade from the age of 40, in addition to a reduction in muscle mass. In this study, the differences in skeletal muscle mass loss and basal metabolic rate reduction between age groups were not significant during the 3-month hospitalization. However, during the 6-month hospitalization, elderly patients exhibited a greater loss of skeletal muscle mass and, consequently, a greater reduction in basal metabolic rate. This result underscores the need for different strategies and nutritional protocols for the elderly to minimize this loss of muscle mass during a longer hospitalization.

The relationship between LCD and weight loss, and its effects on glucose, lipid, and inflammatory parameters, is well documented in the scientific literature (14). In this study, the reductions in blood glucose, insulin, glycated hemoglobin, triglycerides, LDL and total cholesterol, and CRP were significant in both hospitalization periods. A small reduction in HDL cholesterol was observed only in the short term and was not maintained after 6 months of hospitalization. These results suggest that inpatient treatment improves the metabolic and inflammatory profile of patients with severe obesity, with implications for reducing cardiovascular risk. Reducing body weight by 5 to 10% improves cardiovascular health, as abnormal lipid parameters tend to decrease significantly within this range of weight loss. The non-significant long-term reduction in HDL cholesterol can be partly explained by the progressive increase in physical activity during hospitalization, associated with improved eating patterns. Therefore, multidisciplinary therapy aimed at lasting changes in lifestyle habits is crucial. Levels of the hepatic enzyme GGT decreased significantly throughout both hospitalization periods. GPT levels increased slightly in the first 3 months but decreased at 6 months.
Despite not changing significantly in the first 3 months of hospitalization, GOT levels increased slightly at 6 months. This increase may be related to the weight loss during hospitalization; values remained within the normal range and had no clinical implications. Fasting blood glucose, insulin, and glycated hemoglobin levels were significantly reduced in this study, indicating an improvement in glycemic control and insulin resistance and corroborating previous reports. This improvement in glucose parameters demonstrates that this treatment not only reduces weight and body fat but also achieves positive metabolic results and removes patients from the pre-diabetes classification. In both hospitalization periods, the levels of the inflammatory markers CRP and ferritin were also significantly reduced. Reducing weight and body fat mass lowers the levels of free fatty acids that activate the pro-inflammatory cascade; it also increases adiponectin levels, reducing CRP production by decreasing the release of interleukin-6 by adipose tissue. Elevated ferritin is an important marker of chronic inflammation, cardiovascular risk, and insulin resistance and is associated with inflammation in adipose tissue. Therefore, the reduction in ferritin and CRP levels observed in this study reflects an improvement in the inflammatory parameters of patients with severe obesity.

A multivariate logistic regression analysis was conducted to identify the factors, measured at admission, associated with reaching the median percentage of fat mass loss at 3 and 6 months of treatment. The model explained approximately 33.5% and 28.7% of the variation in reaching such levels of fat mass loss after 3 and 6 months of treatment, respectively. Male sex was a significant predictor of the percentage of fat mass loss at both treatment times. Previous reports have indicated greater weight and fat loss in men who underwent interventions to treat obesity. A higher percentage of lean mass in men, the greater amount of estrogen produced by women, and differences in insulin resistance between the sexes may explain these different responses to treatment. Another significant predictor was age: elderly people had lower odds of reaching the median fat mass loss percentage in both hospitalization periods, corroborating previous data. Metabolic changes inherent to advancing age, associated with reduced muscle mass and strength, may explain this negative association.

Alcohol consumption reported on admission was associated with higher odds of reaching the median percentage of fat mass loss in the first 3 months of hospitalization but not at 6 months. Upon admission, patients stop drinking alcoholic beverages, contributing to a reduction in caloric intake and, consequently, a reduction in fat mass in the short term. Hepatic steatosis was associated with higher odds of reaching the median percentage of fat mass loss at six months of hospitalization. As the liver is one of the main metabolic regulatory organs, hepatic steatosis indicates an altered and compromised metabolism, which seems to respond better to six months of inpatient treatment. Patients with high CPK levels on admission had higher odds of reaching the median fat mass loss in the first 3 months of hospitalization. CPK is an enzyme found in muscle cells, with only small amounts released into the bloodstream [ 39 – 42 ]. However, high body fat levels can generate changes in the cell membrane, increasing circulating CPK levels.
As the main source of CPK is muscle tissue, the levels of this enzyme may reflect muscle mass, which would justify its association with a better response in body fat loss; however, more studies are necessary to verify this association. CRP level at admission was a negative predictor of fat mass loss in this study. CRP is an important indicator of subclinical systemic inflammation in individuals with severe obesity. Therefore, the greater difficulty in losing fat mass in patients with higher CRP levels could be explained by a worse inflammatory profile resulting from more dysfunctional adipose tissue; however, more studies are needed to address this question. A zinc level below the normal range was associated with lower odds of reaching the median fat mass reduction at 3 but not at 6 months of treatment. Zinc plays an important role in appetite control, has anti-inflammatory action, and is involved in the production of hormones associated with energy metabolism, insulin resistance, and diabetes. Therefore, the zinc association reported herein could be explained by its protective effects against insulin resistance, chronic inflammation, and hyperglycemia, leading to greater fat mass loss. The loss of this effect at 6 months may reflect supplementation of this micronutrient during hospitalization, correcting possible deficiencies and their long-term effects.

Three and 6 months of inpatient treatment substantially reduced anthropometric measurements in severe obesity, with 6 months yielding better results: approximately 20% reduction in weight, 31% in fat mass, and 6% in WHR. Furthermore, glucose, lipid, and inflammatory profiles significantly improved in both treatment periods. These results held for elderly and non-elderly patients and for men and women, resulting in an improvement in cardiovascular risk and reversal of the pre-diabetes classification. This study suggests that a VLCD in an inpatient facility with immersive lifestyle changes under multidisciplinary supervision is an alternative and effective intervention for managing severe obesity in the real world.
PMC11694964
While chemicals can significantly improve lives and productivity, the strategies used by chemical corporations to influence regulations and practices in order to increase profitability can negatively impact both human health and the environment. To better understand how corporate strategies can increase profitability to the detriment of public health, it is essential to investigate how the scientific literature addresses corporate influence on shaping scientific knowledge, public policy discourse, and public discussion. No systematic review yet exists that analyzes ongoing problematic corporate strategies in the chemical sector. Internal corporate documents offer crucial insights into the strategies employed by corporations to influence scientific knowledge, regulation, and practices. They often serve as unique and irreplaceable sources of evidence of corporate activities in pursuit of strategic goals [ 1 – 4 ]. These documents can shed light on the deliberate use of strategies detrimental to public health for financial gain. Given that corporate interests have traditionally relied heavily on secrecy, and that much of the information regarding the potential risks of products is often treated as confidential business information, accessing and disclosing company documents, typically obtained through litigation, can be highly challenging.

We utilize the conceptual framework of ghost management as articulated by Marc-André Gagnon and Sergio Sismondo [ 6 – 8 ], as well as Ulrich Beck’s conceptualization of the risk society, to guide our analysis of corporate influences in the chemical industry. The notion of ghost management was developed by Sismondo to characterize the systematic use of tactics by pharmaceutical companies to mold medical knowledge and practices. Gagnon extends this conceptualization to encompass corporate strategies that exert influence over other strategic aspects of business success, such as market power, regulatory capture, and technological path dependency. In alignment with Miller and Harkins’ delineation of four categories associated with corporate capture, originally devised for the alcohol lobby, Gagnon delineates seven distinct ghost management categories: 1) scientific capture (pertaining to influence on knowledge production); 2) professional capture (concerned with influence on healthcare practices); 3) technological capture (focused on steering technological pathways); 4) regulatory capture (centered on shaping laws to serve commercial interests); 5) market capture (aimed at establishing market dominance or constraining competition); 6) media capture (addressing influence over media institutions); and 7) civil society capture (pertaining to influence over charities, non-governmental organizations, trade unions, and civil society groups). Gagnon and Dong’s classification of ghost management capture encompasses seven categories, yet it might not cover all possible scenarios; any corporate strategies not falling within these seven categories can be placed under an “other” category for further analysis. Diverging from Carpenter and Moss, our approach adopts a more expansive conceptualization of “capture”, denoting the objective of ghost management strategies without necessarily implying their absolute success. The profitability of companies is traditionally seen as a sign of their good performance in markets, giving way to the suggestion that profits are compensation for a positive contribution to innovation, wealth, or well-being.
However, the concept of ghost management challenges this view, revealing how corporations deliberately and systematically intervene at different levels of societal structures to increase their profitability, often to the detriment of public health and welfare. Building on the works of Thorstein Veblen, which consider capital a predatory force exerted on the social system, the concept of ghost management allows a more refined understanding of the ways these forces are deployed. In particular, Ulrich Beck’s conceptualization of the risk society can be helpful for understanding these dynamics. For Beck, with the technological advances of the 20th century, the economic system does not only produce “goods” (wealth and well-being); it also produces “evils” (or risks). New products and innovations do not only contribute to well-being; they also often come with potential risks and negative externalities. While “goods”, in their traditional sense, are self-evident (a physical product, or a service we benefit from), “evils” or “risks” are not. In order to exist, a risk must be determined through scientific research. Risks are not self-evident; they are social constructions. They must be defined, characterized, and managed through the lenses of socio-political structures: “While such things as income and education are consumable goods that can be experienced by the individual, the existence of and distribution of risks and hazards are mediated on principle through argument” (p. 27). Because many stakeholders can have interests in keeping some risks hidden, or in inflating the importance of other risks, the social existence (or not) and regulation (or not) of risks has less to do with scientific evidence and more to do with socio-political struggles: “The social effect of risk definitions is not dependent on their scientific validity” (p. 32). The question of who decides what is or is not a risk becomes central, and the way risks are defined and managed reflects existing power relations. For Beck, risks “are products of struggles and conflicts over definitions within the context of specific relations of definitional power” (p. 30).

Since, for specific products, the existence of risks is mediated through socio-political debates and arguments, the production of “good arguments” can become central to the business success and earning-capacity of corporations. As Beck predicted in 1988 (26): “Argumentation craftsmen have sunny days ahead” (p. 32). The concept of ghost management emphasizes the fact that businesses are invested not only in producing value or wealth, but also in producing arguments and habits of thought that will shape the social determinants of value. The earning-capacity of the corporation depends less on the production of a product than on the production of the belief that the product is necessary or safe, or on the production of socio-economic institutions favourable to the product, such as regulations or dominant narratives. Ghost management can be used to conceal established risks, hinder the identification of new risks, and obstruct regulatory actions.

Taking inspiration from the pharmaceutical scoping review conducted by Gagnon and Dong in 2022, our study performs a scoping literature review of previous scientific investigations that utilized internal chemical industry documents. This approach aims to shed light on hidden practices within corporations, fostering a better-informed public and supporting the implementation of effective regulations.
This scoping review investigates how chemical companies further their corporate interests by examining what the scientific literature has uncovered through the use of internal documents. In contrast to Wieland et al., who primarily map scientific articles across various industries, our goal is more expansive: we seek to systematically categorize insights from these papers, identifying tactics and strategies. Additionally, we compare the previously completed pharmaceutical scoping review with this new work on the chemical industry, highlighting parallels and distinctions in the corporate strategies of each sector that have been identified in the scientific literature. This review marks the initial phase of a broader research endeavor, outlining corporate strategies in the chemical sector and establishing a framework for future studies in other industries. Our categorization and theorization framework for ghost management in the chemical industry will become a valuable tool for subsequent case studies.

Scoping reviews offer a robust tool to comprehensively map existing literature, effectively summarize the scope of current studies, elucidate known findings, and identify areas that warrant further investigation. The primary focus of our research for this scoping review centers on the use of internal corporate documents within the scientific literature to investigate business strategies employed by chemical companies. To explore this subject comprehensively, we delineated specific areas of inquiry. Our goal is to build a comprehensive understanding of the way internal corporate documents have been used in academic research to illuminate diverse dimensions of chemical companies’ operations and ghost management strategies. By revealing these dynamics, we seek to provide valuable insights into the intricate interplay between the chemical industry and the broader societal framework. Moreover, this research aims to pinpoint potential domains warranting regulatory and policy interventions to safeguard public interests.

In seeking scientific articles or book chapters that use internal corporate documents to reveal aspects of chemical companies’ operations and covert ghost management strategies, we applied three main criteria for inclusion: peer-reviewed status, explicit citation of internal company documents as the primary data source, and exploration of corporate captures within chemical companies. We adhere to the internal-document definition articulated by Gagnon and Dong, characterizing them as corporate/industry documents normally not accessible to the public and typically obtained through court orders, leaks, or whistleblowers. Documents initially intended for public access, like those on corporate websites, were excluded. This broad definition was intentionally chosen to encompass various analyses of chemical ghost management utilizing internal corporate/industry documents. Additionally, selected articles needed to examine one of the eight established ghost management strategies outlined in our theoretical framework or any new strategies employed by chemical companies. In our search for scientific literature incorporating internal chemical corporate documents, we initially employed broad keywords across 28 databases.
We also used keywords from case studies of companies such as Monsanto, Dow AgroSciences, Adama, Ciba-Geigy, Sandoz, Astra, ICI, Bayer, Schering, Hoechst, Rhone Poulenc, Rohm & Haas, Eli Lilly, Dow Chemical, DuPont, BASF, ACC/American Home Products, Novartis, AstraZeneca, AgrEvo, Syngenta, Aventis CropScience, Bayer CropScience, ChemChina, and Corteva, where internal corporate documents were made publicly available through litigation. We extracted 351 academic papers with our search keywords and reviewed their titles, abstracts, and keywords to eliminate duplicates, excluding 308 articles that did not meet our screening criteria. We then performed detailed coding and screening on the remaining 43 articles. During this process, we identified 17 additional scientific articles using the snowball sampling method to complement our search efforts. Subsequently, we coded 60 scientific articles to identify ghost management strategies employed by chemical companies.

To conduct a comparative analysis with Gagnon and Dong’s 2022 pharmaceutical scoping review, we employed the identical Excel pivot table template available on Dataverse ( https://borealisdata.ca/dataverse/Ghost management ). Our workbook has been made accessible on Dataverse to facilitate public access and enable future researchers to seamlessly apply the same analytical framework to their respective case studies. Specifically, one team member manually entered coding information for 18 articles published between 2001 and 2024 [ 1 , 21 – 37 ]. The workbook comprises 12 sheets, including a summary page with the corporate capture outline, explanations of the eight ghost management captures based on our theoretical framework, a summary of case studies, sources for data collection, and an overview of the research methodology. In the summary sheet, we denote the presence of each capture with Y or N. Additional concerns identified during separate coding by another researcher are documented in the MAG comments.

One team member initiated the scoping review search on December 19, 2022. To validate the results and capture new publications, another researcher conducted a matching scoping review search on May 13, 2024. All extracted scientific articles were published prior to May 13, 2024. Subsequently, one researcher coded all articles between February 1, 2023, and May 20, 2024, while another researcher conducted a separate coding on May 30, 2024. The two researchers engaged in discussions to resolve divergent coding decisions. This study did not involve patient inclusion or participation, and ethical approval was unnecessary, as the study did not entail human participants or research on animals.

Of the 60 coded papers on chemical industry ghost management captures, 42 were excluded for various reasons. The exclusion criteria covered articles that did not explicitly examine internal documents, relied solely on practitioner surveys, cited secondary sources examining internal corporate documents, did not scrutinize the chemical industry or its products directly, or failed to discover or examine covert corporate captures. To ensure inclusion in the final results, articles underwent an independent review by two researchers, with inclusion contingent upon unanimous agreement that all search criteria were met. In cases of discrepancies during independent reviews, the researchers addressed them collaboratively over a Zoom discussion.
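As an illustration of how agreement between two coders' Y/N decisions could be quantified before such a discussion, the following R sketch computes Cohen's kappa. The toy vectors are invented, and kappa is shown only as an illustrative check; it is not a statistic used in this review.

```r
# Two coders' hypothetical Y/N decisions for the same set of capture codes
coder1 <- c("Y", "N", "Y", "Y", "N", "Y", "N", "N")
coder2 <- c("Y", "N", "Y", "N", "N", "Y", "N", "Y")

tab <- table(coder1, coder2)
p_obs <- sum(diag(tab)) / sum(tab)                      # observed agreement: 0.75
p_exp <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement: 0.50
kappa <- (p_obs - p_exp) / (1 - p_exp)                  # Cohen's kappa: 0.50
kappa
```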
Ultimately, 18 scientific articles [ 1 , 21 – 37 ] were included in the final analysis, and all ghost management chemical corporate captures from these articles are synthesized in Table 1. Our curated scoping review includes articles focused on Monsanto, with seven discussing glyphosate-based herbicides, one on polychlorinated biphenyls (PCBs), and another on neonicotinoids [ 26 – 28 , 33 , 35 , 36 , 38 ]. Additionally, there are four articles on asbestos [ 21 – 24 ], three on per- and polyfluoroalkyl substances (PFAS) or Teflon, one on greenwashing, one analyzing McKinsey’s role in agribusiness transformation, one on GMO salmon, one on agri-chemicals, and another shedding light on collaborative efforts between the United States and Dow to hinder European regulatory initiatives.

In contrast to the pharmaceutical industry scoping review, the examination of the chemical industry reveals a broader range of research methods. Notably, historical analysis, interviews, and archival research stand out as noteworthy methodologies, providing valuable supplements to theoretical frameworks. Richter et al.’s literature review holds particular significance. All 18 papers employed qualitative analysis [ 1 , 21 – 37 ]. Five articles exclusively utilized qualitative analysis, while the remaining 13 employed multiple methods, including content analysis [ 1 , 25 , 27 – 29 , 32 – 37 ], historical analysis, interviews, archival research, and quantitative methods. Among the 18 scientific articles [ 1 , 21 – 37 ], 15 used internal chemical corporate documents initially obtained via litigation [ 1 , 21 – 28 , 31 – 33 , 35 – 37 ]. In contrast to the pharmaceutical scoping review, our analysis indicates a trend of studies using leaked memos as their primary investigative source. Seven papers employed leaked memos, including personal communications, meeting minutes, and speech transcripts, to examine chemical corporate captures. Fifteen papers used multiple sources in their investigations [ 1 , 21 – 28 , 31 – 33 , 35 – 37 ]. Eight incorporated archival documents [ 1 , 21 , 27 , 28 , 31 – 33 ], three examined corporate public documents such as financial records and marketing materials, while two relied on other academic literature as secondary sources. One paper referenced policy documents, and another included interviews. Krimsky and Gillam augmented their research with contributions from investigative journalists and legal professionals, suggesting alternative avenues for valuable insights beyond legal proceedings.

All resources and archival sites are documented in S3 Appendix for easy access in future research. We have compiled these channels for accessing internal documents from chemical companies to facilitate and encourage further research endeavors. Notably, ToxicDocs has aggregated a substantial collection of internal documents from various chemical companies; three of the published articles we analyzed have drawn directly from the internal documents available on toxicdocs.org.

After analyzing the 18 articles [ 1 , 21 – 37 ], we identified eight distinct ghost management strategies employed by chemical companies to shape public perception. These strategies involve diverse methods aimed at influencing experts, manipulating regulatory frameworks, engaging with policymakers, and molding media and cultural narratives in alignment with corporate interests. Table 1 summarizes the ghost management strategies identified from the analysis of these articles [ 1 , 21 – 37 ].
The subsequent sections provide a comprehensive exploration of each strategy, offering specific examples and insights to enhance our understanding of how chemical companies wield influence. This detailed examination aims to underscore potential implications for public perception and decision-making.

Sixteen scientific articles had a predominant focus on scientific capture, specifically examining how corporate funding in the chemical sector manipulates knowledge production [ 21 – 31 , 33 – 37 ]. The most investigated aspect within scientific capture was conflict of interest in chemical research, involving investigators, journals, and author interference, addressed in 14 articles [ 21 – 24 , 26 – 28 , 30 , 31 , 33 – 37 ]. Among these, seven articles delve into conflicts of interest with journalists and publishers [ 21 – 24 , 26 , 28 , 34 ], four explore conflicts of interest with authors [ 23 , 24 , 28 , 34 – 36 ], and five investigate conflicts of interest with investigators [ 21 , 23 , 24 , 30 , 34 – 36 ]. In addition, 11 articles explore non-disclosure and selective reporting, where unfavourable results were intentionally concealed [ 21 – 24 , 26 , 27 , 30 , 34 – 37 ], while eight highlight the prevalence of scientific articles written by ghostwriters paid by chemical corporations [ 23 , 26 , 28 , 29 , 34 – 37 ]. Eleven articles investigate how the strategic downplaying of negative results reflects a deliberate effort to mitigate the perception of risks associated with products [ 21 , 23 – 27 , 30 , 34 – 37 ]. Only one article delves into disease mongering in the chemical sector. Among the 16 articles [ 21 – 31 , 33 – 37 ], unique features of the chemical industry led to the creation of new subcategories for these captures. Nine articles examine how chemical corporations manufacture doubt [ 21 – 24 , 28 , 30 , 31 , 35 , 36 ], five study how these corporations discredit laypeople’s knowledge to undermine the reliability of research outcomes, and one paper focuses on discrediting unfavourable researchers. The overarching theme of scientific capture explores unseen and undone science, particularly the concealment of health effects within corporate-driven agendas. Tactics include efforts to augment publications favourable to corporate interests, suppress adverse findings, and silence unfavourable results.

Fifteen articles explored how chemical companies strategically influence or manipulate laws and regulations to prioritize corporate interests over public well-being [ 1 , 21 , 23 , 25 – 27 , 29 – 35 , 37 ]. Employing various tactics, the articles show how companies established safety standards aligned with their interests or created doubt and uncertainty over scientific evidence, effectively delaying legal proceedings, punishments, penalties, or regulatory actions. A prevalent pattern identified in 12 articles is the emergence of conflicts of interest with regulators, constituting a form of capture [ 1 , 21 , 23 , 25 , 26 , 30 – 35 , 37 ]. This capture dynamic is further examined in ten articles investigating lobbying and collaboration with trade associations as significant influencers of public policy [ 21 – 23 , 25 , 27 , 29 – 31 , 33 , 37 ]. Examining the theme of self-regulation, six articles highlight how chemical companies resist external regulations. Furthermore, eight articles shed light on how chemical corporations collaborated with the US government to influence public policies in other countries [ 21 – 23 , 30 , 31 , 35 , 37 ].
Notably, Monsanto leveraged resources to depoliticize biotechnology, and in other cases chemical companies collaborated with the American government to influence foreign regulations, employing misrepresentation and manipulation to hinder progress. Lastly, a discernible pattern is the strategic targeting of regulators by chemical companies, tactically influencing decision-making processes. Collectively, these findings illuminate multifaceted and recurrent efforts by chemical companies to wield influence over laws, regulations, and policies, often at the expense of public health and safety.

Seven articles reveal how chemical companies influence professionals to promote their products, cultivating intangible assets beneficial to corporate interests [ 1 , 21 , 23 , 27 – 29 , 34 ]. A recurrent strategy identified in the literature involves adapting messages and cultivating relationships: tailoring communications to resonate with professionals and emphasizing product benefits while downplaying potential risks. Simultaneously, articles explored how chemical corporations advised professionals in order to nurture relationships, fostering trust and loyalty. Key opinion leaders played a crucial role, with four articles identifying influential professionals enlisted to endorse products or advocate for corporate interests, significantly amplifying credibility and acceptance. Moreover, three articles detail how chemical companies extend influence through training, education, and the financing of research initiatives, conferences, or educational programs to shape knowledge and practices in alignment with corporate interests. Additionally, one article highlights the practice of providing gifts and bribes, where companies offer incentives to professionals in return for support, compromising objectivity and undermining professional relationships. Another article explores the strategic dissemination of advertising materials directed at professionals, leveraging diverse channels to directly promote products and foster familiarity and preference. Two articles investigate the discrediting of professionals and the issue of “hired guns”, and two also investigate smear campaigns employed to further discredit professionals. These observations highlight strategic maneuvers that have been employed by chemical companies to influence professionals, shaping perceptions and behaviors. Understanding these tactics fosters a nuanced recognition of potential conflicts of interest and biases within professional relationships and decision-making processes.

Six papers delve into the methods employed by chemical companies to sway various entities, including charities, NGOs, trade unions, social movements, and grassroots groups. Two papers specifically address conflicts of interest involving consumer groups, be it through the establishment of front groups or through financial backing of existing groups. Additionally, three papers shed light on how chemical companies dispute causation claims to evade compensating victims and their families; the articles detail how companies successfully challenged established links between their products and adverse health effects and ultimately sidestepped financial responsibility. Furthermore, two papers reveal instances of chemical companies undermining opposing organizations, including spying on environmental groups and fabricating fake memos against environmental organizations.
These findings unveil intricate strategies that have been employed by chemical companies to influence a diverse range of groups. Through the use of front groups or financial support, they manipulated narratives, downplayed risks, and advanced their interests.

Among the 15 papers analyzed [ 1 , 21 – 34 ], four explored how chemical companies use media and communication strategies to shape public and “elite” (more informed) opinion. These strategies involve targeted efforts and the discrediting of industry critics. One significant aspect of media capture was direct-to-consumer advertising: three articles examined how chemical companies engaged consumers directly through diverse marketing campaigns. For example, one article explored the push to replace the phrase “talc used in cosmetics” with “cosmetic talc”, highlighting the latter as a marketing construct. Moreover, two papers examined how chemical companies collaborated with journalists and fostered conflicts of interest, thereby exerting influence on information production, such as an individual affiliated with Forbes being employed by Monsanto. An analysis of these dynamics provides a nuanced understanding of how information is produced and disseminated, with potential influences from corporate interests.

Three articles delved into how chemical companies strategically use technological standards and the safeguarding of confidential business information (CBI) to exert influence. One article explores strategic patenting, while two others focus on the challenges posed by CBI. For instance, one article found that when chemical manufacturers label data as CBI, it becomes challenging for Environmental Protection Agency (EPA) scientists to access these data for research studies. Manufacturers can claim submitted information as CBI, limiting its availability to designated EPA offices and approved staff, with strict regulations on sharing within the agency or with other government entities. Industries incur no expenses when designating information as CBI, and there is limited oversight of, or penalty for, improper claims. However, as the article notes, the repercussions for mishandling or disclosing CBI data are severe, including criminal imprisonment and substantial fines. Considering that transforming agriculture into agribusiness can lead to significant corporate control over the intellectual property in the technology involved, one article included in our review detailed how the formal endorsement of the Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement by the World Trade Organization (WTO) and national-level policies like the Plant Variety Protection Act in the United States confer exclusive proprietary rights to breeders.

Two papers investigate how chemical companies gain market power, restrict competition, and establish dominance. One focuses on market concentration in the chemical sector, revealing a significant link between primary seed transactions and the use of Monsanto’s Roundup herbicide. The other explores conflicts of investment, exemplified by MetLife’s investments in mining companies producing asbestos and silica products. Through detailed scrutiny of annual reports obtained via litigation, this article highlights financial ties between MetLife and the chemical industry. Understanding these investment patterns provides insights into how chemical companies can fortify their market positions.
Four articles explore other captures [ 22 – 24 , 27 ], with one focusing on the deliberate and unlawful actions of chemical corporations. Three explore previously unidentified legal captures [ 22 – 24 ], with one of these articles detailing how industry professionals redefined evidentiary standards for proving causation in potential carcinogen cases. That article shows how asbestos corporations argued for stringent documentation in epidemiologic studies, emphasizing the need for clear evidence regarding the type of asbestos and the precise nature of exposure to establish causation in cancer development. Another article discussed corporate strategies including influencing worker compensation laws and establishing arbitrary protective standards for monitoring asbestos exposure, which facilitated the dismissal of sick workers without informing them of their results. Additionally, collaborations with entities like MetLife, the Federal Bureau of Mines, and asbestos corporations maintained the confidentiality of clinical findings. All corporate captures identified in the “other” category involved the strategic deployment of judicial resources, using existing laws and regulations to impose specific perspectives and actions on the judicial process itself. There is a pressing need for more research on legal capture as a self-contained category of ghost management, particularly concerning the burden of proof; this would refine and enhance our understanding of the ghost management framework.

In contrast to Gagnon and Dong’s scoping review on pharmaceutical corporate capture, which identified 37 peer-reviewed papers before 2022, our examination of the chemical industry reveals only 18 relevant papers. The scholarly examination of the pharmaceutical and chemical industries reveals a diverse and vibrant landscape of research on different aspects of industry capture. Specifically, 28 academic articles have explored scientific capture within the pharmaceutical sector, while 16 have focused on this issue within the chemical industry. Similarly, professional capture has been addressed in 16 articles related to pharmaceuticals and 7 related to chemicals. Market capture has attracted scholarly attention in 4 studies on the pharmaceutical industry and 2 on the chemical industry. Civil society capture has been analyzed in 4 studies on the pharmaceutical sector and 6 on the chemical sector. Media capture has been examined in 3 studies on pharmaceuticals and 4 on chemicals, while technological capture has been examined in 2 articles on the pharmaceutical industry and 3 on the chemical industry. Notably, the vast majority of the papers in both sectors emphasize strategies used for scientific capture. However, the area of regulatory capture reveals a significant distinction: only 6 of the 37 articles related to the pharmaceutical industry analyzed this dimension, compared with 15 of the 18 articles related to the chemical industry. This body of work suggests that existing research on the chemical industry is particularly concerned with analyzing how the sector navigates and circumvents regulatory oversight. Both industries employ strategies involving conflicts of interest and the legitimization of their actions to shield themselves from public policy scrutiny and protect their interests. However, their goals appear to differ significantly.
The scientific literature analyzing the pharmaceutical industry’s internal documents tends to identify strategies that maximize profits through the biased promotion of health products, whereas the scientific literature analyzing the chemical industry’s internal documents is more inclined to identify strategies that institutionalize ignorance about existing risks, evade accountability, and prevent regulatory actions. Adding complexity to this comparison, and in contrast to the pharmaceutical scoping review, the papers from the chemical sector included in our review explore the way the concept of confidential business information (CBI) is constantly employed to further reduce access to scientific research and data. This approach poses a significant challenge for independent researchers and regulatory bodies, impeding access to data and hindering transparency in evaluating the safety and efficacy of chemical products.

Our research underscores how the ghost management framework allows the identification of systematic corporate strategies, revealing pervasive conflicts of interest (COIs), legitimization tactics, and their consequential impact on the production of knowledge and of ignorance within the chemical sector. The strategies employed by chemical companies to manage their public image involve a complex web of conflicts of interest. Companies strategically nurtured COIs with various entities such as public institutions, regulatory bodies, professionals, media organizations, and scientific researchers, as revealed in our comprehensive review of peer-reviewed scientific literature. Internal corporate documents from the chemical sector reveal the significant impact these COIs can have on the definition of risks, on policy discussions, on public health, and on commercial success.

A prime example of the pervasiveness of COIs is the analysis of the “Monsanto papers”, which shows how Monsanto (now part of Bayer) has promoted glyphosate-based herbicides (GBH). Internal corporate documents show that Monsanto fostered COIs with an impressive network of “independent” academics, scientific journal editors, journalists, regulators, and consultants. Working with “independent experts”, Monsanto ghostwrote an important part of the existing literature about the safety of GBH, which furthered arguments against constraining regulations. Journal editors who also had COIs helped Monsanto interfere in the peer-review process of scientific papers. Monsanto also mobilized its network of “independent researchers” to successfully lobby journals for the retraction of scientific papers questioning the safety of GBH. The company also ghostwrote pieces in the lay media, with the help of journalists, attacking independent researchers who questioned the safety of GBH. Finally, internal corporate documents show that Monsanto used its financial relations with three Environmental Protection Agency (EPA) officials to exert undue influence over the EPA and derail a safety review of GBH. This state of affairs should not be surprising considering that Monsanto is well known for its intimidation of, and attacks on the reputation of, any scientist or journalist questioning the safety of its products. The evidence of COIs from the Monsanto papers shows that the extent of corporate power in shaping narratives and practices must not be underestimated.
The manipulation of knowledge production and, in particular, the hiding and downplaying of data that may harm a company’s image or profitability seem to be central features of ongoing ghost management strategies related to scientific, professional, and civil society captures in the chemical industry. Key identified tactics include the non-disclosure of negative findings, systematic opacity about risks through CBI, the downplaying of risks, selective reporting of data, ghostwriting of biased reports, bullying and discrediting of “hostile” experts, and discrediting of laypeople’s experience and knowledge of the risks and harms of using specific chemical products. In many of the analyzed cases, it seems that the selective production of ignorance was as central to the business success of chemical corporations as the production of chemical products themselves.

When analyzing the regulatory structure for PFAS, Richter et al. introduce the concept of the “institutionalized ignorance regime” in the chemical industry. While the concept is used to characterize the lack of regulation for PFAS, it also works well to describe some of the results in this scoping review. The institutionalized ignorance regime includes three levels of ignorance, each playing a significant role in shaping the industry’s practices and outcomes. The first level is “selective ignorance”, which involves deliberate efforts to limit (non-disclosure) and manipulate (ghostwriting) information in a way that creates doubt and helps companies evade regulatory scrutiny; selective ignorance could be compared to manufacturing doubt. By carefully controlling the flow of information, the industry attempts to shape public perception and avoid taking responsibility for potential risks associated with its products. The second level is “forbidden knowledge”, which refers to various practices that prevent certain information from becoming widely known. As previously mentioned, the concept of confidential business information (CBI) serves as a means for companies to withhold important data from public scrutiny, and the lack of resources within regulatory agencies can prevent a thorough analysis of health and safety data. Additionally, grandfather clauses in regulations, as seen in the Toxic Substances Control Act, provide exemptions for certain chemicals, allowing companies to bypass rigorous evaluation requirements. The third level of ignorance, “nescience”, refers to a complete lack of knowledge or awareness, which can lead to unforeseen risks and uncertainties. In the chemical industry, these risks are often treated as radical uncertainties, meaning they are not fully understood or adequately managed. When the appropriate research on risks is not done, these potential risks disappear as “unknown unknowns”, allowing them to be shifted onto civil society stakeholders, including workers, communities, researchers, and decision-makers, without sufficient risk management measures or precautionary actions.

Applying Beck’s understanding that risks, as social constructions, are products of power struggles over definitions and are managed through socio-political structures, it can be argued that the proposed notions of “selective ignorance” and “forbidden knowledge” describe dynamics that shape the ways risks are defined and managed. “Nescience”, however, refers to dynamics in which the research needed to find and define risks will not be done, creating radical uncertainty.
This scoping review, which investigated how chemical companies further their corporate interests by examining what the scientific literature has uncovered through the use of internal documents, reinforces Richter et al.’s notion of an institutionalized, multi-level ignorance regime. Addressing these manifestations of ignorance is essential for fostering a more enlightened, accountable, and responsible chemical industry that would better protect the well-being and safety of citizens.

We focused our research on analyzing internal corporate documents to unveil covert ghost management captures, excluding papers that did not meet this criterion. However, it is crucial to emphasize that the examination of corporate ghost management strategies should not be limited solely to internal corporate documents. Research involving public corporate documents, using tools like the Wayback Machine and other publicly available resources, can provide valuable insights as well. We exclusively focused on English-language scientific papers in our search, potentially excluding relevant articles lacking English translations due to our choice of keywords. Our investigation, centered on peer-reviewed scientific literature, English-centric databases, and the examination of internal chemical corporate documents, lays a foundational but limited groundwork for comprehending corporate influence on scientific research, norms, and the chemical industry. We did not directly analyze internal documents from chemical companies in this study. Instead, the primary aim of this article is to systematically review academic research that utilizes internal chemical company documents, in order to identify channels and methodologies for accessing these documents and thereby encourage further research. Resources like ToxicDocs.org and the “BP Papers” recently released by The Downs Law Group at https://downslawgroup.com/bp-papers/ represent valuable repositories that warrant more comprehensive future case studies by scholars. Such analyses could further uncover the strategies employed by chemical companies to influence public policy and values.

By scrutinizing scientific literature that makes reference to internal corporate documents and categorizing these findings using ghost management categories, we gain deeper insights into the pervasive influence of chemical corporations. However, for a more comprehensive understanding, future research should extend beyond peer-reviewed literature to directly examine internal documents and incorporate diverse and multilingual sources such as journalistic investigations, criminal probes, and regulatory examinations. To enhance this analysis of the chemical sector, further research is needed to delineate ghost management categories, contributing to a more nuanced understanding of prevailing norms. Analyzing scientific literature that references internal company documents through categories of ghost management allows a better understanding of how pervasive and “normal” corporate influence on knowledge and practices relating to chemicals has become. However, the limited scientific literature meeting our search criteria echoes a broader scarcity outlined by Wieland et al. This underscores the deficiency of this kind of research within the chemical industry and emphasizes the crucial need to scrutinize internal corporate documents. Overcoming transparency challenges is crucial for enhancing public awareness and promoting scholarly investigation.
To tackle this urgent concern, we must take decisive actions to dismantle the institutionalized ignorance regimes that allow the chemical industry to avoid accountability. Prioritizing transparency, supporting independent research, and enabling informed public debate are crucial measures to mitigate this corporate power, better identify real risks, and safeguard public interests. To be part of the solution, our research is publicly available on Dataverse to support future studies applying the same analytical framework to their investigations. We hope it will contribute to a more sustainable, transparent, and responsible paradigm in chemical regulation and make it easier for other researchers to adopt the same investigative framework.
PMC11694966
Physical inactivity increases the risk and severity of multiple chronic diseases and is the underlying cause of over 10% of deaths in the United States. Accelerometers are widely used in research to measure human physical activity (PA) and are typically worn on the hip or the wrist. Compared with other objective measurement instruments, including lab-based techniques and direct observation, accelerometers are self-administered, more economical, scalable, and suitable for studying large populations longitudinally [ 4 – 9 ]. These unique features make accelerometers the preferred measurement instrument in many studies that aim to quantify individual-level PA behavior and to evaluate the effects of PA interventions. Accelerometers have also been used as the gold standard to calibrate less accurate self-reported PA measures.

Count-based accelerometry data are in the format of counts at the epoch level, where each epoch is usually 30 seconds or one minute, for the entire duration the accelerometer has been operating. The count data can contain many zeros, very low values, and erratic values because the device was not worn, the wearer was sedentary, or the device was worn incorrectly. Count-based accelerometer data must therefore undergo a pre-processing procedure to eliminate invalid or meaningless (i.e., non-wear) epochs and to convert counts to more interpretable outcomes, such as time spent at a certain PA intensity level. The pre-processing procedure for count-based accelerometer data includes multiple steps: differentiating wear-time from non-wear-time; imposing minimum requirements for wear-time; defining PA intensity levels, in particular the cutoff for moderate-to-vigorous physical activity (MVPA); and clustering PA into modified bouts. Each step involves further technical details with alternative specifications. Past health and behavioral studies using accelerometer data have not always provided details of their pre-processing steps; many simply conformed to a convention adopted in prior studies.

The impact of the pre-processing procedure for count-based accelerometer data has been noted and investigated in the literature, with mixed findings. Several studies acknowledged that choices of pre-processing criteria may influence statistical analyses further downstream [ 12 – 16 ]. For example, a recent study reported that by altering the MVPA intensity cutoff and using bouts vs. all minutes, the sample proportion meeting the national PA guidelines varied dramatically from 3% to 96%. Variations in pre-processing criteria, such as wear-time validation algorithms, minimum requirements for wear-time, and cutoffs for PA levels, can lead to significant changes in both the mean time and the percent of time spent at every PA level. A prior study developed empirical approximation formulas for the MVPA outcome by epoch length and cutoff points, with and without bouts. Another study showed that different pre-processing criteria altered the significance level of the relationship between a socio-economic predictor and a PA outcome, specifically from statistically significant (p < .001) to non-significant (p > .05). This last study is particularly concerning, since many health studies aim to estimate the relationship between predictors (e.g., a person-level characteristic, a group-level environmental factor, exposure to an intervention, etc.) and the MVPA outcome, rather than the absolute level of the MVPA outcome.
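To make the MVPA-cutoff step concrete, the following minimal R sketch classifies hypothetical 60-second epoch counts against one commonly cited adult cutoff (2020 counts per minute). The cutoff, the toy data, and the strict bout rule below are illustrative assumptions only, since cutoffs and bout definitions vary across studies.

```r
# Toy example: classifying 60-second epochs against an MVPA cutoff.
counts <- c(0, 150, 2300, 5100, 800, 2100, 0, 3000, 2500, 40)

mvpa_cutoff <- 2020                  # illustrative counts-per-minute threshold
is_mvpa <- counts >= mvpa_cutoff
sum(is_mvpa)                         # MVPA minutes over all epochs: 5

# A strict 10-minute bout rule can be checked with run lengths; note that
# "modified" bouts used in practice tolerate brief interruptions.
runs <- rle(is_mvpa)
sum(runs$lengths[runs$values & runs$lengths >= 10])  # bouted MVPA minutes: 0
```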
This prior evidence suggests that pre-processing criteria can fundamentally influence the findings of such health studies. In contrast to the studies above, one study reported that various pre-processing criteria did not alter the findings on PA patterns among obese diabetic patients but only changed the sample size by about 10%, i.e., participants who did not meet the selected criteria were dropped from analyses. Prior studies also have mixed views on using the vector magnitude of triaxial counts. While one study praised triaxial vector-magnitude counts for improving the identification of valid wear-time compared with the standard vertical-axis counts, another study did not find that the vector magnitude of triaxial counts further improved the predictive performance of counts for energy expenditure beyond using the vertical-axis counts alone. Several studies warned against directly comparing vector-magnitude data across different types of accelerometers. In this paper, we aim to investigate multiple important pre-processing criteria through a comprehensive study design. We study the true sample size in pre-processed data, the absolute level of the MVPA outcome, and the regression coefficients of age and gender predicting MVPA. The present study describes a systematic method that can be adopted by other researchers as a sensitivity analysis to guide decisions for pre-processing and analyzing accelerometer data. Our findings can also provide a useful reference for future health studies using accelerometers when setting up their pre-processing procedures. We used primary data collected from an ongoing cluster randomized controlled trial, i.e., the parent study. The parent study was approved by the Human Subjects Protection Committee, the institutional review board that reviews research involving human subjects at the RAND Corporation. The parent study was registered in ClinicalTrials.gov. The goal of the parent study is to promote PA among adult churchgoing Latinos from six churches in predominantly-Latino neighborhoods in and near East Los Angeles, California. All study participants were adults, and written consent was collected from each before data collection. Potentially eligible participants from the six participating churches were screened for a history and symptoms of health conditions that could preclude PA. Upon consent, participants eligible for the trial were asked to wear an ActiGraph wGT3X-BT activity monitor on their hip for a week, removing it only during water activities (e.g., showering) and when sleeping. To reduce the likelihood of draining batteries in the field, the accelerometers were set with a sampling rate of 30 Hz, the sleep mode enabled, and the inertial measurement unit sensor disabled. The low-frequency extension option was not used since the parent study is focused on MVPA. The primary data consisted of 538 individuals, and data were collected in 2019. Table 1 describes the characteristics of the sample. We processed the data using ActiLife software v6.13.4 (ActiGraph, Pensacola, FL) and analyzed data using SAS 9.4 and R 4.1. A pre-processing procedure can be implemented in a general-purpose statistical software package such as SAS or R; researchers using ActiGraph accelerometers often rely on ActiLife, software specialized for pre-processing and analyzing ActiGraph accelerometry data. In this paper we worked with the ActiLife software tool. A pre-processing procedure can involve many steps and details.
In this paper we selected the most important or required steps to study. We used the following key terms to organize these steps in line with the interface in ActiLife: domain, criterion, and option. From the top, a processing procedure is split into four sequential domains, where each domain has a distinct task: validation domain 1 (to find valid wear-time), validation domain 2 (to apply the minimum valid wear-time requirement), scoring domain 1 (to implement MVPA intensity thresholds), and scoring domain 2 (to identify modified bouts by MVPA intensity levels). Each domain consists of multiple criteria, where a criterion is a specific decision users need to make. A criterion usually has two or more options, from which the user must choose one. Most criteria have one option set as default in ActiLife. Table 2 lists all domains, criteria, and options under consideration, which are used throughout the remainder of this paper. Lastly, we further define the term scenario: a scenario is a unique set of options selected across all criteria of the four domains. Two scenarios are different if at least one criterion has different options selected between them. Non-wear-time includes periods when the device was not worn by a participant during the indicated timespan. In this paper, we considered two criteria in validation domain 1: the algorithm to identify valid wear-time and the vector magnitude for counts. First, there are two built-in options in ActiLife to identify non-wear-time, referred to as Troiano_2007 and Choi_2011 in this paper [29–31]. These two options aim to identify patterns in the count data that match pre-defined non-wear-time patterns, using different algorithms. One notable difference is that Troiano_2007 defines a non-wear period as at least 60 consecutive minutes of zero count values, whereas Choi_2011 uses a minimum of 90 consecutive minutes. Thus, Choi_2011 may classify less time as non-wear than Troiano_2007. The second criterion is referred to as "vector magnitude" and has two options. The default option, i.e., "No", specifies that only the counts on the vertical axis of the accelerometer are used as input data for the selected algorithm above. The alternative option, i.e., "Yes", uses a Euclidean-type metric combining the triaxial counts.
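As a concrete illustration of these two criteria, below is a minimal Python sketch assuming minute-level count data in a pandas DataFrame with hypothetical column names (axis1, axis2, axis3). It applies only the core zero-run rules; the actual Troiano_2007 and Choi_2011 algorithms include further allowances (e.g., tolerances for brief movement interruptions) that are omitted here.

```python
import numpy as np
import pandas as pd

def flag_nonwear(counts, min_zero_run=60):
    """Flag epochs in runs of >= min_zero_run consecutive zero-count minutes
    as non-wear (simplified; real algorithms allow brief interruptions)."""
    counts = np.asarray(counts)
    nonwear = np.zeros(len(counts), dtype=bool)
    run_start = None
    for i, c in enumerate(np.append(counts, 1)):  # sentinel ends a trailing run
        if c == 0 and run_start is None:
            run_start = i
        elif c != 0 and run_start is not None:
            if i - run_start >= min_zero_run:
                nonwear[run_start:i] = True
            run_start = None
    return nonwear

def vector_magnitude(df):
    """Euclidean-type metric combining triaxial counts (the "Yes" option)."""
    return np.sqrt(df["axis1"]**2 + df["axis2"]**2 + df["axis3"]**2)

# Example: minute-level counts for part of one wear day (hypothetical data).
day = pd.DataFrame({"axis1": [0]*70 + [500]*30,
                    "axis2": [0]*70 + [200]*30,
                    "axis3": [0]*70 + [100]*30})
day["vm"] = vector_magnitude(day)
day["nonwear_60"] = flag_nonwear(day["axis1"], min_zero_run=60)  # Troiano-style
day["nonwear_90"] = flag_nonwear(day["axis1"], min_zero_run=90)  # Choi-style window
print(day["nonwear_60"].sum(), day["nonwear_90"].sum())  # 70, 0
```

Under this simplified rule, the 70-minute zero run in the example is flagged as non-wear by the 60-minute window but not by the 90-minute one, which mirrors why Choi_2011 tends to retain more wear-time.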
A minimum length of wear-time is necessary to reliably assess PA patterns at the daily or person level. First, a "valid day" needs to exceed a minimum wear-time threshold (e.g., 10 hours/day). A more stringent threshold for defining a valid day (e.g., 12 hours/day) will result in fewer valid days and potentially a smaller sample size in the analytic dataset. Next, a study participant may need to have a minimum number of valid days to be included in analysis. This requirement is due to the high inter-day variation in a person's PA. Researchers may be interested in a person-level outcome, e.g., whether a person meets the national guideline for weekly MVPA time, and may ideally wish to have at least one entire week of valid days to assess this outcome. However, due to participant non-compliance, the actual data collected are usually much less than ideal, and a high minimum number of valid days (e.g., an entire week) may screen out participants with too few valid days, potentially biasing the sample toward the most adherent participants. Lastly, the minimum wear-time requirements can also specify a minimum number of valid weekdays and weekend days to capture greater variability in PA levels across the week. For this domain, we compare three criteria: minimum time for a valid day (6, 8, 10, or 12 hours of valid wear-time), minimum valid days for a participant (1, 3, or 5 valid days), and minimum valid days during weekends (0 or 1 valid day), where 0 means no requirement for a valid weekend day. After processing the validation domains for valid wear-time and minimum time, the count data are processed to differentiate activity intensity levels, which are more meaningful and interpretable for analyses. Scoring domain 1 contains pre-defined thresholds on accelerometer counts to define intensity levels; we focused on MVPA given that this intensity level is associated with multiple health benefits and is the focus of national PA guidelines. The three options to identify adult MVPA are built into ActiLife, referred to as V_1952, V_2020, and VM3_2690 hereafter. Specifically, V_1952 defines the threshold for MVPA at 1,952 counts per minute on the vertical axis, V_2020 uses 2,020 counts per minute on the vertical axis, and VM3_2690 defines MVPA as 2,690 counts per minute based on the vector-magnitude triaxial counts. The V_2020 threshold is a more stringent requirement than V_1952. The VM3_2690 cutoff is based on vector-magnitude counts and is not comparable with the other two options, which are based only on the vertical axis. A bout is a consecutive period of time during which the PA intensity level is consistent. Arguably, PA time accrued in bouts is robust to temporary changes in a person's PA levels or abrupt measurement errors. While the three pre-processing domains discussed previously are mandatory, this domain is optional depending on one's interest in bouts. Modified bout definitions can also be customized in ActiLife: the length of a bout, the logic to break an ongoing bout, and the MVPA threshold to define a bout. In this paper, we used the default logic to break a bout and the default minimum length of 10 minutes. We considered a single criterion with three options in this domain, corresponding to the three options under scoring domain 1: when MVPA intensity levels are defined by a certain threshold, bouts should also be based on the same threshold. We tested various pre-processing scenarios using the design of a numerical experiment, where a criterion with multiple options can be seen as a design factor with multiple factor levels. The full-factorial experiment design has a total of 864 scenarios, enumerating all combinations of options across all criteria of the four domains. In this paper, we elected to use a partial factorial design with a subset of scenarios, for two reasons. First, the empirical evidence from the partial factorial design with a few representative scenarios was sufficiently strong to show the influence of a single criterion. Second, many scenarios in the full factorial design are unrealistic; for example, a conceptual scenario could use one cutoff for defining the MVPA intensity level but a different cutoff for defining MVPA bouts. These counterintuitive scenarios are neither practical nor interpretable. Our partial factorial design therefore selected a few reasonable and representative scenarios.
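Before turning to the selected scenarios, the scoring rules and minimum wear-time requirements defined above can be sketched in a few lines of Python. The sketch assumes a minute-epoch DataFrame with hypothetical columns id, date, and counts (vertical axis), applies the V_1952 cutoff and the validation domain 2 rules, and uses a simplified 10-minute bout rule; ActiLife's actual modified-bout logic (e.g., its drop-time allowance) is not reproduced.

```python
import pandas as pd

CUTOFF_V1952 = 1952   # counts/min, vertical axis (scoring domain 1)
MIN_WEAR_HOURS = 10   # validation domain 2: minimum wear-time for a valid day
MIN_VALID_DAYS = 3    # validation domain 2: minimum valid days per participant

def daily_mvpa(epochs: pd.DataFrame) -> pd.DataFrame:
    """Aggregate worn minute epochs to person-day MVPA minutes, then apply
    the valid-day and minimum-valid-days requirements."""
    epochs = epochs.assign(mvpa=epochs["counts"] >= CUTOFF_V1952)
    daily = (epochs.groupby(["id", "date"])
                   .agg(wear_min=("counts", "size"), mvpa_min=("mvpa", "sum"))
                   .reset_index())
    daily = daily[daily["wear_min"] >= MIN_WEAR_HOURS * 60]   # valid days only
    n_days = daily.groupby("id")["date"].transform("count")
    return daily[n_days >= MIN_VALID_DAYS]                    # eligible people

def bouted_mvpa_minutes(counts, cutoff=CUTOFF_V1952, min_bout=10):
    """Scoring domain 2, simplified: minutes in runs of >= min_bout
    consecutive MVPA minutes, with no interruption allowance."""
    total, run = 0, 0
    for c in list(counts) + [0]:   # sentinel closes a trailing run
        if c >= cutoff:
            run += 1
        else:
            total, run = total + (run if run >= min_bout else 0), 0
    return total
```

Varying the three module-level constants reproduces, in spirit, the scenario grid studied below.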
We first studied the influence of validation domain 1 (valid wear-time) by fixing the options in the other three domains. Next, we studied the influence of scoring domains 1 and 2 by fixing the options in validation domains 1 and 2; the two scoring domains were studied together because they should always use the same MVPA cutoff values. Lastly, we studied the influence of validation domain 2 (minimum wear-time) by fixing the options in the other three domains. Table 3 lists all the selected scenarios in detail. To examine the potential influence of different pre-processing scenarios on the analysis of accelerometry data, we examined multiple outcomes. First, we examined changes in sample size after going through the two validation domains: the number of participants with sufficient valid wear days, the total number of valid days, and the average number of valid days per participant, all of which reflect the concept of effective sample size in a hierarchical model. Second, we examined the absolute MVPA level. This outcome is directly affected by the two scoring domains but may also vary by the two validation domains. For each scenario, we calculated the mean and standard deviation (SD) of MVPA time at the person-day level. We also calculated the percent of participants meeting the national MVPA guideline of 150 minutes/week by multiplying daily MVPA by 7 and comparing the result with the guideline. Except when studying scoring domain 2 (modified bouts), the daily MVPA outcome always refers to all time above the selected MVPA threshold. Wherever applicable, we conducted paired t-tests to assess the statistical significance of the difference in daily MVPA between comparable scenarios. Third, we examined the regression coefficients from a multiple regression model. The model regressed daily MVPA time on two predictors, age in years and gender (female versus male), whose coefficient estimates serve as our study outcomes. Other covariates in the regression models included body mass index (BMI) and average wear-time per day; however, these two covariates were mostly not significant in the models, and excluding them did not affect the findings. We did not further adjust for other possible covariates because we were not interested in the actual estimate of any association but instead in observing its variation by pre-processing scenario. Table 4 shows the results of varying validation domain 1 (valid wear-time) while holding the options of the other three domains constant. The four scenarios give generally similar results. Using the same vector magnitude option, the Choi_2011 algorithm always yielded a slightly larger sample size than Troiano_2007. For example, when not using the vector magnitude option, Choi_2011 resulted in 449 eligible participants, 2,443 valid person-days, and an average of 5.57 valid days per eligible participant, whereas Troiano_2007 yielded 433 eligible participants, 2,343 valid person-days, and an average of 5.41 valid days per eligible participant. Similarly, within the same algorithm, using the vector magnitude option gives slightly larger sample sizes than using the vertical axis only. For example, with the Troiano_2007 algorithm, the sample size was 443 eligible participants with the vector magnitude versus 433 with the vertical axis (similar results were observed for valid person-days and average valid days per eligible participant). The modest differences in sample size from changing the criteria in validation domain 1 did not yield notable changes in the MVPA time outcomes (Table 4).
The mean daily MVPA time is between 25.66 and 26.07 minutes per day, and the SD of daily MVPA time is between 20.68 and 21.06 minutes. Differences across the four scenarios are very small. Age is consistently negatively related to daily MVPA time: a one-year increase in age is associated with a reduction of 0.33 minutes in daily MVPA time ( p <0.001). Women have on average 10 fewer minutes per day of MVPA than men ( p <0.001). Table 5 shows the three scenarios comparing the influence of varying the MVPA intensity threshold when the other criteria are held constant. The V_2020 option resulted in less average daily MVPA time than V_1952 (24.35 vs. 26.07 minutes per day, respectively, p = 0.03) and a smaller SD (20.24 vs. 21.06 minutes per day, respectively). Consequently, V_2020 resulted in fewer people meeting the MVPA guideline (46.9%) than V_1952 (49.9%). V_2020 also had slightly smaller absolute regression coefficients than V_1952 (coefficients of age: -0.31 vs. -0.33; coefficients of gender: -10.3 vs. -10.9). Despite the consistent pattern, the differences between V_1952 and V_2020 in our study outcomes were small or very small. The VM3_2690 option, based on the vector magnitude, yielded very different results from the two options using vertical-axis counts. Compared with V_1952 and V_2020, the mean and SD of daily MVPA time increased by 60% to 80% (approximately 20 more minutes per day in the sample mean and 12 more minutes per day in the SD). Consequently, approximately an additional 25% of participants were deemed to meet the national MVPA guidelines. The absolute values of the regression coefficients of age and gender were also nearly doubled compared with the previous two options. Table 6 shows three scenarios for MVPA time based on modified bouts. These three scenarios are the same as in Table 5, except that the MVPA outcomes are based on 10-minute bouts rather than all minutes. Table 6 shows a pattern similar to Table 5. The two options using vertical-axis counts, Bout_1952 and Bout_2020, produced almost identical MVPA outcomes and regression coefficients. Bout_2690, using the vector magnitude, had a much larger mean and SD in daily MVPA time, as well as substantially larger regression coefficients in absolute value. Notably, the two options using vertical-axis counts did not have significant regression coefficients ( p >0.05), which indicates a substantial reduction in statistical power. Figs 1–3 present the results of the 24 scenarios studying the influence of varying validation domain 2 (minimum wear-time) when the other criteria were held constant. Fig 1 shows that using more stringent requirements (i.e., longer minimum wear-time per day, more total valid days, or at least one valid weekend day) resulted in substantially smaller sample sizes than options with looser requirements. The number of eligible participants ranged between 157 (with the strictest requirements) and 557 (with the loosest requirements), and the number of valid person-days ranged between 953 (strictest) and 3,225 (loosest). Not only was the range sizable, but even a seemingly small change in a pre-processing option could greatly alter the sample size. For example, when requiring 12 hours of wear-time and 3 valid days, further requiring at least one valid day on weekends reduced the sample from 349 participants with 1,694 valid person-days to 220 participants with 1,183 valid person-days.
When requiring 5 valid days per participant and no weekend requirement, changing the minimum daily wear-time from 10 to 12 hours reduced the sample from 318 participants with 1,933 valid person-days to 190 participants with 1,127 valid person-days. Fig 2 shows that the daily MVPA time outcome differed notably across the 24 scenarios. Higher minimum wear-time requirements resulted in higher mean daily MVPA times. The SD of daily MVPA time varied moderately across scenarios with a less clear pattern. Based on a crude SD of 20 minutes per day, differences in mean daily MVPA time between any two scenarios had standardized effect sizes ranging from 0 to 0.25 SD. The proportion meeting national guidelines was higher in the most stringent scenario (54.1%) than in the least stringent (44.2%). Fig 3 shows that inferential precision, e.g., the width of the 95% confidence interval (CI), differed substantially across scenarios, which was unsurprising given the large differences in sample size across scenarios. Point estimates for age as a predictor displayed a moderate level of variation across scenarios. By contrast, point estimates for gender, a relatively stronger predictor than age, showed a small level of variation across scenarios. Significance levels for both predictors varied across scenarios. For example, the association with age was highly significant ( p <0.001) in all scenarios requiring 10 hours of minimum wear-time per day but non-significant ( p >0.05) in the scenario requiring 12 hours of minimum wear-time per day and at least 5 valid days with no weekend requirement. Generally, the relatively stringent scenarios had wider CIs but often larger effect sizes, the latter partially compensating for the loss of precision compared with less stringent scenarios. Our findings highlight which pre-processing criteria do, and which do not, influence accelerometry-assessed PA outcomes. Our study is also among the first to investigate this topic in a community sample of Latino adults. Our main findings include the following. First, the numeric algorithms and the decision to use the vector magnitude for identifying valid wear-time did not notably influence any study outcome. Second, our data suggest that MVPA intensity thresholds based on vertical-axis and triaxial counts are not comparable, although each set of thresholds was independently validated in the literature. The prevalence of meeting PA guidelines varied greatly, from very low to moderately high, depending on the pre-processing scenario, which is consistent with prior reports. This finding is concerning since meeting PA guidelines is an important health outcome. Lastly, the minimum wear-time requirement played an important role in influencing downstream statistical analyses. Key take-aways from our analyses are that 1) minimum wear-time requirements are the most influential criteria and thus need to be reported clearly in the methods of research studies; and 2) predictors with weak to moderate associations with MVPA outcomes, such as age, may be more influenced by the pre-processing criteria than those with strong associations, such as gender. This finding is in agreement with prior studies on the importance of wear-time requirements. Therefore, high wear-time requirements are not always helpful in studying the associations between MVPA and predictors.
Given these lessons learned, sensitivity analyses based on alternative pre-processing criteria, in particular minimum wear-time settings, are highly recommended to check the robustness of the statistical estimates based on pre-processed data. Although such a sensitivity analysis cannot establish the validity of any pre-processing criterion, it is informative in revealing the extent to which the subsequent statistical estimates depend on specific pre-processing choices. For example, in a hypothetical study of the association between the MVPA outcome and a particular predictor, the estimated relationship may change its significance level between alternative pre-processing criteria. This level of sensitivity should be reported as a limitation of the study. Conversely, if the estimates remain relatively stable across alternative pre-processing criteria, the robustness of the results may be claimed as a strength. It is also worth noting key differences when using modified bouts versus not using them. Since modified bouts impose a more stringent requirement for defining MVPA than using all available valid wear-time, they resulted in lower MVPA. Our study suggests that MVPA outcomes based on unbouted versus bouted data are essentially two distinct concepts, each of which has its own absolute level, SD, and associations with potential predictors. Their numeric values, statistical inference, and interpretations are not comparable, even when the other pre-processing criteria are the same. This study is subject to several limitations. First, analyses were based on a single primary dataset. The sample was drawn from an understudied population, and results should be interpreted with caution. Collection of the primary data depended on participants' compliance in receiving, wearing, and returning the accelerometer devices, and failure to comply with the basic protocol (e.g., a device worn by more than one individual) can bias the data. Second, we only used one type of accelerometer with one wear position (hip) and an epoch length of 30 seconds. We were not able to study sensitivity to different wearing positions or epoch lengths. Third, pre-processing was conducted in the proprietary software ActiLife. Despite its powerful functionality, friendly user interface, and popularity among various users, ActiLife is specialized and does not have full data-processing capacity; certain technical issues can be better addressed in general-purpose statistical software, such as R with the GGIR library. Fourth, pre-processing criteria, in particular minimum wear-time, can affect measurements of sedentary behavior and light PA, just as they do MVPA. However, the cutoff for light PA is less well established than the MVPA cutoff, and the lower bound for sedentary behavior also overlaps with the algorithm for assessing valid wear-time. These important issues are beyond the scope of the current paper, and future studies are needed to continue the investigation. Despite these limitations, our results demonstrate that all pre-processing criteria used in analyzing accelerometry data need to be carefully considered, well understood, and well documented. Certain criteria may be preferred to alternatives for scientific or other reasons. For example, one may prefer Choi_2011 to Troiano_2007 for the former algorithm's more comprehensive scan over time.
Another example is the wear-time requirement of 12 hours for three days, which was used in widely cited analyses of NHANES accelerometry data. Some may argue that all time above the threshold makes more sense than time in bouts. While we are not positioned to judge which criteria are more scientifically sound, it is clear that when comparing a study's results with prior studies, one must only compare results generated by the same or very similar pre-processing criteria. Results from studies with very different pre-processing procedures are incomparable, and improper comparisons can contribute to misleading conclusions. Sensitivity analyses are necessary to check the influence of the selected and alternative pre-processing criteria on a study's main findings. Before presenting any statistical estimates, the specific pre-processing criteria that led to the conclusions need to be clearly stated. Pre-processing steps for accelerometer data can influence the effective sample size and the magnitude of the MVPA outcome, as well as its associations with other predictors. Moderate changes in minimum wear-time can yield notably different output data and subsequently influence analyses assessing the impacts of interventions on MVPA behaviors. Data processed using the triaxial vector magnitude and conventional vertical-axis counts are not directly comparable. Sensitivity analyses using alternative pre-processing scenarios are highly recommended to verify the robustness of analyses of accelerometry data. Between-study comparisons should be based on the same or very similar choices of pre-processing criteria.
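As a closing illustration, the sensitivity analysis recommended above can be automated by looping over a scenario grid. The sketch below builds on the daily_mvpa helper sketched earlier (here assumed, for illustration, to accept the scenario options as arguments); the regression specification, column names, and participant-clustered standard errors are likewise assumptions, since the exact handling of repeated person-days is not detailed in the text.

```python
from itertools import product
import pandas as pd
import statsmodels.formula.api as smf

def fit_mvpa_model(daily):
    """Person-day MVPA regressed on age, gender, BMI, and wear-time;
    standard errors are clustered by participant (an assumption)."""
    model = smf.ols("mvpa_min ~ age + female + bmi + wear_min", data=daily)
    return model.fit(cov_type="cluster", cov_kwds={"groups": daily["id"]})

def run_scenario(epochs, participants, hours, days, cutoff):
    """One pre-processing scenario -> sample size and the age coefficient.
    daily_mvpa is the helper sketched earlier, assumed here to accept
    the scenario options as arguments."""
    daily = daily_mvpa(epochs, min_wear_hours=hours,
                       min_valid_days=days, cutoff=cutoff)
    fit = fit_mvpa_model(daily.merge(participants, on="id"))
    return {"hours": hours, "days": days, "cutoff": cutoff,
            "n_participants": daily["id"].nunique(),
            "beta_age": fit.params["age"], "p_age": fit.pvalues["age"]}

# Grid over the most influential criteria (minimum wear-time) and the cutoff;
# epochs and participants are the hypothetical input tables from above.
grid = product([6, 8, 10, 12], [1, 3, 5], [1952, 2020])
# report = pd.DataFrame(run_scenario(epochs, participants, h, d, c)
#                       for h, d, c in grid)
# print(report)  # present alongside the main analysis
```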
|
Study
|
biomedical
|
en
| 0.999997 |
PMC11694968
|
Professional football players are confronted with high physical and mental demands. This condition inevitably leads to a substantial likelihood of (noncontact) injury events. In this study, a noncontact injury is defined in alignment with the Union of European Football Associations (UEFA) guidelines for epidemiological research. Specifically, it refers to acute physical pain experienced by a player during a training session or match that occurs without any physical contact with other players. Such an injury may or may not necessitate medical intervention but results in the player being unable to participate in the subsequent training session or match. Noncontact injuries (henceforth, injuries) are known to significantly impact the player's physical performance and psychological state, with repercussions for the entire team and club. These injuries account for more than a third of all injuries that require players to stop, and more than a quarter of all injuries reported in a season. Over the last two decades, scientific evidence on the modifiable risk factors associated with preventing and reducing the risk of injury in professional football players has been increasing. One way of assessing the risk of injury is by employing screening batteries or training load monitoring. In training load monitoring, Global Positioning System (GPS) receivers occupy a prominent place since they can be used to measure the distances covered by players and, in conjunction with other sensors, quantify the number of accelerations and decelerations, as well as provide estimates of global and metabolic external training load. Previous studies have investigated the relationship between training planning based on GPS metrics and the risk of injury [14–17]. The majority of these studies are retrospective in nature and primarily focus on identifying significant statistical correlations between individual GPS-derived metrics and the incidence of injuries. Nevertheless, the causation of injuries is complex and influenced by multiple factors. This means that while certain metrics tracked by GPS devices might correlate with the occurrence of injuries under specific conditions, other important metrics might not seem significant on their own; however, when these metrics are analyzed together with additional data, they could become important indicators of injury risk. In response to this multifaceted task, the literature suggests machine learning techniques as a promising solution for injury prediction. Nevertheless, certain studies concentrate on predicting specific types of injuries. Ayala et al., for example, explored the potential of these techniques by using pre-season evaluation data, including personal, psychological, and neuromuscular measurements, to predict hamstring strain injuries. Similarly, Ruiz-Pérez et al. leveraged machine learning to predict lower extremity soft tissue injuries in elite futsal players. In both scenarios, the predictive models generate an estimate of the likelihood that each player will sustain an injury over the course of the entire season. From a different perspective, several studies have investigated the capabilities of machine learning methods in predicting injuries throughout a football season.
A noteworthy example is the work of Rossi, which not only proposed a machine learning-based injury predictor but also introduced an interpretable framework for understanding the underlying causes of injuries. On the other hand, several studies [23–25] emphasize the importance of integrating diverse data sources, such as manually collected questionnaire responses and GPS tracking data, to improve the accuracy of machine learning predictions. In particular, Vallance et al. successfully developed an injury predictor capable of making short-term (1-week) and medium-term (1-month) predictions. The authors utilized GPS tracking data alongside data from well-being questionnaires as inputs for various machine learning models. The existing literature provides only a preliminary understanding of the factors that influence the risk of injury, and the statistical prediction models used are still basic (e.g., cut-off values) or not accurate enough (i.e., not able to correctly identify both injury and noninjury events). Machine learning strategies, on the other hand, are associated with a high incidence of false positives. This could result in coaches erroneously sidelining players who are incorrectly flagged as having a high risk of injury, or in the misallocation of medical resources. Additionally, existing methods focus exclusively on a single injury type, do not provide precise temporal information about injury risk, and necessitate continuous manual data collection to enhance the accuracy of predictive models. The primary goal of this study is to address these challenges. More specifically, this research aims to develop an automated system that uses machine learning techniques to predict injury risk. Unlike previous machine learning models designed for injury prediction, the proposed approach calculates the likelihood of injury for each player daily over the course of a football season. To achieve this, data from GPS devices are utilized in conjunction with variables related to the players and the match sessions. This study employed Maximum Relevance–Minimum Redundancy (mRMR) in combination with a wrapper method for feature selection, which helps in identifying the most relevant and nonredundant features for predicting injury events. It is important to note that the methodology employed does not utilize data from questionnaires or any other continuous manual data collection. The specific objectives of this paper follow from these aims. Fig 1 depicts the machine learning pipeline followed in the current study for injury detection. All the steps of this pipeline will be explored in the following sections. A longitudinal study was conducted among a convenience sample of 34 male professional football players from a Portuguese first-division team. The team's physical trainers collected data over 36 weeks in the 2020–2021 season, covering 217 training sessions and 38 official games. The data collection period was from September 7, 2020, to May 19, 2021. Participants had a mean age of (26.27 ± 3.28) years, a mass of (77.54 ± 7.63) kg (measured with the InBody 770), and a height of (180.65 ± 6.60) cm measured with a stadiometer (SECA 213). These professional players were distributed among five positions: 11 (32.35%) defenders, 9 (26.47%) attacking midfielders, 7 (20.59%) forwards, 4 (11.76%) defensive midfielders, and 3 (8.82%) midfielders. Apart from being a goalkeeper, no other exclusion criterion was applied in recruiting players for this study.
The data was collected in the context of the football club's professional contract with each player. These contracts between the players and the club included clauses for gathering data related to their performance, thus ensuring that consent for this activity was formally obtained. As already mentioned, the responsibility for collecting these data was assigned to the club's physical trainers, meaning the researchers were not involved in the data collection phase. Nevertheless, the authors submitted a request for an ethical assessment to obtain access to these data. This request was approved by the Faculty of Human Kinetics ethics committee at the University of Lisbon. The committee's approval verified that the study complied with national and international ethical standards, as outlined in the Convention on Human Rights and Biomedicine (Oviedo Convention) and the Declaration of Helsinki. Furthermore, all participants provided written informed consent, ensuring they were willingly participating and fully aware of the study's goals and methods. After receiving ethical approval and informed consent, the researchers accessed the data on June 15, 2021, which marked the start of the data analysis phase. The club's medical staff reported 18 traumatic and overload injuries throughout the 2020–2021 season. Of these 18 injuries, 10 (55.56%) occurred during matches and 8 (44.44%) during training sessions. Moreover, most of the injuries were located in the muscles and tendons (11 injuries, 61.11%), the remainder being in the ligaments (6 injuries, 33.33%) or contusions (1 injury, 5.56%). Players used GPS receivers from Catapult (GPSports EVO) placed in a skin-tight vest in the thoracic region between the scapulae, capturing the players' position data with a sampling frequency of 10 Hz. These devices were already in use by the team when the study started; they have nonetheless been shown to be valid, reliable, and accurate for measuring acceleration and the mechanical work applied to the player. Moreover, since these devices are equipped with accelerometers, magnetometers, and other sensors, it was possible to collect raw data, which was then processed by the Catapult system to derive a total of 1379 parameters for this study. The average Horizontal Dilution of Precision (HDOP) recorded was 1.12; HDOP serves as a metric of the geometric accuracy of GPS satellite-based positioning, with values spanning from 1 to 2 and values closer to 1 reflecting higher positional accuracy. All the players' metric data per exercise and session were collected by the GPS receivers and extracted using Catapult's GPSports Cloud. The information was then grouped (using the proper aggregation function, such as maximum or average) into a single record according to the player's unique identifier and session date. Moreover, metrics containing GPS signal-quality information (e.g., maximum satellite count and HDOP), as well as metrics with invalid or missing data, were removed. The latter included, for example, metrics with invalid data due to absent sensors (such as heart rate and players' weight). The unique identifier for each player was also removed from the dataset to maintain anonymity. This step also ensured that the prediction models were trained without access to player-specific information, allowing the models to remain player-independent. These processes resulted in a set of 424 GPS metrics that were retained for further data preprocessing steps.
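The grouping step described above can be illustrated with pandas; the schema, metric names, and per-metric aggregation choices below are hypothetical stand-ins for the GPSports Cloud export.

```python
import pandas as pd

# Hypothetical per-exercise extract: one row per player, session, and exercise.
raw = pd.DataFrame({
    "player_id":      [1, 1, 1, 2, 2],
    "session_date":   ["2020-09-07"] * 5,
    "peak_speed":     [7.9, 8.4, 8.1, 7.2, 7.6],        # m/s
    "total_distance": [1200, 950, 1400, 1100, 1300],    # m
})

# One record per player and session date, each metric aggregated with a
# function appropriate to its meaning (max for peaks, sum for volumes).
daily = (raw.groupby(["player_id", "session_date"])
            .agg(peak_speed=("peak_speed", "max"),
                 total_distance=("total_distance", "sum"))
            .reset_index())
print(daily)
```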
In order for the model to capture sudden changes in the players' load, time information called "dummy days" was added after grouping the sessions by player and date. A dummy day is attributed to a specific athlete and is denoted by a series of zeroes across all measured variables, thereby emulating a hiatus in the athlete's training regimen, which may arise from intervals of rest or recovery. In other words, a dummy day is a record indicating a specific day with no physical activity records for a given player. This information helps the models capture sudden changes in the player's load time series, a potential factor that increases injury risk. This strategy also enables the predictive models to learn short-term dependencies between past and current period information. Furthermore, the inclusion of dummy days ensures a continuous time scale in the dataset for all players, allowing the model to consistently analyze the temporal sequence of training and rest periods. Besides all the parameters collected by the GPS receivers, it was possible to identify other descriptive variables that could increase the probability of a given player sustaining an injury, namely: (a) the player's position and corridor; (b) the player's age (in months) at the time of the session or match; (c) the day of the week; (d) the type of session (i.e., training or match session); (e) the result of the game (win, draw, or loss); (f) the game location (own stadium or opponent's stadium); (g) the competition of the match session (in this case, Liga NOS [first division] or Taça de Portugal [national cup]); (h) the number of exercises the players did in a session; and (i) the duration of the session. It is also important to note that, before feeding the information (GPS parameters and descriptive variables) to the models, categorical parameters were converted to numerical values using one-hot encoding. The study utilized the parameters obtained from the GPS receivers as well as the other descriptive variables as features. Additionally, the variable "injury" was incorporated into the dataset as a binary target variable. Nevertheless, not every feature was retained for the final model; feature selection using the mRMR method was conducted to exclude variables that were either irrelevant or redundant. Outliers were identified based on upper and lower bounds calculated for each GPS parameter's values $x_i$, taking into account the data from injured players. The upper bound $x_i^U$ and lower bound $x_i^L$ for the $i$-th parameter were computed as

$$x_i^L = \tilde{x}_i - \left( \left\lfloor \frac{\min_i}{\sigma_i} \right\rfloor + 1 \right) \times \sigma_i, \tag{1}$$

and

$$x_i^U = \tilde{x}_i + \left( \left\lfloor \frac{\max_i}{\sigma_i} \right\rfloor + 1 \right) \times \sigma_i, \tag{2}$$

where $\tilde{x}_i$ denotes the median, $\sigma_i$ the standard deviation, and $\min_i$ and $\max_i$ the minimum and maximum value of the $i$-th parameter, respectively. The lower and upper bounds were derived from the Standard Deviation Method for identifying outliers. In contrast to the original method, which uses a predetermined number of standard deviations, the approach followed here calculates the number of standard deviations from the maximum and minimum values of each parameter, ensuring that at least one standard deviation is always maintained. As a result, the $j$-th data point of the $i$-th parameter was recomputed as

$$x_{i,j} = \begin{cases} x_i^L, & \text{if } x_{i,j} < x_i^L \\ x_i^U, & \text{if } x_{i,j} > x_i^U \\ x_{i,j}, & \text{otherwise,} \end{cases} \tag{3}$$

It is also noteworthy that $x_i^U$ and $x_i^L$ were calculated based on data from injured players only: injury events can be caused by abnormal parameter values (compared with noninjury records), and such values should not be treated as outliers.
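A compact NumPy sketch of this clamping rule follows, under the reading of Eqs. (1)–(3) given above (i.e., the floored term is the parameter's minimum or maximum divided by its standard deviation); the function and argument names are illustrative.

```python
import numpy as np

def clamp_outliers(X, injured_mask):
    """Clamp each parameter to the bounds of Eqs. (1)-(3).
    X: (n_samples, n_params) array; injured_mask: boolean array marking
    records from injured players (bounds are computed on these rows only,
    but the clamping is applied to all rows)."""
    X = X.astype(float).copy()
    Xi = X[injured_mask]
    med = np.median(Xi, axis=0)
    sd = Xi.std(axis=0)
    sd = np.where(sd == 0, 1.0, sd)            # guard degenerate parameters
    k_lo = np.floor(Xi.min(axis=0) / sd) + 1   # number of SDs, Eq. (1)
    k_hi = np.floor(Xi.max(axis=0) / sd) + 1   # number of SDs, Eq. (2)
    lower = med - k_lo * sd
    upper = med + k_hi * sd
    return np.clip(X, lower, upper)            # Eq. (3), applied to all rows
```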
The outlier treatment was, however, applied to all records, including noninjury records. The decision to replace outlier values with the calculated upper or lower bounds was made to maintain the integrity of the data distribution and minimize distortion of the features' natural range. In other words, this method maintains the important differences between a player's data points, which is essential for the machine learning models to correctly identify patterns related to the risk of injury. Parameters were separately standardized such that each parameter's values have a mean of zero and a standard deviation of one, that is,

$$x_i = \frac{x_i - \bar{x}_i}{\sigma_i}, \tag{4}$$

where $\bar{x}_i$ denotes the arithmetic mean. This process is essential since the parameters are measured in different units. Besides, having all the parameters on the same scale improves the stability of the models during the learning phase. In order to reduce the number of input variables of the predictive injury model, and thus the computation time of training it, a dimensionality reduction was performed by eliminating all features that demonstrated zero variance (i.e., constant variables). In total, 449 features were processed in this step of the machine learning pipeline (424 GPS features and 25 descriptive variables), resulting in 189 features being removed. In other words, 260 features (237 GPS features and 23 descriptive variables) were kept for further investigation. After removing all features exhibiting zero variance, an additional feature selection analysis was conducted to obtain the most important features for the injury detection model. In this case, the analysis consisted of using the mRMR method to calculate the importance of each feature and rank the features accordingly. Subsequently, the most important features were identified, and the top p features were selected for inclusion in the injury detection model. mRMR is a model-agnostic feature selection mechanism that finds an optimal set of features minimizing the redundancy between the independent variables while maximizing the relevance with respect to the dependent variable (in this case, injury). The mRMR algorithm also ranks the features according to their importance and redundancy. Therefore, in this study, the method was used only to sort the GPS and descriptive parameters, and only the top features were used in the models. In order to determine the ideal number of features for each injury detection model, and thus perform feature selection, the models were tested using various feature sets derived from the standardized dataset with outliers treated. Each set of features was composed of the first p (where p = 10, 20, ⋯, 260) most important and least redundant features according to the mRMR method. The combination of this wrapper feature selection mechanism with the mRMR method is a novel approach in the field of automatic injury prediction and is significantly faster than a sequential forward search. S1 Table provides a comprehensive list of the GPS and descriptive parameters collected in this study, including their ranking according to the mRMR method.
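For readers who want to reproduce the ranking step, below is a minimal greedy mRMR sketch. It scores relevance with mutual information and redundancy with mean absolute correlation, which is one common variant; published mRMR implementations differ in the exact relevance and redundancy measures, so this should be treated as illustrative rather than as the authors' exact procedure.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

def mrmr_rank(X: pd.DataFrame, y: pd.Series) -> list:
    """Greedy mRMR ranking: at each step pick the feature with maximum
    relevance (mutual information with y) minus mean redundancy
    (mean absolute correlation with the already-selected features)."""
    relevance = pd.Series(mutual_info_classif(X, y, random_state=0),
                          index=X.columns)
    corr = X.corr().abs()
    selected = [relevance.idxmax()]
    candidates = set(X.columns) - set(selected)
    while candidates:
        redundancy = corr.loc[list(candidates), selected].mean(axis=1)
        score = relevance[list(candidates)] - redundancy
        best = score.idxmax()
        selected.append(best)
        candidates.remove(best)
    return selected  # features sorted by importance; keep the top p

# ranking = mrmr_rank(X_train, y_train); X_top = X_train[ranking[:20]]
```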
This work used support vector machines (SVMs), feedforward neural networks (FNNs), and AdaBoost classifiers to map the GPS parameters and the descriptive variables to the recorded injury events. In other words, the predictive models were fed the most important GPS parameters (with dummy days added, followed by outlier treatment and standardization) combined with the most important descriptive variables (such as player position and corridor). It is highlighted again that, in order to simplify the data representation, the players' data per exercise and session were grouped into a single record per day, i.e., each player has one sample per day. Each data point in the dataset was assigned a discrete label based on the occurrence of an injury event for a given player and day (i.e., 1 = injured, 0 = noninjured). The predictive models utilize these labels to make injury predictions. Although the injury labels presented to the models are binary, the models' output is a continuous prediction in the range (0, 1). Applying a threshold can nevertheless convert these continuous predictions to a discrete label. That procedure opens the opportunity to adapt the threshold according to the strategy defined for the team. For instance, decreasing the threshold at the start of the season can make the model more sensitive to potential injury events, leading coaches to reduce training session intensity in order to have more players free from injury during this period. On the other hand, at the end of the season or in preparation for big games, the football coaches, coaching staff, and sports science staff could opt to increase this threshold, balancing better preparation (with higher training loads) against an increased risk of injury. It is important to note that, to report the results of our approach, the threshold was selected using the Receiver Operating Characteristic (ROC) curve. The ROC curve is a graphical representation that illustrates the trade-off between sensitivity and the False Positive Rate (FPR) at various threshold values. The selected threshold was the point on the ROC curve nearest to the top-left corner. This point corresponds to the highest sensitivity and minimum FPR and is often referred to as the "elbow" of the ROC curve, signifying the location where the balance between sensitivity and specificity (1 − FPR) is optimal. It is also important to highlight that thresholds were calculated on the training sets and subsequently applied to the testing sets.
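This "closest to the top-left corner" rule is straightforward to implement with scikit-learn's roc_curve; the sketch below is a plausible rendering of the selection step just described, with hypothetical variable names.

```python
import numpy as np
from sklearn.metrics import roc_curve

def elbow_threshold(y_true, y_score):
    """Decision threshold at the ROC point closest to the top-left
    corner (FPR = 0, TPR = 1), computed on the training split only."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    dist = np.sqrt(fpr**2 + (1 - tpr)**2)
    return thresholds[np.argmin(dist)]

# threshold = elbow_threshold(y_train, model.predict_proba(X_train)[:, 1])
# y_pred = (model.predict_proba(X_test)[:, 1] >= threshold).astype(int)
```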
The task of predicting injury events presents an imbalanced machine learning problem, since only 0.20% of the observations in the total dataset correspond to injury events. For this reason, each model was trained using both cost-sensitive and traditional learning (i.e., cost-insensitive learning) in a supervised manner. In cost-sensitive learning, misclassification errors have different costs depending on the class, and model training minimizes the total cost. This enables the models to learn how to classify minority classes correctly (in this study, the injury class). Traditional learning, on the other hand, does not explicitly address class imbalance, theoretically making the models less prone to classify the minority class correctly. In the cost-sensitive models, the cost of each class was inversely proportional to the class frequencies. For the SVMs, the calculated class weights were used to adjust the margin proportionally. For AdaBoost, the weights were applied to increase the influence of misclassified instances, ensuring that the model focuses more on injury records. Similarly, for FNNs, the class weights were applied to adjust the weighted squared error loss function, making the model more attentive to injury records during training and enhancing its sensitivity to these instances. The stratified cross-validation method was used to validate the predictive models. This was necessary due to the class imbalance between injured and noninjured records. Additionally, the scarcity of injury records required an effective validation method, as creating a validation set with the required number of samples was not possible. The cross-validation strategy splits the data into two subsets (i.e., two folds), as depicted in Fig 1. Each fold holds a random training dataset and a test dataset. In both folds, the training datasets contain only players who were injured at least once during the season (50% of all injured players in each fold). The subsets are subject-independent, meaning that each player's data is exclusively in only one of the dataset splits. In other words, the cross-validation strategy was implemented on a player-by-player basis, i.e., the division of data was conducted with respect to individual players rather than individual samples. As a result, the stratified cross-validation yielded split sets of 1501 samples for the training set and 3090 for the testing set in fold 1, with 11 and 7 injury records (i.e., records where injury = 1) included in the training and testing sets, respectively. For fold 2, the training set comprised 1410 records and the testing set 3082; as expected, the fold-2 training and testing sets contain 7 and 11 injury records, respectively. It is important to highlight that the train and test datasets were created from the complete dataset after executing outlier treatment, standardization, feature selection, and feature importance sorting. To help address the class imbalance, an undersampling technique based on the k-means clustering algorithm was applied to both training sets before they were fed to the models. Essentially, the k-means clustering method undersamples noninjured records (i.e., samples where injury = 0) by replacing them with cluster centroids calculated on the noninjured records. The ratio of noninjured records to injured records was set at 40%, with k = 8. All the predictive models were built using Python (v. 3.8) with the TensorFlow (v. 2.3.0) and scikit-learn (v. 0.24.2) libraries, in conjunction with other supplementary libraries such as pandas. Models were trained on a machine equipped with an Intel Core i7–9700F CPU running at 3.00 GHz, with 32 GB of RAM. An SVM is a supervised learning algorithm that constructs an optimal hyperplane through an optimization strategy. This hyperplane is designed to maximize the margin between the data points of distinct classes; in the context of this work, it differentiates between noninjured and injured records. SVMs are known for their good generalization capabilities, robustness, and effectiveness, even in high-dimensional spaces, and they have low computational requirements. Besides that, they are commonly used for injury forecasting. In order to develop the SVM-based injury model, the squared L2 penalty was selected for regularization, and the radial basis function kernel was employed. Also, the classification output of the SVM was transformed into a probability using Platt scaling, thus making it possible to use a custom threshold for the final binary classification.
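The undersampling step and the cost-sensitive SVM can be sketched as follows with scikit-learn. How the paper combines the 40% ratio with k = 8 is not fully specified, so the number of centroids here is derived from the target class ratio; this is one plausible reading, and all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def kmeans_undersample(X, y, target_ratio=0.4, seed=0):
    """Replace noninjured records (y == 0) with k-means centroids, choosing
    the number of centroids so the classes end up at the target
    injured:noninjured ratio (one plausible reading of the procedure)."""
    X_min, X_maj = X[y == 1], X[y == 0]
    n_centroids = max(1, int(len(X_min) / target_ratio))
    km = KMeans(n_clusters=n_centroids, random_state=seed, n_init=10).fit(X_maj)
    X_bal = np.vstack([X_min, km.cluster_centers_])
    y_bal = np.concatenate([np.ones(len(X_min)), np.zeros(n_centroids)])
    return X_bal, y_bal

# Cost-sensitive RBF SVM; probability=True enables Platt scaling so a custom
# decision threshold can be applied to the predicted probabilities.
svm = SVC(kernel="rbf", class_weight="balanced", probability=True,
          random_state=0)
# X_bal, y_bal = kmeans_undersample(X_train.values, y_train.values)
# svm.fit(X_bal, y_bal)
# p_injury = svm.predict_proba(X_test)[:, 1]
```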
An FNN is a directed graph that processes input data through weighted connections, optimized during training to approximate the desired output. FNNs were chosen for this work mainly due to their ability to learn complex, nonlinear relationships between the inputs (i.e., the multiple predictor variables) and the desired output. Besides that, although FNNs have been shown to be successful in various sport science works, their application to injury prediction is still to be further explored. In fact, to the authors' best knowledge, only the work developed by Ruddy et al. used FNNs for injury prediction (in that case, injuries in professional Australian footballers). For this work, FNNs with three layers were used. The number of units in the input layer was tuned by testing different numbers of features. The hidden layer consisted of ten units with the hyperbolic tangent activation function. A single neuron with the sigmoid function was used for the output layer, with range (0, 1). A dropout regularization technique was also applied between the hidden and output layers. The dropout probability was set to 60% to prevent overfitting due to the limited data available for feeding the model. Lastly, and similarly to the SVM models, the chosen regularization parameter was the squared L2 penalty. The FNNs were trained over 20 epochs using the Adam optimization algorithm with a learning rate of 0.001, with binary cross-entropy as the loss function. AdaBoost is a well-known meta-learning algorithm in the boosting family. The algorithm combines multiple weak classifiers into a strong one by iteratively adjusting the weights of training samples based on the performance of previous classifiers. AdaBoost has already been successfully applied to injury prediction; however, only two such works have been reported, to the authors' best knowledge. For this reason, exploring the AdaBoost algorithm in other contexts was considered necessary. AdaBoost is often used with decision trees (which was also the case in this work), where the learners are stumps, i.e., one-level decision trees. Stumps are added to the ensemble at each iteration to minimize the errors of the previous weak learners. A maximum of 50 stumps were added to the final model. Sensitivity is the ability of the model to identify injuries correctly, and specificity is the ability of the model to identify noninjuries correctly. The average Geometric Mean (GMEAN) between the two folds was calculated to find a balance between sensitivity and specificity. This function is defined as

$$\text{GMEAN} = \left( \text{Sensitivity} \times \text{Specificity} \right)^{\frac{1}{2}}, \tag{5}$$

with higher values representing better models.
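A Keras sketch of the FNN just described, together with the GMEAN metric of Eq. (5), might look as follows; the L2 strength and the class-weight values are assumptions, as they are not reported in the text.

```python
import numpy as np
import tensorflow as tf

def gmean(sensitivity, specificity):
    """Geometric mean of sensitivity and specificity, Eq. (5)."""
    return np.sqrt(sensitivity * specificity)

def build_fnn(n_features, l2=1e-3):
    """Three-layer FNN matching the described architecture: tanh hidden
    layer of 10 units, 60% dropout, sigmoid output, L2 penalties.
    The L2 strength is an assumption."""
    reg = tf.keras.regularizers.l2(l2)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="tanh",
                              kernel_regularizer=reg,
                              input_shape=(n_features,)),
        tf.keras.layers.Dropout(0.6),
        tf.keras.layers.Dense(1, activation="sigmoid",
                              kernel_regularizer=reg),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="binary_crossentropy")
    return model

# model = build_fnn(n_features=20)
# model.fit(X_bal, y_bal, epochs=20, class_weight={0: w0, 1: w1})  # cost-sensitive
```

For the AdaBoost counterpart, scikit-learn's AdaBoostClassifier with n_estimators=50 uses depth-1 decision trees (stumps) as its default base learner, matching the ensemble described above.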
The features sorted by the mRMR method were tested for the best combination by sequentially adding features to the models. In order to achieve statistical stability, each combination of model, type of learning, and number of features was tested 500 times, resulting in 78,000 simulations. The results of the current work are divided into three sections. The first section investigates the effect of each type of learning on the GMEAN value. Then, the best models found among the simulations are presented. Finally, the last section assesses the quality and stability of those models. Fig 2 compares, for each model, the cost-sensitive and traditional learning approaches in terms of the average GMEAN value over the 500 runs for each number of features. It also allows studying the number of features each model requires to reach a certain GMEAN level. From that figure, it is possible to infer that cost-sensitive learning does not represent a substantial advantage in average GMEAN for the AdaBoost models. In the training splits, if the number of features is lower than 230, both types of learning behave similarly; if more features are fed to the models, however, the traditional learning models are, on average, significantly better than the cost-sensitive learning models. In the testing splits, both types of learning follow the same trend in average GMEAN values, although traditional learning is better at some numbers of features and cost-sensitive learning at others (especially in the range of 180 to 210 features). For the FNN and SVM models, however, cost-sensitive learning was always the best methodology for training the models: the average GMEAN value was consistently superior to traditional learning in both the training and testing splits. It is also important to note that for the AdaBoost and SVM models, both types of learning require approximately the same number of features to reach a certain level of average GMEAN. By contrast, FNNs trained with a cost-sensitive learning approach needed fewer features than with traditional learning. As an illustration, to reach an average GMEAN of 60.00% on the training splits, cost-sensitive learning requires 40 features, while traditional learning requires 180 (127.27% more). The best predictive models were selected in a three-step process. First, the best classifiers were selected by considering the highest GMEAN value in the training splits for each model and type of learning (independently of the number of features). Then, the same models were tested on data from players not used to train the models (i.e., unseen data). The final best models were then selected considering the highest GMEAN in the testing splits. It is highlighted again that, due to data limitations, it was not possible to create a validation set that would have further improved the selection of the best models. Those models are depicted in Table 1, along with their respective evaluation metrics and the best number of features. From this table, it can be concluded that the best two models used the AdaBoost learning method. Regardless of the type of learning, these classifiers obtained, in the training splits, a mean sensitivity of 88.31%, a specificity of 100.00%, and an accuracy of 96.60%, resulting in a GMEAN of 93.97%. After the AdaBoost classifiers, SVMs were the second-best models, obtaining a GMEAN ranging from 91.33% to 93.10% in the training splits. Although the AdaBoost and SVM models presented similar sensitivity values (88.31%), i.e., an equivalent capacity for identifying true injury events, the mean specificity dropped by 5.56% with cost-sensitive learning and by 1.85% with traditional learning. In turn, FNNs were the models with the lowest performance, with a GMEAN ranging from 81.30% to 84.78%. Consequently, these classifiers also obtained the lowest mean sensitivity and specificity values in the training splits (76.62% and 85.81%, respectively).
After choosing the best classifiers on the training datasets, the models that best predicted the injury and noninjury events in the testing splits were the AdaBoost and the SVM, both combined with cost-sensitive learning. The AdaBoost classifier obtained a GMEAN value of 71.47%, while the SVM obtained the more balanced result between sensitivity and specificity, reflected in the highest GMEAN value obtained (72.80%). At the same time, the SVM model required only 20 features (vs. 90 features required by the AdaBoost classifier) to predict more than 70% of injury and noninjury events. Since they achieved similar GMEAN values, these two models were selected as the best two models of this work. It is noteworthy, however, that the AdaBoost classifier combined with traditional learning was the model with the highest sensitivity (86.36%), at the cost of low specificity and accuracy; this is a common scenario in imbalanced classification problems. At the other end of the spectrum, the model combining the SVM with traditional learning could not detect any injury, invalidating its use in real-world scenarios. Similarly to the training splits, the FNN achieved one of the lowest performances among the set of models and types of learning. Indeed, the ability of the FNN trained using cost-sensitive learning to detect injuries was even inferior to that of a purely random classifier. The combination of the FNN with traditional learning is, moreover, at the limit of its usefulness in a real-world application, since it can only detect 57.79% of the actual injury events. Nevertheless, both FNN models presented a high percentage of correct detections of noninjury events (81.81% and 82.19%, respectively). The effect of cost-sensitive learning on the evaluation metrics, compared to traditional learning, is visible here. In this work, all models trained with cost-sensitive learning achieved sensitivities in the training splits equal to or higher than those of traditional learning. As expected, accuracy was sacrificed in the training splits; in the testing splits, it is interesting to note that traditional learning seems more favorable for correctly predicting injury events. However, the more balanced results (i.e., the higher GMEAN values) are usually achieved with cost-sensitive learning coupled with fewer features. Fig 3 depicts the radar plots containing the mean and standard deviation information obtained from the 500 runs of the model combinations, types of learning, and numbers of features identified in Table 1. This figure thus enables assessing the quality and stability (based on the standard deviation) of the injury forecasting models in the training and testing phases. At the same time, it also compares cost-sensitive learning with traditional learning, in terms of GMEAN, accuracy, specificity, and sensitivity, for both the training and testing splits. AdaBoost models trained with cost-sensitive learning consistently learned to predict 88.31% of the injury events in the training splits. Although the 500 trials exhibited, during training, the same constant evaluation metric values from Table 1, the same was not true for the testing phase, especially for the sensitivity metric. In testing, these models correctly predicted, on average, 68.09% (SD = 5.82%) of the injury events and 64.33% (SD = 0.51%) of the noninjury events, resulting in an average GMEAN value of 66.11% (SD = 2.72%).
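The stability assessment can be reproduced in outline as follows. This is an illustrative re-creation of the 500-run protocol; model_factory is a hypothetical helper standing in for any of the classifier constructors above.

```python
import numpy as np
from sklearn.metrics import recall_score

def stability(model_factory, X_tr, y_tr, X_te, y_te, n_runs=500):
    """Repeatedly retrain and test a model to estimate the mean and
    standard deviation of sensitivity, specificity, and GMEAN."""
    sens, spec = [], []
    for seed in range(n_runs):
        model = model_factory(random_state=seed)
        model.fit(X_tr, y_tr)
        y_hat = model.predict(X_te)
        sens.append(recall_score(y_te, y_hat, pos_label=1))  # injuries
        spec.append(recall_score(y_te, y_hat, pos_label=0))  # noninjuries
    gmeans = np.sqrt(np.array(sens) * np.array(spec))
    return gmeans.mean(), gmeans.std()
```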
SVM models trained with cost-sensitive learning, on the other hand, were shown to predict, on average, 79.43% (SD = 13.43%) and 71.43% (SD = 0.00%) of the injury events in the training and testing phases, respectively. Unlike the previous AdaBoost models, SVMs trained with cost-sensitive learning were more stable in the testing phase than in the training phase across all evaluation metrics. For instance, the specificity obtained in the training splits was 89.36% with a standard deviation of 14.62%; conversely, in the testing phase, a specificity of 74.10% with a standard deviation of 0.03% was reached. It is also worth noting the average GMEAN values, which were 84.25% (SD = 14.01%) for training and 72.75% (SD = 0.01%) for testing. AdaBoost models trained using traditional learning gave more emphasis to specificity and accuracy at the expense of a lower sensitivity value in the testing splits. Still, these models correctly detected, on average, 60.47% (SD = 15.42%) of the injury events and 68.76% (SD = 7.35%) of the noninjury events. On the other hand, the FNN models were again of doubtful applicability in real scenarios, detecting only 40.25% (SD = 15.24%) and 54.32% (SD = 15.00%) of the actual injury events when using cost-sensitive learning and traditional learning, respectively. It is also noteworthy that, across the 500 trials, no SVM model trained using traditional learning was able to detect true injury events. Overall, models trained with cost-sensitive learning were shown to predict, as expected given this approach's sensitivity to the minority class, more injury events correctly than those trained with traditional learning. Equally important is the fact that cost-sensitive learning also produced the most stable models during the training and testing phases. However, the FNN models are an exception to these two findings. In particular, although using cost-sensitive learning on these models provided higher specificity and average accuracy values in the testing datasets, the sensitivity and GMEAN metrics were lower than with traditional learning. By the same token, the standard deviations under cost-sensitive learning were sometimes higher than under traditional learning, although the differences were minimal. The current investigation aimed to develop an automatic technique for forecasting injury events. This technique is based on the information from the GPS devices collected throughout games and training sessions, combined with other descriptive variables (such as the player's corridor). In this view, the conducted study allowed us to derive two models: the first consisted of an AdaBoost classifier and the second of an SVM. Both models generalized the results with acceptable sensitivity, specificity, and accuracy and were revealed to be stable, thus suggesting their applicability in real-world scenarios. To the authors' best knowledge, this is the first study to use the mRMR algorithm to rank the features according to their relevance and redundancy for forecasting injuries. In this view, the AdaBoost classifier required 90 features to detect 78.57% of the actual injury events. On the other hand, the SVM model required only 20 features but obtained a lower sensitivity (71.43%). Nevertheless, the selected variables in both cases equipped the two predictive models with a high explanatory capability for injury events. The player's position and the type of session (i.e., training or match session) were essential descriptive variables for the two injury prediction models (cf. S1 Table ).
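The mRMR ranking step could be reproduced along the following lines. The sketch uses the third-party mrmr_selection package and synthetic placeholder data; it is not the implementation used by the authors.

```python
import numpy as np
import pandas as pd
from mrmr import mrmr_classif  # third-party: pip install mrmr_selection

# Placeholder data standing in for the GPS-derived feature matrix.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 300)),
                 columns=[f"f{i}" for i in range(300)])
y = pd.Series(rng.integers(0, 2, size=200))

# Rank features by minimum redundancy / maximum relevance; K was then varied
# by sequentially adding ranked features to the models.
top_features = mrmr_classif(X=X, y=y, K=90)
print(top_features[:10])
```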
Player position has not been broadly reported to be associated with increased injury risk ; however, injury rates are documented to be higher in matches than in training sessions . Possible justifications for the increased injury rate in match sessions may arise from the variations in the intensities of training and competition and, to a certain extent, from the type of training before the next match (e.g., tactical practice) . Thus, the two developed models seem to capture this information and use it to predict injuries in conjunction with the other predictors. The day of the week is another descriptive variable the models require for predicting injuries. To the authors' best knowledge, no other study on injury prediction in professional football has reported a relationship between the day of the week and injury risk. All things considered, the added descriptive variables were essential to leverage the models' performance in predicting injury and noninjury events. It is also interesting to note that AdaBoost used the player load and all the velocity and acceleration bands to predict injuries. The information about the player load across all accelerometer axes, together with the distance, duration, and effort count performed in all bands, lets the model detect when players are in undertraining or overtraining situations. Both situations are boosters of injury events, with some studies reporting a U-shaped relationship between these parameters and injury risk . Interestingly, those models do not use the players' age for forecasting injuries. Indeed, the literature on the relationship between age and risk of injury is inconsistent: some works suggest that the risk of injury increases with age, while others report insufficient evidence to infer a significant effect of age on injury risk . The use of cost-sensitive models is an approach followed by several studies for injury detection (see, e.g., ) due to the problem's imbalanced nature, i.e., the significant difference between the numbers of injury and noninjury events. However, to the authors' best knowledge, this technique had only been employed in studies based on data collected from screening battery processes, leaving its use with longitudinal GPS data to be investigated in this work. In the screening studies , cost-sensitive models performed better than traditional learning classifiers. Despite having a different data collection process, this study is also in line with those results; however, AdaBoost coupled with cost-sensitive training did not prove superior to traditional learning in terms of average GMEAN value. A plausible explanation is that, unlike the FNN and SVM models, the AdaBoost algorithm is an accuracy-oriented classification algorithm. Thus, even with a cost-sensitive learning approach and oversampling, the specified cost for the minority class could have been insufficient to incline the boosting strategy toward the minority class . The most recent studies have investigated, on the other hand, the use of tree-style classifiers for injury detection, since these models provide the classification rules and the most critical features for injury prediction (see, e.g., ). However, a trade-off between performance and interpretability must be made; as a result, tree-style classifiers usually perform worse than other methods. The current work, in turn, focuses on the performance metrics, leaving the interpretability aspect for future work.
In that view, after conducting the 500 runs for each combination, the AdaBoost and the SVM were identified as the best models. These models use cost-sensitive learning and can detect, respectively, 78.57% and 71.43% of the injuries in the testing datasets while keeping an acceptable (>65%) true negative rate. Although there is a known trade-off between correctly predicting more injuries and incorrectly flagging noninjury events , the SVM model is the more balanced of the two, obtaining an Area Under the receiver operating characteristic Curve (AUC) of 0.85. A comparison between the results reported by previous state-of-the-art works and the results attained in this work is presented in Table 2 . It is important to note that, for the studies that reported multiple classifiers, the best model was selected based on the highest sensitivity, since not all studies reported the metrics used in the current study to compute the GMEAN. Besides being the first work that combined machine learning with GPS data to predict injuries, the work of Rossi et al. can be considered the most influential in this area. Their work established an injury forecaster capable of predicting 80.00% of true injury events, and it also provided an interpretable framework linking injury risk and training performance. However, the AUC metric suggests that their model might create a significant number of false alarms and thus unnecessarily bench players before the next game or training session. Although this situation is also visible in the proposed AdaBoost model, the SVM model remedies it by obtaining a more balanced result between sensitivity and specificity at the expense of lower sensitivity. The models proposed in this study do not require constant manual data collection to forecast injury events accurately, thus being cost- and time-effective. This, however, is not the case for some studies in the literature that combine GPS data with other pieces of information. For example, Naglah et al. obtained one of the highest sensitivity values. Although meritorious, their proposal uses GPS data combined with players' questionnaire data. Requiring players to fill out questionnaires frequently is a strategy that can be time-consuming and challenging to incorporate into players' routines. Furthermore, the increase in sensitivity (about 2%) is not large enough to justify the questionnaires. In the same view, Rossi et al. combined GPS data with blood parameters to assess individual psychophysiological responses to training and create an injury forecasting model that predicts injury events in the subsequent seven days. Although only three blood samples were collected, on average, from each player, the post-collection procedure is complex, costly, time-consuming, and requires specialized personnel. Besides that, they were unable to predict more than 65.00% of the injury events. Vallance et al. presented the best results in the literature, detecting almost every injury event while balancing sensitivity and specificity. Contrary to the approach presented in the current proposal, Vallance et al. generated injury predictions for the forthcoming week or month. The superior performance of Vallance et al.'s method, when compared to the present approach and other methodologies in the scientific literature, could be attributed to the difference in prediction time frames.
Generally, forecasts covering a more extended period tend to appear more accurate because they allow for a wider margin of error, at the cost of less precise predictions. For example, predicting an injury for the next week suggests a potential occurrence at any time during those seven days, which is inherently less precise than a prediction pinpointing a specific day for the potential injury. The main findings of this study will help the coaching staff to identify football players in high-risk situations for injury and improve their decision-making. This will inevitably leverage the team's performance and, simultaneously, reduce the club's economic cost due to injury events. Besides, it will enable constant monitoring of multiple parameters without manual intervention and analysis by the coaching staff, which is limited due to the large number of parameters collected by the GPS receivers. Indeed, knowing the effects of training and competition on injury risk will also improve training design and ensure that players receive adequate training sessions before and after matches by keeping the correct balance between high and low intensities. Ultimately, the possibility of adjusting the threshold used to convert continuous injury predictions into discrete labels will enable the coaching staff to draw more informed football tactics; a sketch of this adjustment is given after the concluding paragraphs. This will make it possible to control the risk of injury events according to, for example, the team's position in the championship table. This study is to be seen in the light of some limitations, which are, at the same time, possible directions for future work. The number of injuries would need to be larger to test more complex models (eventually, recurrent models such as long short-term memory networks) or to fully assess the models' prediction capabilities. Additional instances from, e.g., another season or different cohorts (for example, U23 and U19 teams) would remedy this situation. The authors thus highlight that the models were only validated for the analyzed football team. This study did not specifically measure muscle and body fatigue or include certain types of exercise, such as cardio workouts, that players might have engaged in on their rest days (for example, during the dummy days). These factors might also be connected to the risk of injury and could improve the injury prediction models. Nonetheless, this study aimed to develop an automated system to predict injury risks daily throughout a football season without requiring continuous manual data collection, such as that needed for assessing muscle and body fatigue (for example, through self-reports). Although direct measurements of muscle fatigue or specific activities on off days were not part of the data collected, it is reasonable to assume that the information obtained from the GPS devices provides a reliable indication of the players' physical condition. In future studies, it would be beneficial to include additional physiological parameters of the players (e.g., history of prior injuries and rating of perceived exertion for each session) to enhance the models further. Unfortunately, this information was not available at the time of this study. Moreover, future studies should focus on enhancing model interpretability without significantly sacrificing performance. This work used three machine learning methods (SVM, FNN, and AdaBoost) to predict injuries in professional football players.
Besides using the information from the GPS receivers, the models incorporated the effect of sudden changes in player load by including dummy days (i.e., records with zeros for all parameters). Descriptive variables, such as player position and day of the week, were also included and were shown to improve the models' ability to predict injuries. Before feeding information to the models, features were sorted and selected according to their redundancy and relevance to injury risk using the mRMR method. This procedure revealed the player's position, type of session, velocity bands, and acceleration bands as essential features for injury prediction. In turn, the predictive models were shown to be able to accurately detect injury and noninjury events, especially the AdaBoost and the SVM trained with cost-sensitive learning. These models were able to predict more than 70.00% of new injury and noninjury events and remained stable in terms of performance metrics. Comparing these results with those available in the literature, the models developed in this work stand out for (a) being the most balanced (between sensitivity and specificity), (b) not requiring lengthy and manual data collection processes, and (c) predicting injuries over short time frames (in this case, one day). Although the number of injuries was not large enough to fully assess the models' prediction capabilities, the current models can be used in real-world scenarios. These models will help the coaching staff to identify football players in high-risk situations and, thus, leverage the team's performance while minimizing rehabilitation costs.
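Finally, the threshold adjustment mentioned in the discussion can be illustrated with a minimal sketch; the risk scores are assumed to come from any of the fitted models' probability outputs, and the cutoff value is illustrative.

```python
import numpy as np

def label_injuries(risk_scores: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert continuous injury-risk scores into discrete labels.
    Lowering the threshold flags more players as at-risk (higher
    sensitivity, lower specificity); raising it does the opposite."""
    return (risk_scores >= threshold).astype(int)

# Example: a more conservative coaching staff might lower the cutoff to 0.3.
labels = label_injuries(np.array([0.12, 0.35, 0.81]), threshold=0.3)
```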
PMC11694973
In recent years, e-commerce has developed rapidly, but the phenomenon of poor-quality goods “squeezing out” quality goods is frequent, and the e-commerce market has even been labeled a “lemon market” . Online reviews, sales rankings, and other reputation information are important factors for consumers in judging the quality of e-commerce products and making consumption decisions. When information asymmetry occurs, merchants with sufficient information are in an advantageous position, while poorly informed consumers are at a disadvantage, leading to adverse selection . Not only does this waste consumers’ time and money, but, more importantly, it distorts the marketplace and undermines the foundations of honest competition. The reputation information of e-commerce products mainly includes consumer feedback evaluation, sales rankings, credit ratings, and real-name certification , among which feedback evaluation and sales rankings play a significant role in the reputation mechanism . False reputation information, produced through practices such as “click farming,” “cash rebates for favorable comments,” and “pay per click,” is cheap and easy to generate and has become a hotspot of reputation information asymmetry. Therefore, we focus our research on reputation information in the form of feedback evaluation and sales ranking. To mitigate the “lemon market” phenomenon, some efforts have been made. For defaulting to five-star positive reviews between October 13, 2020 and May 17, 2021, the Sam’s Club APP was fined 300,000 yuan for violating the Anti-Unfair Competition Law ( https://finance.sina.com.cn ). Amazon has long strictly prohibited false information and similar behaviors on its platform. In 2023, the company used machine learning and artificial intelligence (AI) technologies and professional expert investigation teams to monitor and block false reviews; it proactively blocked more than 250 million suspected false reviews on its platform and took legal action against more than 150 bad actors involved in review abuse in the United States, Europe, and China ( https://www.takungpao.com ). However, review manipulation is persistent, and the rise of generative AI has made it easier than ever for bad actors to write fake reviews. In August 2024, the U.S. Federal Trade Commission (FTC) issued a ban on fake e-commerce reviews that explicitly prohibits companies from knowingly buying, selling, or promoting fake online reviews, including AI-generated reviews, and regulates all forms of fake review behavior in detail ( https://www.163.com ). It can be seen that the problem of asymmetric information about e-commerce product reputation is receiving increasing attention. Network externalities are a basic attribute of the e-commerce market: the value of connecting to a network depends on the number of others connected to it . In general, the more users there are, the higher the utility of each user. Network externalities are divided into direct and indirect network externalities. Direct network externalities arise from interactions between users on the same side of the platform (i.e., the same type of users, whether buyers or sellers); indirect network externalities arise from interactions between users on the two sides of the platform, such as buyers and sellers . The network externality of the e-commerce market is increasingly affected by reputation information.
Real reviews increase user value and promote the growth of the network, while review manipulation has a negative impact . Scholars have also found that network externalities can affect social welfare by influencing the peer effects and conformity behavior of consumers . Therefore, we aimed to explore the theoretical mechanism of the impact of e-commerce product reputation information asymmetry on social welfare from the perspective of network externalities, to provide a theoretical basis for the governance of reputation information asymmetry on e-commerce platforms. We sought to answer the following questions: (1) Can the e-commerce product reputation mechanism effectively increase the total welfare of consumers, merchants, platforms, and society? (2) How do consumer, merchant, and platform welfare and total social welfare change when the information is asymmetric? (3) What factors affect these changes? To answer these questions, based on the Hotelling model and taking the perspective of network externalities in the context of China’s duopolistic e-commerce platforms, we took consumer online feedback evaluation and sales ranking as the main objects of e-commerce product reputation information asymmetry, and discussed the changes in consumer utility, merchant utility, platform revenue, and total social welfare when e-commerce product reputation information is asymmetric. Our research contributions are twofold. First, we theoretically demonstrate the impact of e-commerce product reputation information asymmetry on platform stakeholders and social welfare. This compensates for the lack of consistency among empirical conclusions, enriches the theory of the e-commerce reputation mechanism, and provides a theoretical reference for future empirical research. Second, we discuss the factors influencing social welfare in the e-commerce market and provide ideas and references regarding the role of reputation mechanisms in the e-commerce market and the formulation of platform governance strategies. Scholars have previously studied network externalities, e-commerce platform reputation mechanisms, and e-commerce product reputation information asymmetry and its impact, laying a solid foundation for this study. Previous research on network externalities focuses mainly on their strategic role in platform competition and platform value, as well as on pricing strategy. Cusumano et al. analyzed multilateral market platforms, such as Apple, Google, and Microsoft, through a case comparison, and found that platforms can create and enhance direct and indirect network externalities and promote platform competitiveness by regulating key elements . Zhu and Liu’s empirical research revealed that the direct and indirect network effects of multilateral market platforms significantly increased platform sales and revenue . Liano et al. found, through pricing game modeling, that platforms’ low-price strategies can strengthen network externalities, attract more users, increase competitors’ user transfer costs, and maintain platforms’ competitive advantages . Zhang et al. investigated cross-network externalities, constructed a recovery pricing model, and studied the investment and pricing strategies for value-added services of multilateral distribution platforms .
Li and Gao also studied the impact of network externalities on online medical platforms and Waste Electrical and Electronic Equipment recycling platforms , and proposed corresponding product pricing strategies. From this literature, we can see that existing studies have investigated the role of network externalities on different types of platforms through case studies, empirical evidence, and theoretical modeling, but none have considered how network externalities in bilateral markets are affected by reputation information asymmetry. Experts and scholars have long studied reputation mechanisms and their effectiveness. After the KMRW reputation model was proposed, Shapiro was the first to find that reputation premiums can motivate firms to improve the quality of their products and services and abandon the speculative behavior of lowering quality to gain short-term benefits . Since then, scholars such as Resnick et al., Melnik and Alm, and Houser and Wooders have used data from online auctions on eBay to empirically demonstrate the effect of product reputation on price, sales, and quality [ 16 – 18 ]. Qian and Zhang used data from Chinese e-commerce platforms to argue that reputation mechanisms can effectively mitigate the adverse selection problem . With the continuous emergence of reputation “noise” , scholars have engaged in a heated discussion of the effectiveness of reputation mechanisms. Resnick and Zeckhauser found that most eBay consumers do not participate in reviews, while those who actively review tend to choose positive reviews . Jin and Kato argued that eBay’s ranking mechanism and anonymity allow speculative sellers to obtain top rankings at low cost and to “restore reputation” by changing accounts after selling low-quality goods that damage their reputation, leading to the failure of the binding force of reputation . Scholars have found that if consumer evaluation alone is used as the main content of the reputation mechanism, the reputation signal cannot significantly affect product sales or encourage merchants to improve product quality ; quality certification can be used as a supplement . In addition, consumers’ attitudes towards false marketing and promotion of e-commerce products were more negative than their attitudes towards actual sales fraud, which may cause more damage to the reputation of platform-based e-commerce . From these studies, it is not difficult to see that the effectiveness of the reputation mechanism is closely related to the asymmetry of reputation information in the two-sided market of e-commerce platforms. Academic research on demand information asymmetry and cost information asymmetry is extensive, but research on e-commerce product reputation information asymmetry is lacking. The reputation information asymmetry of e-commerce platforms that we propose refers to the inconsistency between the “expected product reputation information” seen by consumers and the “real product reputation information,” caused by violations such as the manipulation of comments. The expected reputation tends to be greater than the real reputation, essentially reflecting an asymmetry in quality information. Akerlof suggested that adverse selection caused by quality information asymmetry is the root cause of market failure . Zhou et al.
found that in product-differentiated markets, when product quality information is asymmetric, monopolistic firms have an incentive to use false quality , and whether a firm uses false quality depends on the additional marketing cost and the penalty cost of being discovered after using false information . Wang defined the concept of the degree of quality information asymmetry and discussed the impact of changes in quality information asymmetry on consumer utility and firm profit . The abovementioned studies provide the research basis for the model construction described in this paper. Research on the impact of asymmetric e-commerce product reputation information focuses on consumer decision-making, product sales, and platform revenue and is mainly carried out using empirical methods. In terms of consumer decision-making, through scenario experiments, Liu and Wang found that review manipulation leads to a significant decrease in consumers’ perceptions of the usefulness and trustworthiness of online reviews, and purchase intention decreases significantly . However, Zhong demonstrated empirically, using questionnaires and commercial data, that fake online reviews were positively related to consumer purchase decisions . In terms of product sales, some have shown that fake reviews have a negative impact on product sales , while others have proposed an inverted U-shaped relationship, i.e., that the small-scale use of fake reviews boosts product sales, but once a critical value is exceeded, it inhibits performance . Chen found that false reviews lead to increased transaction costs for consumers and merchants, and the platform suffers as a result. However, if merchants choose to improve reputation ratings by manipulating reviews, consumers perceive higher-quality goods in the short term, and the platform benefits as a result. Nevertheless, as consumers’ purchasing experience on the platform increases, the higher reputation ratings caused by false reviews fail to convey high-quality signals, consumers’ perceived goods quality decreases, and platform gains are therefore impaired . Zhang found that the quantity of review information has a positive impact on social welfare, but quality information and matching information play different roles in the welfare enhancement process, and a higher manipulation cost factor can alleviate the sellers’ prisoner’s dilemma and increase consumer welfare . The above studies researched the welfare effects of e-commerce market network externalities and reputation information asymmetry on consumers, merchants, and platforms from different perspectives but did not comprehensively consider their interactions. The conclusions of empirical research are not uniform, which is related to the difficulty of obtaining fake review data, the accuracy of manual labeling, and the applicability of research methods. Therefore, this paper avoids the empirical approach and explores the theoretical mechanism of the impact of e-commerce product reputation information asymmetry on platform stakeholders and social welfare from the perspective of network externalities. The Hotelling model is a classic model of spatial competition; it mainly analyzes how firms compete in a limited market space. The two-sided market and the network effects of e-commerce platforms increase the complexity of the competition. Therefore, we extended the Hotelling model to better fit our research scenario.
Referring to the model settings of Armstrong and Wright , Zhou , Yu , and Xie , and the characteristics of China’s duopolistic e-commerce platforms Tmall and Jingdong, and assuming that the e-commerce platforms are differentiated for both merchants and consumers, we constructed the consumer, merchant, and platform profit utility models. Differently from their models, we refer to Wang’s definition of quality information asymmetry and incorporate quality information asymmetry into the models. We form a linear city of length 1, in which e-commerce platform T is located at the left end of the city and e-commerce platform J is located at the right end, and the e-commerce platforms have two types of users, consumers (b) and merchants (s), each uniformly distributed along the line with a total mass of 1. A user’s location represents their ideal choice of platform. Since there are two choices in the market, each user incurs a certain transportation cost, denoted by $t_b$ and $t_s$ for buyers and sellers, respectively. E-commerce platform transportation costs reflect platform differences but also represent user preferences. Users joining a platform receive a basic utility $\theta$, which is assumed to be large enough for the duopoly to cover all users in the market. Individual buyers or sellers join the platform by paying a certain amount $p$ or $w$. Merchant participation in the market generates an indirect network externality with coefficient $\beta_s$ ($0 < \beta_s < 1$), and consumer participation in the market generates both an indirect network externality with coefficient $\beta_b$ ($0 < \beta_b < 1$) and a direct network externality with coefficient $\alpha_b$ ($0 < \alpha_b < 1$). Indirect network externalities exist on both sides because an increase in user size on one side attracts users on the other side, and both have positive utility. The direct network externality on the merchant side has both a learning utility (positive) and a competitive utility (negative) and is assumed to be zero here to simplify the model calculation. The direct network externality on the consumer side mainly arises from the feedback evaluations of different consumers and from sales rankings, which aid consumers’ decision-making, and is of positive utility. Therefore, in this paper, the consumer-side direct network externality is used to characterize the e-commerce product reputation information, and if there is information asymmetry, the direct network externality is moderated by the degree of reputation information asymmetry $i$, where $m_0$ is the real product reputation information of the e-commerce product and $m_y$ is the expected product reputation information of the consumer. In reality, when false reputation information exists, the consumer’s expected product reputation information is often larger than the product’s real reputation information, that is, $m_y > m_0$, so $i = \frac{m_y - m_0}{m_0}$, assuming $0 \le i \le 1$. In addition, assuming that merchants can choose single-homing or multi-homing while consumers single-home, the numbers of single-homing and multi-homing merchants on platforms T and J are denoted by $n_s^T$, $n_s^J$, and $n_s^{T,J}$, respectively, and the numbers of single-homing consumers are denoted by $n_b^T$ and $n_b^J$, respectively. By the previous assumptions, $n_s^T + n_s^J + n_s^{T,J} = 1$ and $n_b^T + n_b^J = 1$. Fig 1 shows the market share structure of the e-commerce platforms.
The consumer and merchant utilities of platform T are denoted by $U_b^T$ and $U_s^T$, the consumer and merchant utilities of platform J are denoted by $U_b^J$ and $U_s^J$, the profits obtained by the platforms are $\pi_T$ and $\pi_J$, and a merchant’s reputation information violation generates a penalty cost $f$. The model parameters and variables are defined in Table 1 . The game between the duopoly platforms and the bilateral users consists of two phases. In the first phase, platform T and platform J simultaneously set pricing strategies $(p_T, w_T)$ and $(p_J, w_J)$; in the second phase, the bilateral users observe the pricing and make their participation decisions, and both platforms determine their market sizes. We used backward induction to solve this dynamic game. The basic model of this paper gives consumer utility, merchant utility, platform profit, and total social welfare when the e-commerce platforms have no reputation mechanism, which means that the direct network externality on the consumer side is zero. In this case, consumers rely completely on product promotion information to make independent consumption decisions. The utility of consumers located on platform T is

$$U_b^T = \theta + \beta_b(n_s^T + n_s^{T,J}) - p_T - t_b n_b^T. \tag{1}$$

Similarly, the utility of a consumer located on platform J is

$$U_b^J = \theta + \beta_b(n_s^J + n_s^{T,J}) - p_J - t_b(1 - n_b^T). \tag{2}$$

The utility of merchants single-homing on platform T is

$$U_s^T = \theta + \beta_s n_b^T - w_T - t_s n_s^T. \tag{3}$$

The utility of merchants single-homing on platform J is

$$U_s^J = \theta + \beta_s n_b^J - w_J - t_s(1 - n_s^T - n_s^{T,J}). \tag{4}$$

The utility of merchants multi-homing on platforms T and J is

$$U_s^{T,J} = \theta + \beta_s(n_b^T + n_b^J) - w_T - w_J - t_s. \tag{5}$$

Setting (1) = (2), (3) = (5), and (4) = (5) yields the indifferent positions:

$$n_b^T = \frac{1}{2} - \frac{t_s(p_T - p_J) + \beta_b(w_T - w_J)}{2(t_s t_b - \beta_b \beta_s)}, \quad n_s^T = \frac{2t_s - \beta_s + 2w_J}{2t_s} - \frac{\beta_s t_s(p_T - p_J) + \beta_b \beta_s(w_T - w_J)}{2t_s(t_s t_b - \beta_b \beta_s)}, \quad n_s^{T,J} = \frac{\beta_s - t_s - (w_T + w_J)}{t_s}. \tag{6}$$

From $n_s^T + n_s^J + n_s^{T,J} = 1$ and $n_b^T + n_b^J = 1$, it follows that

$$n_b^J = \frac{1}{2} + \frac{t_s(p_T - p_J) + \beta_b(w_T - w_J)}{2(t_s t_b - \beta_b \beta_s)}, \quad n_s^J = \frac{2t_s - \beta_s + 2w_T}{2t_s} + \frac{\beta_s t_s(p_T - p_J) + \beta_b \beta_s(w_T - w_J)}{2t_s(t_s t_b - \beta_b \beta_s)}. \tag{7}$$

At this point, the profits of platforms T and J are

$$\pi_T = p_T n_b^T + w_T(n_s^T + n_s^{T,J}), \qquad \pi_J = p_J n_b^J + w_J(n_s^J + n_s^{T,J}). \tag{8}$$

Substituting (6) and (7) into (8) and taking the first-order derivatives with respect to $p_T$, $p_J$, $w_T$, and $w_J$, we obtain the bilateral pricing in equilibrium:

$$p_T = p_J = t_b - \frac{\beta_s(3\beta_b + \beta_s)}{4t_s}, \qquad w_T = w_J = \frac{\beta_s - \beta_b}{4}. \tag{9}$$

Substituting (9) into (6), (7), and (8) yields the market shares of the platforms’ bilateral users under equilibrium: $n_b^T = n_b^J = \frac{1}{2}$, $n_s^T = n_s^J = \frac{4t_s - \beta_b - \beta_s}{4t_s}$, $n_s^{T,J} = \frac{\beta_b + \beta_s - 2t_s}{2t_s}$. The platform profits are $\pi_T = \pi_J = \frac{1}{2}t_b - \frac{\beta_s^2 + 6\beta_s\beta_b + \beta_b^2}{16 t_s}$.
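As a small symbolic check, added here for illustration and not part of the original paper, the stated per-platform profit does follow from Eq (8) with the equilibrium prices and shares; a minimal sketch using sympy:

```python
import sympy as sp

beta_b, beta_s, t_b, t_s = sp.symbols("beta_b beta_s t_b t_s", positive=True)

# Equilibrium prices from Eq (9) and each platform's merchant participation.
p = t_b - beta_s * (3 * beta_b + beta_s) / (4 * t_s)
w = (beta_s - beta_b) / 4
n_s_total = (beta_b + beta_s) / (4 * t_s)   # n_s^T + n_s^{T,J} per platform

# Per-platform profit pi = p * n_b + w * (n_s^T + n_s^{T,J}) with n_b = 1/2.
pi = p * sp.Rational(1, 2) + w * n_s_total
target = t_b / 2 - (beta_s**2 + 6 * beta_s * beta_b + beta_b**2) / (16 * t_s)

assert sp.simplify(pi - target) == 0  # matches the closed form in the text
```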
In equilibrium, the total consumer surplus (CS) is

$$CS = \int_0^{1/2}\left[\theta + \beta_b(n_s^T + n_s^{T,J}) - p_T - t_b n_b^T\right]dt_b + \int_0^{1/2}\left[\theta + \beta_b(n_s^J + n_s^{T,J}) - p_J - t_b(1 - n_b^T)\right]dt_b = \theta - \frac{3}{8} + \frac{\beta_s^2 + 4\beta_s\beta_b + \beta_b^2}{4t_s}. \tag{10}$$

In equilibrium, the total merchant surplus (PS) is

$$PS = \int_0^{\frac{4t_s - \beta_b - \beta_s}{4t_s}}\left[\theta + \beta_s n_b^T - w_T - t_s n_s^T\right]dt_s + \int_0^{\frac{4t_s - \beta_b - \beta_s}{4t_s}}\left[\theta + \beta_s n_b^J - w_J - t_s(1 - n_s^T - n_s^{T,J})\right]dt_s + \int_0^{\frac{\beta_b + \beta_s - 2t_s}{2t_s}}\left[\theta + \beta_s(n_b^T + n_b^J) - w_T - w_J - t_s\right]dt_s = \theta + \frac{1}{2}\beta_s + \frac{1}{2}\beta_b - \frac{(4t_s - \beta_b - \beta_s)^2 + (\beta_b + \beta_s - 2t_s)^2}{16t_s^2}. \tag{11}$$

In equilibrium, the joint profit of the two platforms is

$$\pi_T + \pi_J = t_b - \frac{\beta_s^2 + 6\beta_s\beta_b + \beta_b^2}{8t_s}. \tag{12}$$

From Eqs ( 10 ), ( 11 ), and ( 12 ), the total social welfare of the platform system in the absence of the reputation mechanism is

$$W_1 = CS + PS + \pi_T + \pi_J = 2\theta - \frac{3}{8} + \frac{1}{2}\beta_s + \frac{1}{2}\beta_b + t_b + \frac{(\beta_b + \beta_s)^2}{8t_s} - \frac{(4t_s - \beta_b - \beta_s)^2 + (\beta_b + \beta_s - 2t_s)^2}{16t_s^2}. \tag{13}$$

Eqs ( 10 )–( 13 ) give the consumer surplus, merchant surplus, platform profit, and total social welfare of the e-commerce market in equilibrium in the basic model, that is, when there is no reputation mechanism. When the e-commerce platform is designed with a reputation mechanism, consumers provide real evaluation feedback on the products they purchase, and the platform system recommends products to searching users based on sales data. Merchants and consumers are fully and consistently informed about reputation information, which activates the direct network externality on the consumer side. The models of consumer utility, merchant utility, and platform profit are as follows. The utility of consumers located on platform T is

$$U_b^{T\prime} = \theta + \alpha_b n_b^T + \beta_b(n_s^T + n_s^{T,J}) - p_T - t_b n_b^T. \tag{14}$$

Similarly, the utility of a consumer located on platform J is

$$U_b^{J\prime} = \theta + \alpha_b n_b^J + \beta_b(n_s^J + n_s^{T,J}) - p_J - t_b(1 - n_b^T). \tag{15}$$

The merchant utilities and platform profits remain unchanged:

$$U_s^{T\prime} = \theta + \beta_s n_b^T - w_T - t_s n_s^T, \qquad U_s^{J\prime} = \theta + \beta_s n_b^J - w_J - t_s(1 - n_s^T - n_s^{T,J}), \tag{16}$$

$$U_s^{T,J\prime} = \theta + \beta_s(n_b^T + n_b^J) - w_T - w_J - t_s, \qquad \pi_T' = p_T n_b^T + w_T(n_s^T + n_s^{T,J}), \quad \pi_J' = p_J n_b^J + w_J(n_s^J + n_s^{T,J}).$$

Following a calculation similar to the previous one, the equilibrium consumer pricing, merchant pricing, platform user market shares, consumer surplus, merchant surplus, platform profit, and platform system social welfare are:

$$p_T' = p_J' = t_b - \alpha_b - \frac{\beta_s(3\beta_b + \beta_s)}{4t_s}, \tag{17}$$

$$w_T' = w_J' = \frac{\beta_s - \beta_b}{4}, \tag{18}$$

$$n_b^{T\prime} = n_b^{J\prime} = \frac{1}{2}, \quad n_s^{T\prime} = n_s^{J\prime} = \frac{4t_s - \beta_b - \beta_s}{4t_s}, \quad n_s^{T,J\prime} = \frac{\beta_b + \beta_s - 2t_s}{2t_s}, \tag{19}$$

$$CS' = \theta - \frac{3}{8} + \frac{3}{2}\alpha_b + \frac{\beta_s^2 + 4\beta_s\beta_b + \beta_b^2}{4t_s}, \tag{20}$$

$$PS' = \theta + \frac{1}{2}\beta_s + \frac{1}{2}\beta_b - \frac{(4t_s - \beta_b - \beta_s)^2 + (\beta_b + \beta_s - 2t_s)^2}{16t_s^2}, \tag{21}$$

$$\pi_T' + \pi_J' = t_b - \alpha_b - \frac{\beta_s^2 + 6\beta_s\beta_b + \beta_b^2}{8t_s}, \tag{22}$$

$$W_2 = CS' + PS' + \pi_T' + \pi_J' = 2\theta - \frac{3}{8} + \frac{1}{2}\alpha_b + \frac{1}{2}\beta_s + \frac{1}{2}\beta_b + t_b + \frac{(\beta_b + \beta_s)^2}{8t_s} - \frac{(4t_s - \beta_b - \beta_s)^2 + (\beta_b + \beta_s - 2t_s)^2}{16t_s^2}. \tag{23}$$

Eqs ( 20 )–( 23 ) give the consumer surplus, merchant surplus, platform profit, and total social welfare of the e-commerce market in equilibrium when there is a reputation mechanism and information is symmetric. Does the reputation mechanism play a positive role?
We compared the results of this section with the basic model and obtained

$$\Delta p_T = \Delta p_J = p_T' - p_T = p_J' - p_J = -\alpha_b < 0, \quad \Delta w_T = \Delta w_J = w_T' - w_T = w_J' - w_J = 0, \quad \Delta n_b^T = \Delta n_b^J = \Delta n_s^T = \Delta n_s^{T,J} = \Delta n_s^J = 0, \quad \Delta CS = CS' - CS = \frac{3}{2}\alpha_b > 0, \quad \Delta PS = PS' - PS = 0, \quad \Delta \pi = (\pi_T' + \pi_J') - (\pi_T + \pi_J) = -\alpha_b < 0, \quad W_2 - W_1 = \frac{1}{2}\alpha_b > 0. \tag{24}$$

It can be seen that, under the reputation mechanism and with symmetric information, the platforms lower their pricing for consumers, and consumer welfare rises in proportion to the network externality (i.e., the more users there are on the consumer side, the more consumers benefit). Merchants are not affected by the direct network externality on the consumer side, so merchant pricing and welfare remain unchanged. Platform profits are reduced by an amount equal to the direct network externality coefficient, confirming that the reputation mechanism makes the e-commerce platforms transfer part of the surplus value to consumers. The total social welfare of the e-commerce system increases by an increment of $\frac{1}{2}\alpha_b$. Therefore, the symmetry of reputation information contributes to total social welfare, in proportion to the network externality. At the same time, we confirm that when the reputation mechanism of the e-commerce platform works, the party that benefits directly is the consumer. So if there are unfair competitive behaviors on the merchant side, such as “click farming,” “cash rebates for favorable comments,” and “pay per click,” consumers cannot obtain real information. Under this asymmetry of reputation information, what happens to total social welfare? When e-commerce product reputation information is asymmetric, the degree of reputation information asymmetry $i = \frac{m_y - m_0}{m_0}$ is introduced to moderate the direct network externality on the consumer side. The utility of an undifferentiated consumer located on platform T is

$$U_b^{T\prime\prime} = \theta + \alpha_b n_b^T - \frac{m_y - m_0}{m_0}\alpha_b n_b^T + \beta_b(n_s^T + n_s^{T,J}) - p_T - t_b n_b^T,$$

which simplifies to $U_b^{T\prime\prime} = \theta + (1 - i)\alpha_b n_b^T + \beta_b(n_s^T + n_s^{T,J}) - p_T - t_b n_b^T$. Similarly, the utility of an undifferentiated consumer located on platform J is

$$U_b^{J\prime\prime} = \theta + (1 - i)\alpha_b n_b^J + \beta_b(n_s^J + n_s^{T,J}) - p_J - t_b(1 - n_b^T). \tag{25}$$

Since violation behavior, such as brushing orders, requires merchants to pay certain operating costs, and being investigated and punished adds a penalty cost, we combine the two costs into a single violation cost $f$ to simplify the model. The merchant utilities are therefore

$$U_s^{T\prime\prime} = \theta + \beta_s n_b^T - w_T - t_s n_s^T - f, \quad U_s^{J\prime\prime} = \theta + \beta_s n_b^J - w_J - t_s(1 - n_s^T - n_s^{T,J}) - f, \quad U_s^{T,J\prime\prime} = \theta + \beta_s(n_b^T + n_b^J) - w_T - w_J - t_s - 2f. \tag{26}$$

The platform profit formula remains unchanged, and the calculation proceeds as before, yielding the equilibrium consumer pricing, merchant pricing, platform user market shares, consumer surplus, merchant surplus, platform profit, and total social welfare of the platform system:

$$p_T'' = p_J'' = (1 - i)\alpha_b - \frac{t_b}{2}, \tag{27}$$

$$w_T'' = w_J'' = \frac{\beta_s - 2f}{3}, \tag{28}$$

$$n_b^{T\prime\prime} = n_b^{J\prime\prime} = \frac{1}{2}, \quad n_s^{T\prime\prime} = n_s^{J\prime\prime} = 1 - \frac{\beta_s - 2f}{6t_s}, \quad n_s^{T,J\prime\prime} = \frac{\beta_s - 2f}{3t_s} - 1. \tag{29}$$

The consumer surplus is

$$CS'' = \theta - \frac{1}{2}(1 - i)\alpha_b + \frac{(\beta_s - 2f)\beta_b}{6t_s}. \tag{30}$$
The merchant surplus is

$$PS'' = \theta - \frac{3}{2} + \frac{2}{3}(\beta_s - 2f) + \frac{2(\beta_s - 2f)(\theta + \beta_s - 1) - (\beta_s - 2f)^2 - 2\theta - \beta_s}{6t_s} - \frac{(\beta_s - 2f)^2}{12t_s^2}. \tag{31}$$

The joint profit of the two platforms is

$$\pi_T'' + \pi_J'' = (1 - i)\alpha_b - \frac{t_b}{2} + \frac{(\beta_s - 2f)^2}{9t_s}. \tag{32}$$

The total social welfare of the platform system is

$$W_3 = CS'' + PS'' + \pi_T'' + \pi_J'' = 2\theta + \frac{1}{2}(1 - i)\alpha_b - \frac{3}{2} - \frac{t_b}{2} + \left(\frac{\beta_b}{6t_s} + \frac{2}{3}\right)(\beta_s - 2f) + \frac{6(\beta_s - 2f)(\theta + \beta_s - 1) - (\beta_s - 2f)^2 - 6\theta - 3\beta_s}{18t_s} - \frac{(\beta_s - 2f)^2}{12t_s^2}. \tag{33}$$

The above results show that e-commerce product reputation information asymmetry affects consumer pricing and surplus, merchant pricing and surplus, e-commerce platform market share and profit, and total social welfare through the asymmetry degree $i$ and the violation penalty cost $f$; but what is the direction and size of this impact? The theoretical results for total social welfare when e-commerce product reputation information is asymmetric and when it is symmetric were compared and analyzed as follows. Suppose $\beta_b = 0.20$, $\beta_s = 0.22$, $t_b = 0.35$, $t_s = 0.30$, and $\theta = 1$. The network externality was used as a moderating variable, taking $\alpha_b = 0.1$, $0.2$, and $0.3$ for the sensitivity analysis. Subtracting Eq (17) from Eq (27) and Eq (20) from Eq (30), we obtain the following. The change in consumer pricing is

$$\Delta p_T' = \Delta p_J' = p_T'' - p_T' = p_J'' - p_J' = (2 - i)\alpha_b - \frac{3t_b}{2} + \frac{\beta_s(3\beta_b + \beta_s)}{4t_s}. \tag{34}$$

The change in consumer surplus is

$$\Delta CS' = CS'' - CS' = \frac{3}{8} + \left(\frac{1}{2}i - 2\right)\alpha_b - \frac{4f + 3\beta_s^2 + 10\beta_s\beta_b + 3\beta_b^2}{12t_s}. \tag{35}$$

We used MATLAB software for the example analysis and visualization, as shown in Fig 2 . It can be seen that the reputation information asymmetry of e-commerce products has a definite impact on both consumer pricing and surplus. Fig 2a shows that the change in consumer pricing $\Delta p_T'$ is inversely proportional to $i$, indicating that the larger the reputation information asymmetry, the lower the platform’s pricing to consumers, and the stronger the direct network externality on the consumer side, the greater the impact. However, the reduction in consumer pricing does not lead to an increase in welfare, as can be seen from Fig 2b . Only when $\alpha_b = 0.1$ and $f < 0.0812 + 0.045i$ is there a positive region of $\Delta CS'$; otherwise, the change in consumer surplus is less than 0. It can be said that reputation information asymmetry has a negative impact on consumers. The change in consumer surplus $\Delta CS'$ is proportional to $i$ and inversely proportional to $f$ and $\alpha_b$; the greater the reputation information asymmetry and the higher the violation penalty cost, the greater the consumer welfare loss, while a larger consumer base reduces the loss of consumer welfare. Subtracting Eq (18) from Eq (28) and Eq (21) from Eq (31), we have the following. The change in merchant pricing is

$$\Delta w_T' = \Delta w_J' = w_T'' - w_T' = w_J'' - w_J' = \frac{\beta_s - 2f}{3} - \frac{\beta_s - \beta_b}{4} = \frac{\beta_s + 3\beta_b - 8f}{12}. \tag{36}$$

The change in merchant surplus is

$$\Delta PS' = PS'' - PS' = \frac{1}{6}\beta_s - \frac{1}{2}\beta_b - \frac{4}{3}f - \frac{3}{2} + \frac{2\beta_s\theta - 4f\theta - 2\theta + \beta_s^2 - 3\beta_s + 4f - 4f^2}{6t_s} + \frac{3(4t_s - \beta_b - \beta_s)^2 + 3(\beta_b + \beta_s - 2t_s)^2 - 4(\beta_s - 2f)^2}{48t_s^2}. \tag{37}$$

We used MATLAB software for the example analysis and visualization, as shown in Fig 3 .
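The example analyses behind Figs 2 and 3 can be re-created numerically; a minimal Python sketch (the paper used MATLAB), evaluating Eqs (35) and (36) as reconstructed above:

```python
import numpy as np

beta_b, beta_s, t_b, t_s, theta = 0.20, 0.22, 0.35, 0.30, 1.0

def delta_cs(i, f, alpha_b):
    """Change in consumer surplus, Eq (35)."""
    return (3/8 + (i/2 - 2) * alpha_b
            - (4*f + 3*beta_s**2 + 10*beta_s*beta_b + 3*beta_b**2) / (12*t_s))

def delta_price_w(f):
    """Change in merchant pricing, Eq (36)."""
    return (beta_s + 3*beta_b - 8*f) / 12

i_grid, f_grid = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
for alpha_b in (0.1, 0.2, 0.3):  # sensitivity-analysis values
    gain_share = (delta_cs(i_grid, f_grid, alpha_b) > 0).mean()
    print(f"alpha_b={alpha_b}: share of (i, f) grid with a consumer gain = "
          f"{gain_share:.3f}")

# Merchant price change turns negative for f > (beta_s + 3*beta_b)/8 = 0.1025,
# matching the threshold reported in the discussion of Fig 3a.
print("Delta w < 0 for f >", (beta_s + 3 * beta_b) / 8)
```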
It can be seen that e-commerce product reputation information asymmetry has a certain impact on merchant pricing and surplus, and the main influencing factor is the penalty cost of reputation information violation. As shown in Fig 3a , $\Delta w_T' < 0$ for $f > 0.1025$, which means that merchant pricing is reduced after reputation information becomes asymmetric; however, merchants do not increase their surplus through the lower pricing, as the change in merchant surplus is less than zero for $f > 0$. Therefore, reputation information asymmetry reduces the merchant surplus, which may be because dishonest behavior results in low reputation and poor word-of-mouth for merchants, thus damaging sales and profit. When $f$ becomes larger, merchants give up illegal business activities such as click farming, and merchant welfare losses decrease. Subtracting Eq (19) from Eq (29) and Eq (22) from Eq (32), we obtain the changes in platform market shares:

$$\Delta n_b^{T\prime} = \Delta n_b^{J\prime} = 0, \quad \Delta n_s^{T\prime} = \Delta n_s^{J\prime} = 1 - \frac{\beta_s - 2f}{6t_s} - \frac{4t_s - \beta_b - \beta_s}{4t_s} = \frac{3\beta_b + \beta_s + 4f}{12t_s}, \quad \Delta n_s^{T,J\prime} = \frac{\beta_s - 2f}{3t_s} - 1 - \frac{\beta_b + \beta_s - 2t_s}{2t_s} = -\frac{\beta_s + 3\beta_b + 4f}{6t_s}, \tag{38}$$

and the change in platform profit:

$$\Delta \pi' = (\pi_T'' + \pi_J'') - (\pi_T' + \pi_J') = (2 - i)\alpha_b - \frac{3t_b}{2} + \frac{8(\beta_s - 2f)^2 + 9(\beta_s + \beta_b)^2 + 36\beta_s\beta_b}{72t_s}. \tag{39}$$

We used MATLAB software for the example analysis and visualization, as shown in Fig 4 . It can be seen that e-commerce product reputation information asymmetry has a certain impact on both platform market share and profit. As Fig 4a shows, e-commerce product reputation information asymmetry increases the number of single-homing merchants on the platforms, while the number of multi-homing merchants decreases, and merchants become more willing to focus their resources on one platform to create advantages. In addition, the greater the reputation information violation penalty cost $f$, the fewer multi-homing merchants there are. Increasing the reputation information violation penalty is conducive to specialization among platform enterprises, avoids homogeneous competition, and favors the healthy development of the e-commerce platform market. Fig 4b shows that reputation information asymmetry does not significantly hit platform profits: $\Delta \pi'$ is mostly positive, and the higher the violation penalty cost, the stronger the direct network externality, and the greater the degree of asymmetry, the larger the increase in platform profit. It is evident that e-commerce platforms enjoy the benefits of market regulation as well as network externalities but lack the economic impetus to address reputation information asymmetry, which explains the persistence of fake reviews. From Eqs (33) and (23), we obtain the change in total social welfare:

$$W_3 - W_2 = -\frac{1}{2}i\alpha_b - \frac{9}{8} - \frac{3t_b}{2} - \frac{1}{2}\beta_s - \frac{1}{2}\beta_b + \left(\frac{\beta_b}{6t_s} + \frac{2}{3}\right)(\beta_s - 2f) + \frac{24(\beta_s - 2f)(\theta + \beta_s - 1) - 4(\beta_s - 2f)^2 - 24\theta - 12\beta_s - 9(\beta_b + \beta_s)^2}{72t_s} + \frac{3(4t_s - \beta_b - \beta_s)^2 + 3(\beta_b + \beta_s - 2t_s)^2 - 4(\beta_s - 2f)^2}{48t_s^2}. \tag{40}$$

We used MATLAB software for the example analysis and visualization, as shown in Fig 5 . From Fig 5 , it can be seen that the asymmetry of e-commerce product reputation information causes a loss of total social welfare, which is mainly affected by the reputation information violation penalty cost and has a downward-opening quadratic relationship with the penalty cost $f$.
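The welfare surface of Fig 5 can likewise be re-created along the following lines (Python instead of the paper's MATLAB, with Eq (40) as reconstructed above):

```python
import numpy as np

beta_b, beta_s, t_b, t_s, theta = 0.20, 0.22, 0.35, 0.30, 1.0

def delta_welfare(i, f, alpha_b):
    """Total social welfare change W3 - W2, Eq (40)."""
    u = beta_s - 2 * f
    return (-0.5 * i * alpha_b - 9/8 - 1.5 * t_b - 0.5 * beta_s - 0.5 * beta_b
            + (beta_b / (6 * t_s) + 2/3) * u
            + (24 * u * (theta + beta_s - 1) - 4 * u**2
               - 24 * theta - 12 * beta_s - 9 * (beta_b + beta_s)**2) / (72 * t_s)
            + (3 * (4 * t_s - beta_b - beta_s)**2
               + 3 * (beta_b + beta_s - 2 * t_s)**2 - 4 * u**2) / (48 * t_s**2))

i_grid, f_grid = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
for alpha_b in (0.1, 0.2, 0.3):  # the three sensitivity-analysis surfaces
    surface = delta_welfare(i_grid, f_grid, alpha_b)
    print(f"alpha_b={alpha_b}: mean welfare change = {surface.mean():.3f}")
```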
When $0 < f \le M$, the total social welfare loss increases with $f$; when $f > M$, the total social welfare loss decreases as $f$ increases, where

$$M = \frac{3t_s(-\beta_b - 2\beta_s + 2) - 16t_s^2 - 8\theta t_s + \beta_s}{6 - 3\beta_s + 4t_s - 2\beta_s t_s}.$$

For the parameter values considered, with $0 < f \le 1$, the total social welfare loss is inversely proportional to $f$. In addition, in the sensitivity analysis, with $\alpha_b$ taken as 0.1, 0.2, and 0.3, the three surfaces nearly overlap, but when the direct network externality is very large, the total social welfare loss is smaller. The degree of asymmetry adjusts the speed of change of the social welfare loss: when $i$ becomes larger, the social welfare loss grows faster. In this study, we found that reputation mechanisms can effectively increase consumer surplus and total social welfare, and this increase grows with the size of the online consumer base. When the reputation information of e-commerce products is asymmetric, consumer surplus is reduced, in proportion to the degree of reputation information asymmetry and in inverse proportion to the violation penalty cost and the consumers’ direct network externality. The merchant surplus decreases, and the merchant welfare loss decreases as the reputation information violation penalty cost increases. Platform revenue increases, in proportion to the network externality, the violation penalty cost, and the degree of reputation information asymmetry; the platform therefore lacks the economic motivation to strictly control reputation information asymmetry. Total social welfare is reduced, in inverse proportion to the violation penalty cost and the direct network externality. Increasing the violation penalty cost, increasing the scale of consumers, and reducing the number of fake reviews can reduce the loss of social welfare. It is not difficult to see that information asymmetry has a negative impact on consumers, merchants, and total social welfare but very likely increases the profitability of e-commerce platforms, which is extremely unfavorable for the development of the e-commerce market. Therefore, we propose corresponding governance recommendations. This study examined the effectiveness of the reputation mechanism in the e-commerce platform market and provided a game-theoretic argument about the changes in consumer, merchant, platform, and total social welfare under asymmetric reputation information. The conclusions expand the theory on the mechanisms through which reputation information asymmetry exerts its impact and lay the theoretical foundations for future empirical research. Future research can start with the following aspects: (1) This study considered the market situation of duopolistic e-commerce platforms, single-homing consumers, and linear costs and benefits; however, with increasingly fierce competition, multiple platforms are emerging, and the fact that consumers mostly patronize multiple platforms is closer to real life. An improved general competition model could be considered to further verify the robustness of the conclusions. (2) This study is based on theoretical arguments verified by example analysis; no actual data were used. Follow-up studies could add actual data from an e-commerce platform for verification. (3) This study considered only two types of reputation information: feedback evaluation and sales ranking.
Subsequent research can complement, expand, and optimize the design of the content of the reputation mechanism to make it work better.
PMC11694976
In the contemporary global context, the Sustainable Development Goals (SDGs) have established themselves as an essential guide for addressing society’s most pressing challenges. The SDGs, adopted by the United Nations, seek to eradicate poverty, protect the planet, and ensure dignity and rights for all by 2030 . However, despite global efforts, significant inequalities persist in the distribution of resources, opportunities, and access to fundamental goods and services, perpetuating socioeconomic disparities around the world. These inequalities manifest themselves in a variety of ways, including income, education, health, and employment, underscoring the urgent need to understand their causes in order to move towards a more equitable and sustainable society. In Colombia, socioeconomic inequality remains a central challenge for the fulfillment of the SDGs. The country’s Gini index reached a value of 0.556 in 2022, placing Colombia among the most unequal nations in Latin America . In addition, various poverty indicators reveal the precariousness of the population’s living conditions: the Monetary Poverty Index showed that 36.6% of Colombians were affected in 2022, and the Multidimensional Poverty Index showed that 12.9% lacked the conditions necessary for individual development . In this context, the state of Nariño, located in the southwest of the country, faces one of the most complex socioeconomic inequality situations in the region, characterized by limited economic development, especially in its agricultural sector, and a high level of unsatisfied basic needs . Besides these socioeconomic disparities, Nariño faces an additional challenge related to the informal economy, specifically coca cultivation. According to the United Nations Office on Drugs and Crime report , the coastal regions of Nariño register the highest growth of illicit crops in Colombia, accounting for 65% of national cocaine production. This phenomenon reflects the precarious socioeconomic conditions in the region, which facilitate the expansion of illegal activities such as drug trafficking. In this context, the central question of this research is: To what extent does the spatial distribution of socioeconomic conditions explain coca cultivation patterns in the state of Nariño? This question is posed because of the need to understand how socioeconomic conditions directly affect the prevalence of coca cultivation in the region, which could provide clues for developing more effective public policies aimed at eradicating illicit crops and promoting sustainable development alternatives. The theoretical framework supporting this research is based on several fundamental theories that explain the socioeconomic factors associated with coca cultivation. Conflict economics theory suggests that armed conflict and violence have a direct impact on the economy and development and, in the case of Colombia, helps explain how illegal armed groups control areas of coca cultivation, generating an environment conducive to drug trafficking . Social capital theory highlights the importance of social networks and trust as key factors for economic and social development; in coca-growing communities, the lack of social capital hinders the implementation of crop substitution programs and the development of economic alternatives in the absence of support networks . Finally, social marginalization theory argues that social exclusion and lack of economic opportunities lead people to engage in illegal activities such as coca cultivation.
In Colombia, the historical marginalization of rural communities and the lack of access to basic services such as education, health, and employment contribute to perpetuating this phenomenon . This study seeks to answer the research question by constructing composite indices that reflect key dimensions such as education, health, public services, economy, and vulnerability in Nariño. Through a spatial analysis of these socioeconomic conditions, it aims to identify areas with higher rates of poverty and vulnerability and to examine how these directly affect the prevalence of illicit crops. In addition, the study models the spatial non-stationarity of the factors associated with coca cultivation and provides detailed results on the most relevant factors in each area of Nariño, allowing the formulation of specific recommendations for public policies. The structure of the article is as follows: Section 2 describes the methodology employed in the research; Section 3 presents the results and discussion of the findings; and Section 4 concludes with a discussion of the implications of the results and recommendations for future research and for government decision-making. This investigation was conducted in the state of Nariño, located in the southwestern region of the Republic of Colombia. Encompassing an area of 33,268 km², the state, as per recent data from DANE, has a total population of 1,627,589 inhabitants. The administrative territorial structure comprises 64 municipalities, which constitute the units under scrutiny in this study. The Municipality of Pasto serves as the capital of the state . Nariño exists within a dichotomy: it is strategically positioned for Colombia owing to its geographical location, agricultural potential, and prospective industrial development, with recognized potential in terms of biodiversity and national and international connectivity. Simultaneously, it unfortunately garners recognition in regional and international contexts for issues related to drugs and violence, as evidenced by reports and studies conducted by various offices of multilateral organizations [ 14 – 16 ]. These reports, particularly those from national entities , acknowledge its potential as a special border zone owing to its proximity to Ecuador in the south and its possession of the port of Tumaco, connecting it to the Pacific Ocean in the northwest. However, the region currently attains global recognition primarily for illegal activities that harm people’s well-being, contributing to a pervasive stigma at various levels. Moreover, several municipalities within the state of Nariño contend with highly intricate conditions of economic and social marginalization. To measure the economic and social development of the state of Nariño, six composite indices were constructed for each of its 64 municipalities. Each index integrates multiple quantitative variables corresponding to the unique characteristics of each municipality. The Educational Performance Index (EPI) captures information from the results of a national test (Prueba Saber 11) in Mathematics and Spanish Language. The variables related to education coverage in both transitional and secondary education contributed to the construction of the Education Coverage Index (ECI).
The Health Coverage Index (HCI) was formulated using data on the population affiliated with one of Colombia’s health regimes and on the demographic segment under one year of age that received the third dose of the pentavalent vaccine. The Public Services Coverage Index (PSCI) was crafted by considering critical services such as electricity, internet, aqueduct, and sewerage coverage. The Economic Index (EI) is based on metrics related to GDP per capita and the employed population of the municipality. To construct the Vulnerability Index (VI), parameters such as overcrowding, the population living in misery, and instances of child labor were systematically integrated. Furthermore, three pivotal variables integral to the dynamics of the state’s economy were taken into consideration. First, the homicide rate (HOMI) in each municipality was examined. Second, the connectivity and infrastructure of the state were evaluated through the variable ’Roads’, constructed from the length of primary roads in each municipality. Last, a measure of coca cultivation in each of the state’s municipalities was included. The homicide rate and connectivity variables, along with the constructed indices, served as the independent variables in the initial phase to analyze their correlation with coca cultivation as the response variable. Data for this study were obtained from various sources. Specifically, the socioeconomic variables for the state of Nariño were extracted from the DANE database. Information related to coca production was provided by the Ministry of Justice and Law. Data concerning the length of primary roads in each of the state’s municipalities were obtained from OpenStreetMap (OSM) , a collaborative project for the creation of open-access maps. Table 1 provides a detailed overview of the six indices, their associated variables, and sources of information. It is important to note that the databases originate from various sources, which means the data were collected in different years owing to the country’s policies. Given the disparate origins of the data, an initial merge was executed, consolidating all information into a unified spatial database containing records for each municipality. The statistical analysis of the dataset involved the computation of summary statistics, providing a comprehensive overview of both the variables employed in the study and the constructed indices. Additionally, traditional and spatial statistical graphs, some of which are presented in this paper, were employed to analyze the global and local dynamics of the variables and indices. To elucidate the dynamics of coca production in the state of Nariño with respect to the indices, homicide rate, and roads, an econometric model was systematically formulated. In the following subsections, we present a succinct overview of the theory guiding index construction and a brief description of the econometric model designed to explore their relationship with coca production. Initially, six synthetic composite indices were estimated for the 64 municipalities of the state of Nariño. The procedure used to construct the indices was based on the Distance-Learning (DL2) approach proposed by .
Let X denote a matrix of size n × m, where the m columns represent the quantitative variables and the n rows represent the observations or spatial units (municipalities, countries, regions, etc.). Initially, the variables are normalized by a change of scale. Subsequently, let Z denote the n × m matrix containing the standardized variables. Then, the DL2 is defined as follows:

DL2(Z_s, Z_t) = \left[ \sum_{j=1}^{m} (Z_{sj} - Z_{tj})^2 \, \omega_j \right]^{1/2}   (1)

where s and t are two compared units or observations, and the ω_j are weights calculated using iterative machine-learning algorithms. This function builds on the concept of proximity between units, allowing comparisons between the spatial units studied (in our case, municipalities) and the identification of territorial disparities. By construction, the values taken by all indices lie between 0 and 1. Once the six indices were calculated for all municipalities, they were combined (summed) to determine a single Multidimensional Index (MI) for each municipality j:

MI_j = \sum_{k=1}^{6} I_{kj},   (2)

where I_{kj} denotes the value of the k-th index in municipality j. Finally, to better interpret the results, the MI is scaled between 0 and 1 using the following expression:

MI_{scale,j} = \frac{MI_j - \min_j MI_j}{\max_j MI_j - \min_j MI_j}   (3)

A value close to 0 indicates poor living conditions of the population and low economic growth, and a value close to 1 indicates good living conditions and high economic growth in the municipalities of the state of Nariño. Both the descriptive analysis and the spatial distribution of each index in the study area were visualized and analyzed. This research employs global and local regression models to explore the relationship between the proportion of hectares under coca cultivation (response variable) and the indices, homicide rate, and roads (explanatory variables) in the state of Nariño. To compare and select significant independent variables, a global regression was fitted first, followed by a local extension called geographically weighted regression (GWR). The latter method facilitated the examination of the spatial heterogeneity of the relationship between the response variable and the explanatory variables. In the examination of the relationship between a response variable Y and a set of independent variables X_1, X_2, …, X_p, the analytical framework involves an ordinary linear regression (OLR) model:

y_i = \beta_0 + \sum_{k=1}^{p} \beta_k x_{ik} + \varepsilon_i   (4)

where β_0, β_1, …, β_p are the parameters and ε_1, ε_2, …, ε_n are the error terms. In this global model, the estimated coefficients β_k are considered constant throughout the study area. However, the hypothesis of spatial uniformity of the effect of the explanatory variables on the dependent variable is often unrealistic . Then, to account for the geographical non-stationarity of the relationship and to incorporate the spatial structure, an extension of the model in Eq (4), referred to as GWR, is introduced.
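Before moving to the GWR details, the index construction in Eqs (1)–(3) can be illustrated with a short sketch. This is a simplified toy in Python rather than the procedure actually used: the study learns the weights ω_j with iterative machine-learning algorithms and guarantees indices in [0, 1] by construction, whereas here uniform weights and a worst-case reference unit are placeholder assumptions, and the function names are hypothetical.

```python
import numpy as np

def dl2_index(X, weights=None):
    """Toy DL2-style composite index in the spirit of Eq (1).

    X: (n, m) array of n municipalities by m quantitative variables.
    weights: length-m vector omega_j; the study learns these with
    iterative machine-learning algorithms, so uniform weights here
    are only a placeholder assumption.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize variables
    if weights is None:
        weights = np.ones(Z.shape[1])
    ref = Z.min(axis=0)                        # hypothetical worst-case reference unit
    return np.sqrt(((Z - ref) ** 2 * weights).sum(axis=1))

def minmax_scale(mi):
    """Rescale the summed Multidimensional Index to [0, 1] as in Eq (3)."""
    return (mi - mi.min()) / (mi.max() - mi.min())

# Six toy indices for 64 municipalities, summed into MI (Eq 2) and rescaled.
rng = np.random.default_rng(0)
indices = [dl2_index(rng.normal(size=(64, 3))) for _ in range(6)]
mi_scaled = minmax_scale(np.sum(indices, axis=0))
```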
The GWR extension involves the estimation of local parameters for each geographic location in the dataset, as defined by :

y_i = \beta_0(u_i, v_i) + \sum_{k=1}^{p} \beta_k(u_i, v_i) \, x_{ik} + \varepsilon_i   (5)

where (u_i, v_i) denotes the geographic coordinates at location i (in this study, the coordinates of the centroids of each of the municipalities of Nariño); y_i is the value of the dependent variable at location i; x_{ik} is the value of the k-th independent variable at location i; p is the number of independent variables; β_k(u_i, v_i) is the local regression coefficient for the k-th independent variable at location i; and ε_i is the random error at location i. In the calibration of Eq (5), it is implicitly assumed that data observed near location i have more influence on the estimation than data located farther away. The model then measures the inherent relationships around each regression point i, where each set of regression coefficients is estimated using a weighted least squares approach. Thus, the matrix expression for this estimation is given by :

\hat{\beta}(u_i, v_i) = \left( X^T W(u_i, v_i) X \right)^{-1} X^T W(u_i, v_i) y   (6)

where X is the matrix of predictor variables with a column of 1s for the intercept, y is the vector of the response variable, and W(u_i, v_i) is an n × n weighting matrix whose off-diagonal elements are zero and whose diagonal elements denote the geographic weighting of each of the n observed data points for regression point i at location (u_i, v_i). There are three key elements in the construction of this weighting matrix: (i) the type of distance, (ii) the kernel function, and (iii) its bandwidth. For this work, considering the irregular topography of the state of Nariño, the Euclidean distance, the bi-square kernel function, and an adaptive bandwidth were used [ 24 – 26 ]. Since extreme values can generate biased results, the GWR was performed with a robust analysis to mitigate this issue . Furthermore, R software was employed for constructing the spatial database and for estimating and mapping the various measurements . Table 2 provides an overview of the behavior of the response variable (Coca) and the independent variables (indices, HOMI, and Roads) through eight global descriptive statistics: minimum (Min), first quartile (Q1), median (Median), mean (Mean), third quartile (Q3), maximum (Max), coefficient of variation (CV), and coefficient of asymmetry (CA). Fig 2 displays the statistical distribution of the indices using box-and-whisker plots. The EPI, HCI, PSCI, and VI indices exhibit values relatively close to 1; judging from their Q1 values, 75% of the municipalities score relatively high on these indices. These results indicate that municipalities have good educational performance, with good coverage in education, public services, and health, along with low vulnerability. However, these indices display a left-skewed distribution with the presence of outliers (CA), signifying the existence of municipalities with lower values and, consequently, indicating areas with inadequate protection. The EI is characterized by values close to 0, with most municipalities not surpassing a value of 0.1585 (Q3). This indicates generally low economic performance across the municipalities. However, the corresponding box-and-whisker diagram exhibits a right-skewed trend with extreme values (as indicated by the CA value). This asymmetry reflects a few municipalities with notably higher economic performance in the state of Nariño.
Furthermore, based on the CV values, all indices and variables display a high degree of dispersion. The descriptive metrics and the statistical distributions show the existence of significant developmental differences (educational, social, economic, health, etc.) among the municipalities under investigation. The spatial distribution of the proportion of hectares under coca cultivation and the location of primary roads in the state of Nariño are depicted in Fig 3. This distribution reveals the geographic variability of coca production in the state, showing a concentration of high values in the northwestern part of the state of Nariño. This concentration can be attributed to the significant pressure exerted by non-state armed groups aiming to control several municipalities in this area . Additionally, the proximity of these municipalities to the Pacific Ocean facilitates illegal exportation to other countries. In contrast, municipalities with lower coca production tend to be situated in the southeast, closer to Pasto, the capital of Nariño. Furthermore, a notable observation is the poor road infrastructure in most municipalities of the state, many of which coincide with areas of high coca production. This observation aligns with findings from other studies indicating that municipalities with limited roads, infrastructure, connectivity, and access tend to experience an increase in illicit crops [ 28 – 30 ]. Fig 4 illustrates the spatial distribution of the constructed indices and the homicide rate, revealing the spatial non-stationarity of these characteristics in the state of Nariño. The distributions of the indices EPI, ECI, HCI, PSCI, VI, and the HOMI variable exhibit a consistent spatial pattern from northwest to southeast, dividing the state into two segments. The southeastern part comprises municipalities with favorable living conditions and economic development, while the northwestern part faces social and economic fragility. The EI index demonstrates a distinctive spatial distribution, indicating a prevalence of municipalities with low economic performance, except for Potosí and Pasto, the capital of Nariño. The spatial distribution of the MI summarizes the behavior of these indices, reinforcing the observed division and emphasizing the socioeconomic inequality prevalent in the state of Nariño. Initially, a global regression model (ordinary least squares) was applied as a benchmark for performance comparison and for attribute selection. Table 3 shows the results of the global and GWR models: significant estimated coefficients, variance inflation factors (VIF), R², adjusted R², and the residual sum of squares. According to these results, two variables were significant in explaining coca production in the state of Nariño: EPI and HOMI. Collinearity was tested by analyzing the VIF; values were below the common threshold, indicating no multicollinearity issue. The estimated coefficients in both the global and local models were consistent with their expected signs. Assessing goodness-of-fit measures, R² and adjusted R² notably improved in the GWR, from 0.468 and 0.450 to 0.647 and 0.539, respectively. The analysis of residuals (sum of squared residuals) indicated a superior fit for the GWR. Moreover, the examination of spatial variation in the explanatory power of the model revealed notable improvements (spatial distribution not presented in the paper).
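As a concrete illustration of the local estimation in Eqs (5) and (6) underlying these comparisons, the following is a minimal sketch of GWR with a bi-square kernel and an adaptive nearest-neighbour bandwidth. It is an illustrative Python re-implementation, not the R routines used in the study, and it omits the robust re-weighting step; the function name and the neighbour count are assumptions.

```python
import numpy as np

def gwr_coefficients(coords, X, y, k_neighbors=20):
    """Local weighted least squares as in Eqs (5)-(6), with a bi-square
    kernel and an adaptive (k-nearest-neighbour) bandwidth.
    coords: (n, 2) centroid coordinates; X: (n, p) predictors; y: (n,) response.
    """
    n = X.shape[0]
    Xd = np.column_stack([np.ones(n), X])               # add intercept column
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)  # Euclidean distances
        bw = np.sort(d)[k_neighbors]                    # adaptive bandwidth
        w = np.where(d < bw, (1 - (d / bw) ** 2) ** 2, 0.0)  # bi-square kernel
        WX = Xd * w[:, None]                            # X^T W without forming W
        betas[i] = np.linalg.solve(WX.T @ Xd, WX.T @ y)
    return betas  # one row of local coefficients per municipality
```

Mapping each row of the returned coefficient matrix over the municipality centroids produces the kind of local-coefficient surfaces discussed below.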
The comparative results underscored that the explanatory power of the local regression model was significantly higher than that of the global regression model, which is consistent with the results of the reviewed studies. Undoubtedly, the most important results of the modeling reside in the estimated local coefficients of each of the explanatory variables. These coefficients make it possible to understand, both visually and analytically, the spatial variation in the influence of each variable on coca production. The spatial distribution of the parameter estimates and their significance are shown in Fig 5. According to these spatial distributions, and based on the descriptive statistics (Min, Median, Max) of the local estimates of the GWR coefficients ( Table 3 ), there are significant variations in the relationships between the two independent variables (EPI and HOMI) and coca production in the state of Nariño. The negative sign of the estimated coefficients on the EPI variable is expected, indicating an inverse relationship between EPI and coca production (although a change in the sign of the EPI coefficients is observable, it lacks significance). This correlation is consistent with findings in the existing literature, indicating that coca crop production tends to decrease with higher levels of education [ 28 , 31 – 33 ]. Notably, this relationship is not uniform across all municipalities , and the effect of EPI is more pronounced and statistically significant in a considerable number of municipalities in the state of Nariño . The region where this relationship holds significance corresponds to municipalities exhibiting low educational performance , coinciding with the area of high coca production and representing the less developed part of the state, as illustrated by the spatial distribution of the MI . This pattern can be attributed to demand that surpasses the educational resources in this zone, which is characterized by difficult access (distant from the capital city of Nariño). Consequently, some municipalities lack the necessary infrastructure for providing educational services and often face a shortage of qualified educators, leaving numerous young people susceptible to engaging in activities such as coca production. Finally, Fig 5c reveals a direct and spatially non-uniform correlation between the homicide rate and coca production. This finding aligns with the outcomes of previous studies that highlight a positive connection between increased illicit crop cultivation and regions marked by security issues and a limited government presence [ 31 , 34 – 39 ]. Specifically in Colombia, the evidence indicates that, on average, the homicide rate has tended to rise since 2015 in municipalities with coca cultivation . Nevertheless, the impact of the homicide rate on coca production is not uniformly significant across all municipalities in the state of Nariño. Fig 5d delineates the municipalities where the association is statistically significant at the 95% confidence level. Notably, municipalities primarily situated in the northwest of the state exhibit a substantial and statistically significant correlation between the homicide rate and coca cultivation. This region corresponds to municipalities characterized by elevated homicide rates, aligning with the areas of heightened coca production .
Thus, the findings of this study corroborate the assertions of the referenced authors, affirming that municipalities with elevated homicide rates also exhibit increased coca cultivation. The SDGs serve as a crucial framework for tackling global imbalances and striving toward a more inclusive and sustainable future. However, entrenched socioeconomic disparities pose a substantial challenge to realizing these objectives. To inform the design and implementation of comprehensive solutions that address the root causes of inequality and foster sustainable and just development, this study quantified and spatially analyzed the socioeconomic and territorial conditions of municipalities in the state of Nariño, one of the most unequal and least developed states in Colombia. The first part of this article employed statistical methodologies to construct diverse composite indices. Subsequently, in the latter part of the study, an econometric analysis was conducted using these indices and additional variables, such as the homicide rate and road infrastructure, to elucidate their potential role in explaining coca production across the municipalities. The DL2 was used to create the indices. These indices were combined to build a single index, the MI, serving as a comprehensive summary that encapsulates the information contained in the individual indices. The spatial distributions of all the indices reveal a discernible spatial heterogeneity in social and economic inequality. This spatial diversity substantiates the existing disparities in opportunities among municipalities, indicating that their populations do not experience optimal conditions and lack basic capabilities such as education, health, and public services, as per the framework outlined by . The observed spatial pattern distinctly divides the state into two zones: one exhibiting favorable social and economic conditions (southeast) and another manifesting suboptimal performance (northwest). The delineated spatial heterogeneity provides valuable insights for the formulation of targeted public policies and the implementation of programs aimed at enhancing the overall quality of life of the populace. The econometric analysis employed both global and local regression methodologies. The initial application of global regression served a dual purpose: selecting variables based on significance and providing a benchmark for comparison. Two variables, EPI and HOMI, emerged as statistically significant. Subsequently, GWR was employed to delve into the spatial nuances of the relationship between coca production and the inherent characteristics of each municipality. The findings indicated that GWR outperformed global regression in explaining coca production in the state of Nariño. This suggests that the estimated parameters exhibit discernible spatial variation patterns, enabling nuanced conclusions specific to each municipality. The anticipated type of dependence between coca production and the two explanatory variables, EPI and HOMI, manifested as expected: negative for EPI and positive for HOMI. However, the impact of these variables on coca production did not achieve significance across all spatial units within the study area. The GWR mapping results illustrated that municipalities in the northwestern part of the state exhibit a noteworthy correlation between coca cultivation and both education and homicide cases.
These outcomes align with the findings of the , highlighting a consistent rise in coca cultivation in this region over the past decade. The report identifies various factors contributing to this increase, some of which are related to EPI and HOMI, including the heightened global demand for cocaine, expectations stemming from peace agreements, an increase in illegal drug trafficking actors, the persistence of territorial vulnerability, and heightened incentives for coca production. This study provides evidence suggesting a close relationship between socioeconomic conditions and the spatial distribution of coca cultivation in the state of Nariño, Colombia. The results support social marginalization theory by showing that areas with high poverty and exclusion have a higher concentration of illicit crops, suggesting that the lack of access to basic services and economic opportunities creates a favorable environment for the informal economy and activities related to drug trafficking. This finding is consistent with social capital theory, as it indicates that the lack of social cohesion in these communities could limit the effectiveness of development programs that seek to replace coca cultivation. Based on the theory of conflict economics, the results also indicate that socioeconomic factors in Nariño not only influence coca cultivation but are also related to conflict dynamics in areas where coca cultivation is controlled by illegal armed groups. This conflict-prone environment, in combination with social marginalization and a lack of social capital, contributes to the perpetuation of dependence on coca cultivation as the main livelihood in certain areas. In addition, the results suggest that illicit crop reduction policies must be adapted to the specific conditions of each area and cannot be uniform. Intervention in Nariño requires a comprehensive approach that addresses socioeconomic conditions and improves community cohesion to facilitate viable and sustainable economic alternatives. Limitations of this study include the lack of detailed local data on social capital and armed group activity, which could enrich the analysis in future research. In conclusion, this study helps to explain how socioeconomic conditions affect the distribution of coca cultivation in Nariño, underscoring the importance of specific interventions that consider the particular social and economic dynamics of the region. Public policies should be designed in a way that not only addresses socioeconomic conditions but also promotes the strengthening of social capital and the reduction of armed conflict, creating an environment more conducive to sustainable development and peace.
|
Other
|
other
|
en
| 0.999997 |
PMC11694978
|
Functional data analysis (FDA) [ 1 – 4 ] is a rapidly developing area of statistics for data that can be naturally viewed as smooth curves or functions. Unlike traditional methods, in which the basic statistical unit is a vector of measurements, FDA treats entire functions or curves as the primary objects of analysis . With the development of data collection technologies that use powerful monitoring devices and computational tools, many scientific fields are now generating increasingly complex, high-dimensional datasets . Analyzing these datasets, which can be viewed as functions, requires characterizing the relationships among numerous variables to gain insight into underlying phenomena . Graphical models have been widely used to explicitly capture the statistical relationships between the variables of interest in the form of a graph. Recent progress in graphical modeling has focused on methods for modeling complex dependencies among binary variables through Ising models [ 9 – 11 ] and among continuous variables through Gaussian graphical models [ 12 – 16 ]. However, less attention has been paid to functional variables, and most existing work concentrates on estimating discrete correlation structures at individual time points rather than global dependencies across all time points. To address this gap, functional graphical models have been introduced to model the conditional dependence structure among random functions, such as measurements over time or frequency in data like electroencephalogram (EEG) or functional magnetic resonance imaging (fMRI) recordings. These models have been estimated using various approaches, including parametric approaches based on the Gaussian assumption , nonparametric approaches based on additive conditional independence or additive principal scores , and Bayesian approaches . Recent extensions are primarily based on the Gaussian assumption. A doubly functional graphical model has been developed to deal with the case where functional data are sparsely observed . A functional copula Gaussian graphical model was proposed to deal with marginal violations of the Gaussian assumption . A conditional functional graphical model was also introduced in which the graph structure is conditioned on, and thus varies with, external variables . All of these approaches assume that the multivariate functional data come from a homogeneous source. In contrast, many real-world scenarios involve data from heterogeneous sources, where dependencies may vary across different groups or subpopulations. Although it is common in the graphical model literature to assume homogeneity, there has been growing interest in incorporating heterogeneity. For continuous variables, mixtures of Gaussian graphical models and their variants have been proposed , while for binary variables, mixtures of Ising graphical models have been developed . Similarly, mixtures of ordinal graphical models have been introduced for ordinal data . In this paper, we propose finite mixtures of functional graphical models (MFGM) to capture the heterogeneous conditional dependence relationships in multivariate functional data. Our method simultaneously identifies latent subgroups of the studied population and estimates a separate functional graphical model for each subgroup, allowing for different dependency structures across the groups. To estimate the model, we adopt a penalized likelihood approach for sparse estimation, which involves regularizing the likelihood function with a non-smooth penalty.
This creates a challenging optimization problem, especially because of the functional nature of the data. To tackle this, we extend the framework for the functional graphical model , assuming that the observed functional data are realizations from a Gaussian process, and propose an effective EM algorithm that incorporates the functional graphical lasso (fglasso) method. Our proposed mixtures of functional graphical models (MFGM) generalize mixtures of graphical models from the finite vector-valued context to the infinite functional context. Suppose the functional variables g_1(t), …, g_p(t) jointly follow a p-dimensional multivariate Gaussian process with vertex set V = {1, …, p} and edge set E. Let K be the number of mixture components and let G_k = (V, E_k) represent the functional graphical model in the k-th subpopulation. Our mixture of functional graphical models can then be represented as

G(X) = \pi_1 G_1(X) + \pi_2 G_2(X) + \cdots + \pi_K G_K(X), \quad \text{where } \sum_{k=1}^{K} \pi_k = 1.

Therefore, the goal of MFGM is to estimate π = (π_1, …, π_K), recover {E_1, …, E_K}, and then infer the membership label of each individual by maximizing the penalized log-likelihood of the observed functional data. Suppose we observe g_i = (g_{i1}, …, g_{ip})^⊤, i = 1, …, N, where for each i, g_{ij}(t), t ∈ T, is a realization from a Gaussian process. The Karhunen-Loève expansion allows us to represent each functional variable as

g_{ij}(t) = \sum_{l=1}^{\infty} a_{ijl} \phi_{jl}(t),

for i = 1, …, N and j = 1, …, p. We propose to approximate g_{ij}(t) by truncating the number of basis functions at M, which increases asymptotically as N → ∞. The M-truncated version of the Karhunen-Loève expansion is

g_{ij}(t) \approx \sum_{l=1}^{M} a_{ijl} \phi_{jl}(t),

for i = 1, …, N and j = 1, …, p. We assume that the truncated multivariate random vector follows a mixture of multivariate Gaussian distributions,

a_i^M = \big((a_{i1}^M)^\top, \ldots, (a_{ip}^M)^\top\big)^\top \in \mathbb{R}^{Mp} \sim \sum_{k=1}^{K} \pi_k \, N(\mu_k, \Theta_k),

where a_i^M collects the first M principal component scores for the i-th set of functions, a_{ij}^M = (a_{ij1}, …, a_{ijM})^⊤, and Θ_k denotes the precision matrix of the k-th component. The log-likelihood function for the observed functional data is then

\ell = \sum_{i=1}^{N} \log \sum_{k=1}^{K} \pi_k \, N(a_i^M \mid \mu_k, \Theta_k),

where

N(a_i^M \mid \mu_k, \Theta_k) = (2\pi)^{-\frac{Mp}{2}} |\Theta_k|^{\frac{1}{2}} \exp\!\big\{-\tfrac{1}{2}(a_i^M - \mu_k)^\top \Theta_k (a_i^M - \mu_k)\big\}.

Given the log-likelihood, we maximize the penalized log-likelihood to estimate π_k, μ_k, and Θ_k for k = 1, …, K:

\max_{\{(\pi_k, \mu_k, \Theta_k);\, k = 1, \ldots, K\}} \; \sum_{i=1}^{N} \log \sum_{k=1}^{K} \pi_k \, N(a_i^M \mid \mu_k, \Theta_k) - \sum_{k=1}^{K} \lambda_k \sum_{j \neq l} \|\Theta_{kjl}\|_F,

where ‖·‖_F denotes the Frobenius norm and the Θ_{kjl} are the M × M blocks of Θ_k for j = 1, …, p and l = 1, …, p. The EM algorithm provides a powerful tool for dealing with latent variables in mixture models. Following the spirit of the EM algorithm, we view the functional data as incomplete and treat the latent variables as “missing data”. Moreover, unlike traditional approaches, the sparse estimation imposes a non-smooth penalty function to regularize the likelihood, which leads to a challenging non-convex and non-smooth optimization problem. We introduce the latent random variables τ_i = (τ_{i1}, …, τ_{iK}), i = 1, …, N, satisfying

\tau_{ik} = \begin{cases} 1 & \text{if } g_i(t) \text{ belongs to the } k\text{th group}, \\ 0 & \text{otherwise}. \end{cases} \quad (1)
Now, given the complete data, the complete log-likelihood is

\ell_{\mathrm{comp}} = \sum_{i=1}^{N} \sum_{k=1}^{K} \big[ \tau_{ik} \log \pi_k + \tau_{ik} \log N(a_i^M \mid \mu_k, \Theta_k) \big],

and the complete ℓ_1-penalized log-likelihood function becomes

L_{\mathrm{comp}} = \ell_{\mathrm{comp}} - \sum_{k=1}^{K} \lambda_k \sum_{j \neq l} \|\Theta_{kjl}\|_F. \quad (2)

E-step: Let π_k^{(l)}, μ_k^{(l)}, and Θ_k^{(l)} be the estimates of π_k, μ_k, and Θ_k at the l-th iteration. In the E-step of the (l+1)-th iteration, we compute the conditional expectation of τ_{ik} given the current estimates π_k^{(l)}, μ_k^{(l)}, and Θ_k^{(l)} for k = 1, …, K. By Bayes’ rule,

\gamma_{ik}^{(l+1)} = \frac{\pi_k^{(l)} N(a_i^M \mid \mu_k^{(l)}, \Theta_k^{(l)})}{\sum_{k'=1}^{K} \pi_{k'}^{(l)} N(a_i^M \mid \mu_{k'}^{(l)}, \Theta_{k'}^{(l)})}.

M-step: In the M-step of the (l+1)-th iteration, we obtain the parameter estimates by maximizing

\sum_{k=1}^{K} \Big[ \sum_{i=1}^{N} \gamma_{ik}^{(l+1)} \big( \log \pi_k + \log N(a_i^M \mid \mu_k, \Theta_k) \big) - \lambda_k \sum_{j \neq l} \|\Theta_{kjl}\|_F \Big]

subject to the constraint ∑_{k=1}^{K} π_k = 1. This is equivalent to maximizing

\sum_{i=1}^{N} \sum_{k=1}^{K} \gamma_{ik}^{(l+1)} \log \pi_k \quad \text{subject to} \quad \sum_{k=1}^{K} \pi_k = 1,

and, separately for each k = 1, …, K,

\sum_{i=1}^{N} \gamma_{ik}^{(l+1)} \log N(a_i^M \mid \mu_k, \Theta_k) - \lambda_k \sum_{j \neq l} \|\Theta_{kjl}\|_F.

Solving these two subproblems for π_k^{(l+1)} and μ_k^{(l+1)} yields the closed-form solutions

\pi_k^{(l+1)} = \frac{1}{N} \sum_{i=1}^{N} \gamma_{ik}^{(l+1)}, \qquad \mu_k^{(l+1)} = \frac{\sum_{i=1}^{N} \gamma_{ik}^{(l+1)} a_i^M}{\sum_{i=1}^{N} \gamma_{ik}^{(l+1)}}.

Next, we update Θ_k^{(l+1)} by solving the following optimization problem with the state-of-the-art fglasso algorithm :

\Theta_k^{(l+1)} = \arg\max_{\Theta_k} \Big\{ \log |\Theta_k| - \mathrm{tr}\big(S_k^{(l+1)} \Theta_k\big) - \lambda_k \sum_{j \neq l} \|\Theta_{kjl}\|_F \Big\},

where

S_k^{(l+1)} = \frac{\sum_{i=1}^{N} \gamma_{ik}^{(l+1)} (a_i^M - \mu_k^{(l+1)})(a_i^M - \mu_k^{(l+1)})^\top}{\sum_{i=1}^{N} \gamma_{ik}^{(l+1)}}.

Another way to update Θ_k ∈ R^{Mp×Mp} is to employ the alternating direction method of multipliers (ADMM) algorithm with a separability assumption on the precision matrix . The ADMM algorithm is useful for estimating a sparse precision matrix , and the partial separability assumption decouples the covariance across the different dimensions of the Karhunen-Loève expansion, so that instead of estimating an Mp × Mp matrix, with on the order of (Mp)² parameters, one estimates M separate p × p matrices, with on the order of Mp² parameters. Plot (c) in Fig 1 of shows an example of such a precision matrix. To further clarify the distinction between the fglasso method and the partial separability assumption, let {θ_{ijuvk} : i, j = 1, …, p; u, v = 1, …, M} be the elements of Θ_k ∈ R^{Mp×Mp}. Under partial separability, we impose θ_{ijuvk} = 0 whenever u ≠ v. In contrast, the fglasso method applies a group lasso penalty, which encourages the parameters θ_{ijuvk} to exhibit a block structure: for each pair i ≠ j, the entries θ_{ijuvk} are either simultaneously zero or simultaneously nonzero. We alternate between the E-step and the M-step until the parameter estimates converge. Our proposed EM algorithm satisfies the same ascent property as the classical EM algorithm, and the proof follows . Here, the ascent property means the likelihood value does not decrease after each EM step. However, the ascent property does not imply that the EM updates will necessarily converge to the MLE; our proposed EM algorithm may converge to a local maximum of the observed-data likelihood function, depending on the initial values.
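To make the E- and M-steps concrete, the following is a minimal Python sketch of the EM loop under stated simplifications: scikit-learn’s element-wise graphical lasso stands in for the group fglasso penalty (yielding entry-level rather than block-level sparsity), a small ridge term stabilizes the weighted covariance, and the function name, random initialization, and defaults are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.covariance import graphical_lasso

def em_mixture_glasso(A, K=2, lam=0.1, n_iter=25, seed=0):
    """EM for a K-component Gaussian mixture with l1-penalized precisions.

    A: (N, d) matrix of stacked Karhunen-Loeve score vectors a_i^M.
    Element-wise graphical lasso replaces the group fglasso update.
    """
    rng = np.random.default_rng(seed)
    N, d = A.shape
    resp = rng.dirichlet(np.ones(K), size=N)        # random initial gamma_ik
    for _ in range(n_iter):
        # M-step: closed-form pi_k and mu_k, penalized precision via glasso
        pi = resp.mean(axis=0)
        mus, covs, precs = [], [], []
        for k in range(K):
            w = resp[:, k] / resp[:, k].sum()
            mu = w @ A
            diff = A - mu
            S = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(d)  # ridge for stability
            cov, prec = graphical_lasso(S, alpha=lam)
            mus.append(mu); covs.append(cov); precs.append(prec)
        # E-step: responsibilities by Bayes' rule
        dens = np.column_stack([
            pi[k] * multivariate_normal.pdf(A, mus[k], covs[k])
            for k in range(K)])
        resp = dens / dens.sum(axis=1, keepdims=True)
    return pi, mus, precs, resp
```

The final cluster assignment for subject i is the argmax over k of the returned responsibilities.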
The EM algorithm is sensitive to the initial values of the parameters, so care must be taken in the first step. In this work, the Mclust function from the R package mclust and the split_comp function from the R package gmgm are applied to the multivariate principal component score vectors to provide good initial values for the EM algorithm. We now discuss the selection of the tuning parameters of our algorithm via a cross-validation (CV) approach. The J-fold CV score for the K-mixture case is

CV(\lambda_1, \ldots, \lambda_K) = \sum_{j=1}^{J} \sum_{k=1}^{K} N_j \Big( \log \hat{\pi}_{k,-j} - \log \big|\hat{\Theta}_{k,-j}^{\lambda_k}\big| + \mathrm{tr}\big(\hat{\Theta}_{k,-j}^{\lambda_k} \Sigma_{k,j}\big) \Big), \quad (3)

where N_j is the sample size of the test data in the j-th fold, \hat{\pi}_{k,-j} is the k-th group proportion estimated from the training data in the j-th fold, \hat{\Theta}_{k,-j}^{\lambda_k} is the precision matrix of the k-th group estimated from the training data with tuning parameter λ_k in the j-th fold, and Σ_{k,j} is the sample covariance matrix of the test data in the j-th fold. This cross-validation score approximates the negative log-likelihood of the data; therefore, a lower score indicates better estimation. We built on the cross-validation score for penalized likelihood estimation in Gaussian graphical models and extended it to accommodate mixtures of distributions. As a regular grid search would require too much computing time, a more efficient random search is performed to find the optimal tuning parameter vector (λ_1, …, λ_K)^⊤ that yields the smallest CV value. The optimal tuning parameter vector is then used in MFGM to estimate the parameters. We conduct a series of simulations to compare our MFGM algorithm with the fglasso algorithm and the ADMM algorithm under the partial separability assumption. For simplicity, we refer to these methods as MFGM-fglasso and MFGM-ADMM, respectively. The MFGM-ADMM implementation is based on the R package fgm . Additionally, we compare these two methods with the mixggm algorithm , which ignores the functional structure: it takes the average of the observations across the time interval for each node, reducing each functional object to a single value, and implements a mixture of Gaussian graphical models in a multivariate vector context. The implementation of the mixggm algorithm is based on the R package mixggm . In each setting, the multivariate Gaussian functional variables are generated via g_{ij}(t) = s(t)^⊤ δ_{ij} for i = 1, …, N and j = 1, …, p, where s(t) is a five-dimensional Fourier basis function and δ_{ij} ∈ R^5 is a mean-zero Gaussian random vector. Hence, δ_i = (δ_{i1}^⊤, …, δ_{ip}^⊤)^⊤ ∈ R^{5p} follows a multivariate Gaussian distribution with covariance Σ = Θ^{-1}. Different block sparsity patterns in the precision matrix Θ correspond to different conditional dependence structures. We consider five general structures; the five simulation models are depicted in Fig 1. In all settings, we set the dimension parameter p = 20, generate observations of δ_i from the associated multivariate Gaussian distribution, and sample the observed values h_{ijl} as

h_{ijl} = g_{ij}(t_l) + e_{ijl}, \qquad e_{ijl} \sim N(0, 0.5^2),

for i = 1, …, N, j = 1, …, p, and l = 1, …, T, where each function is observed at T = 100 equally spaced time points between 0 and 1. Two-cluster mixture models. We consider the following three cases of two-cluster mixture models with π = (1/2, 1/2). We generate N = 100 functional observations of h_i for each mixture component.
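A minimal Python sketch of this data-generating scheme follows, assuming the 5p × 5p covariance Σ = Θ⁻¹ is supplied through its Cholesky factor; the particular five Fourier functions below and the function name are assumptions.

```python
import numpy as np

def simulate_functional_data(chol_Sigma, N=100, p=20, T=100, seed=0):
    """Generate noisy curves per the simulation design: g_ij(t) = s(t)^T delta_ij
    with a five-dimensional Fourier basis, then h_ijl = g_ij(t_l) + N(0, 0.5^2).
    chol_Sigma: Cholesky factor of the 5p x 5p covariance Sigma = Theta^{-1}.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, T)
    # one concrete choice of a five-dimensional Fourier basis s(t)
    S = np.column_stack([np.ones(T),
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                         np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
    delta = rng.standard_normal((N, 5 * p)) @ chol_Sigma.T   # delta_i ~ N(0, Sigma)
    g = delta.reshape(N, p, 5) @ S.T                         # (N, p, T) smooth signals
    return g + rng.normal(0.0, 0.5, size=g.shape)            # observed h_ijl
```

Mixture samples are then formed by concatenating draws generated with the K different precision structures.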
We expect clustering and estimation of the connection structures to be less challenging in Model (1,4), as there is an obvious distinction between the identity precision matrix and the AR2 precision matrix with strong connections. Model (2,3) is more difficult, since the AR1 precision matrix and the AR2 precision matrix with weak connections are more similar to each other. We designed Model (4,5) to explore whether our method performs well for a mixture in which a subgroup with a random connection structure is involved. Three-cluster mixture models. To explore even more complex scenarios, we consider the following two cases of three-cluster mixture models with π = (1/3, 1/3, 1/3). We generate N = 50 functional observations of h_i for each mixture component. In Model (1,2,4), the three basic graphical structures, Independent, AR1, and AR2, are involved; in Model (2,4,5), the subgroup with a random graphical structure is mixed with two other heterogeneous subgroups. We expect the three-cluster mixture models to be more challenging to analyze than the two-cluster mixture models. To apply our proposed MFGM algorithm to the simulated mixture data, the functional observations are first fitted using an L-dimensional cubic B-spline basis. The generalized cross-validation (GCV) method is used to choose the optimal dimension parameter L. The smoothed functions are then decomposed by the M-truncated Karhunen-Loève expansion, and the optimal harmonic number M is determined by eight-fold CV. It turns out that M = 5, which aligns with our design. Further analysis reveals that five principal components already explain over 99% of the total variation in the signal trajectories for each node. The multivariate Karhunen-Loève basis coefficient (principal component score) vectors a_i^M with M = 5 are thus obtained for further mixture analysis assuming Gaussianity. In the iterative EM process for analyzing the mixture of blocked Gaussian multivariate graphical models, our proposed method using the fglasso algorithm (MFGM-fglasso) is compared with the ADMM algorithm under the partial separability assumption (MFGM-ADMM) for solving the penalized log-likelihood maximization that estimates the conditional dependence structures in each cluster. Our MFGM algorithms are also compared with the mixggm algorithm to confirm the advantage of considering the inherent functional nature of the data. To provide good initial values for the EM iterations, the Mclust function from the R package mclust and the split_comp function from the R package gmgm are applied to the multivariate principal component score vectors for the two-cluster and three-cluster mixture model analyses, respectively. We tried tuning parameter values for λ_k ranging from 0.8 to 2.5 in increments of 0.1 and determined the optimal value for each group k by minimizing the cross-validation score in Eq (3). The optimal values mostly fell between 0.9 and 1.5. The estimation of the edge structures in each cluster is assessed with the following metrics: accuracy (Accu), true positive rate (TPR), and false positive rate (FPR). We run each simulation 100 times for the two-cluster mixture models and 50 times for the three-cluster mixture models, and the means of all metrics for the three methods are reported for comparison. Two-cluster mixture model analysis.
Table 1 shows the performance of the estimates of the conditional dependence structures in each subgroup of the designed two-cluster mixture models. In the analysis of Model (1,4), all three methods do a good job of estimating the edge structure in subgroup 1, while MFGM-fglasso and mixggm outperform MFGM-ADMM in estimating the edge structure in subgroup 2. In the more challenging mixture model, Model (2,3), the three methods show similar, decent performances that are slightly worse than in Model (1,4). In Model (4,5), however, MFGM-fglasso and the mixggm algorithm do a decent job of estimating the conditional dependencies in both subgroups, whereas MFGM-ADMM struggles to estimate the conditional dependencies in subgroup 1. Three-cluster mixture model analysis. Table 2 compares the three algorithms in estimating the conditional dependence structures in each subgroup of the designed three-cluster mixture models. The three algorithms do better for Model (1,2,4) than for Model (2,4,5) in estimating the graphical structures in the first two subgroups, but they do worse in estimating the graphical structure of the third subgroup. Moreover, MFGM-fglasso performs best at estimating the heterogeneous networks in terms of accuracy for most of the three subgroups in both mixture models. It is worth noting that the mixggm algorithm performed similarly to MFGM-fglasso. Alcoholism is a common neurological disorder caused by the joint effect of genetic and environmental factors. It not only damages the brain system but also leads to cognitive and mobility impairments . It is of great importance not only to find a reliable way to distinguish alcoholics from normal subjects, but also to recover the differences in brain patterns between alcoholics and normal subjects, which helps to explore the underlying mechanisms of alcoholism. The electroencephalogram (EEG) is a very effective tool for studying the complex dynamics of brain activities. It can visualize complex brain activities as dynamic outputs . Therefore, it can be used to distinguish alcoholics from normal subjects based on differences in the signals. A functional brain network accounts for the neuro-dynamical interactions between neural regions. Functional connectivity describes the statistical interdependence between the dynamics of all pairs of network nodes without taking causal effects into account . Therefore, analysis of the functional EEG data with a mixture of functional graphical models is expected to depict the distinct brain networks of the two subgroups. We apply the proposed MFGM-fglasso algorithm, along with the MFGM-ADMM and mixggm algorithms, to the EEG dataset acquired from the online UCI Knowledge Discovery in Databases Archive ( https://kdd.ics.uci.edu/databases/eeg/eeg.html ). Zhang et al. describe the data collection process in detail. The data arose from a large study examining EEG correlates of genetic predisposition to alcoholism. The study consisted of 122 subjects, of whom 77 belonged to the alcoholism group and 45 to the control group. The data were initially obtained from 64 electrodes placed on the subjects’ scalps that captured EEG signals at 256 Hz during a one-second period. Each subject completed 120 trials under either a single stimulus (a single picture) or two stimuli (a pair of pictures) shown on a computer monitor.
As the 64 electrodes were located at standard positions, to reduce the dimension of the data we selected the electrodes that detect signals in the 19-channel montage specified by the 10–20 International System (Fp1, Fp2, F7, F3, Fz, F4, F8, T7, C3, Cz, C4, T8, P7, P3, Pz, P4, P8, O1, O2) , which are depicted in Fig 2 by the red circles. Furthermore, following the cases considered in , we focus on the EEG signals filtered at the α frequency band between 8 and 12.5 Hz, obtained by applying the eegfilter function (R package eegkit ) to the raw data. To remove potential dependence between measurements and the influence of different stimulus types, we only selected observations under the single stimulus for use in this study . Moreover, many previous studies used multiple samples per subject in order to obtain a sufficiently large sample, which violates the independence assumption inherent in most methods. Following the analysis in , we average the valid band-filtered EEG signals across all trials for each subject. First, the filtered EEG functional observations are fitted using an L-dimensional cubic B-spline basis, with the GCV method used to choose the optimal dimension parameter L. The smoothed functions are then each decomposed by the M-truncated Karhunen-Loève expansion. Unlike in the simulation studies, the CV method here always selects the highest value in the search grid as the harmonic number M, which leads to a very high-dimensional Karhunen-Loève basis coefficient vector and makes the subsequent mixture model analysis too difficult. As the FPCA shows that six principal components already explain more than 90% of the total variation in the signal trajectories for each node, we fix M = 6 as the truncation number for the Karhunen-Loève decomposition. The multivariate Karhunen-Loève basis coefficient (principal component score) vectors a_i^M with M = 6 are thus obtained for further mixture analysis assuming Gaussianity. As in the simulation studies, we compared our MFGM-fglasso method with the MFGM-ADMM and mixggm algorithms. Again, the Mclust function from the R package mclust is applied to the multivariate principal component score vectors to initialize the EM algorithm. For the tuning parameter selection, values from 0.8 to 2.5 with an increment of 0.1 were tried; the optimal values were λ_1 = 2.2 and λ_2 = 2.4. Table 3 reports the clustering results of the three algorithms. Our proposed MFGM-fglasso method performed best at finding two groups, with Group 1 consisting mostly of control subjects and Group 2 consisting mostly of alcoholic subjects. Both MFGM-ADMM and mixggm found less distinctive groups than our proposed method. Fig 3 depicts the brain node connection structures estimated in each clustered group by the three methods. Our MFGM-fglasso method reveals that, in both subgroups, the electrode locations in the frontal region are densely connected, while the electrode locations in other regions of the scalp tend to be only sparsely connected. This is consistent with the findings reported by a functional graphical model study that analyzed the same EEG dataset . Notably, while Qiao et al. applied a functional graphical model separately to each true group, our approach analyzes data from both groups together, simultaneously uncovering brain connectivity patterns and identifying the heterogeneous subgroups within the data.
We also notice that the node connection structure in the frontal region of the alcoholic subgroup has an asymmetric pattern, compared with a symmetric pattern in the control group, which echoes the findings of . In addition, the region around the Fz electrode has slightly more connections with adjacent regions in the alcoholic subgroup than in the control group, whereas the region around the Cz electrode has fewer connections with adjacent regions in the alcoholic subgroup than in the control group. Moreover, very sparse connections in the lower left temporal region and the occipital region are revealed in the alcoholic subgroup, compared with none in the control group. The MFGM-ADMM algorithm also shows a distinction between the two subgroups: very dense regional connections are found all over the brain in the control group, whereas very sparse regional connections appear in the alcoholic subgroup except in the occipital region and the lower temporal regions. These findings do not align with the previous findings in the EEG study , which may suggest that the partial separability assumption of the MFGM-ADMM algorithm is not valid for this EEG data analysis. Finally, the mixggm algorithm estimates extremely dense regional connections in both subgroups, which again does not align with previous studies. This might be because taking the average of the observations across the time interval for each node, thereby ignoring the inherent functional nature of the data, is invalid for EEG data analysis. To sum up, our MFGM-fglasso method outperforms the two competing methods in this real-world EEG data analysis, both in finding two distinct groups, one representing the control group and the other the alcoholic group, and in estimating the heterogeneous brain connectivity patterns. The main strength of our method lies in integrating mixture models with functional graphical models, which allows us to simultaneously detect heterogeneous subgroups within a population and estimate graph structures based on global correlation patterns. The promising performance of our approach is demonstrated through carefully designed simulation studies and its application to an EEG dataset studying alcoholism. The simulation results also reveal that ignoring the functional structure of the data leads to suboptimal performance, and imposing the partial separability assumption on the precision matrix is similarly ineffective. Our model assumes that the functional variables jointly follow a p-dimensional multivariate Gaussian process. If this assumption does not hold, alternative methods, such as copula Gaussian graphical models or nonparametric approaches, may be considered. Additionally, while we assume the number of clusters is known a priori, this is not always the case in practice. If the true number of clusters is unknown, model selection criteria such as BIC or the Integrated Classification Likelihood (ICL) can be used. However, due to the complex functional structure of graphical models, it remains unclear how to accurately compute the effective degrees of freedom for BIC . Our method is also well suited to estimating heterogeneous dependencies in human brain functional magnetic resonance imaging (fMRI) data and identifying subpopulations with shared brain connectivity patterns. For example, it can be applied to the ADHD-200 Global Competition dataset , which contains 776 resting-state fMRI scans from eight independent imaging sites.
This dataset includes 491 scans from typically developing individuals and 285 from children and adolescents with Attention Deficit Hyperactivity Disorder (ADHD). Moreover, our method is applicable to functional genomics, particularly in the analysis of gene expression data during disease progression, where patients may come from diverse backgrounds. Gene expression data are often represented as functional curves, with each gene’s expression measured at multiple time points. Our approach can uncover heterogeneous dependencies among genes within different patient subgroups, allowing for the identification of distinct gene interaction networks that evolve as the disease progresses. We introduced the MFGM method, which combines mixture graphical models with functional data analysis (FDA) to generalize mixture graphical models from a vector-based to a functional context. Our MFGM method leverages an efficient EM algorithm that solves the log-likelihood maximization problem with a penalty, enabling the estimation of graphical model parameters for each subgroup. Additionally, we incorporate the fglasso algorithm within the EM framework to estimate the precision matrix. We believe that our approach, which not only clusters functional observations into subgroups but also uncovers heterogeneous conditional dependencies within each subgroup, significantly advances the methodology of high-dimensional graphical models. The proposed method has the potential to expand the applicability of graphical models to a variety of complex data types, such as functional genomics, brain imaging, and longitudinal health data. By enabling more accurate modeling of heterogeneous dependencies, our method offers valuable insights into the underlying structures of high-dimensional data that are often missed by traditional methods. Looking ahead, there are several promising avenues for future research. For example, extending our method to non-Gaussian settings could broaden its applicability, while further advancements in the selection of the optimal number of clusters could enhance model accuracy. Additionally, integrating our approach with other advanced machine learning techniques could improve its performance and scalability in real-world applications. Ultimately, our method provides a novel strategy for analyzing complex functional data, offering new possibilities for understanding the intricate dependencies within high-dimensional datasets in various scientific and clinical fields.
|
Other
|
biomedical
|
en
| 0.999996 |
PMC11694979
|
The under-five mortality rate refers to the probability that a newborn will die before reaching five years of age, expressed per 1,000 live births. About 4.9 million children under five years die annually, translating to 13,400 children dying daily before they reach 5 years. This is happening despite global progress in reducing under-5 mortality . Globally, under-5 mortality has declined by 60%, from 93/1,000 live births in 1990 to 37/1,000 live births in 2022. In Uganda, the under-5 mortality rate has dropped from 182.1 deaths per 1,000 live births in 1990 to 45.8 deaths per 1,000 live births in 2019 . Most of these deaths are from preventable causes like pneumonia, severe acute malnutrition, diarrheal diseases, and malaria, in addition to newborn-related issues like prematurity and birth asphyxia . While numerous studies have documented the predictors of overall in-hospital mortality in children, there is limited research specifically focused on the first 24 hours of admission, a period that is crucial for survival. In particular, the clinical and demographic characteristics of children who die within this critical period are not well documented in settings like ours in South Western Uganda. Although delays in emergency care have been implicated in these deaths, there is a significant gap in understanding all the contributing factors . Variables such as illness severity, comorbidities like severe malnutrition, and HIV status have been linked to in-hospital mortality but have not been rigorously studied as predictors of mortality within the first 24 hours of admission in our context . Studies of overall in-hospital mortality may not bring out the fine degree of granularity required to study the patterns of children under five who die in hospital. Moreover, more than 50% of these deaths happen in the first 24 hours of hospital admission . These deaths are at times due to several factors, including delays in seeking healthcare, inadequate healthcare interventions, financial limitations, lack of life-saving equipment, and insufficient support services, among others . Health workers in these settings often operate with minimal training and supervision, further complicating the delivery of critical care. This study therefore aims to describe the patterns and predictors of mortality within the first 24 hours of admission among children aged 1–59 months at a tertiary facility in South Western Uganda. Data generated from this study could guide the development of targeted interventions, improve early triage and management protocols, and ultimately reduce under-5 mortality. We conducted a prospective cohort study among children aged 1–59 months admitted to the Paediatrics ward of Mbarara Regional Referral Hospital (MRRH) from 13 July 2022 to 25 October 2022 and enrolled 208 participants. MRRH is a regional referral hospital situated in Mbarara City in southwestern Uganda, about 260 kilometers from Kampala, the capital city of Uganda. It is also a teaching hospital for Mbarara University of Science and Technology (MUST) and other tertiary health training institutions in the region. MRRH receives patients from 13 districts in southwestern Uganda and the neighboring countries of Rwanda and the Democratic Republic of Congo. The Paediatrics ward of MRRH admits on average 5,000 children annually.
Children who met the eligibility criteria were included in the study. To calculate the sample size for the predictors associated with mortality within the first 24 hours of admission, we used OpenEpi, an online epidemiological and statistical calculator. Based on a similar study done in Muhimbili Hospital in Tanzania, where the significant predictor of mortality was severe acute malnutrition as compared with normal nutrition status (24.4% vs. 10.8% vs. 9.0%, p < 0.001), our required sample size was 208 (see the sample-size sketch at the end of this passage). We consecutively sampled all children who met the study criteria until the desired sample size was achieved.

All critically ill children who had been triaged, admitted, and received emergency care were approached by a trained research assistant. The research assistant, who was not part of the emergency admission team, then screened the admitted children to check if they met the eligibility criteria, explained to the caretakers the purpose of the study, and obtained written informed consent from the caretakers who agreed to participate in the study. The Principal Investigator, who is a practicing clinician in the Paediatrics ward, then ensured that all the requisite study examinations and investigations were done. Consent and recruitment were done after the children had received the appropriate emergency treatment and care. The study team did not directly intervene in the care plans of the children or the investigations. All admitted children aged 1–59 months who met the eligibility criteria were enrolled in the study after they were stabilized and followed up until the outcome, which was either dead or alive at 24 hours post-admission.

Pre-admission data were collected on the time from onset of symptoms to contact with a health worker, previous diagnosis, home remedies, previous treatments, household status, and current referral status. The time of arrival and time of review by health care workers, immunization status, presenting complaint, treatment instituted at home, working diagnosis, investigations requested, treatment received, and duration of stay before the outcome were also recorded, as were any previous illnesses and any emergency care given to the child. The clinical signs (respiratory rate, blood pressure, axillary temperature, level of consciousness, pulse oximetry, and heart rate) were measured and interpreted for age and sex. Anthropometry was done and converted to weight-for-age, weight-for-length/height, and height-for-age using the World Health Organization Child Growth Reference Standards. A comprehensive systemic exam capturing emergency and danger signs such as reduced level of consciousness, excessive vomiting or diarrhea, seizures, poor feeding, and difficulty in breathing was recorded on admission as well. Investigations done at admission included a complete blood count, random blood sugar, HIV screening, and a blood slide for malaria parasites. All laboratory investigations were done at the MRRH main laboratory. The Principal Investigator double-checked all collected data for completeness and rectified any errors or omissions while the child was still in the Emergency Room. The study met all the costs of investigations that were charged a fee or were not available in the hospital laboratory at any time during the study. All children admitted were managed by the clinical care team based on standard treatment protocols for the different conditions.
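For illustration, the following minimal sketch reproduces the standard normal-approximation formula for comparing two proportions that calculators such as OpenEpi implement. The exact settings behind the published figure of 208 (power, allocation ratio, continuity correction, allowance for attrition) are not stated in the text, so the numbers here are indicative only.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Two-sided sample size per group for comparing two proportions
    (normal approximation, no continuity correction)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (za * (2 * pbar * (1 - pbar)) ** 0.5
           + zb * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Case fatality with severe acute malnutrition vs. normal nutrition
# in the Tanzanian study cited above (24.4% vs. 9.0%).
print(round(n_per_group(0.244, 0.090)))  # per-group n; roughly double for total
```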
The diagnoses were based on those made by the admitting team, of which the Principal Investigator was not part. For any death within 24 hours of admission, the study team provided psychosocial counseling to the caretaker and family members present at the time of death. Autopsies were not done to determine the cause of death, and the study used the diagnoses at admission as proxies.

For the blood tests, 2 ml of blood was drawn under sterile conditions during the insertion of an intravenous cannula or a venipuncture. The area was swabbed with cotton dipped in 70% ethyl alcohol. The blood was collected in an EDTA tube for a complete blood count. A thin smear for malaria was made in the laboratory for microscopic analysis. Blood smears for malaria parasites were stained with Giemsa and examined by a laboratory technician using an Olympus CX21F21LED microscope. For HIV tests, pre-test counseling was done, and a standard testing algorithm was used. Children under 18 months with positive rapid tests were referred for DNA PCR tests in the HIV clinic at MRRH. For the urine dipstick, the urine sample was analyzed within an hour of collection using Chemstrip-10A urine test strips. The random blood sugar was measured at the bedside using a OneTouch-type glucometer. The CBC was analyzed using a Sysmex Automated Analyzer X5-10001. The laboratory results obtained were shared with the clinical team as soon as they were received.

The outcome variable was death within the first 24 hours of admission. The independent variables were pre-hospital factors such as maternal age, education level, income status, parity and birth order of the child, past medical history of the child, cultural and spiritual factors, and distance from home to hospital. The hospital factors were the child's vital signs and anthropometry, laboratory findings, emergency care given, quality of reviews, diagnosis, and severity of illness. We used a questionnaire to collect pre-hospital data from the caretakers of the children and a data abstraction tool for the demographic, anthropometric, clinical assessment, and laboratory data from the patients' records. All the children who were admitted were screened by a trained Research Assistant to check if they met the eligibility criteria. The Principal Investigator, who is a clinician in the Paediatrics ward, then reassessed the patients, collected the samples for basic investigations, and ensured that emergency care plans had been effected by the clinical care team. The Research Assistant enrolled those patients whose caretakers had consented to participate in the study. Consent and recruitment were only done after participants had received the appropriate emergency care and had been stabilized.

We collected and managed the data using Research Electronic Data Capture (REDCap™), a secure web-based data capture platform used for research studies. Each Research Assistant used an Android tablet with pre-installed REDCap to collect data. We checked all the collected data for completeness immediately after each recruitment. The data were cleaned and transferred to STATA version 14 for analysis. For the first objective of describing the patterns of mortality (the time of death, the age groups of children under 5 dying, and the conditions causing or contributing to their death), we used descriptive statistics such as means, proportions, and modes and summarized the data in tables and graphs. The Gaussian assumption was assessed using the Shapiro-Wilk test and histograms.
Where the data were not normally distributed, the median and interquartile range (IQR) were calculated. The chi-squared test was used to compare the distribution of mortality within 24 hours of admission across the categorical variables. For the second objective, bivariable analysis was done using Cox regression to identify crude associations between the exposure variables and the outcome. The measure of association between the exposure variables and the outcome (mortality) was the hazard ratio, together with its 95% confidence interval and p-value. Predictors with crude hazard ratios whose p-value was less than 0.2, and those that were biologically plausible, were then included in the multivariable Cox regression model to adjust for confounding. Predictors with adjusted hazard ratios having a p-value < 0.05 were considered statistically significant in the final multivariable model (a code sketch of this two-stage approach follows this passage).

A screening log was kept based on the daily hospital admissions. A study management Standard Operating Procedures (SOP) tool was designed to monitor the entry and exit of patients into the study. All these tools were reviewed by the Principal Investigator, Research Supervisor, and Biostatistician. The tools were pre-tested before use to ensure that they were accurate and collected data relevant to the study objectives. Anthropometric measurements, clinical examinations, and sample collection were done by the Principal Investigator and a trained Research Assistant. Weights and heights were measured using the same weighing scale and height board, respectively, for all the patients. The weighing scales are routinely calibrated and standardized by the hospital maintenance department. All children underwent the same procedure for laboratory investigations. All the tests were done at the Mbarara Regional Referral Hospital laboratory, which is certified to do these tests and runs daily internal quality controls and quarterly external proficiency quality control tests.

Mbarara University of Science and Technology Research Ethics Committee (MUST REC) provided ethical clearance to conduct this study. Written informed consent to participate in the study was obtained from the caretakers of all the patients included in the study after they had received emergency care. We did not use any emergency room consent or deferred consent. Names and identifying information were not used, and unique study codes were generated for each participant. Emergency care was given to all patients before recruitment. The laboratory test results were shared with the clinical team and caretakers on the ward for timely patient care.

During the study period, a total of 438 children older than 28 days were admitted to the ward, excluding cancer patients. Of these, 242 were aged 1–59 months. We excluded 34 children and enrolled and analyzed results for a total of 208 children aged 1–59 months. Five of the 208 children were readmissions, and each readmission was treated as a different event. The median age of participants was 13.0 (IQR 5.7–31.6) months. About one-third of the children had visited another facility before coming to MRRH. We admitted about one-third of the children during the weekend. Half of all children received emergency care within 30 minutes of arrival. More than one-third had had symptoms for more than 48 hours before seeking health care. One-third of the children had a fever and an altered level of consciousness.
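As a sketch of the two-stage Cox screening strategy described above, the following Python code uses the lifelines package; the variable names are hypothetical stand-ins for the study's candidate predictors, and predictors are assumed to be numeric or dummy-coded.

```python
from lifelines import CoxPHFitter

def two_stage_cox(df, candidates, duration_col="hours_to_event",
                  event_col="died_24h", screen_p=0.20):
    """Bivariable screening followed by a multivariable Cox model.

    df: DataFrame with one row per child, a follow-up time column, an
    event indicator, and numeric/dummy-coded candidate predictors.
    """
    kept = []
    for var in candidates:
        cph = CoxPHFitter()
        cph.fit(df[[var, duration_col, event_col]],
                duration_col=duration_col, event_col=event_col)
        if cph.summary.loc[var, "p"] < screen_p:  # crude-HR screen at p < 0.2
            kept.append(var)
    final = CoxPHFitter()
    final.fit(df[kept + [duration_col, event_col]],
              duration_col=duration_col, event_col=event_col)
    return final  # final.summary holds adjusted HRs, 95% CIs, and p-values

# Example with hypothetical column names:
# model = two_stage_cox(data, ["night_admission", "abnormal_neutrophils",
#                              "fever", "sam"])
# print(model.summary[["exp(coef)", "exp(coef) lower 95%",
#                      "exp(coef) upper 95%", "p"]])
```

Biologically plausible predictors would be appended to the screened list by hand before the final fit, matching the selection rule described above.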
Half of the children had a normal nutritional status and less than a third were severely malnourished. One-third of the children had an abnormal random blood sugar, low hemoglobin, and a high total white cell count. The most common diagnoses at admission were severe pneumonia (55, 26.44%), severe acute malnutrition (SAM) (49, 23.55%), sepsis (24, 11.54%), congenital heart disease (15, 7.21%) and malaria (12, 5.77%). Among children with SAM, 9.1% had edema. The common causes of mortality were severe pneumonia (3, 18.75%) and severe acute malnutrition (2, 12.5%), with case fatality rates of 5.5% and 4.1%, respectively.

The overall mortality rate was 7.7% (95% CI 4–12) and the median time from admission to death was 7.3 (IQR 2.62–8.75) hours. More than two-thirds of the children died within 12 hours of admission. By age category, 9 (56.3%) of the deaths occurred among children aged 1–11 months and 7 (43.8%) among children aged 12–59 months. There was no sex difference among the children who died. Mortality was higher (13, 81.3%) among children who sought care more than 48 hours after the onset of symptoms. Most children who died had visited another facility before coming to MRRH (11, 68.75%), and the majority had been to private facilities (7, 43.8%). Most deaths occurred during the day shift (10, 62.5%); however, the majority (11, 68.8%) had been admitted during the night shift. Ten (62.5%) of the children who died had received emergency care within 30 minutes of admission. Most of the children who died were admitted on a weekday (13, 81.3%). Severe pneumonia (3, 18.75%) and severe acute malnutrition (2, 12.5%) were the commonest causes of death. On multivariable analysis using stepwise regression and adjusting for confounders, the hazard of death was higher with admission during the night shift (AHR 3.7, 95% CI 1.02–13.53, p = 0.047) and with an abnormal neutrophil count (AHR 3.5, 95% CI 1.10–11.31, p = 0.034).

The mortality rate within 24 hours of hospital admission among children aged 1–59 months in this study was 7.7%. This is still higher than the SDG and WHO target of below 25/1,000. This high rate means that the emergency care offered to children who present to our units needs to be improved. Many such children may present in very critical condition, have multiple comorbidities, and be difficult to resuscitate. Studies in Uganda and Nigeria have also found rates between 5–10%. However, these rates are still higher than global estimates of 4.3%. Many high-income countries have rates of 0.5–1.5%, indicating more robust and better-developed emergency care. Many LMICs still face challenges in their healthcare systems, such as late referrals, reduced health worker coverage at night, and inadequate emergency supplies. Our rate is, however, much lower than in other African countries, where 24-hour mortality rates ranged from 30% to 67%. The studies by Lahmini et al. and Jofiro et al. in Morocco and Ethiopia, respectively, included neonates, and neonatal mortality disproportionately increased the mortality rate. The higher rates observed across these studies could also be due to differences in study design: these studies used a retrospective design and could have missed a lot of vital data. The Department of Paediatrics at MRRH needs to steadily improve its emergency care by increasing the number and expertise of emergency admission teams and creating a better-equipped emergency room.
The median time to death in our study was 7 hours, and only one child died within an hour of admission. This highlights a gap in the emergency care given to children under 5 in low-resource settings, since 7 hours should be ample time to provide emergency care and institute longer-term care plans. There is a lack of literature on time to death within 24 hours of admission for children aged 1–59 months. However, a study done in Morocco reported a median time to death of 12 hours, with 10% of the children dying within the first hour of admission. That study recruited newborns, who have higher odds of dying, commonly due to prematurity, hypothermia, infections, and birth asphyxia.

Our study also showed more deaths among infants than among toddlers. Infants have an immature immune system compared with toddlers, making them more susceptible to severe infections. It is also more difficult for caregivers to identify danger signs among infants, as infants cannot express themselves the way toddlers can. This delays their access to health care, leading to deterioration and increasing the chances of death within 24 hours of admission. This finding means that keen attention should be given to infants, as their illness manifestations may differ from those of older children and the severity may not be easily recognized by an untrained person. This is similar to other studies done in Uganda, Morocco, and Ethiopia, where infant mortality rates were between 2–12%.

More children with fever died compared with those without fever. Fever is a marker of acute or chronic inflammation following pathogenic invasion, and fever patterns and grade indicate an infection's severity and chronicity. In sub-Saharan Africa, where infectious diseases like pneumonia, malaria, and diarrheal illnesses are very common, many children present with fever and are likely to deteriorate and die shortly after admission. This has been reported in several studies in sub-Saharan Africa, where high-grade temperatures are a major presenting complaint.

The most common causes of death in our study were pneumonia and severe acute malnutrition; other infectious diseases like malaria and diarrheal diseases caused fewer deaths. These deaths were mainly complicated by HIV, sepsis, and shock, among other comorbidities. This is comparable to studies done in Uganda and other LMICs with a similar burden of infectious diseases. Malnutrition reduces cell-mediated immunity and the humoral response, making children more susceptible to other infections like pneumonia. Many of these children also present late with septic shock and multiple organ dysfunction. However, this differs from studies done in high-income countries, where noninfectious causes of hospital admission are dominant. The difference in the burden of disease between these HICs and LMICs like ours calls for comprehensive assessment, investigation, and management of comorbid conditions. It also shows that many children may present in very critical condition with multiple comorbidities and may be difficult to resuscitate.

Our study found an almost 4-fold increase in the likelihood of death within 24 hours of admission for children admitted at night compared with those admitted during the day. More than half of the children died during the day but had been admitted during the night. The night duty cover is usually provided by a few junior house officers, who may not be experienced and skilled enough to make appropriate critical decisions and provide emergency resuscitation and care.
Late in the night, specialists and Senior House Officers are not physically present but can be consulted by phone. This may not work in life-threatening emergencies that require meticulous evaluation and critical interventions to save a life, resulting in deterioration and irreversible organ damage and leading to death during the day. Many of the night admissions could also be late arrivals from referral facilities or severely ill children. Similar observations have been made in studies done in Tanzania and Brazil. There is usually a smaller number of critically skilled staff working at night, and it is also difficult to obtain emergency supplies and sundries then. However, a study done in India found no difference in mortality between day-time and night-time admissions, which was attributed to the strict treatment protocols in their Pediatric Intensive Care Units. Another systematic review and meta-analysis also found no difference in mortality between day-time and night-time admissions. On the contrary, a study done in Nigeria found more deaths among children admitted during the day, which was explained by uncoordinated changes in nurses' shifts that compromised care. In our Paediatrics department, there has been an improvement in the handover of patients during shift changes and in the use of a notice board on which all priority patients are listed.

We also found a 3.5-fold increase in the hazard of death among children who had an abnormal neutrophil count. Neutrophils, as part of innate immunity, respond to pathogens and protect the body from infections alongside other components of the immune system. A high or reduced neutrophil count is associated with depressed activity of other immune cells, such as T-lymphocytes and natural killer cells, in response to acute or chronic inflammation, indicating an overwhelmed immune system that can no longer protect the body. Such responses are even worse in infants and neonates with immature immune systems. In LMICs, where bacterial infectious diseases are responsible for many hospital admissions, an abnormal neutrophil count strongly predicts early mortality. This has also been found in other studies conducted in sub-Saharan Africa.
PMC11694981
Emotions are a fundamental aspect of human experience, influencing how individuals perceive and interact with their surroundings. Considering that emotions are a constant aspect of human life, it is evident that individuals hold beliefs about their origins, purpose, and regulation. These emotion beliefs are thought to shape how individuals understand and manage their emotions, with emerging research suggesting they may serve as a key mechanism in emotion regulation. Given that difficulties in emotion regulation are linked to the development and maintenance of psychopathology, an understanding of how emotion beliefs contribute to these regulatory processes could offer valuable insights for improving mental health outcomes.

Adaptive emotion regulation, defined as the up- and down-regulation of positive and negative emotions depending on regulatory goals, is associated with better mental health [4, 6–8]. To promote better mental health, it is important to examine which factors lead to more successful emotion regulation. According to Gross' extended process model, emotion regulation consists of four stages: identification, selection, implementation, and monitoring. During the identification stage, an individual decides whether to regulate the identified emotion; during the selection stage, an individual decides on the regulation strategy; during the implementation stage, an individual implements the previously chosen emotion regulation strategy; and during the monitoring stage, an individual evaluates whether the implementation was successful. Deficiencies at each stage can lead to maladaptive or unsuccessful emotion regulation. One such factor is beliefs about emotions [9–13]: superordinate beliefs about emotions (good versus bad; controllable versus uncontrollable) influence subsequent emotion regulation by affecting effort and performance at all stages of Gross' process model of emotion regulation. While there has been a growing body of research on the impact of malleability beliefs about emotions on emotion regulation (i.e., whether emotions are controllable) [9, 11, 13, 15–18], only few studies have examined additional beliefs about the utility of emotions [19–22], their links to psychopathology [12, 23–25], or emotion-specific control beliefs. In general, beliefs about the (un-)controllability of emotions have been associated with social anxiety, schizophrenia-spectrum disorders, psychological stress, depressive symptoms, and poorer mental health.

The growing body of research on beliefs about emotions has produced multiple terms to describe individuals' assumptions about their own emotions or emotions in general: implicit theories of emotions, emotion mindsets, and emotion beliefs. All current labels have in common that they refer to how people explicitly state their perspectives on emotion (regulation) in self-report measures. At the same time, it has been found that differences exist between general emotion beliefs and personal emotion beliefs, with personal emotion beliefs having a greater impact on subsequent emotion regulation than general beliefs. As Kneeland and Kisley stated, assessing multiple emotion perspectives within the same sample will help clarify the redundancy and differentiation between the constructs of implicit theories (see ITES) and beliefs (see EBQ). This goes beyond Becerra et al.'s approach of comparing existing measures of beliefs about emotions. Presenting the Emotion Beliefs Questionnaire (EBQ), Becerra et al.
stated three goals which laid the foundation of their approach: a questionnaire assessing beliefs about emotions should assess the controllability and usefulness of emotions separately, should assess beliefs about emotions in general, and should provide valence-specific assessment of beliefs about the controllability and usefulness of both positive and negative emotions. The authors argued that only by considering all three criteria could beliefs about emotions be assessed properly and thus exceed the assessment of emotion beliefs with existing measures such as the Implicit Theories of Emotions Scale, the Beliefs about Emotions Scale, or the Attitudes Toward Emotions Scale. The EBQ consists of 16 items which were developed based on the theoretical foundations by Ford & Gross. The EBQ includes four theoretical factors on the controllability and usefulness dimensions across positive and negative emotions. However, in its initial validation in an online sample of 161 middle-aged Australian adults, confirmatory factor analyses resulted in three instead of the assumed four factors: General-Controllability, Negative-Usefulness, and Positive-Usefulness. Fit indices showed moderate fit, and internal consistency reliability was good. Male participants showed, on average, more maladaptive beliefs about emotions than female participants. To inspect construct validity, measures included the Implicit Theories of Emotions Scale (ITES), the Emotion and Regulation Beliefs Scale (ERBS), the Beliefs about Emotions Scale (BES), the Perth Emotion Regulation Competency Inventory (PERCI), and the Depression Anxiety Stress Scales-21 (DASS-21). In accordance with the assumptions, the ITES was moderately linked to the General-Controllability subscale, whereas both usefulness subscales were not associated with the ITES. Further, all EBQ subscale and composite scores were significant predictors of most of the PERCI total scores, with the General-Controllability subscale being the strongest predictor of both positive- and negative-emotion regulation. Examining its links with markers of psychopathology, the EBQ total score showed strong associations with all DASS-21 subscales. At the EBQ subscale level, only the General-Controllability subscale significantly predicted the DASS-21 depression and stress subscales, but not the anxiety subscale. Additionally, EBQ scores showed incremental predictive value beyond that of the ITES when predicting psychopathology and emotion regulation.

Since its first presentation and validation, the EBQ has proven popular and has since been validated in an Iranian and a US sample of adolescents and adults, an Italian sample of adults, two Japanese samples of university students, and a German sample of adults. We will first give an overview of these studies before laying out why another validation study is needed. In a sample of 104 German psychotherapists in training (young adults, predominantly female), Biel et al. used a different translation from the one presented here and replicated the three factors found by Becerra et al.: General-Controllability, Negative-Usefulness, and Positive-Usefulness. They applied an exploratory factor analysis to try to replicate the three-factor structure and therefore obtained no fit indices. The resulting three factors showed acceptable internal consistencies. The EBQ total score was significantly associated with all DASS subscales as a measure of psychopathology. Further, higher EBQ total scores were linked with less emotional acceptance.
Complementing the limitations stated by the authors, namely the small sample size and the unbalanced male-female ratio, both the missing fit indices and the limited range of measures to assess construct validity restrict the interpretability of the findings and the conclusions that can be drawn for further studies. Applying their translation of the EBQ in a prospective study, beliefs about emotions (EBQ) predicted psychological stress related to somatic symptoms two weeks later, over and above previous psychological stress, in three samples, thus underlining the EBQ's predictive validity.

In three large samples of Iranian adolescents, Iranian adults, and American adults, Ranjbar et al. examined the factor structure of the EBQ using confirmatory factor analyses. Comparing multiple models, the 4-factor model (Negative-Controllability, Positive-Controllability, Negative-Usefulness, Positive-Usefulness) had the best fit with the data in all three samples. Internal consistency reliability was moderate, with the highest scores in the subsample of American adults. Retest reliability was reported to be moderate across all samples and scores. Construct validity was assessed using the ITES, PERCI, and DASS-21. Consistent with Becerra et al., higher EBQ scores were moderately associated with higher ITES scores. However, in contrast to Becerra et al., the EBQ usefulness subscales correlated significantly with ITES scores, albeit less strongly than the controllability subscales. Also, stronger beliefs in the controllability and usefulness of emotions were associated with better emotion regulation abilities, assessed with the PERCI. Finally, greater overall maladaptive beliefs about emotions correlated significantly with higher levels of depression, anxiety, and stress, assessed with the DASS-21.

In 516 Italian adults, Rogier et al. examined the construct validity of the EBQ using measures of emotion dysregulation (DERS, DERS-Positive) and psychopathology (DASS-21). Applying confirmatory factor analysis, the 4-factor model (Negative-Controllability, Positive-Controllability, Negative-Usefulness, Positive-Usefulness) fitted best, with good internal consistency reliability. Full measurement invariance was reached for gender; however, the authors did not examine mean differences in their sample. Comparable to the findings of Becerra et al., the EBQ controllability subscales were most predictive: weaker beliefs in the controllability of emotions were associated with greater difficulties in emotion regulation, assessed with the DERS. Associations between the DASS-21 and the EBQ were found for all subscales except the link between the EBQ Positive-Usefulness subscale and the DASS-21 Stress subscale. Apart from administering the EBQ and DASS-21, Rogier et al. did not use the same measures as Becerra et al., thus limiting conclusions about concurrent validity.

The previous validations have supported the EBQ's construct and concurrent validity. As the German validation lacked a comprehensive view of the EBQ's links with measures of emotion (dys-)regulation, beliefs about emotions, and psychopathology in a large sample, the aim of the present study was to examine (a) the factorial structure and (b) the construct validity of the German EBQ in a large sample of young adults. By using multiple measures to validate the EBQ, we provide a broader picture of the EBQ and its capacity to measure beliefs about emotions.
In exploratory analyses, the objective was (c) to identify any differences in emotion beliefs as a function of gender, emotional reactivity, and emotional self-efficacy.

The sample was recruited through convenience sampling. A digital survey was created using the SoSciSurvey platform and promoted through university mailing lists. Participants were encouraged to recruit further participants and could receive course credit for doing so. The survey ran from 9 June to 2 September 2023. After an initial presentation of the study's aims and procedure as well as information about data protection, all participants provided written informed consent for participation. In the case of underage participants, the written consent of their legal guardians was obtained. Participants were then presented with a battery of self-report questionnaires. The study was approved by the local ethics committee. The sample size for the confirmatory factor analysis was calculated in accordance with Kyriazos, which resulted in a required sample size of at least 320 participants. To buffer against dropouts due to careless responding, discontinuation of the study, or similar, a sample size of 350 participants was aimed for. The final sample consisted of 348 participants with a mean age of 27 years (SD = 10.14, range 14–61). Two hundred and thirty-two respondents (66.66%) identified as female, 111 as male (31.90%), and 5 as diverse (1.44%).

The measurement instrument to be validated in the present study is the German translation of the Emotion Beliefs Questionnaire (EBQ), a self-report questionnaire assessing beliefs about the controllability and usefulness of emotions. The 16 items (e.g., "Once you feel negative emotions, you can no longer change them.") assess the controllability and usefulness of positive and negative emotions. Answers were given on a 7-point response scale, ranging from 1—strongly disagree, through 4—neither, to 7—strongly agree. Higher scores indicate more maladaptive beliefs about emotions.

To locate the four facets of the German-language adaptation of the EBQ in a nomological network and assess its convergent and discriminant validity, we investigated its relations to a set of key constructs of emotion processing and perception. Our goal in including this broad range of correlates was to explore the nomological network of emotion beliefs, including measures of emotion regulation and processing, reactivity to and self-efficacy with emotions, and clinical dimensions such as anxiety and depression. We selected certain emotion processing constructs because they were also the focus of Becerra et al.'s original validation study of the EBQ, allowing direct comparisons with our results. Further, we expanded the construct validation of the German EBQ by including more independent constructs of emotional processing, such as sensitivity and reactivity to emotional experiences and perceived self-efficacy in regulating and processing emotions. Lastly, we included clinical measures of emotion processing to allow a first exploration of the consequences of differing emotion beliefs in an applied context.

Emotion regulation. We measured emotion regulation via the three most prominent existing measures. The Difficulties in Emotion Regulation Scale (DERS) measures difficulties in ER.
A five-point Likert scale is used to indicate how frequently the situation presented in an item is experienced by the respondent (1—almost never, 0–10%; 2—sometimes, 11–35%; 3—about half the time, 36–65%; 4—most of the time, 66–90%; 5—almost always, 91–100%). The higher the DERS score, the more pronounced the difficulties in emotion regulation. The German translation for use with adolescents has shown good reliability and validity. In the present study, the short version DERS-18 was used, together with all items of the original subscale "Limited access to emotion regulation strategies" (e.g., "When I'm upset, my emotions feel overwhelming."), in order to establish comparability with other studies that used this subscale to measure self-efficacy in emotion regulation. The Emotion Regulation Questionnaire (ERQ), German translation by Abler and Kessler, uses ten items to measure the emotion regulation strategies of reappraisal (e.g., "When I want to feel less negative emotion (such as sadness or anger), I change what I'm thinking about.") and suppression (e.g., "I control my emotions by not expressing them.") on a seven-point scale, ranging from 1—not true at all to 7—completely true. The Perth Emotion Regulation Competency Inventory (PERCI) uses 32 items to measure the ability to regulate positive (e.g., "I don't know what to do to create pleasant feelings in myself.") and negative (e.g., "When I'm feeling bad, I'm powerless to change how I'm feeling.") emotions. The items are answered on a 7-point Likert scale, from 1—strongly disagree, through 4—neither agree nor disagree, to 7—strongly agree, with higher values indicating greater difficulties in emotion regulation.

Emotional reactivity. We assessed emotional reactivity via two prominent scales. The Perth Emotional Reactivity Scale (PERS) uses 30 items to measure the activation (e.g., "I tend to get upset very easily"), ease (e.g., "My negative feelings feel very intense"), and duration (e.g., "I can remain enthusiastic for quite a while") of positive and negative emotions on a 5-point Likert scale (1—very inapplicable, 2—rather inapplicable, 3—neither, 4—rather applicable, 5—very applicable). In the present study, the German translation of the PERS by Schnabel and Witthöft (in preparation) was used. The Emotional Reactivity Scale (ERS) uses 21 items to measure the sensitivity (e.g., "My feelings get hurt easily."), intensity (e.g., "When I experience emotions, I feel them very strongly/intensely."), and persistence (e.g., "When something happens that upsets me, it's all I can think about for a long time.") of emotional reactivity on a 5-point Likert scale, ranging from 0—does not apply to me at all, through 1—applies to me a little, 2—applies to me to some extent, and 3—applies to me quite a lot, to 4—applies to me completely. Again, we used the German translation of the ERS by Schnabel and Witthöft (in preparation).

Regulatory emotional self-efficacy. The Regulatory Emotional Self-Efficacy scale (RESE), German version, measures perceived self-efficacy in ER. A five-point Likert scale is used to indicate the extent to which the 10 items apply, from 1—not at all good, through 2—less good, 3—fair, and 4—fairly good, to 5—very good. The higher the RESE score, the higher the self-efficacy in emotion regulation. The RESE includes three subscales measuring perceived self-efficacy in the expression of positive emotions (POS; four items; e.g.,
“Express joy when good things happen to you?”), in dealing with anger (ANG; three items; e.g., “Avoid flying off the handle when you get angry?”), and in dealing with despondency/distress (DES; three items; e.g., “Keep from getting discouraged in the face of difficulties?”), respectively.

Depression, anxiety, and stress. The Depression Anxiety Stress Scale 21 (DASS-21) is a 21-item measure of depression (e.g., “I felt that life was meaningless”), anxiety (e.g., “I felt I was close to panic”), and stress (e.g., “I found it hard to wind down”) symptoms during the last seven days. The items are rated on a four-point Likert scale (from 0—did not apply to me at all, to 3—applied to me very much or most of the time).

All analyses were conducted in the statistical language R, version 4.3.2. Our analyses comprised five steps. First, we analyzed the descriptive statistics and zero-order correlations of all 16 items of the German-language adaptation of the EBQ. We report the mean, median, standard deviation, minimum and maximum, skewness, and kurtosis, respectively. Second, we assessed the factorial validity of the EBQ through confirmatory factor analysis (CFA). We computed five different CFA models via robust maximum likelihood estimation (MLR). One analysis concerned the direct replication of the complete four-factor structure of the EBQ assessed in the original paper. The additional four analyses addressed the factorial validity of each single facet theorized by Becerra et al.'s model of emotion beliefs. Third, we estimated the reliability of the EBQ in terms of internal consistency. We used McDonald's ω complementary to the widely used Cronbach's α because the latter assumes equal factor loadings for all items (i.e., an essentially tau-equivalent measurement model), an assumption that is unlikely to hold for multifactorial models like the EBQ. Fourth, we report the construct validity of the EBQ by correlating the four facets of emotion beliefs with the aforementioned validation constructs. The aim here was to embed the four emotion belief facets in a nomological network spanning a broad range of conceptually close constructs to investigate divergent and convergent validity. Fifth and last, to investigate the measurement generalizability of the EBQ, we tested the scale's measurement invariance across the gender of respondents and across different levels of emotional reactivity and self-efficacy by means of multi-group CFA. In each measurement invariance analysis, we tested four successive levels of measurement invariance: configural invariance (same measurement model), metric invariance (additionally same loadings), scalar invariance (additionally same intercepts), and strict or uniqueness invariance (additionally same residual variances). To decide on the achieved level of measurement invariance, we relied on conventional cutoffs for changes in fit indices when comparing models with different levels of invariance [49–51]. We tested the measurement invariance of the complete model of four facets based on the tau-congeneric model. We also compared the scale scores (manifest unit-weighted mean scores) for each emotion belief facet between genders.

We analyzed the descriptive statistics and reference ranges for the German-language adaptation of the EBQ.
Table 1 shows the mean (M), median (Mdn), standard deviation (SD), minimum (Min) and maximum (Max), skewness, and kurtosis of all 16 single items of the four facets: controllability of positive emotions, controllability of negative emotions, usefulness of positive emotions, and usefulness of negative emotions. For the inter-item correlations of all 16 single items, see S1 Table in the Supporting information. The intercorrelations between the manifest scores of the four facets of the German EBQ were moderate to strong, .30 ≤ r ≤ .62 (see S2 Table in the Supporting information).

Table 2 shows the results of the CFA models for the joint model and for each individual emotion belief facet. Besides inspecting the model chi-square test, we consulted the following fit indices to evaluate model fit: comparative fit index (CFI), Tucker-Lewis index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). All five models, with respect to their complexity, showed a satisfying fit according to conventional guidelines after freeing one covariance between items 11 and 15. Given the complexity of four factors with only four items per factor, we deem TLI = .883 acceptable. Although slightly below the conventional cutoff of .90, this TLI still reflects a reasonable fit when considering that the TLI penalizes more complex models more heavily; with four factors and relatively few items per factor, the TLI is a particularly stringent criterion here. Nonetheless, the overall fit indices, including the CFI and RMSEA, support that the model adequately captures the present data. Further, the complete emotion beliefs model of the EBQ with all four facets showed a fit very similar to the one reported by Becerra et al. for the original English EBQ. To further support the model's fit, we compared it with the fit of all six alternative models proposed and tested by Becerra et al. The four-factor model showed the best fit in this array of alternative models, fitting the data significantly better than the second-best model (Δχ² = 23.44, p < .001; ΔAIC = 17; ΔBIC = 6). In addition, we tested the model fit of the four subscales independently, in case researchers want to assess a selected subfactor on its own. This can be understood as an extension of why researchers also consider the internal consistency of subscales: testing whether items are not only acceptably similar but also load on a single dimension. The four separate models per subscale each showed good fit according to the CFI, TLI, SRMR, and RMSEA. We note that a TLI above 1.00 is not uncommon for small item sets and simple models, which subscales tend to be, given that the TLI was developed by Tucker and Lewis specifically as a Non-normed Fit Index (NNFI). S1 Fig in the Supporting information shows the full path model of the four-factor structure of the German EBQ.

Table 3 shows the reliabilities of the four emotion belief facets of the German EBQ. Internal consistencies (for the facets and item-corrected) ranged from acceptable (α = .68; ω = .70) to good (α = .79; ω = .80).
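For readers who wish to reproduce such reliability estimates, here is a minimal sketch of Cronbach's α and McDonald's ω. Although the analyses above were run in R, the sketch is in Python for illustration; the loading values are hypothetical, and ω is computed under a standardized unidimensional model in which residual variances are 1 − λ².

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    C = np.cov(items, rowvar=False)   # item covariance matrix
    k = C.shape[0]
    return k / (k - 1) * (1 - np.trace(C) / C.sum())

def mcdonald_omega(loadings):
    """omega from standardized loadings of a unidimensional factor model;
    residual variances are 1 - lambda^2 under standardization."""
    lam = np.asarray(loadings, dtype=float)
    theta = 1 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())

# Hypothetical standardized loadings for one four-item EBQ facet:
print(round(mcdonald_omega([0.62, 0.58, 0.71, 0.66]), 2))  # ~0.74
```

Unlike α, ω does not force all loadings to be equal, which is why the two coefficients diverge when items contribute unevenly to a facet.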
Table 4 shows the correlations of the emotion regulation subscales of the DERS-18, ERQ, and PERCI, the emotional reactivity subscales of the PERS and ERS, the facets of perceived emotional self-efficacy (RESE), and depression, anxiety, and stress symptoms (DASS-21) with the four EBQ facets. Closely mirroring recent correlational analyses by Becerra et al., the EBQ facets showed substantial associations with the PERCI facets (.17 ≤ r ≤ .48) and the DASS-21 facets (.12 ≤ r ≤ .31). Further replicating the correlational pattern of the original English EBQ, the two controllability factors of the German EBQ also showed substantially higher associations than the usefulness facets across several validation constructs. Specifically, comparing the factors' average correlations across all validation constructs, we found a descriptive difference between the controllability factor for positive emotions and both usefulness factors, and a statistically significant difference between the controllability of negative emotions and both usefulness factors (p < .05).

We also present relevant associations of emotion beliefs beyond the original nomological network: individuals' perceived controllability of emotions is substantially related to their difficulties in regulating their emotions through different channels, most prominently regarding clarity about emotions (.30 ≤ r ≤ .47) and strategies for dealing with them (.35 ≤ r ≤ .44). We further show that controllability perceptions of emotion are substantially negatively correlated with the suppression of emotions (-.22 ≤ r ≤ -.26) and positively with individuals' ability to reappraise their emotions (.21 ≤ r ≤ .26). Lastly, emotion controllability is substantially related to how sensitive individuals are to emotional experiences and how intensely and persistently they experience them (.18 ≤ r ≤ .35). General beliefs about the controllability of emotions were associated neither with personal expression of positive emotions (-.01 ≤ r ≤ -.14), nor with productive ways of dealing with anger (-.13 ≤ r ≤ -.22), nor with resistance to stress (-.12 ≤ r ≤ -.17). Individuals' perception of the usefulness of emotions showed a more nuanced correlational pattern, with small to moderate associations with emotion regulation as measured by the PERCI (.17 ≤ r ≤ .35) and with the depression, anxiety, and stress subscales of the DASS-21 (.12 ≤ r ≤ .19). We also identified substantial associations between the perceived usefulness of emotions and all facets of regulation difficulties (.12 ≤ r ≤ .31) except the facet representing individuals' awareness of their emotions (.02 ≤ r ≤ .03). In conclusion, we replicated the pattern of correlations from Becerra et al.'s original work, with even nuanced correlational differences within the EBQ construct being reproduced. We also present relevant associations of emotion beliefs beyond the original nomological network, showing that especially controllability perceptions of emotion are substantially associated with emotional self-efficacy and reactivity.

Via multi-group CFA (i.e., using the CFA model tested above under different invariance assumptions of metric, scalar, and strict invariance), we tested the measurement invariance of the German-language translation of the EBQ across genders for the complete emotion belief model and for each of the four facets separately. The results of all five analyses are shown in Table 5.
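To make the model-comparison logic behind Table 5 concrete, the following sketch applies the conventional change-in-fit cutoffs for nested invariance models (a drop in CFI of no more than .010 and a rise in RMSEA of no more than .015, following Chen); the fit values shown are hypothetical, not those from Table 5.

```python
def invariance_step_holds(fit_less, fit_more, d_cfi=-0.010, d_rmsea=0.015):
    """Compare two nested multi-group CFA models (e.g., configural vs.
    metric). The more constrained level is retained if CFI does not drop
    by more than .010 and RMSEA does not rise by more than .015."""
    cfi_ok = (fit_more["cfi"] - fit_less["cfi"]) >= d_cfi
    rmsea_ok = (fit_more["rmsea"] - fit_less["rmsea"]) <= d_rmsea
    return cfi_ok and rmsea_ok

# Hypothetical fit indices for successive invariance models:
configural = {"cfi": 0.951, "rmsea": 0.048}
metric     = {"cfi": 0.948, "rmsea": 0.049}
scalar     = {"cfi": 0.943, "rmsea": 0.051}
print(invariance_step_holds(configural, metric))  # True -> metric holds
print(invariance_step_holds(metric, scalar))      # True -> scalar holds
```

The same comparison is simply repeated once more, scalar versus strict, to decide whether equal residual variances can also be assumed.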
According to the criteria of Chen, Rutkowski and Svetina, and Putnick and Bornstein, all five tested models reached scalar measurement invariance between the genders (i.e., equal factor loadings and intercepts across genders). For the full model and the three facets controllability of negative emotions, controllability of positive emotions, and usefulness of negative emotions, strict invariance can be accepted (i.e., the item residuals are equal across genders). The last facet, usefulness of positive emotions, did not reach (partial) strict invariance. Because strict or at least scalar invariance across genders held for all EBQ facets, we can compare the manifest scale scores of the facets across genders. A direct comparison of the composite scores of the four facets of emotion beliefs showed only minor descriptive differences between the two genders (M_dif = 0.13, ps ≥ .066, r_rb ≤ .12), namely slightly lower EBQ scores in female compared with male respondents. Due to violations of the normality assumption, we used the non-parametric Mann-Whitney U test, which expresses effects as rank-biserial correlations (r_rb).

To further test the EBQ's measurement robustness, we analyzed the scale's measurement invariance across two levels (high vs. low) of emotional reactivity (measured by the composite score of the ERS) and self-efficacy (measured by the composite score of the RESE), respectively. S4 Table in the Supporting information shows that all EBQ facets reached strict measurement invariance across respondents with low vs. high emotional self-efficacy, while S3 Table shows that most EBQ facets reached strict invariance across respondents with low vs. high emotional reactivity, with the exception of the controllability of positive emotions, which only reached metric measurement invariance. Given this level of invariance, we could investigate differences in emotion beliefs between low vs. high emotional self-efficacy and reactivity (again via Mann-Whitney U tests, due to violations of normality). Highly emotionally reactive individuals showed stronger beliefs in the controllability of negative and positive emotions and in the usefulness of negative emotions, while there was no difference for beliefs in the usefulness of positive emotions (p = .124). For self-efficacy, we found only one difference: individuals who reported high emotional self-efficacy held weaker beliefs in the controllability of negative emotions, while we found no differences for the other EBQ facets (ps ≥ .103).

The aim of this study was to validate the German EBQ's factorial structure in a large sample and to demonstrate the multi-faceted capacity of the EBQ to assess beliefs about emotions. Our findings corroborate the theoretical four-factor structure and substantiate the high relevance of the EBQ both in research on emotion regulation and in clinical settings. The moderate to strong intercorrelations between the manifest scores of the four facets of the German EBQ showed that these facets are related but not redundant: people who believe in the controllability of emotions also tend to believe in the usefulness of emotions. The highest association was between the two controllability subscales, a finding that led Becerra et al. to suggest that distinguishing between the two controllability subscales is unnecessary. In our opinion, this assumption is too strong and leads to a loss of information.
Therefore, we support the examination of all four subscales. The most appropriate solution for our data set was a four-factor model. We were able to replicate the complete four-factor structure proposed by Becerra et al., which also proved to be the best-fitting model in an Iranian and a US American sample, a Polish sample, an Italian sample, and, most recently, another US American sample. The four factors were also found in the Japanese translation, although a better fit was obtained there with two additional second-order factors. In their first study of the EBQ, Becerra et al. claimed that it was unnecessary to distinguish between the two controllability subscales and that doing so created methodological issues that prevented a four-factor solution. Although both controllability subscales were highly correlated in this sample as well, the results of the nomological network underscore the importance of conceptualizing each subscale separately. The internal consistencies of all four EBQ subscales were descriptively smaller in this sample compared with the first (English) administration of the EBQ, but are still acceptable to good.

The more individuals believed in the controllability of emotions, the less they used suppression as an emotion regulation strategy. They also reported stronger beliefs in their ability to reappraise emotional states, as assessed by the ERQ. Interestingly, while Kashimura et al. did not find a significant relationship between beliefs in emotion controllability and the use of reappraisal, our findings are consistent with Becerra et al., who also found that individuals who believed in the controllability of negative emotions engaged more in reappraisal. Another interpretation could be that individuals who tend to engage with and reflect on their (negative) emotions, i.e., who actively approach their emotions, increasingly realize that emotions are indeed controllable.

Complementing previous research on emotion beliefs, our study is the first to examine the association between emotional reactivity and beliefs about the controllability and usefulness of emotions. Individuals who were more sensitive to emotional experiences, experienced these emotions more intensely, and experienced emotions more persistently than others also believed less in the controllability of emotions. At the same time, the small associations between the ERS subscales and the EBQ controllability subscales suggest that beliefs about the controllability of negative emotions depend only to some extent on emotional reactivity. Furthermore, this finding adds to research on the development of emotion beliefs and emotion (dys-)regulation and indicates that the impact of emotional reactivity on emotion regulation is smaller than suggested by Nock et al. From a clinical perspective, cognitively restructuring these uncontrollability beliefs may, in turn, have an impact on pathological levels of emotional reactivity [61–63].

Both EBQ controllability and usefulness beliefs show moderate to high associations with emotion regulation across positive and negative emotions as assessed by the PERCI, a finding similar to the original validation study, a replication study, and the results of Ranjbar et al. Thus, individuals who believed in the controllability and usefulness of positive and negative emotions also reported stronger beliefs in their ability to regulate their emotions. Again, the subscale measuring controllability of negative emotions showed the highest correlation.
From a clinical perspective, the relationship between beliefs in the controllability and usefulness of emotions and psychopathology, as assessed by the DASS-21, is of particular interest. In our sample, lower beliefs in the controllability of negative and positive emotions were associated with elevated symptoms of depression, anxiety, and stress, which seems to support the assumptions of learned helplessness. Kashimura et al. found that the General-Controllability composite scale predicted all DASS subscales, whereas the General-Usefulness composite scale did not. Using path analysis, Johnston et al. showed that the Negative-Controllability subscale predicted symptoms of depression, anxiety, and stress, and that the Positive-Usefulness scale predicted symptoms of anxiety; the other EBQ subscales (Positive-Controllability and Negative-Usefulness) did not predict any symptoms of depression, anxiety, or stress. Becerra et al. found that all EBQ subscale and composite scores were associated with higher levels of depression, anxiety, and stress symptoms; however, only the General-Controllability composite scale predicted anxiety and stress symptoms in an online sample of adults. In a German sample, the EBQ total score was associated with higher scores on all DASS subscales. In an Australian online sample, controllability beliefs were negatively correlated with psychological distress in both a simple and a multi-predictor model. Ranjbar et al. found that lower beliefs in the controllability and usefulness of emotions were associated with higher levels of depression, anxiety, and stress. In addition, in an Italian sample, individuals who believed in the controllability and usefulness of positive and negative emotions reported fewer symptoms of depression, anxiety, and stress, although there was no relationship between the DASS-21 Stress subscale and the Positive-Usefulness factor. Finally, the belief that positive emotions are uncontrollable was significantly and negatively associated with anxiety. In summary, previous studies, including the present one, underscore the importance of controllability beliefs in the development and maintenance of psychopathology.

We found that greater emotion regulation difficulties, as assessed by the DERS, were associated with stronger beliefs that emotions are both uncontrollable and not useful. In contrast to Rogier et al., we did not find a significant association between the perceived usefulness of emotions and individuals' difficulties in being aware of their emotions (DERS awareness subscale). This may be due to the somewhat problematic methodological nature of this subscale in the German translation. When examining the associations between the EBQ and DERS at the subscale level, the most prominent relationships were found with the clarity and strategies subscales. Specifically, the EBQ controllability subscales and the DERS strategies subscale could be viewed as proxies for emotion regulation self-efficacy, that is, an individual's belief in their ability to successfully regulate their emotions. Our study is the first to examine this relationship between the EBQ and self-efficacy beliefs. Interestingly, general beliefs about the controllability of emotions (EBQ) were not associated with personal self-efficacy beliefs about the regulation of specific positive and negative emotions, as assessed with the RESE-D.
It could be argued that personal self-efficacy in regulating positive emotions, anger, and stress may refer to different emotions than those individuals have in mind when thinking about the controllability of emotions in general. In summary, individuals who believed in the controllability (and usefulness) of emotions reported stronger beliefs in their ability to regulate their positive and negative emotions, more effective actual emotion regulation and fewer difficulties in doing so, less emotional reactivity, and, in turn, better mental well-being in terms of fewer symptoms of psychopathology. While the controllability subscales turned out to be more or less homogeneous in terms of several constructs, a different picture emerged when examining the usefulness subscales. In fact, their correlations within the nomological network were smaller than those of the controllability subscales. In support of the distinction between beliefs in the usefulness of positive and negative emotions, the links between valence-specific beliefs and emotion regulation yielded theory-consistent associations. Individuals who believed that negative emotions were not useful (EBQ) tended to be less accepting of their emotional reactions (DERS). In addition to validating the usefulness subscales of the EBQ, this finding has important implications for the development and maintenance of mental health: in two studies, higher acceptance of negative affect was associated with better mental health outcomes and better adaptation to negative stress. Although the controllability subscales tended to show stronger relationships with the constructs examined, such as emotion regulation and symptoms of psychopathology, than the usefulness subscales, significant correlations were also found for the latter. Contrary to the assumption that beliefs about the controllability and usefulness of emotions are orthogonal, these associations were moderate to large in our sample. This seems to indicate that both beliefs are intertwined and thus may influence further regulatory efforts: individuals who believe that a particular emotion is not useful may put less effort into regulating it, leading to fewer control experiences and, ultimately, weaker control beliefs about that particular emotion. However, the nature of how belief systems influence each other needs to be examined with longitudinal studies. Examining measurement invariance, we found that individuals with high versus low emotional reactivity, as assessed by the ERS, differed in their beliefs about emotions. Individuals with higher emotional reactivity (higher emotion sensitivity, greater intensity, and longer emotion persistence) held stronger beliefs in the controllability of negative and positive emotions, and in the usefulness of negative emotions, than individuals with lower emotional reactivity. These individuals might have more experience with a wide array of emotions and thus might also experience more situations in which (negative) emotions are useful and controllable. Contrary to the assumption that individuals with high self-efficacy in emotion regulation, as assessed by the RESE-D, would hold stronger beliefs in the controllability of (negative) emotions, our results point in the opposite direction. Looking at the constructs at the item level, one obvious difference between the two scales is the referent of emotion control: while the EBQ focuses on people’s emotion beliefs in general, the RESE-D consists of statements about one’s own self-efficacy beliefs.
This difference in wording, which has implications for emotion beliefs in general, may have led individuals with high self-efficacy beliefs to assume that others have less control over their emotion regulation. To the best of our knowledge, the present study is the first to validate a German-translated version of the EBQ and, more generally, to comprehensively analyze the validity of the instrument in the context of multiple facets of emotion regulation, emotion beliefs, psychopathology, and sociodemographic variables. Of particular note is the exhaustive exploration of the EBQ’s measurement invariance, not just across sociodemographic characteristics, as is commonly done, but also between different levels of conceptually relevant psychological constructs, such as emotional reactivity and psychopathological aspects of a respondent’s personality. At the same time, the present survey had its own limitations that need to be addressed in future research. A central limitation is the cross-sectional nature of the survey data, which reduces the interpretability of our findings beyond judging statistical fit and correlational patterns. Exploring possible causal pathways between emotion beliefs and the psychological constructs of reactivity, efficacy, and psychopathology, as well as sociodemographic characteristics, will require longitudinal research. Further, the present sample consisted mainly of young and well-educated adults. To make more general claims about the scale’s validity, future research should focus on more diverse samples. Lastly, we investigated the EBQ’s construct validity in the context of other closely related measures of emotion beliefs, regulation, and reactivity. It may be of particular interest to researchers to comprehensively map this landscape of closely related measurement instruments. A specific goal may be to identify conceptual overlap among the vast array of instruments and to condense them into overarching, central dimensions that capture beliefs about, regulation of, and reactivity to emotions. In psychotherapy, challenging and transforming existing maladaptive beliefs about the nature of emotions is a promising strategy for providing relief and fostering productive personal change. In accordance with this, the associations found in this study suggest that beliefs in the controllability and usefulness of positive and negative emotions are associated with better mental health outcomes in terms of fewer difficulties with emotion regulation in general, better abilities to regulate positive and negative emotions, and fewer symptoms of psychopathology. Addressing the reverse causal path, focusing on increasing a patient’s acceptance of different emotions and establishing the use of emotion regulation strategies can also improve their beliefs in the controllability and usefulness of emotions, which may, in turn, promote a healthier way of dealing with one’s own emotions. The goal-directed modulation of positive and negative emotions is a crucial function for a person’s well-being and general functioning. Factors influencing successful emotion regulation include beliefs about emotions, such as beliefs in the controllability and usefulness of emotions. The Emotion Beliefs Questionnaire (EBQ) was developed to assess these beliefs and has shown promise in predicting emotion regulation and psychopathology across different countries.
The present paper has shown that the scale’s validity is also supported in a German sample, while also expanding the nomological network further. Measuring the defining aspects of emotion regulation reliably and accurately seems imperative for supporting individuals in dealing with problematic or overwhelming situations, both in everyday life and in clinical contexts. Future research should focus on further expanding and understanding the causal relationships between different aspects of emotion regulation, such as beliefs, abilities to regulate, and reactivity or sensitivity to emotional situations.
PMC11694983
The medical interview is a critical component of the diagnostic process, contributing to approximately 80% of diagnoses. Within this context, a physician’s gaze behavior—defined as the act of directing eye movements to fixate on specific regions or shift focus from one location to another—plays a crucial role in both building trust with the patient and gathering the essential information needed for accurate diagnosis and treatment. In natural face-to-face interactions, gaze behavior serves a dual function: it transmits social interest and communicative intent, and it enables individuals to observe the eyes and face of others, interpreting their focus of attention and behavioral intentions. Gaze behavior thus plays a critical role in both signal transmission and information gathering. However, traditional studies conducted in laboratory settings have primarily relied on images or videos, which may not fully capture the dual function of gaze behavior in real-world, face-to-face interactions. To address this gap, recent advances have employed dual eye-tracking technology to investigate gaze behavior in more naturalistic, face-to-face scenarios. Dual eye-tracking is a technique that records and analyzes the gaze behavior of both individuals during face-to-face interactions, allowing for detailed examination of interaction dynamics, such as mutual gaze and gaze synchronization. For instance, in natural settings, mutual gaze during face-to-face interactions occurs only approximately 10% of the time, yet the information exchanged during these brief moments of eye contact plays a crucial role in building trust and promoting cooperative behavior. Further, even gaze-anxious individuals tend to exhibit typical gaze behavior in naturalistic face-to-face settings to maintain interaction. These findings provide key insights into the fundamental mechanisms of interpersonal interaction in natural settings, offering valuable predictors for behavioral and psychological outcomes. In contrast to typical social interactions, a medical interview has a distinct purpose: to effectively gather diagnostic and therapeutic information while simultaneously building rapport with the patient, all within the constraints of limited time. Direct gaze during short interactions is often associated with positive evaluations and perceptions of warmth and trustworthiness from the other party. Therefore, physicians’ gaze behavior is likely more intentional and goal oriented. It involves two key strategies: transmitting trust and rapport through well-timed eye contact, and efficiently gathering the diagnostic information necessary for clinical decision-making. Physicians’ gaze behavior develops through experience, enhancing both diagnostic efficiency and accuracy. Accordingly, studies using eye trackers have examined the gaze behavior of expert physicians (experts), demonstrating their ability to quickly identify areas relevant to diagnosis. For example, in studies on pathological image interpretation [15–17], electrocardiogram analysis, and simulator-based treatments, experts’ gaze was not dispersed [18–20], and the number and duration of their fixations on specific areas were lower than those of novice medical students (novices) [15–17]. However, most of these studies focused on information gathering during diagnostic imaging, such as X-rays or MRIs, or during treatments like surgery or anesthesia.
The dual function of experts’ gaze behavior during a medical interview—signal transmission and information gathering—has not been sufficiently explored. Previous research on physicians’ gaze behavior during a medical interview has typically relied on video recordings for analysis. Studies have reported that the frequency of eye contact between physicians and patients is associated with greater patient self-disclosure, higher patient satisfaction, and a more patient-centered approach to care. However, these studies did not use eye-tracking technology to measure gaze behavior, limiting the precision of the quantitative evaluation of eye contact. Human eye movements are subtle, and without the use of eye trackers, it is difficult to accurately determine where a person is looking. Therefore, an accurate quantitative assessment of physicians’ gaze behavior during a medical interview using eye-tracking technology is necessary. As previously mentioned, experts’ gaze behavior during a medical interview is expected to involve fewer fixations and shorter fixation durations on specific areas compared to novices. Gaze behavior is closely related to cognitive processes such as attention, memory, and decision-making. Therefore, the differences in gaze behavior between experts and novices during a medical interview are likely related to underlying cognitive differences, which may relate to the dual function of gaze behavior in signal transmission and information gathering. However, these differences have not yet been thoroughly explored. This study addresses two research questions related to the gaze behavior of experts and novices during a medical interview and the cognition underlying this behavior: 1) How does gaze behavior toward patients differ between expert physicians and novice medical students? 2) How does cognition related to gaze behavior toward patients differ between expert physicians and novice medical students? To answer these questions, we decided to use an eye-tracker, which is effective for investigating the complex interactions between gaze behavior and cognition, in an actual outpatient clinic setting rather than a laboratory environment. Further, our previous study indicated that specialist physicians, during simulated consultations, focused their gaze on five key areas: the simulated patient’s (SP) eyes, face, body trunk, medical chart, and medical questionnaire. We adopted the same five areas of fixation, expecting that this approach would allow for a detailed comparison of the differences in gaze behavior and the underlying cognition between experts and novices. The study objectives are as follows: a) to quantitatively assess the gaze behavior of experts and novices during interactions with an SP and explore the differences in gaze behavior between the two groups; and b) to qualitatively describe the cognition underlying the gaze behavior of experts and novices toward an SP and investigate the cognitive differences between the two groups. Through this study, we aim to transform the tacit experiential knowledge related to the dual functions of experts’ gaze behavior—signal transmission and information gathering—into explicit knowledge that can be communicated and shared, with the goal of applying it to both medical education and the teaching of gaze behavior. Understanding such phenomena demands a methodology appropriate to the research questions, without being limited to either a qualitative or a quantitative approach.
Therefore, pragmatism was adopted as the research paradigm, integrating qualitative and quantitative approaches. This study used an exploratory sequential mixed methods design, comprising two distinct phases: quantitative research, followed by qualitative research. In the first phase, researchers collected and analyzed quantitative data. In the next phase, they collected and analyzed qualitative data, which help to explain or build on the findings of the first phase’s quantitative research. The second phase’s qualitative data thus build on the first phase’s quantitative findings, and the two phases are linked in the intermediate stages of the research. The theoretical rationale for this approach is that the quantitative data provide a general understanding of the research questions, while the associated qualitative analyses refine and explain that understanding. Therefore, to understand the complex phenomenon of cognitive processes in the gaze behavior of experts and novices, this study used an exploratory sequential design, which can provide more valuable insights into a phenomenon that cannot be fully understood using a single research design. In this study’s first phase of quantitative analyses, a wearable eye-tracker was used to measure and quantitatively evaluate the gaze behavior of experts and novices during a simulated medical interview and to identify differences in their respective gaze behavior. During the second phase, to explore the implications of the differences in gaze behavior, qualitative data were collected through cued retrospective reporting, in which experts and novices were presented with their gaze measurement data and asked to recall and verbalize their cognitive processes during gaze behavior. These data were then analyzed using a qualitative descriptive approach. This study was conducted in Japan. Data were collected from February 2022 to August 2023. Fig 1 shows an overview of this study’s mixed methods research design. This study was approved by the Research Ethics Committee of the University of Toyama and conducted in accordance with the guidelines and regulations of the Declaration of Helsinki. The researcher explained in writing and orally to the experts, novices, and the SP the study’s main purpose, the voluntary nature of their participation, their freedom to discontinue without any disadvantage, the confidentiality of their personal information, the possibility that the results may be published, and that their data would be destroyed at the end of the study. Participants’ written informed consent to participate and to publish their images in an online, open-access publication was obtained. Further, the participants provided written informed consent (as outlined in the PLOS ONE consent form) for these case details to be published. The research team comprised two medical education researchers with more than 10 years of experience as clinicians (MF and SK), a nurse and health sciences researcher (RY), and a cognition researcher (KX). RY and SK had experience in conducting and publishing qualitative research in a constructivist paradigm. KX and MF had extensive experience in quantitative research and eye measurements. All members had research experience in medical education. RY and SK, who led the qualitative research, did not know the experts and novices and were not involved in evaluating the novices’ performance.
To establish a relationship with the experts and approach the study with a general understanding of their views, attitudes, and behavior, RY attended their seminars and workshops and participated in their medical examinations a year before study commencement. Qualifying as an expert requires a high level of performance, a wealth of skills and knowledge, and at least 10 years of experience. Therefore, this study’s eligibility criteria were defined as follows: for experts, a minimum of 10 years of experience as a physician; for novices, medical students in their 5th or 6th year who had passed the Computer Based Test and the Pre-Clinical Clerkship Objective Structured Clinical Examination (OSCE). Participants were recruited by 1) soliciting participation from physicians at Toyama University Hospital and medical students at Toyama University Faculty of Medicine, and 2) snowball sampling from among experts and novices who agreed to participate. Research cooperation was obtained from eight experts (seven men and one woman; mean age 51.9 ± 8.1 years; mean experience as physicians 26.5 ± 8.7 years) and nine novices (four men and five women; mean age 27.6 ± 6.0 years). Six of the eight experts and seven of the nine novices, all without eye diseases aside from refractive errors, used vision correction through glasses or contact lenses. To recruit an SP, research cooperation was requested from volunteers registered at the Centre for Older People. A man in his early 70s, who agreed to participate and did not know either the experts or the novices, was selected as the SP. He had been diagnosed with benign prostatic hyperplasia 20 years earlier, took one tablet of 0.2 mg tamsulosin hydrochloride daily, and experienced nocturia 2–3 times/night. The novices and the SP were paid a small remuneration. The data collection site was the outpatient examination room of Toyama University Hospital. The experts/novices and the SP sat on chairs (height: 45 cm) placed approximately 115 cm apart. The room temperature was set at 26.0 ± 2.0°C. This setting was where the experts conducted their daily clinical practice, and the novices had already performed an initial patient interview there as part of their clinical training. The environment of the simulated medical interview therefore closely resembled their usual clinical practice setting. During the simulated medical interview, the experts and novices were asked to obtain the SP’s medical information relating to chief complaints, current and past medical history, and family history, and to record this information in the medical records. They were also told that it was up to them when to review the medical questionnaire. No time limit was set for the simulated medical interview. The SP’s diagnoses and prescribed medications were not disclosed during the expert and novice sessions. This study used the Tobii Pro Glasses 3, a 50-Hz wearable wireless eye-tracker (Tobii Technology, Danderyd Municipality, Sweden), a glasses-type gaze-measuring system called the head unit. It included the Tobii Glasses Controller software (version 1.141) for recording gaze and the Tobii Pro Lab (version 1.194) analysis software. The Tobii Pro Glasses feature a high-resolution camera embedded in the forehead area that records the wearer’s gaze and visual field, enabling the distinction and analysis of the five gaze behavior types described below. Fixations were defined as periods during which the velocity of eye movements dropped below the Tobii Pro I-VT filter threshold (30 degrees/second) and lasted at least 200 ms.
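For illustration, the velocity-threshold (I-VT) rule just described can be expressed in a few lines of code. The sketch below is a simplified reimplementation under the stated parameters (30 degrees/second, minimum 200 ms, 50-Hz sampling), not Tobii’s proprietary filter, and it assumes that angular velocities have already been computed per sample.

```python
import numpy as np

def ivt_fixations(velocity_deg_s, sample_rate_hz=50,
                  velocity_threshold=30.0, min_duration_ms=200):
    """Return (start, end) sample indices of fixations under an I-VT rule.

    A fixation is a run of consecutive samples whose angular velocity
    stays below `velocity_threshold` and whose duration is at least
    `min_duration_ms` (10 samples at 50 Hz).
    """
    below = np.asarray(velocity_deg_s) < velocity_threshold
    min_samples = int(min_duration_ms / 1000 * sample_rate_hz)
    fixations, start = [], None
    for i, flag in enumerate(np.append(below, False)):  # sentinel closes a trailing run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                fixations.append((start, i - 1))
            start = None
    return fixations
```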
Referring to the classification of gaze behavior in interpersonal relationships by Cranach, the gaze behavior of experts and novices toward the SP’s face was categorized into eye gaze (EG) and face gaze (FG). Based on previous studies of expert physicians’ gaze behavior during a simulated medical interview, gaze behavior toward the SP’s body and other objects was categorized into body trunk gaze (BG), medical chart gaze (MG), and medical questionnaire gaze (MQG). Table 1 defines these five gaze behavior types. While this study primarily focused on the EG of experts and novices, it also analyzed the other gaze behavior types to detect any significant differences in them as well. Several previous studies on gaze analysis used total gaze duration as a measurement item, whereas this study additionally defined the start and end points of the five gaze behavior types to measure fixation duration. Fig 2 illustrates the start and end points of the five gaze behavior types. Prior to the medical interview, the SP was asked to complete a questionnaire on sleep, appetite, defecation, and general condition. Experts and novices who did not require vision correction wore the Tobii Pro Glasses 3 without any corrective lenses and underwent the eye-tracking measurement with their natural vision. Experts and novices who required vision correction removed their glasses and attached prescription lenses, which can be magnetically fitted to the Tobii Pro Glasses 3, before undergoing the eye-tracking measurement. The researchers then used the Tobii Glasses Controller software to calibrate gaze tracking for the experts and novices, who were asked to sit in a chair wearing the Tobii Pro Glasses. The batteries supplied with the Tobii Pro Glasses were placed in the pocket of their examination gowns to ensure no interference with the medical interview. The SP, waiting in the outpatient waiting room, entered the examination room and sat on a chair. The gaze behavior of experts and novices during the medical interview of the one SP was recorded with the Tobii Pro Glasses. We conducted the coding based on the manual coding method of Yamamoto et al. Specifically, the first author (RY) used the Tobii Pro Lab Analyzer software version 1.232 (Tobii Technology) to review each video frame recorded from the perspectives of the experts and novices. The videos were played back at 1/16 speed and, based on the five gaze behavior types previously described (Table 1), the start and end points of each gaze behavior were coded. The fourth author (MF) independently assessed 20% of the gaze behavior randomly selected from all the data. The interrater reliability between the first and fourth authors was calculated using the kappa coefficient with IBM SPSS Statistics version 28 (IBM Corp., Armonk, NY, USA), resulting in a kappa coefficient of 0.98, indicating a high level of agreement. After manually coding the start and end points using the Tobii Pro Lab Analyzer software (Tobii Technology), the data were exported in Excel format. From the start and end points in the Excel data, we calculated the fixation times for each of the five gaze behavior types for both experts and novices. This enabled us to compare and analyze the fixation times between the two groups. During the medical interview, gaze behavior toward each region was converted into the proportion of time spent focusing on that region relative to the total interview duration.
Specifically, each participant’s gaze behavior toward each region was quantified as a proportion by dividing the fixation time on that region by the total duration of the medical interview. For instance, if the recorded fixation time for EG was 250 ms and the duration of the medical interview was 50 000 ms, the resulting proportion for EG would be 0.005. The fixation counts for the medical interview were as follows: experts, 30 263.25 ± 16 064.31; novices, 32 028.11 ± 21 709.25. The zero-inflated beta (ZIB) distribution is a mixture distribution that incorporates “0” data and is bounded within the range of 0–1. It combines a Bernoulli distribution, which models whether the value is “0,” and a beta distribution, which models the values other than “0.” By utilizing this characteristic, both the probability of gaze behavior toward a specific area in a specific scenario and the extent of gaze behavior directed toward that area were predicted. Referring to previous studies [38–40], this study constructed a ZIB model, expressed in Eqs (1)–(6) below. The model elucidates the relationship between novices and experts regarding specific gaze behavior (G_k, where k indexes the following gaze behavior types: EG, FG, BG, MG, and MQG). The parameter q_k is used to estimate the Bernoulli distribution, and its relationship with the examinees’ characteristics is expressed in Eq (3). Additionally, the parameters a_k and b_k are used to estimate the beta distribution, and their relationships with the examinees’ characteristics are expressed in Eqs (4)–(6). The parameters for the beta distribution are detailed in Eqs (4) and (5). The model’s linear components illustrate that whether a participant is a novice or an expert (V_k) is associated with whether a specific area is observed (Eq (3)) and with the extent of observation (Eqs (4)–(6)). These were treated as fixed effects (β_k^bern for the Bernoulli component and β_k^beta for the beta component), while r^subj represents the random effects attributable to participants. The Rstan package [41–43] was used for parameter estimation. The prior distribution for the fixed effects was a normal distribution with a mean of 0 and an SD of 50. Default Stan hyperparameters were used. Convergence was assessed using Rhat for each parameter, with at least 3 chains and Rhat < 1.1 for all parameters as the convergence criteria. All parameter estimations met the convergence criteria. Additionally, the highest density interval (HDI) was used to determine parameter significance, with effects considered “significant” when the HDI did not include “0.” The simulated medical interview times were as follows: experts, 10.0159 ± 4.916 minutes; novices, 11.2426 ± 7.0123 minutes; no significant difference was observed. Fig 4 shows the percentages of average gaze behavior of experts and novices. Experts exhibited higher occurrences of MG and MQG than novices, who displayed higher frequencies of FG. Gaze behavior was relatively infrequent for both BG and EG. Table 2 presents the results of analyses using the ZIB regression model to detect significant differences in gaze behavior between experts and novices during the medical interview. If the 95% HDI does not include “0,” the effect of the gaze behavior is considered “significant.” A positive mean indicates that novices gazed at the area more frequently, whereas a negative mean indicates that experts did.
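Because the equations referenced above did not survive conversion of the article, the following LaTeX sketch gives a plausible reconstruction of a standard ZIB regression with logit links and a subject-level random effect, reusing the parameter names from the text (G_k, q_k, a_k, b_k, V_k, β_k^bern, β_k^beta, r^subj); the exact form of the linear predictors and the mean–precision parameterization are assumptions, not the authors’ published equations.

$$
G_k \sim \begin{cases} 0, & \text{with probability } q_k,\\ \mathrm{Beta}(a_k,\, b_k), & \text{with probability } 1 - q_k, \end{cases}
$$

$$
\mathrm{logit}(q_k) = \beta_{0,k}^{\mathrm{bern}} + \beta_k^{\mathrm{bern}} V_k + r^{\mathrm{subj}}, \qquad
\mathrm{logit}(\mu_k) = \beta_{0,k}^{\mathrm{beta}} + \beta_k^{\mathrm{beta}} V_k + r^{\mathrm{subj}},
$$

$$
a_k = \mu_k \phi_k, \qquad b_k = (1 - \mu_k)\,\phi_k,
$$

where $\mu_k$ and $\phi_k$ denote the mean and precision of the beta component, and $V_k$ indicates whether the participant is a novice or an expert.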
This analysis confirmed that novices tend to focus more on the SP’s eyes (EG) compared to experts (mean = 30.533; 95% HDI: 0.702–70.703). There were no significant differences between experts and novices in FG, BG, MG, and MQG. A significant difference was thus found in the EG of experts and novices. To our knowledge, no previous studies have explained the mechanisms underlying these results. Qualitative interview data are useful for a better understanding of observed quantitative results. Therefore, a qualitative study was conducted in the second phase, in which the narratives of the parties involved were collected and analyzed to understand the quantitative results. This study employed a qualitative descriptive approach, which is based on naturalistic inquiry. By approaching the phenomenon of interest without modifying the immediate environment, this approach minimizes theorizing. Additionally, it aims at a frank description of the phenomenon and is suitable for gaining knowledge in areas where little research has been conducted. A qualitative descriptive approach was considered suitable here, as the aim was to frankly describe experts’ and novices’ perceptions of their gaze behavior, an area that had not been explored to date. The three main methods for verbalizing cognitive processes during task performance are think-aloud reporting, retrospective reporting, and cued retrospective reporting [27, 46–48]. Think-aloud reporting involves participants verbalizing their thoughts and cognitive processes in real time as they perform the task [27, 46–48]. The main issues with this method are that verbalization can interfere with task execution and that the speed of thinking is faster than that of speaking, leading to only a partial report of cognitive activities [27, 46–48]. Retrospective reporting involves participants recalling the thoughts and cognitive processes that occurred during task performance and verbalizing them afterward [27, 46–48]. Since this method relies on participants’ memory, there is a risk of generalization, interference, and forgetting, which may lead to post-hoc rationalizations, bias, or even fabrication [27, 46–48]. Cued retrospective reporting is a method in which cues related to the task are provided after task completion to help participants recall and verbalize the cognitive processes they experienced during the task [27, 46–48]. This method is less susceptible to the effects of forgetting or fabrication compared to pure retrospective reporting, and it allows for more accurate verbalization of cognitive processes [27, 46–48]. Further, presenting eye-tracking data as cues, which reflect cognitive processes, enables participants to more precisely explain the “why” and “how” of their actions during the task [27, 46–48]. In fact, previous studies that used eye-tracking data as cues in cued retrospective reporting have qualitatively described the cognitive processes of expert clinicians during cardiopulmonary resuscitation and reported differences in cognitive approaches between medical students, residents, and physicians in electrocardiogram interpretation. This suggests that presenting experts and novices with their eye-tracking data as cues could provide qualitative insights into how they think and use eye movements during a simulated medical interview. Such qualitative data could be useful in explaining the differences in eye movement behavior between experts and novices.
To identify the cognitive processes reflected in eye movements, we conducted an interview using a combination of cued retrospective reporting and eye-tracking. Qualitative data were collected by following the procedure described below. After completing the simulated medical interview, RY interviewed each expert and novice in an examination room, and with their consent, the interview was recorded using an IC recorder. The interview began with the playback of the video footage of the expert’s or novice’s gaze behavior during the medical interview, and each time their EG video footage was played, RY paused the video. They were then interviewed using a semi-structured interview guide comprising three questions: 1) Did you notice anything when you watched the video of your gaze behavior? 2) What did you think about when you looked into the SP’s eyes? and 3) Did you think about anything else when you looked into the SP’s eyes? Experts and novices retraced their cognition while watching the video of their gaze behavior and spoke freely in response to the questions. To obtain as much detailed information as possible, RY asked additional questions depending on the content of their narratives, such as “Can you be more specific about what you just told me?” The average interview time was 23 minutes (range: 10–54 minutes). Additionally, informed consent for the publication of images was obtained from all participants whose images are presented in Figs 1, 2 and 5. All the recordings were transcribed verbatim in Japanese and used as data. The verbatim transcripts were quality-checked by RY and then translated into English. RY and SK analyzed the data using the analytical procedure of Sandelowski’s qualitative descriptive approach. First, the data were read repeatedly to understand their overall content. Second, individual analyses were conducted for each participant. Narratives of participants’ cognition of their gaze behavior were extracted and coded so that no semantic content was lost. The codes were reviewed, those with commonalities were aggregated into subcategories, and subcategories with commonalities were aggregated into categories. Disagreements between the two coders throughout the analysis process were resolved through discussions with MF and KX. The following four criteria were used to assess the trustworthiness of the data: credibility, transferability, dependability, and confirmability. Based on these criteria, to ensure prolonged involvement, the researchers participated in the activities of participants and tried to understand their thinking and behavioral patterns. Data analyses were shared with the research team, and an agreement was reached on data interpretation. To ensure triangulation, we used multiple data sources, including interview data, video recordings of EG behavior during the simulated medical interview recorded from both expert and novice perspectives, and eye-tracking data. Based on these data, the medical education researchers and health science researchers on the research team engaged in thorough discussions to enhance the credibility of this study. Member-checking was conducted with participants, who supported the findings as an appropriate representation of their perceptions of EG. The researchers maintained a consistent reflective stance throughout to monitor their subjectivity and minimize their own influence on data collection and analyses. Three categories and eight subcategories emerged from the experts’ qualitative data (Table 3).
The three categories were: (1) observing the patient to obtain clues for diagnosis; (2) being careful not to look too closely into the patients’ eyes; and (3) attempting to establish an ongoing relationship with the patients by using gaze behavior. Each of these categories reflects the experts’ cognitive behavior: looking into patients’ eyes can yield physical findings and subjective information, which can lead to a diagnosis or help build a relationship with patients, which in turn can lead to further treatment. In the paragraphs that follow, categories are shown in bold, subcategories in “double quotes,” and experts’ and novices’ narratives in italics. All the experts explained that to make an accurate diagnosis and provide treatment, it is necessary to obtain multiple findings—one way of doing so is to observe the patients and obtain physical findings. Therefore, they said that they observed the eyes and faces of patients, which were not covered by masks, for “confirming malnutrition from the dry skin on the patients’ faces” and “confirming microcirculation disorders from the patients’ facial color and pigmentation.” They perceived observing these findings as something done naturally rather than consciously. Wanting to prescribe the best medication for patients, the experts stressed the need to determine through observation whether, and to what extent, symptoms were present, as patients may not be aware of their symptoms. They explained that, as edema can be perceived differently depending on sex, “confirming by checking the patient’s eyelids and face for edema” is important. Most experts emphasized the importance of listening to patients, allowing them to feel comfortable discussing their symptoms and thoughts about their chief complaint, which provides essential clues for an accurate diagnosis. From their past clinical practice, the experts said that “looking too much into the patients’ eyes can make them nervous” and “looking too much into patients’ eyes can make them uncomfortable.” They explained that deliberately not looking into the patients’ eyes too much could elicit patients’ chief complaints. The expert who said that consultations cannot begin without listening to patients explained that although eye contact is generally emphasized when listening to patients, in clinical settings with diverse medical needs, “eye contact can be judged as too much eye gaze.” The experts who emphasized non-verbal communication with patients said that they used gaze behavior to avoid looking too much into the patients’ eyes, as first-time patients are often nervous. They explained that it is important to make patients feel comfortable during the first visit and advised to “try to check the level of satisfaction from patients’ facial expression,” leading to continued treatment. The experts who determined the need for continued treatment based on the patients’ age and chief complaints said, “We try to read the patients’ facial expressions” at the first meeting because a relationship has not yet been established. They explained that it is important to tell from subtle changes in the patients’ facial expressions whether the conversation should be continued. They believed that careful observation of such non-verbal cues was the key to building a continuous relationship. Three categories and eight subcategories emerged from the novices’ qualitative data (Table 4).
Each category reflects the novices’ cognition that looking into patients’ eyes is more about building personal relationships and facilitating communication than about obtaining the physical findings necessary for diagnosis. The three categories are: (1) looking at patients and trying to increase their satisfaction; (2) confirming changes in patients’ expressions and trying to build a relationship in the here and now; and (3) utilizing their gaze-related experience and knowledge for their gaze behavior. All the novices stated that by looking into the patients’ eyes, they were trying to increase patient satisfaction in terms of reassurance, relationship building, and smooth communication. Most novices stated that their reason for looking at patients was to increase satisfaction rather than to obtain findings that would lead to a diagnosis. Novices who were concerned that, as medical students, they might make patients anxious by conducting a medical interview explained that they “consciously try to look at patients and make them feel at ease,” so that they would not feel anxious. Novices who emphasized patients’ personhood saw person-to-person and doctor–patient relationships as incompatible; out of this desire to focus on the person-to-person relationship, one novice explained trying to “look at patients and try to build a relationship with them.” A novice who spoke about wanting to first provide the patient with medicine believed that using eye contact to send non-verbal signals of interest in the patient leads to smoother communication. This novice described attempting to “look at patients and create an atmosphere where it is easy to talk to them.” A novice who valued interactions with patients explained refraining from asking new patients detailed questions and being “careful to look for changes in patients’ facial expressions,” so as not to delve too deeply into what they do not want to talk about. Additionally, the novice believed that such considerations were important for building relationships in the here and now. Novices emphasized the importance of establishing a rapport with patients. A novice described “trying to detect changes in patients’ sense of tension” through non-verbal cues, such as their mood and changes in facial expression. Other novices saw it as establishing a relationship in the present moment when they could confirm that the patients’ tensions were relieved. All the novices reflected on one of the following three types of knowledge and experience related to gaze behavior: knowledge taught in lectures and practical training, experience of independent learning in practical training, or experience outside of lectures and practical training. Through such knowledge and experience, they recognized that looking at the patients was necessary for an appropriate medical interview. Novices who mentioned that eye contact was taught in lectures and practical training also talked about gaining knowledge from OSCE video materials and practical training. They were “taught to look into their patients’ eyes and listen to what they had to say” by their supervisors.
They explained that by using such knowledge, they tried to look into their patients’ eyes. Novices who cited gaze behavior as something they independently learned during their practice said that when they had sat in on consultations with supervising doctors, they had observed that “many of these doctors typed in the chart without looking at the patient at all.” A novice explained that she was conscious of looking her patients in the eye because she had a good example of who not to be. A novice described an instance in which “a family member complained about a doctor not looking at the patient” as a private experience of gaze behavior outside of lectures and practice. Based on the negative experiences of such close family members and familiar others, the novice explained being careful to look at the patients to avoid making them feel uncomfortable. Previous studies on experts’ gaze measurement have primarily focused on diagnostic imaging, such as X-rays or MRI [15–18], and on surgical or therapeutic situations. Little research has been conducted on the medical interview, which contributes to approximately 80% of diagnoses. This study is the first to quantitatively evaluate experts’ gaze behavior during a simulated medical interview and qualitatively explore the cognition closely involved in gaze behavior. By doing so, it aims to make visible the dual function of experts’ gaze behavior—signal transmission and information gathering—providing valuable insights for medical education that had not previously been investigated. In the first phase, the gaze behavior of experts and novices was quantitatively evaluated during the simulated medical interview, showing that experts looked at the SP’s eyes less frequently than novices. Previous studies have also shown that experts do not look at specific areas more frequently than novices when collecting visual information during diagnostic imaging [15–18] or during treatment and procedures. Taken together, these findings suggest that across different phases of medical practice—whether in a medical interview, diagnostic imaging, or treatment—experts possess the ability to rapidly identify key areas and efficiently gather visual information. In the second phase, qualitative data on the cognition of experts and novices were collected to gain a deeper understanding of the quantitative results. This study used five gaze behavior types (toward the SP’s eyes, face, body trunk, medical chart, and medical questionnaire); among them, the only type that showed a significant difference between experts and novices was gaze toward the SP’s eyes. Both experts and novices recognized that they looked at the patient’s eyes to read facial expressions. Concerning facial recognition, Westerners focus on the mouth, while Japanese individuals focus on the eyes. Additionally, older adults may look at the eyes less frequently than younger adults when identifying emotions. Participants were Japanese, with an average age of 51.9 years for experts and 27.6 years for novices. Considering these factors, both experts and novices looked at the patient’s eyes owing to their cultural background; however, the significant difference in the frequency of EG may be attributed to age-related differences in emotional processing. The combination of cultural background and age-related differences in emotion processing likely explains why, among the five gaze behavior types, a significant difference emerged only in gaze behavior toward the SP’s eyes.
Further, two differences were observed between experts and novices regarding their cognition of gazing at the SP’s eyes. The first lies in how they understood gaze behavior as signal transmission being received by the patient. Experts recognized that gazing at the patient’s eyes could cause tension or discomfort and thus gazed at the eyes less frequently. In contrast, novices recognized that gazing at the patient’s eyes would improve patient satisfaction, so they directed their gaze more frequently. Individuals perceive eye contact as being established when another person’s gaze is directed toward a part of their face. The more frequently experts and novices gazed at the patient’s eyes, the more likely the patient was to perceive that eye contact was made. In general, individuals who make eye contact are evaluated as more likable, intelligent, and trustworthy. However, in Japanese culture, individuals who make eye contact are often perceived as angry, intimidating, or uncomfortable, and avoiding direct eye contact is often seen as a sign of respect or humility. This cultural background may have shaped the experts’ gaze behavior as signal transmission, leading them to avoid direct eye contact with the patient. In clinical settings, experts may feel that reducing direct eye contact helps the patient relax and facilitates building a trusting relationship. However, novices, with less clinical experience, may base their gaze behavior on a general understanding of eye contact as a form of signal transmission, leading them to gaze at the patient’s eyes more frequently. In recent years, Western values and culture have increasingly influenced Japan, and medical textbooks now encourage making eye contact with patients. As noted earlier, the experts in this study were older, while the novices were younger, suggesting that generational differences in the influence of Western culture may also have contributed to the cognitive differences in how gaze behavior as signal transmission was perceived by patients. While eye contact between physicians and patients has been emphasized [21–24], the importance of clinical practice and medical education that consider cultural differences in gaze behavior, including eye contact, has not been sufficiently clarified. Failure to address cultural differences may lead to misdiagnosis, medical errors, and suboptimal treatment outcomes. In this study, Japanese experts explained that they avoided frequent eye contact with patients to prevent causing tension or discomfort. These results suggest that considering cultural differences in gaze behavior is important both in clinical practice and in medical education. For healthcare providers, it is crucial to understand and respect patients’ cultural background, rather than relying solely on their own cultural perspective, as this can help prevent medical errors and lead to optimal treatment outcomes. Such a perspective is essential in medical education curricula to promote appropriate communication during encounters with culturally diverse patients. The second difference lies in the cognition associated with gaze behavior as information gathering necessary for diagnosis. Experts mentioned that they looked at the patient’s eyes to observe abnormal findings. Although they gazed less frequently than novices, they observed a wide range of visual cues, such as eyelid or facial edema and pigmentation.
Experts gather information from a broad visual field, quickly making an overall assessment of the image and then identifying potential abnormal findings. This suggests that experts have distinct cognitive processes in which they observe diagnostic areas while maintaining an overall perspective, collecting visual information without being limited to specific areas. In contrast, novices reported looking at the patient’s eyes not to observe diagnostic findings but to provide patient-centered care, build relationships, and facilitate smooth communication; thus, they did so more frequently. Observers can typically only see three to four objects at a time. In situations requiring the processing of large amounts of visual information and focused attention, important clinical information may be missed, even when it is clearly within the field of view. Additionally, junior and prospective medical students tend to prioritize empathy, patient-centered care, and communication skills over clinical competence and knowledge. For novices, a medical interview with an SP they are meeting for the first time may be an overloaded situation, requiring them to focus and process large amounts of visual information, likely causing some confusion and frustration. In such physically and mentally stressful circumstances, even senior medical students with clinical training experience may, like junior students, struggle to fully execute the dual function of gaze behavior—signal transmission and information gathering—and may prioritize signal transmission, focusing more on care. In the early stages of clinical training, novices are prone to stress and may not fully execute the dual function of gaze behavior (signal transmission and information gathering). In this study, novices did not mention observing the area around the face for diagnosis or information gathering, suggesting that they could be unaware of the need to balance information gathering and signal transmission. Therefore, it is important for instructors to remind novices at the beginning of clinical training that effectively utilizing the dual function of gaze behavior—signal transmission and information gathering—can help achieve the goals of the medical interview: gathering information necessary for diagnosis and treatment as well as building relationships with patients. For instance, one approach is for experts to demonstrate their gaze behavior while verbally explaining the process of interpretation, which can enhance medical students’ visual search and symptom interpretation skills. By presenting their own gaze behavior and accompanying cognitive processes, experts can help novices deepen their interest in and understanding of gaze behavior as a means of visual information gathering. Additionally, novices should be made aware that they tend to prioritize signal transmission in their gaze behavior, highlighting the importance of training to develop a broader visual perspective, like experts, to search for abnormal findings necessary for diagnosis during the medical interview. Traditionally, eye contact between physicians and patients has been emphasized; however, this study revealed that experts use the act of looking at the patient’s eyes to build rapport and to gather the visual information necessary for diagnosis from a broader perspective.
While gazing at the patient’s eyes is important during a medical interview, experts recognized their gaze behavior as serving the dual function of signal transmission and information gathering, whereas novices recognized it only as signal transmission. These results suggest that experts’ gaze behavior beyond eye contact, and their awareness of it, hold key importance in clinical practice and medical education, offering new perspectives for research and training on gaze behavior. This study had some limitations. First, using only one SP may have accentuated individual differences between experts and novices. We accounted for individual differences among participants by using a linear mixed model with subject-level random effects. However, following the speed-interaction methodology of Hoffmann et al., we believe that conducting simulated consultations with multiple SPs of different ages, sexes, symptoms, and stages of illness would strengthen the validity of the results and enhance statistical power through repeated measurements. Therefore, verification with multiple SPs and diverse scenarios remains a challenge for future research. Second, the experts were physicians; had they been doctors from other specialties, such as surgeons, psychiatrists, or pediatricians, their gaze behavior and cognition might have differed from the current findings. However, as this study focused on the comparison between experts and novices, such differences in specialties are not believed to have directly impacted its results. Third, participants’ medical knowledge, skills, prior experiences, and personal values, which they have developed over time, may have influenced their gaze behavior toward the SP’s eyes and their cognition. Fourth, this study did not investigate the relationship between the eye-tracking data and the interview data using statistical methods, making it difficult to assert that there is a significant relationship between the two. In future research, it is necessary to incorporate quantitative methods to statistically examine the relationship between eye-tracking data and interview data. Additionally, conducting studies to predict psychological and behavioral outcomes, such as patient satisfaction and trust in physicians, based on physicians’ gaze behavior toward patients is also an important challenge. Fifth, because we adopted a mixed methods approach combining eye-tracking (quantitative research) and cued retrospective reporting (qualitative research), it was difficult to determine the sample size using a power analysis. Because a power analysis is not applicable to the diverse experiences and perspectives of participants in qualitative research, we judged a formal sample size calculation to be inappropriate. Therefore, referring to prior research, we set the sample size to 17 participants based on a comprehensive judgment. However, the inability to utilize a power analysis is a limitation. Sixth, to enhance internal validity, cued retrospective reporting was used to reinforce participants’ memory when verbalizing their awareness of gaze behavior. However, some explanations regarding their awareness of gaze behavior may have been post-rationalized. To address this, measures such as triangulation and member-checking were implemented to increase reliability, but completely eliminating post-rationalization remains a challenge.
Future studies could collect data on confounding factors, such as participants’ age, sex, and experience level, in advance and use statistical methods to control these variables to further enhance internal validity. Seventh, to enhance external validity, we used an outpatient examination room, where the experts conduct their daily practice and the novices had experienced an initial patient interview during their clinical training. This helped increase the realism of the simulated medical interview. However, since participants were aware that this was a simulation, some of their behavior may have been intentionally controlled. This is particularly true for novices, who may have experienced heightened anxiety, potentially limiting their cognition related to gathering visual information. Further, real patient interactions may involve more unpredictable emotional and behavioral responses that differ from those in a simulation. Therefore, it cannot be ruled out that their behavior differed from what would occur in an actual clinical setting. In further studies, masking techniques should be implemented to ensure that participants are unaware of the simulation, allowing for more natural behavior. Additionally, comparing and validating medical interviews conducted with real patients and with SPs should enhance external validity. Finally, cultural differences exist in the way eye contact is perceived. The experts and novices in this study were Japanese, and it cannot be ruled out that the Japanese-specific perception of eye contact avoidance as positive may have influenced gaze behavior and cognition. Thus, the generalizability of the results is limited. In addition, as we used a cross-sectional design, changes over time in the gaze behavior and cognition of experts and novices were outside its scope. Future studies should increase the sample size and the number of SPs, as well as consider the specialization of doctors, the symptoms of SPs, and their disease stage. Additionally, a longitudinal study on the factors that influence the gaze behavior of experts and novices, and the process of cognitive mastery, would provide knowledge that could be fed back to learners at different developmental stages. This study quantitatively evaluated the gaze behavior of experts and novices in a simulated medical interview and qualitatively described their cognitive processes. Experts looked at the SP’s eyes less frequently than novices; they recognized eye gaze as important for gathering diagnostic information and building rapport while also acknowledging that it can cause discomfort. While novices did not mention information gathering, they recognized that gazing at the eyes was important for building rapport. These differences in cognition were related to the differences in gaze behavior between the two groups. Although traditional approaches have emphasized gazing at the eyes, the current findings provide valuable insights into the instruction of gaze behavior beyond eye contact, enhancing novices’ learning effectiveness. Future research should longitudinally investigate the factors that influence the acquisition of gaze behavior in both experts and novices.
|
Study
|
biomedical
|
en
| 0.999996 |
PMC11694985
|
Developing countries face significant educational disparities across various rural development indicators. Rural adolescents, in particular, often occupy the most disadvantaged positions . The State of Global Learning Poverty: 2022 Update report reveals that 70% of 10-year-olds are in learning poverty, unable to read and understand a simple text. This situation can be attributed to significant disparities in access to educational resources and family educational support . Additionally, the exodus of rural elites exacerbates these challenges . The growth environment of children, especially in villages, is widely regarded as a crucial factor influencing academic performance . In light of these disparities, the role of village heads in rural areas of developing countries becomes increasingly significant. Village heads serve as the primary managers of village governance and play a vital leadership role . They are responsible not only for daily management and resource allocation but also for influencing villagers’ educational attitudes, lifestyles, and social behaviors through the exchange of information and knowledge with other community members . This transmission of information can enhance villagers’ appreciation for education and create a better learning environment for adolescents . Research has shown that the education level of village heads is directly related to their governance capabilities. Those with a strong educational background tend to implement policies, organize resources, and facilitate the dissemination of knowledge more effectively . Nonetheless, research on how the educational levels of village heads influence rural adolescents’ education through governance mechanisms remains limited. Leveraging the unique context of rural education in China, this study aims to analyze the potential impact of village heads’ educational levels on family behavior and adolescents’ academic performance. Addressing this question is crucial for several reasons. First, village heads play a key role in rural governance. Governments often rely on them to promote infrastructure development, social governance, and the provision of public services, all of which are essential steps in driving rural economic growth. Many rural areas in developing countries face governance challenges similar to those encountered in the early histories of developed nations . Existing literature highlights the significant role of village heads’ knowledge and education in promoting rural economic development, increasing agricultural output, and raising farmers’ incomes [ 6 , 7 , 9 , 12 – 16 ]. However, there is ongoing debate about whether well-educated village heads are more favorable for enhancing the supply of public goods during the resource allocation process [ 14 , 16 – 19 ]. In light of these considerations, this study investigates the impact of village heads’ educational levels on adolescents’ academic performance in rural China. It specifically examines how the educational attainment of village leaders influences the allocation of collective resources and shapes family educational expectations. This analysis seeks to elucidate the governance mechanisms at play, emphasizing the pivotal role of village heads in influencing educational outcomes. Understanding this relationship is crucial, as it addresses broader implications for rural development and social equity. 
Ultimately, this study aims to contribute to the enhancement of educational practices and policies in disadvantaged rural areas by underscoring the significance of effective leadership in promoting adolescent academic performance. We used a cross-sectional study design to examine the association between village heads’ educational levels and adolescents’ academic performance, utilizing data from the CFPS to assess how these educational levels influence academic achievements. This study focuses on adolescents and their families in rural areas of China. In this context, "rural" specifically refers to rural communities in China, which often face challenges such as insufficient educational resources, relatively slow economic development, and a lack of social services. Adolescent education is regarded as a key factor in rural development, particularly in the broader context of reducing the urban-rural income gap, improving the well-being of rural residents, and promoting sustainable development. Understanding the role of village heads and their influence on education is especially important. Village heads, as grassroots leaders in rural areas, are typically elected by local residents and are responsible for managing village affairs, promoting community participation in economic and social activities, and implementing national and local policies. The educational background of village heads is directly related to their governance capabilities, particularly regarding resource management, policy communication, and community collaboration. In this study, we specifically focus on the educational levels of village heads. Data were obtained through the official channels of the CFPS, which employs a stratified random sampling method to ensure the representativeness of the sample. The survey questionnaire, used as part of the CFPS, includes multiple-choice questions and scales that cover information on family background, educational environment, and characteristics of village heads. The CFPS is a nationwide, large-scale, multidisciplinary social tracking survey project that spans 25 provinces, municipalities, and autonomous regions in China. There is a large body of literature on scientific research using CFPS data . As of now, CFPS has released data from the years 2010, 2012, 2014, 2016, 2018, and 2020; however, only the 2010 and 2014 datasets include community/village-level data. Therefore, in accordance with the research objectives, this paper selects the 2014 CFPS data for empirical analysis. The 2014 CFPS dataset includes basic characteristics, such as the economic and social conditions of 365 villages, as well as demographic information about the village heads, along with data on the academic performance of adolescents within these villages. After retaining rural and adolescent samples and removing missing values for various variables, a total of 1,720 valid samples were included in the analysis. The educational level of village heads is the main explanatory variable in this paper, measured using the Educational Level of Village/Community Heads from the CFPS community/village questionnaire. This variable is ordinal, with values ranging from 1 to 6, corresponding to illiterate/semi-literate, primary school, middle school, high school, junior college, and undergraduate or higher education levels. This paper uses the average scores of the word group test and mathematics test from the child cognitive module in CFPS data to measure the academic performance of adolescents. 
The advantages of this approach include the uniformity of the word group test and mathematics test in the CFPS data, which ensures the same level of difficulty, and the objective nature of the test questions, which ensures consistent scoring standards. However, children with higher levels of education may achieve higher test scores. To address this, the paper standardizes test scores by converting them into z-scores within each adolescent’s current educational level, aiming to reduce differences between groups (a brief code sketch of this standardization follows at the end of this passage). It is worth noting that, to ensure the scientific validity and comparability of test scores, the CFPS team only tests individuals aged 10 years and older, while the CFPS children’s questionnaire only includes children under the age of 16. Therefore, the sample interval for adolescents in this paper is 10–15 years old. Note also that standardization is performed on the original sample of rural families, before observations with missing values are excluded; consequently, the mean and standard deviation of the academic performance variable in the empirical analysis are not exactly 0 and 1. This paper explores how the educational level of village heads affects the academic performance of adolescents within the village from the perspective of the external village environment. Specifically, we examine two channels: one is the provision of public goods in the village, and the other is the role of the village head as a role model. This paper uses the expenditure on public services (exp_public) within the village to measure the supply of public goods, which is treated as a continuous variable and log-transformed. Additionally, a poor and disorganized living environment can impair adolescents’ cognitive abilities to some extent and can model negative behavior, thus affecting academic performance. Therefore, we further discuss whether well-educated village heads can enhance adolescent academic performance through the management of environmental pollution in village living environments. The cleanliness of village roads is used as a proxy variable for the management of the village living environment. This is a categorical variable with values ranging from 1 to 7, where higher values indicate cleaner roads. In education, the role model effect manifests itself in the motivation for, and expectations of, schooling. This paper uses parents’ educational expectations for adolescents to measure the role model effect. Educational expectations are measured using the CFPS question "Desired Level of Education for Children," with values ranging from 1 to 8, corresponding to no schooling necessary, primary school, middle school, high school, junior college, bachelor’s degree, master’s degree, and doctorate. In order to control for confounders, the following variables, chosen a priori on a theoretical basis, were included in the analysis. First, the gender of the village head is included (male = 1; female = 0), as it may influence preferences for economic investment and public services. Additionally, female village heads may exhibit more intergenerational care during village governance. According to human capital theory, the age of the adolescent is a crucial factor affecting performance on the CFPS academic tests; therefore, it is included as a covariate. Furthermore, adolescents are influenced by external environments that vary by age and gender, which is why the gender of the adolescent is also controlled for.
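As a concrete illustration of the standardization described above, the sketch below z-scores each test within the adolescent's educational level and then averages the two standardized tests. The file name and column names (word_test, math_test, edu_level) are hypothetical stand-ins for the CFPS variables, not the authors' code.

```python
# Sketch: within-group z-scoring of the word-group and mathematics tests.
import pandas as pd

df = pd.read_csv("cfps_2014_children.csv")  # hypothetical analysis file

def zscore(s: pd.Series) -> pd.Series:
    # Standardize within the group that groupby().transform() passes in
    return (s - s.mean()) / s.std()

for col in ["word_test", "math_test"]:
    # One z-score per adolescent, computed within their educational level
    df[col + "_z"] = df.groupby("edu_level")[col].transform(zscore)

# Academic performance: average of the two standardized test scores
df["edu_score"] = df[["word_test_z", "math_test_z"]].mean(axis=1)
```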
At the parental level, parental education is directly linked to adolescent academic performance, while parents’ political status can provide access to better educational resources through political capital. Hence, this paper controls for the highest level of education attained by the parents (with values ranging from 1 to 6, as for the educational level of the village head) and their highest political status (party member = 1; otherwise = 0). Regarding family characteristics, family income significantly impacts adolescents’ access to educational resources. In rural areas, insurance and land assets influence family education decisions. Additionally, the income risk associated with agricultural production can restrict educational investments, while non-farm employment involving labor migration may adversely affect adolescents. Previous studies indicate that internet access can also impact the quantity and quality of education, as well as the accumulation of new human capital. Moreover, increases in the number of health shocks and the number of working family members can affect parental involvement and family education investment, thereby impacting adolescent academic performance. Consequently, this paper controls for various family characteristics, including family income (logarithmic), land assets (logarithmic), nature of family production (agricultural production = 1; otherwise = 0), internet access at home (yes = 1; no = 0), the number of family health shocks, and the number of working family members. Descriptive statistics of the variables are shown in Table 1. This study employs multiple regression analysis to examine the impact of village heads’ educational levels on adolescents’ academic performance, using mechanism analysis to investigate the underlying pathways of this influence. To address potential endogeneity, this study uses instrumental variable methods and performs a series of robustness checks to ensure the reliability and validity of the findings. In line with the research aims of this article, the following benchmark econometric model is specified (an estimation sketch is given below):

Edu_Score_i = α_1 + β_2 Edu_year_i + β_3 Control_i + ε_i    (1)

where Edu_Score represents the average of the adolescent word-group and mathematics test scores, Edu_year represents the educational level of the village head, Control represents the control variables covering the village head’s individual characteristics, adolescent characteristics, and family-level factors, and ε represents the random error term. In this study, the regression analysis model used is based on several key assumptions. First, we assume that the relationship between the educational level of village heads and adolescent academic performance is linear, meaning that each unit change in educational level leads to a corresponding change in academic performance. Additionally, we assume that the error terms in the model are independent and homoscedastic, which implies that there is no autocorrelation or heteroscedasticity, ensuring the validity and robustness of the regression coefficients. At the same time, the CFPS data ensure the randomness and representativeness of the sample selection. The sources of endogeneity primarily include reverse causality and omitted variables. Although the academic performance of adolescents does not affect the educational level of village heads, thereby eliminating the possibility of reverse causality, omitted variables remain a significant factor causing endogeneity in the baseline model.
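Before turning to the endogeneity corrections, the following is a minimal sketch of how the benchmark model in Eq (1) could be estimated by OLS. All column names are hypothetical stand-ins for the variables listed above.

```python
# Sketch: OLS estimation of Eq (1) with the full control set.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_sample.csv")  # hypothetical file of 1,720 adolescents

controls = ("head_gender + child_age + child_gender + parent_edu + parent_party"
            " + log_income + log_land + farm_hh + internet + health_shocks"
            " + n_workers")
ols = smf.ols(f"edu_score ~ edu_year + {controls}", data=df).fit(cov_type="HC1")
print(ols.params["edu_year"])  # counterpart of beta_2 in Eq (1)
```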
Therefore, this paper attempts to mitigate endogeneity issues using the instrumental variable (IV) approach. Wooldridge noted that group mean variables are commonly used to overcome the endogeneity of individual variables. The paper employs the average educational level of village heads at the county level as the instrumental variable. On the one hand, in policy terms, the appointment of village heads across different villages within the same area is subject to common policies. The average educational level of county-level village heads reflects the overall educational characteristics of village heads in the region, which is strongly correlated with the educational level of individual village heads. The literature has shown that regional characteristics, such as social capital and the level of regional governance, are correlated with the characteristics of individual leaders. For example, Moffitt notes that regional characteristic means are often effective in capturing variation in individual-level variables. Case and Deaton demonstrated the relevance of the mean variable by using district educational resource means to study individual educational performance. On the other hand, the average educational level of village heads in a county does not directly affect the academic performance of adolescents within a single village, satisfying the exclusion restriction of the instrumental variable. However, because the CFPS data may include few sampled villages for some counties, this could lead to a weak instrumental variable problem. Therefore, following the existing literature, the paper also relaxes the level of the instrumental variable to the provincial level, using the average educational level of village heads at the provincial level. To ensure that our results remain consistent across different model specifications and sample selections, thereby verifying their stability and enhancing the persuasiveness of the study, this paper conducts robustness tests from the following perspectives. First, we redefine the educational level of village heads using three binary dummy variables: college degree, high school diploma, and secondary school or below. This allows for a more nuanced representation of the educational backgrounds of village heads. Second, we assess adolescent academic performance by estimating math and language scores separately, rather than relying solely on the averaged score. Given the term limits and electoral fluctuations for village heads, changes in office may lead to variations in their educational levels. If such changes occur within a short time frame, they could introduce bias into the estimated results. To address this potential issue, the paper uses CFPS data from 2010 and re-estimates Eq (1) retaining only those samples where the educational level of the village head remained unchanged between 2010 and 2014. Finally, recognizing that adolescents’ academic performances are correlated at the village level, the paper further clusters standard errors at the village level to account for any intra-village correlation. This section will further validate Hypothesis 2 using the three-step mechanism-testing procedure known as the BK (Baron and Kenny) method. Step One involves examining the impact of the educational level of village heads on adolescent academic performance, as specified in Eq (2). Step Two assesses whether there is a significant impact of the educational level of village heads on the mechanism variables, as specified in Eq (3).
Here, Mech represents the mechanism variables. Step Three incorporates both the educational level of village heads and the mechanism variables into the model, as shown in Eq (4). If, in Eq (4), the mechanism variable (Mech) is significant and the coefficient of the village head’s educational level decreases, then the mechanism is considered valid. The equations are laid out as follows:

Edu_Score_i = α_1 + β_2 Edu_year_i + β_3 Control_i + ε_i    (2)

Mech_i = α_1 + β_2 Edu_year_i + β_3 Control_i + ε_i    (3)

Edu_Score_i = α_1 + β_2 Edu_year_i + β_3 Mech_i + β_4 Control_i + ε_i    (4)

This structured approach allows for a clear assessment of whether the proposed mechanisms (such as increased access to educational resources, improved public goods, or role model effects) mediate the relationship between the educational level of village heads and adolescent academic performance. By examining the significance and size of the coefficients in these equations, one can determine the extent to which these mechanisms play a role in influencing adolescent outcomes. To ensure the maximum protection of the rights of participants in the project, the CFPS regularly submits ethical review or continuous review applications to the Biomedical Ethics Committee of Peking University. Data collection is conducted only after receiving ethical approval. The ethical review approval number for the CFPS project is uniformly IRB00001052-14010 and remains consistent across different survey waves. We obtained informed written consent from all participants. Table 2 presents the Ordinary Least Squares (OLS) regression results regarding the influence of village heads’ educational levels on adolescents’ academic performance. Model 1 does not control for any variables, providing a straightforward analysis of the relationship between the educational level of village heads and adolescent academic performance. The results indicate a significant positive correlation at the 1% level, with a coefficient of 0.120. To mitigate the impact of omitted variables and enhance the robustness of the model, Models 2 and 3 progressively incorporate individual characteristics of adolescents, parental characteristics, and family attributes. Results from Model 2 show that, even after accounting for the individual traits of adolescents and parents (both of which significantly influence academic performance), the educational level of village heads remains positively correlated with adolescents’ academic outcomes, albeit with a reduced effect size. In Model 3, after further controlling for family characteristics, the coefficient decreases to 0.096. Nevertheless, the educational level of village heads continues to exhibit a significant positive correlation with adolescent academic performance at the 1% level. Specifically, for each unit increase in the educational level of the village head, adolescents’ academic performance increases by 0.096 standard deviations. Table 3 presents the results of the Two-Stage Least Squares (2SLS) regression analysis. Models 4 and 5 showcase the first-stage regression results, employing the average educational levels of village heads at the county and provincial levels as instrumental variables, respectively. The findings indicate that both instrumental variables are significantly positively correlated with the educational levels of village heads, thus confirming their relevance (the two-stage procedure is sketched below).
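The two-stage procedure behind these estimates can be written out explicitly. The sketch below is a bare-bones manual 2SLS for illustration only, with hypothetical variable names (iv_county stands for the county-level mean education of village heads); dedicated IV routines would additionally correct the second-stage standard errors for the generated regressor.

```python
# Sketch: manual 2SLS using the county-level mean education as instrument.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_sample.csv")  # hypothetical file
controls = "head_gender + child_age + child_gender + parent_edu + parent_party"

# First stage: the village head's education on the instrument plus controls
first = smf.ols(f"edu_year ~ iv_county + {controls}", data=df).fit()
df["edu_year_hat"] = first.fittedvalues

# Second stage: academic performance on the instrumented education level
second = smf.ols(f"edu_score ~ edu_year_hat + {controls}", data=df).fit()
print(second.params["edu_year_hat"])
```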
Furthermore, the weak instrumental variable tests reported at the bottom of Models 4 and 5 show Cragg-Donald Wald F-statistics well above the critical value of 16.38, effectively ruling out weak instruments. Models 6 and 7 provide the second-stage regression results of the 2SLS analysis. Whether using the average educational level of village heads at the county or at the provincial level as the instrumental variable, the educational level of village heads remains significantly positively correlated with adolescents’ academic performance at the 1% level. The regression results presented in Table 4, Models 8 to 10, demonstrate that village heads with a university degree have a significant positive influence on adolescents’ academic performance. The coefficient for university education is 0.131, notably higher than the coefficient of 0.073 for high school education. This indicates that higher educational attainment among village heads corresponds to a greater positive impact on adolescents’ academic performance. The results of estimating math and language scores separately are presented in Table 4, Models 11 and 12, respectively. The estimated coefficients for both variables are significantly positive at the 1% level. The results in Model 13 indicate that, even after accounting for changes in village heads, their educational level continues to significantly impact adolescents’ academic performance at the 1% level. The results after adjusting for clustered standard errors, shown in Model 14, remain robust. Table 5 presents the regression results for this section. Models 15 and 16 re-estimate the impact of the village head’s educational level on adolescent academic performance, taking into account missing values in the mechanism variables. Models 17 and 18 examine the effect of the village head’s educational level on the supply of public goods and on environmental pollution management, with both showing p-values less than 0.01. The results in Models 19 and 20 indicate that the supply of public goods and environmental pollution management in the village significantly influence adolescent academic performance; however, the coefficient for the village head’s educational level is lower compared to those in Models 15 and 16. Models 21 to 23 of Table 6 report the results of the tests for the role model effect mechanism. These models correspond to the first, second, and third steps of the three-step method, respectively. Educational expectations are significantly positively correlated with adolescent academic performance at the 1% level in Model 23, with a noticeable decrease in the coefficient of the village head’s educational level compared to Model 21. Models 24 and 25 in Table 7 report the regression results segmented by adolescent gender. The findings indicate that girls are more strongly influenced by the educational level of village heads than boys. The mechanism analysis reveals that well-educated village heads can enhance the village living environment by increasing the supply of public goods. Models 26 to 28 present the estimation results after dividing the sample into low-income, middle-income, and high-income families based on the 25th and 75th percentiles of income. For low-income families, the educational level of village heads significantly influences adolescent academic performance at the 1% level, with a coefficient of 0.160.
In high-income families, highly educated village heads also positively impact adolescent academic performance, with a coefficient of 0.084; however, this is lower than the effect observed in low-income families. Notably, in middle-income families, there is no significant correlation. In this study, we examine how the educational levels of village heads affect adolescents’ academic performance in China, a major agricultural nation, and highlight the village head’s role in rural educational governance. OLS regression analysis reveals a significant correlation between village heads’ education and adolescents’ academic achievements. This aligns with previous research, indicating that educated leaders can improve educational outcomes. For instance, Lahoti and Saho found that educated political leaders enhance their constituents’ educational results. Karadag’s meta-analysis of studies from 2008 to 2018 also shows a significant impact of educational leadership on student performance . Our study uniquely highlights the external benefits of village leaders’ educational levels on adolescents, emphasizing that village communities play a crucial role in influencing educational outcomes, in addition to schools and families. This underscores the importance of village heads as key factors in governance, with the potential to drive long-term benefits for adolescents’ academic success and the overall quality of rural education. These findings provide insights for policy discussions, emphasizing the need for greater investment in rural governance resources. Additionally, they contribute to understanding the impacts of public education policy expansion in developing countries and impoverished areas. Our results indicate that the educational level of village heads not only affects academic performance but also influences adolescents through improved community environments and elevated educational expectations within families. This highlights the significant role of the external environment in villages. Our empirical analysis aligns with existing literature; for example, Jain et al. found that educated leaders enhance the provision of roads, electricity, and power. Additionally, Zhang et al. discovered that highly educated village heads can promote local infrastructure development. Existing research also indicates that the neighborhood environment in urban areas significantly impacts children’s education [ 36 – 40 ]. Specifically, improvements in community resources, environmental quality, social networks, and public safety during childhood enhance academic performance [ 41 – 47 ]. Unlike studies that focus on urban communities, our research examines rural communities, emphasizing the critical role they play in educational outcomes. In developing countries, rural education faces severe challenges, particularly with high rates of left-behind children, which directly impacts poverty alleviation and social equity. Addressing these issues is vital for fostering equitable educational opportunities and improving overall community well-being. The strong positive correlation between parents’ educational expectations and adolescents’ academic performance, particularly evident in Model 23, underscores the transformative power of community leadership as a role model. This observation aligns with literature highlighting the influence of community leaders on parental attitudes toward education. For instance, Beaman et al. 
demonstrated that female leadership positively impacts girls’ aspirations and educational attainment by comparing villages in India with randomly assigned female leaders. They argue that female leaders primarily effect this change by serving as role models, thus elevating both girls’ self-expectations and parental expectations for their daughters. Our findings complement existing research by showing that it is not only the educational levels of school leaders that significantly impact adolescents’ educational outcomes , but that community leadership also plays a crucial role. These findings provide actionable insights for policymakers, indicating that investing in the education of local leaders can significantly enhance community education standards—not just through direct policy enforcement, but also by serving as effective role models and setting positive norms. Finally, our empirical results indicate that the educational levels of village heads have heterogeneous effects on adolescents’ academic performance. The results in Table 7 reveal gender-based differences in this influence, showing a more pronounced impact on girls than on boys. This differential effect may stem from the village heads’ role in improving the living environment through enhanced public goods provision, which tends to benefit female adolescents more significantly. This finding aligns with research indicating that girls are more sensitive to improvements in their immediate environments . The heterogeneity analysis across different income groups provides nuanced insights. For low-income families, the educational level of village heads positively impacts adolescents’ academic performance, highlighting the critical role that village governance can play in compensating for a lack of educational resources in economically disadvantaged settings. Similar to the findings of Zhang et al. , highly educated village heads can alleviate poverty in the community. Interestingly, the analysis shows no significant correlation for middle-income families, possibly indicating a threshold effect where improvements in governance and public goods provision do not substantially alter educational trajectories, given that these families have adequate resources and a lesser reliance on public goods. These findings underscore the importance of targeted policy interventions. Educational policies and community development initiatives should be tailored to address the specific needs and existing resources of different demographic groups. The results of this study are based on stratified sampling data from rural villages in China. While we draw several conclusions, it is important to acknowledge the limitations due to data constraints, which prevent a comprehensive capture of all possible influencing factors. Notably, in rural China, village heads are elected by local residents, and their authority in the community may outweigh their educational leadership. Future research should carefully consider the impacts of cultural and social structures in this context. Our study reveals that the educational level of village heads significantly influences rural economic growth and social governance, with intergenerational effects on human capital accumulation. Village heads with higher educational attainment substantially enhance adolescents’ academic performance in their communities through improved public goods provision and by serving as role models that elevate parental educational expectations. 
These findings suggest that investing in the education of village leaders is a crucial strategy for promoting sustainable rural education and achieving broader revitalization goals. Additionally, future research should focus on the impact of village heads with higher educational qualifications, such as those from the ’College Student Village Official’ program (a national initiative that appoints university graduates as village leaders), as well as the effects of different academic backgrounds on adolescents’ academic performance. Furthermore, emphasizing the long-term influence of these leaders could provide deeper insights into the benefits of effective educational leadership in rural communities.
|
Other
|
other
|
en
| 0.999997 |
PMC11694989
|
Neisseria gonorrhoeae and Neisseria meningitidis are closely related bacteria that cause a significant global burden of disease. While vaccines are licensed and routinely used for N. meningitidis, no vaccine is licensed for N. gonorrhoeae. In addition, control of gonorrhoea is becoming increasingly difficult due to widespread antimicrobial resistance (AMR). But there is hope: meningococcal vaccines potentially offer some cross protection against gonorrhoea. Recent observations and retrospective studies from Cuba, New Zealand, Canada, the USA [ 6 – 8 ], and Australia reported between 31% and 59% reductions in incidence rates of gonorrhoea in those vaccinated with meningococcal B (MenB) outer membrane vesicle (OMV) containing vaccines. This is because minor antigens in the OMV and a Neisseria heparin binding protein in other MenB vaccines are also surface exposed in N. gonorrhoeae. The UK introduced a MenB vaccine (Bexsero, GSK) into the national infant immunization schedule from 2015. Cost-effectiveness of the MenB vaccine against meningococcal disease in adolescents in the UK is borderline, given the low impact on carriage acquisition (and thus the lack of indirect protection), the relatively low incidence of N. meningitidis group B infections, and the cost of the vaccine; hence immunization in the UK has been targeted at infants and direct protection. Any effect of this on the incidence of gonococcal infections will take another 10 to 15 years to materialize, if the effect, diluted by waning immunity, is noticeable at all. Given the uncertainty around the effectiveness and duration of this potential vaccine, and around the best vaccination age and population, mathematical models can be a useful tool to simulate different scenarios and strategies. They can explore the impact of vaccines with different characteristics on long-term gonorrhoea incidence and the level of AMR. If linked with health economics, these models can also advise on the cost-effectiveness of different vaccination strategies. And while there has been a growing number of modeling studies on the use of MenB vaccines against gonorrhoea, especially since the NZ study in 2017, there has not been a systematic review of the modeling literature so far. Here we searched a range of scientific and grey literature databases and summarized results to give an overview of the different techniques that have been used to model MenB vaccination scenarios for Neisseria infections, both gonococcal and meningococcal. A secondary aim was to summarize how the spread of AMR in Neisseria sp. was modeled and what impact vaccination campaigns could have on the spread of AMR. This review also seeks to identify existing research gaps in this field. This systematic review was conducted and written up following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 2020 guidelines, and a PRISMA checklist is available in S4 File. The review protocol was not registered prospectively; it is, however, available in S5 File. The inclusion and exclusion criteria were developed following the Population, Intervention, Comparison, Outcomes, and Study (PICOS) framework; the specific inclusion and exclusion criteria we applied are itemized in Table 1. We searched the journal databases MEDLINE and EMBASE via OVID, PubMed, and Scopus. We searched for preprint articles on OSF Preprints (incl. arXiv and bioRxiv), and on medRxiv via a Google Scholar search.
Finally, we also searched for grey literature, including conference publications, technical reports, and dissertations, on the repositories base-search.net, British Library, and OpenGrey. The search was conducted on 30th June 2023 for all databases, and the search strings used for databases and repositories are available in the protocol in S5 File. A list of all identified studies is found in S7 File. All search results were screened to eliminate duplicate entries, including preprints that were later published as journal articles. After deduplication, titles and abstracts were screened against our inclusion and exclusion criteria as defined above. If the screening of the title and abstract was inconclusive, the whole paper was screened to make sure the selection criteria could be applied correctly. This screening process was done by two authors independently and the results were compared. If the two authors came to a different conclusion for a particular study, the study’s abstract and full text were discussed in detail until an agreement was reached. If there was no agreement, a third author acted as reviewer to arbitrate a final decision. A qualitative synthesis of the included studies was used to organize the modeling studies. An extraction form was developed based on the following categories: study title, infectious disease system, model type, model formulation/class, transmission route, methodology, validation technique, intervention target, type of data used, and health economic analysis. Data were extracted and organized by three different authors, depending on their expertise. The data extraction template is available in S2 File. A descriptive analysis of the data generated from the systematic search, in line with the study protocol, is reported using flow charts (to illustrate included and excluded publications and their sources) and tables (to present study, model, and setting characteristics). The main model assumptions, including model structure, setting, vaccines, AMR, and health economics, are summarized for meningococcal and gonococcal studies separately. We used the standardized survey from Lo et al. to assess the quality of evidence and the studies’ usefulness for decision making. The modeling studies were assessed against a set of checks, listed in S3 File; we assigned a “+” for each category if a study ticked all or most of that category’s checks. All studies were included in the review regardless of their validity rating. We found a total of 479 documents with online search engines, and an additional 2 documents were identified through further reading. Of the 479 documents, 306 were identified as duplicates, either having been found by multiple search engines or being preprints or thesis chapters that were later published as journal articles. A further 4 documents were not retrievable in English and were thus excluded. The remaining 169 documents were then checked against the inclusion and exclusion criteria, and a total of 52 documents were eligible for full-text review. See Fig 1 for the process and reasons for exclusion. The 52 included documents comprise 48 journal articles, 1 dissertation, 2 preprints, and 1 conference article. After some initial modeling of AMR in gonococci (GC), the importance of the rise in AMR, and of vaccination strategies as a possible solution, has only been analyzed from 2012 onwards (see Fig 2). In the 10-year period from 2013 to 2022, an average of 4 articles was published per year.
In addition, three topical dissertations and 25 conference abstracts were found for this period. Study characteristics are presented in Table 2. The 52 included modeling studies described either gonococcal infections (32) or meningococcal infections (20), but not both (see S1 Fig in S1 File). The meningococcal (MC) models are either deterministic differential equation models or stochastic Markov models. Ordinary differential equation (ODE) models are widely used for modeling the dynamic transmission of diseases because they are simpler to implement and interpret, with a single deterministic outcome. Markov models account for random variation in their inputs and yield outcome probabilities, thus illustrating the uncertainty in the process. The work on meningococcal serogroup C by Trotter et al. has often been cited by studies of both model types (see Fig 3), and Trotter’s group has since used both differential equation and Markov models to analyze MC transmission. Nearly all models simulated dynamic disease transmission that can reflect non-linear effects; only some models assume a constant force of infection year on year and thus lack dynamic transmission. As all studies included some sort of vaccination, they mostly followed a SIRS (susceptible-infected-recovered/vaccinated-susceptible) structure, in which an immune state is reached after infection or vaccination (a sketch of such a structure is given below). With immunity waning over time, people return to the susceptible population. In some cases, an additional non-symptomatic but infectious exposed state was used [ 23 – 25 ]; other models used additional infection classes for multiple meningococcal strains. All of the models either split the population into age classes and used age-dependent contact matrices for bacterial transmission (e.g. ) to account for the age heterogeneity in meningococcal incidence, or they only looked at a single specific age group. Almost all studies focus on developed countries only (Western Europe, the USA and Canada, Australia and New Zealand), with the exception of one study set in Chile. This is in part a result of our inclusion criteria, as many studies model transmission of meningococcal serogroups A, C, or W in the African meningitis belt. However, we excluded these studies because of their different setting and the effective vaccination strategies already in place there. The MC studies looked at younger age groups, such as infants, adolescents, and college students, or analyzed how vaccination programmes in these younger age groups affected the general population. The models were all parameterized with national population and infection data to different degrees. Most calibrated their model parameters to point estimates or time series of national incidence or prevalence [ 13 , 20 , 28 , 31 – 33 ]; some models just used point estimates of incidence as starting values. Model validation with data that were not used for calibration was only performed in two MC studies. Multiple studies have modeled the impact of vaccines against meningococcal serogroup B infections, especially since the vaccines’ approval for use, which started in the early 2010s for different countries and age groups. Vaccine effectiveness was estimated by vaccine efficacy alone or in combination with vaccine strain coverage and vaccine uptake. Vaccine efficacy against disease was high, at 78 to 95% [ 21 , 26 , 27 , 32 , 35 – 37 ], and lower against carriage, at 20 to 30% (with explored ranges up to 60%).
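To make the SIRS structure described above concrete, here is a minimal sketch of such a compartmental model with vaccination of susceptibles and waning protection. All parameter values are illustrative and are not taken from any of the reviewed studies.

```python
# Sketch: SIRS-type model; vaccination moves S to R, protection wanes R to S.
from scipy.integrate import solve_ivp

beta = 0.3             # transmission rate (per day, illustrative)
gamma = 0.1            # recovery rate (per day)
omega = 1 / (5 * 365)  # waning of protection (roughly 5 years)
nu = 0.001             # vaccination rate of susceptibles (per day)

def sirs(t, y):
    s, i, r = y
    return [-beta * s * i - nu * s + omega * r,
            beta * s * i - gamma * i,
            gamma * i + nu * s - omega * r]

sol = solve_ivp(sirs, (0, 20 * 365), [0.99, 0.01, 0.0])
print(sol.y[1, -1])  # infectious prevalence at the end of the 20-year horizon
```

Raising the vaccination rate nu, or slowing waning via a smaller omega, lowers the long-run prevalence, which is the herd-effect behavior that the dynamic-transmission studies report.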
The strain coverage was assumed to be between 66 and 90%, and uptake or vaccine coverage decreased with age, from around 90% in infants, to 60 to 75% in school children, to 30% in adolescents outside school. The duration of vaccine protection was assumed to be age dependent. Infants up to 1 year of age receiving three or four doses were assumed to be protected for only 18 to 38 months [ 23 , 26 – 29 , 31 , 34 , 37 ]. Adolescents with one or two doses around the age of 14 were assumed to have a longer protection of 8 to 10 years. However, there were studies assuming longer protection for younger, or shorter protection for older, age groups. Waning of protection was modeled either as a constant annual waning (e.g., in ODE-type models), or as a constant protection level during the protective period followed by a waning process. In the short term (up to about five years), the vaccine can lead to a reduction of cases of 5 to 27%, with higher reductions in certain populations such as small children (40–46%) or college students (63%). In the longer term (10 years to lifelong), the herd effect of vaccination can lead to a higher reduction of cases, of around 25 to 60%. Some models, however, do not include dynamic disease transmission and thus no vaccination herd effect, which leads to lower long-term reductions. Meningococcal AMR is still very rare for first-line antibiotics, despite sporadic reports of reduced susceptibility to cefotaxime, ceftriaxone or rifampicin and increasing resistance to penicillin globally. Limited effort seems to have been dedicated to this problem, and only a single modeling study looking into AMR (penicillin G resistance) in N. meningitidis was found. S1 Table in S1 File summarizes the evidence on the 11 studies we identified which investigated the cost-effectiveness of meningococcal serogroup B vaccination. Ten of the studies had very high incremental cost-effectiveness ratios (>£100,000 per QALY), suggesting that, in most countries, an MC vaccine would struggle to be deemed cost-effective. Methodological approaches varied considerably, although key consistencies were the adoption of an effectively lifetime time horizon, the modeling of multiple alternative vaccination strategies, and the inclusion of the costs and harms of long-term sequelae associated with MC. Key drivers of cost-effectiveness were the prevalence of disease, the cost of the vaccine, the type of sequelae included, the use of QALY-scaling factors, and the discount rate. In most studies, the vaccine price would have had to be very low (<£10) to be considered cost-effective. GC studies used a greater variety of modeling approaches than MC studies, ranging from ordinary and partial differential equation systems, to individual- or population-based Markov models, to network models. Different approaches, often novel rather than built on previously published models, were used to account for transmission in populations with non-random mixing. In general, all studies used an SIS approach, with infected individuals returning to the susceptible population after treatment or through natural clearance. As it has been shown that recovered individuals can be re-infected after short periods, the studies did not account for an immune state, with the exception of the work by Duan et al., who used a very short immune period of only 3.5 days. The infected state was divided into symptomatic and asymptomatic carriers in all studies.
Site-specific infection dynamics are especially important for men who have sex with men (MSM). Here, gonococcal infections can occur at three sites, the pharynx, the urethra, and the rectum, and a few studies take this into account [ 42 – 46 ]. In this case, models had to include site-specific transmission routes and infection rates, and calibrating transmission parameters to site-specific prevalence showed that the risk of infection is higher for the receiving partner. Other studies on MSM modeled GC infections in individuals rather than anatomical sites, but exhibited similar dynamics to the site-stratified ones (compare e.g. and ). GC modeling studies on vaccination or AMR focused only on developed countries (Western Europe, the USA and Canada, Australia). Here, they often concentrated on certain risk groups, such as MSM (14), heterosexuals (9), sex workers (1) or indigenous people (1). Only three GC studies [ 47 – 49 ] looked at both the MSM and the heterosexual population. However, they used separate models without any spillover between the two populations. In addition, the populations in GC studies were often stratified into high- and low-risk groups by their sexual activity, following the work on core groups in gonorrhoea transmission. While early works were based on purely theoretical populations, more recent models were parameterized similarly to the MC studies, mostly to national population and infection data [ 49 , 52 – 60 ]; some used averaged European and global prevalence estimates. Model validation with data that were not used for calibration was performed in three GC studies. All AMR studies identified in this search were for gonococcal infections, reflecting the change of first-line antibiotics from penicillin before the 1990s, to ciprofloxacin in the 1990s, cefixime in the 2000s, azithromycin in the 2010s, and ceftriaxone, which is currently used in the UK. Here, the spread of AMR was modeled using a susceptible and a resistant GC strain for infection, multiple strains with different degrees of antibacterial susceptibility, or multiple strains, each resistant to a different antibiotic (a two-strain example is sketched below). Model structures with and without co-infection by multiple such strains were compared by Turner and Garnett. AMR cases were either imported or arose through treatment. Without a substantial fitness cost associated with AMR, the resistant strains outperformed the susceptible ones in all studies, leading to the spread of AMR. The models show that the balance between using more antibiotics to treat people now and using fewer antibiotics to prolong the drugs’ useful lifespan will be key in the future. Without treating less, the lifespan of antibiotics could be extended with AMR point-of-care testing, combination therapy, different screening frequencies, or vaccines, or by focusing screening on core groups. We found ten studies modeling GC vaccinations. All ten studies looked at hypothetical vaccine benefits: they all screened the potential ranges of uptake, effectiveness (10 to 100%), and protection duration (1 to 20 years) in different scenarios or with sensitivity analyses. The population-level impact accordingly depended on the given scenario. The reduction of cases was around 10 to 40% for the heterosexual population and 45 to 66% for MSM. As for the MC models, the longer the chosen study horizon, the higher the reduction of cases due to vaccination herd effects. S2 Table in S1 File summarizes the modeling studies for gonococcal infection.
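As an illustration of the susceptible/resistant two-strain structure described above, the sketch below lets a treatable sensitive strain compete with a resistant strain that only clears naturally, with a transmission fitness cost c for resistance. Parameter values are purely illustrative.

```python
# Sketch: SIS competition between antibiotic-sensitive and resistant GC strains.
from scipy.integrate import solve_ivp

beta = 1.5         # transmission rate (per month, illustrative)
nat_clear = 1 / 6  # natural clearance (about 6 months)
treat = 1.0        # treatment rate, effective against the sensitive strain only
c = 0.05           # transmission fitness cost of resistance

def two_strain(t, y):
    s, i_sens, i_res = y
    return [-beta * s * i_sens - beta * (1 - c) * s * i_res
            + (nat_clear + treat) * i_sens + nat_clear * i_res,
            beta * s * i_sens - (nat_clear + treat) * i_sens,
            beta * (1 - c) * s * i_res - nat_clear * i_res]

sol = solve_ivp(two_strain, (0, 120), [0.98, 0.01, 0.01])
res_frac = sol.y[2, -1] / (sol.y[1, -1] + sol.y[2, -1])
print(res_frac)  # share of prevalent infections that are resistant at 10 years
```

With a small cost c, the resistant strain's advantage of escaping treatment lets it take over, mirroring the reviewed finding that resistance spreads unless its fitness cost is substantial.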
There were three studies focusing on gonococcal infections, the first two of which considered antimicrobial resistance. Only two studies focused on vaccination to prevent gonococcal infection. Régnier & Huels used a Markov-based model to explore the cost-effectiveness of a hypothetical vaccine with differing effectiveness rates when vaccinating adolescents in the USA. They model men and women separately. Long-term sequelae associated with a gonococcal infection for women include ectopic pregnancy, chronic pelvic pain, and infertility, all of which have an impact on patient utility values and health care costs. For men, sequelae include urethritis, epididymitis, and an increase in the risk of HIV infection. Vaccination has a substantial impact on infections, health outcomes, and costs, which all feed into the value-based price derived for the hypothetical vaccine and imply that a potential vaccine is likely to be cost-effective. Important parameters driving the vaccine value related to the reduction in the risk of HIV infection associated with fewer infections and to the reduction in the number of sequelae occurring in women. Whittles et al. use an ODE model to explore the impact of vaccination on MSM in England. They model four different scenarios: vaccination before entry (VbE), vaccination on diagnosis (VoD), vaccination on attendance (VoA), and vaccination according to risk (VaR). They find the hybrid strategy of VaR to be the most cost-effective, leading to an overall reduction in costs (at £18 per dose) and a reduction in cases versus no vaccination. At a vaccine price of £85, VaR would likely be cost-effective at a threshold of £30,000 per QALY. Whittles et al. do not model infection in women or its associated sequelae, nor do they model long-term sequelae in MSM. We performed a quality evaluation of all 52 included modeling studies. Of these studies, 30 were rated positive in at least seven of the nine categories (while some of these still failed to provide essentials such as a mathematical description of the model). All studies were judged to have a reasonable model structure and assumptions, with sufficient description of the transmission processes, except for one study that referred to other work for the description (51/52). A total of 15 studies failed to provide a full mathematical description of the model, with the rest giving equations either in the methods section or in the supplementary material. Most studies (44/52) performed some sort of model calibration with varying degrees of detail. 41/52 studies tested the influence of parameters in parametric sensitivity analyses, uncertainty analyses, or both. A structural sensitivity analysis, in which different model types or model structures were compared, was only performed in 7/52 studies. Only 5/42 of the models validated their results with internal or external data, but all were judged to have face validity. In 31/52 studies, the authors declared some sort of conflict of interest, ranging from minor funding received by one or two authors (8/52) to all authors working for a vaccine-producing company (8/52). The incidence of gonorrhoea increased year on year in Europe and the US before the SARS-CoV-2 pandemic, and current numbers are the highest in decades [ 75 – 77 ]. While a large proportion of these infections show resistance to specific antibiotics, reduced susceptibility to the first-line antibiotic ceftriaxone is still relatively rare.
More worryingly, multi-drug resistant (MDR) and extensively drug resistant (XDR) gonorrhoea are fast emerging in other parts of the world and can spread after importation. The ability of N. gonorrhoeae to develop resistance to antibiotics led to relatively early modeling efforts in this field, e.g., analyses of GC resistance to penicillin. However, while this field is gaining more and more traction now that gonorrhoea has developed resistance to all classes of antibiotics recommended for treatment, modeling of gonococcal AMR is still identified as one of the most understudied AMR topics given its urgency. In general, gonorrhoea modeling has been used to understand transmission dynamics and treatment scenarios, largely influenced by the work of Hethcote and Yorke, who introduced core groups with a higher rate of partner change. However, since the recent observations of MenB vaccine effectiveness against gonorrhoea, there is a growing number of modeling papers in this field too. This is comparable with vaccination modeling for N. meningitidis: previously, models have been used to inform public health actions and vaccination campaigns against serogroups A, C and W in the African meningitis belt (e.g. ) and other parts of the world (e.g. ). Serogroup B, however, has only become the subject of mathematical modeling in more recent years, especially with the introduction of specific vaccines like MeNZB, Trumenba and Bexsero. This led to the modeling of serogroup B meningococcal disease to inform vaccination policies and programmes in the 2010s, and the same is slowly starting with gonorrhoea, where models are used to analyze strategies for selected populations at risk. Given this recent increase in modeling approaches to Neisseria infections and their implications for treatment and vaccine strategies, it was necessary to review the literature so that future modeling studies have an overview of the assumptions used, the model approaches, and the research gaps. In this review, we found a broad range of model types in use, with deterministic dynamical models and stochastic Markov models dominating for both MC and GC infections. While both infections were modeled following a susceptible-infected-recovered/vaccinated-susceptible transmission cycle, MC models stratified the population by age groups whereas GC models stratified by sexual-activity risk groups, each with corresponding contact matrices for the respective Neisseria transmission. In 2019, the WHO convened a multidisciplinary international group of experts to understand the potential health, economic and social value of gonococcal vaccines and to describe an ideal set of product characteristics for such a vaccine. The group identified that the overall strategic aim for a vaccine should be to: a) reduce the negative impact of infection on health outcomes and b) reduce the threat of gonococcal AMR. In the short term, a reduction in the negative health consequences was deemed to be the priority, with a particular focus on reducing the impact on women, who tend to have the most severe sequelae: an infection can cause pelvic inflammatory disease, infertility, chronic pelvic pain, and ectopic pregnancy. The health economics perspective sought to focus in particular detail on the negative consequences associated with infection and how these had been conceptualized in the existing modeling literature.
We found only two studies that investigated the cost-effectiveness of vaccination to prevent gonococcal infection, but eleven studies that investigated the cost-effectiveness of vaccination against meningococcal disease. In general, the meningococcal studies went to great lengths to integrate the consequential impact of infection on sequelae and the knock-on patient outcomes and costs. However, despite the inclusion of these potential sources of value, vaccination for MC was often unlikely to be cost-effective because it required significant investment in vaccination to prevent very serious but very rare events. By contrast, the sequelae incorporated in the GC models were generally limited, unjustifiably so, particularly in terms of the impact of GC infection on women, which can be considerable. Yet both studies did demonstrate the potential for a cost-effective vaccine, even when only partially incorporating the value of a potential vaccine. The attempts to model vaccination strategies against GC show that empirical studies in the lab or clinical trials are necessary to get a better picture of the characteristics of MenB vaccines against GC infections. Randomized controlled clinical trials on the effectiveness of the vaccine are currently under way in the US and Thailand for heterosexuals, and in Hong Kong and Australia for MSM. In addition, another gonorrhoea vaccine was recently fast-tracked in the US and is now entering a phase 2 trial. More detailed information on vaccine characteristics will in turn help inform cost-effectiveness analyses looking at the general population or certain risk groups. These can then be used to inform public health action and policies, comparable to how cost-effectiveness studies of MenB vaccination against meningococcal disease have shaped vaccination strategies in several countries. In fact, following the analyses of Whittles et al. and Looker et al., the British Joint Committee on Vaccination and Immunisation (JCVI) has now recommended the use of Bexsero for those who are at greatest risk of gonococcal infection in the UK. As our review has shown, there is already a good number of options for model structures and assumptions available to study Neisseria infections. Not relying on a single approach is very useful for checking the influence of assumptions. Especially for GC, many models have been developed independently (albeit mostly influenced by early work on sexual networks), which has contributed to the diversity of approaches. Nevertheless, we identified four key gaps in the modeling work on strategies against Neisseria infections and AMR development. While we did look at different risk groups like MSM and the heterosexual population for our review, we did not specifically check for vaccination impacts on different age groups. Studies suggest that the duration of vaccine protection, in particular, differs significantly by age, so this could be examined in a meta-analysis for either meningococcal or gonococcal infections. Using multiple grey literature databases and relatively broad search terms in the screening process yielded a wide variety of Neisseria modeling approaches. This inevitably also led to the inclusion of studies that were not directly aligned with our question but still relevant to the field. The assessment tool developed by Lo et al. should thus be seen as an indicator of how useful the studies are for our purpose, rather than of their quality.
Nevertheless, the assessment emphasizes that clear documentation and the inclusion of uncertainty analyses should be standard when modeling infectious disease scenarios that are intended to influence public health action. A final limitation of our systematic review is that its protocol was not registered prospectively with PROSPERO. The literature search had already started before we considered registration, so prospective registration was no longer possible; it should nevertheless have been done to allow comparison of protocol changes and to avoid possible duplication of effort. For transparency, we have uploaded the original draft of the protocol in S6 File .

In conclusion, George Box’s aphorism, ’All models are wrong, but some are useful,’ aptly frames the two disease areas studied in this literature review. For MC, we found that most models investigating the cost-effectiveness of vaccination went to great lengths to incorporate the potential value of avoiding the debilitating, life-limiting, and devastating sequelae of the disease. These models often included detailed considerations of the quality-of-life impacts during and after the acute disease episode, long-term health consequences such as scarring, paralysis, and neurological disorders, and even indirect costs such as legal claims against healthcare systems. Yet, despite these comprehensive analyses, the upfront cost of mass vaccination against MC was often not deemed cost-effective due to the relatively low incidence of these severe outcomes. By contrast, our review found that for GC, existing models predominantly focus on high-risk populations, such as men who have sex with men or heterosexual men. This is despite the WHO’s expert group in 2019 emphasizing the need for a gonococcal vaccine to primarily reduce the health consequences of infection, especially in women, who are disproportionately affected. Many women with gonorrhoea are asymptomatic, potentially leading to chronic infections without treatment, resulting in pelvic inflammatory disease, infertility, chronic pelvic pain, and ectopic pregnancy. This oversight in modeling represents a significant limitation in current strategies, failing to fully capture the value of vaccination approaches. Yet, in the few studies that do investigate the cost-effectiveness of GC vaccination, even without adequately considering the impact on women, vaccination still appears to be potentially cost-effective. Future modeling studies should always seek to fully characterize the potential for spillovers across populations, such as into women, where the short- and long-term cost-consequences are likely to be an important part of the whole decision-making picture.

The future for vaccination against Neisseria infections nonetheless looks promising: for MC, a pentavalent MenABCWY vaccine for individuals aged 10 to 25 has recently been approved in the USA , and could increase MC vaccination coverage for all five serogroups. This vaccine uses Trumenba for the B component, so its effectiveness against gonorrhoea infection is as yet unclear. Another pentavalent vaccine currently in phase III clinical trials uses Bexsero for the B component and could thus also offer some protection against gonorrhoea should it be approved. That said, vaccines specifically against GC are also under development, including a vaccine currently being developed by INTRAVACC , and the aforementioned vaccine by GSK, which in turn might offer some level of cross-protection against MenB.
PMC11694990
Approximately 900 000 fetuses die as a result of intrapartum hypoxia each year, and it is associated with another 1.1 million stillbirths during childbirth . Intrapartum fetal hypoxia is characterized by a deficit of O 2 caused by a pathological change in the components of the placenta. This situation leads to an accumulation of CO 2 , causing fetal acidemia and subsequently a lower pH in the fetal blood vessels . Early detection of babies at risk of fetal acidemia may decrease the chance of a later diagnosis of cerebral palsy or neonatal encephalopathy, or even of death . In this sense, adequate obstetric intervention depends on early diagnosis, which may be crucial in preventing fetal damage . In fetal metabolic acidosis, the umbilical artery pH is below 7.00 and the base deficit in the extracellular fluid is above 12 mmol/l [ 4 – 6 ]. However, there are already reports of undesired outcomes when the pH is below 7.05 and the base deficit in the extracellular fluid is above 10 mmol/l . According to , acidosis is considered moderate to severe when the pH is below 7.15.

Fetal heart rate (FHR) is often used as an indicator of fetal well-being, and its variation reflects the influence of the fetal autonomic nervous system and its sympathetic and parasympathetic components . According to , health can be interpreted in three domains—simple, complex and chaotic. Specifically, FHR falls between the complex and chaotic domains, reflecting the low agreement and certainty among experts . To mitigate this disagreement, auxiliary automatic systems based on linear characteristics have been developed over the last few years, which have the potential to alert to possible pathological cases, providing specialists with objective and automatic fetal health monitoring . The fact that the fetal heart rate signal is complex and sometimes behaves unpredictably makes it relevant to study other approaches that may be more appropriate, namely non-linear compression-based methods.

Regarding simultaneous maternal-fetal heart rate analysis, Khandoker et al. evaluated the direction and strength of maternal-fetal heart rate coupling in healthy and pathological cases . Regarding the direction, they concluded that in pathological cases, compared to healthy cases, there is a lower influence of FHR on MHR and a greater influence of MHR on FHR. The coupling strength must be evaluated taking into account the two directions mentioned above. For the influence of FHR on MHR, the coupling strength is much greater in pathological cases than in normal ones; for the influence of MHR on FHR, the coupling strength for healthy cases is lower than for pathological cases . In this sense, interpreting the two signals simultaneously seems to provide additional information for the detection of pathological cases compared to FHR analysis alone.

The main aim of this study was to assess which compression indices, applied to FHR and MHR, perform better in discriminating acidemic from non-acidemic fetuses in the intrapartum period. To this end, a real single-center dataset of simultaneous intrapartum FHR and MHR signals was explored, and bivariate as well as univariate compression indices were computed. Considering the personal and social impact of fetal pathologies, the Omniview SisPorto ® system was developed to monitor and analyse the data needed for fetal evaluation .
This system is based on guidelines provided by the International Federation of Gynaecology and Obstetrics for fetal monitoring and incorporates linear features such as baseline estimation, identification of accelerations and decelerations, and assessment of long- and short-term variability. In addition, it provides real-time visual and audible alerts with different color codes regarding fetal health state .

Non-linear analysis of biological time series offers new possibilities to improve computer-aided diagnostic systems . Gonçalves et al. calculated several linear and non-linear indices, including the average of the FHR, the very low, low, and high spectral frequencies, and entropies (Approximate Entropy—ApEn—and Sample Entropy—SampEn), and concluded that labor progression was associated with a significant increase in the linear frequency-domain indices, while non-linear indices decreased significantly . A study carried out by Spilka et al. analyzed 217 FHR signals, 94 of which corresponded to fetuses with acidemia. The team showed that adding non-linear methods to conventional features provided better accuracy in classifying healthy vs pathological fetuses. Sensitivity of 73.4%, specificity of 76.3% and an F-measure of 71.9% were obtained using linear and non-linear methods . In 2013, Henriques et al. reported that entropy and compression approaches make it possible to quantify different complexities of a system. To distinguish between hypoxic and healthy fetuses, the complexity in the initial and final segments of the FHR signal during the last hour of labor was calculated, considering segments of 5 and 10 minutes. It was concluded that both entropies and compressors can distinguish the two groups and that fetuses with lower umbilical artery blood pH have significantly lower entropy and compression indices, more markedly in the final segments . Costa et al. studied how complexity varies between two groups of fetuses—acidemic and non-acidemic—using multiscale entropy indices. They showed that the complexity of FHR signals in the last two hours of labor was significantly higher in non-acidemic than in acidemic fetuses and that, when the last 30 minutes before delivery were removed from the analysis, the complexity remained lower for the acidemic group. These results support the hypothesis that an altered, i.e., less complex, temporal structure of FHR baseline fluctuations on multiple time scales may be a marker of acidemia . More recently, Marques et al. analyzed the non-linear complexity in two databases, intrapartum and antepartum, with previously identified normal and pathological groups, through the calculation of ApEn and SampEn. When the whole examination is considered, the results showed low entropy values with no evident difference between the non-pathological and pathological groups. Since this analysis did not reveal the intended result, the authors decided to process the time series in 5-minute windows, computing one parameter per window. Changes were detected during specific long-term events, which allows us to infer that entropy can be considered a first-level indicator for accelerations, decelerations and also for other physiological behaviors, such as sinusoidal FHR . In 2016, Gonçalves et al.
showed the potential of bivariate analysis and carried out an exploratory study to investigate how maternal and fetal heart rate variability changed during childbirth and how well they could detect newborn acidemia. In that study, linear and non-linear indices for FHR and MHR were calculated separately and simultaneously, in a database of 51 pregnancies, where each signal was registered for 2 hours during labor (the same database used in this work). There was a significant increase in most linear indices of FHR and MHR and a decrease in entropy indices with labor progress. FHR alone and in conjunction with MHR (FHR-MHR) demonstrated the highest auROC values for fetal acidemia prediction, with 0.76 and 0.88 for umbilical arterial blood (UAB) pH thresholds of 7.20 and 7.15, respectively. The inclusion of the MHR in the bivariate analyses yielded a sensitivity and specificity of approximately 100% and 89.1%, respectively . Moreover, there are other studies with different purposes that encourage analysis of maternal-fetal heart rate coupling [ 18 – 20 ]. Non-linear indices have not been fully explored in the diagnosis of intrapartum fetal asphyxia, so new studies, such as the one presented in this work, should explore their potential.

The system used to collect the data is the STAN ® 31 fetal monitor (Neoventa Medical, Gothenburg, Sweden). There are many STAN configurations, and the one used to collect these signals has several elements, namely two sockets for heart rate acquisition, an electrocardiography sensor connected to three electrodes on the mother’s chest and, finally, an ultrasound sensor placed on the mother’s abdomen. Usually, this is connected to Omniview SisPorto ® , an intrapartum monitoring system based on the linear characteristics of the heart rate signal, which assists in the diagnosis of possible pathologies through the generation of alerts . It is important to point out that the signals analyzed in this work had been collected previously, within the scope of another project . The data used throughout this work are completely anonymized. The collection was carried out following the principles of the Declaration of Helsinki, it was approved by the local ethics committee, and all women gave their consent to participate .

The database contains heart rate biosignals from 61 participating women. Table 1 presents the summarized maternal characteristics of the study sample, namely age, height, weight, systolic and diastolic blood pressure and gestational age. An analysis of the maternal characteristics revealed that all variables follow a normal distribution, with the exception of gestational age. In this study, 34 male and 27 female fetuses were involved. Table 2 provides information about the birthweight of the fetuses and the pH of the umbilical arterial blood at birth, as well as the Apgar score at the 1st and 5th minutes. For more detail on the data description, please refer to . It is important to note that fetal weight and pH exhibit a normal distribution, whereas Apgar scores at 1 and 5 minutes deviate from normality. This database contains 2 signals: fetal heart rate collected through cardiotocography and maternal heart rate collected through electrocardiography. The signals were collected during the two hours before childbirth. Data collection finished 10 or 30 minutes before childbirth, in the case of vaginal or cesarean delivery, respectively .
To facilitate the study, the signals were divided into a first hour, symbolized by h1, and a second hour, represented by h2, that is, the hour immediately before delivery. These signals were obtained with a sampling frequency of 4 Hz. Since the database under study does not contain severe acidemic cases, and similar to what was done in other studies , it was decided that cases with a pH value lower than 7.15 would be assigned to the acidemic group. Consequently, values above 7.15 were labeled non-acidemic. Based on the chosen criteria, there were 7 acidemic and 54 non-acidemic cases, making a total of 61 cases. The study of the signals revealed that not all cases completed the two hours of data collection, with a median duration of 119.27 minutes and an interquartile range of 1.08 minutes.

The values of the extracted features are highly dependent on the quality of the signal pre-processing . The fetal signals had already been pre-processed in order to reduce noise and artifacts. Briefly, this algorithm detects fetal heart rate beats below 60 bpm, above 200 bpm and beat-to-beat differences above 25 bpm. Values where this occurs are eliminated and replaced using interpolation when the loss periods do not exceed 2 seconds. When these conditions hold for longer periods, the previous segment of equal length and without loss is replicated . For the maternal signal, a scale conversion was performed by adding 50 beats per minute to the original MHR values, after which the signals were subjected to exactly the same algorithm; for more details refer to . The quality of the signals in the database was then evaluated to determine possible losses and the amount of information in each signal. The loss of each signal was calculated as the number of missing samples relative to the total number of samples. The reported loss for the FHR and MHR is an average of all the losses calculated for the fetal and maternal signals, respectively. It was verified that there are no losses in the fetal signal; however, losses in the maternal signal were observed, with an average percentage loss of 0.01. Since the fetal signal presents no losses, the losses of the maternal-fetal combination correspond to the individual losses of the maternal signal. It was decided that the duration of the signal to be used would be that of the shorter signal, in order to standardize the size of the simultaneous signal segments; this was only relevant in two cases in the database, in which the loss was not significant.

Since all the collected data may be useful, the FHR and the MHR were decomposed into their trend and residuals to be analyzed and processed separately. The trend was estimated in order to obtain a smoothed signal. With this procedure it is possible to obtain the low-frequency component without resorting to frequency filters . Additionally, through the residual, calculated as the difference between the original signal and the trend, the high-frequency variations can be obtained. The trend can be calculated through the centered moving average ( $\hat{m}_t$ ), described by Eq 1. This type of smoothing generally uses samples before and after the time at which the smoothed estimate is to be calculated:

$$\hat{m}_t = \frac{x_{t-w/2} + \cdots + x_{t-2} + x_{t-1} + x_t + x_{t+1} + x_{t+2} + \cdots + x_{t+w/2}}{w+1} \quad (1)$$

where t is the index of the sample under study and w + 1 is the size of the window.
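As an illustration, the following minimal Python sketch applies the centered moving average of Eq 1 with a 17-sample window (w + 1 = 17, the value adopted below) and derives the residual as the difference between signal and trend. The toy signal and variable names are ours, not part of the original pipeline.

```python
# Minimal sketch of the trend/residual decomposition in Eq (1): a centered
# moving average over 17 samples (8 on each side) of a 4 Hz series.
import numpy as np

def centered_moving_average(x, half_window=8):
    """Trend m_t = mean of x[t-w/2 .. t+w/2]; edge samples without a full
    window are dropped, mirroring the loss of points described in the text."""
    w = 2 * half_window + 1
    trend = np.convolve(x, np.ones(w) / w, mode="valid")  # len(x) - w + 1 values
    core = x[half_window:-half_window]                    # samples with full windows
    residual = core - trend
    return trend, residual

# Toy example: a noisy 10-minute FHR-like segment at 4 Hz (2400 samples)
rng = np.random.default_rng(0)
fhr = 140 + 5 * np.sin(np.linspace(0, 20, 2400)) + rng.normal(0, 2, 2400)
trend, residual = centered_moving_average(fhr)
```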
The application of this type of method implies the loss of some points at the beginning and at the end of the signal, depending on the choice of window. Different window sizes were tried, specifically 9, 17 and 25 samples, and the respective plots were visualized and analyzed. It was determined that the value that best preserves the signal, without smoothing it too much, that is, without removing possible pathophysiological information, is a window of 17 samples. This window takes into account the 8 samples immediately preceding and following the sample to be smoothed. The choice of window size also took into account its real-time implication, since the trend calculation is based on previous and following samples, which causes a real-time delay. With a window of 17 samples, that is, 8 samples before and 8 samples after, only a delay of 2 seconds is generated, given that the signal sampling rate is 4 Hz. Trend calculation as a pre-processing method was applied to all fetal and maternal heart rate signals, making it possible to obtain a signal with fewer variations.

As in similar studies in the literature, the fetal and maternal heart rate signals were divided into 10-minute segments . It is important to note that not all recordings contained the complete 2 hours, which implies that some segments do not reach the intended 10 minutes. These cases were analyzed in detail and only segments with more than 5 minutes of data were considered for the study, with 8 segments discarded in total.

In the scope of this study, we intend to evaluate the non-linear relation between maternal and fetal data, which may provide useful information for studying acidemic fetuses. The maternal-fetal relation should also be inspected, as recent studies suggest that there is a coupling between the signals in the presence of pathology. Therefore, bivariate measures of compression are studied here; the use of these methods constitutes a novel line of research in the study of acidemia. Considering that the FHR and MHR are signals whose variation cannot be predicted, evaluating their complexity and its potential alterations through time may lead to new biomarkers of fetal acidemia. The complexity of the simultaneous FHR and MHR signals can be evaluated through several metrics; throughout this study, emphasis was given to the normalized relative compression (NRC) and the normalized compression distance (NCD). Several studies have used these metrics on various types of signals, especially in the area of biometric identification, and have shown their potential [ 25 – 28 ].

The database has a total of 54 non-acidemic cases and 7 acidemic cases; the fact that the database is unbalanced can influence the results obtained, namely when studying the behavior of the groups. The number of cases in each group can be highly relevant to the conclusions that can be drawn. For this reason, it was decided to balance the dataset using the SMOTE data augmentation technique. SMOTE is an over-sampling approach in which the minority class is over-sampled by creating “synthetic” examples. Taking each example from the minority class and introducing synthetic examples along the line segments joining any/all of its closest minority-class neighbours, artificial samples are created as follows:
1. Calculate the difference between the feature vector in question and its nearest neighbor. 2. Multiply this difference by a random number between 0 and 1, and add it to the feature vector under consideration. As a result, a random point along the line segment between two distinct feature vectors is chosen . The SMOTE technique made it possible to increase the acidemic dataset to 49 samples. This number was chosen to balance the two sets. Thus, the expanded database has a total of 103 cases, of which 49 are acidemic and 54 non-acidemic. It is worth mentioning that data augmentation is commonly used with physiological data, given that pathological cases usually appear in smaller numbers. Specifically, the SMOTE technique was chosen because it is widely used in the health area, particularly in these kinds of studies [ 30 – 32 ].

One approach that may quantify the complexity of a signal is compression. Compression is usually associated with file size reduction, but the mathematical principle behind it is much more complex. A compressor is able to estimate the quantity of data by observing patterns and repetition, and by its ability to reduce information. If a compressor can severely reduce the size of a file, the file contains a lot of redundancy and therefore a small amount of information. Conversely, if a compressor cannot reduce the size of a file, the file contains few repetitions, indicating a high quantity of information. By the same reasoning, the information of the maternal and fetal heart rate signals is evaluated in the same way. Compressors can be divided into two major classes: lossless and lossy. With lossless compressors, every bit of data is restored after decompression, i.e., it is possible to reconstruct the original file. With lossy compressors this is not possible, as the file is permanently reduced, eliminating certain information during the compression process . One example of a lossless data compression algorithm is the software library zlib , created by . This algorithm is used on various types of data and by description fits the requirements of the data used in this study. Furthermore, it provides good compression on a wide range of data while using few system resources . Since the results obtained were as expected, zlib was the compressor used.

In the scope of this work, the first approach was based on the analysis of maternal and fetal compression ratios, with the aim of verifying which features contributed most to the distinction between the two groups. The compression ratio is calculated from Eq 2, translating the number of times the original length has been reduced:

$$\text{Compression Ratio} = \frac{|x|}{|x^*|} \quad (2)$$

where | x | is the length of the original segment and | x* | is the length of the compressed segment. This calculation can also be carried out inversely, obtaining the percentage reduction achieved.

Normalized Relative Compression (NRC) is based on the notion of relative compression, a method capable of compressing an object using exclusively the information of another . The NRC of a string x , based on y , is defined as:

$$NRC(x\|y) = \frac{C(x\|y)}{|x| \log_2 |A|} \quad (3)$$

where | x | is the length of the string x and | A | is the size of the alphabet. The size of the alphabet corresponds to the number of letters that will be used to encode the string.
C ( x || y ) represents the compression of x relative to y . This measure provides information about the amount of data in x that cannot be described by y . The NRC value is expected to be lower when comparing two data segments from the same source than when comparing data from different sources , i.e., in the presence of similar data the NRC is expected to be lower than when the compared data are not similar. In the scope of this work, this measure will be used to inspect the similarity between maternal vs maternal data in disjoint time periods, fetal vs fetal data in disjoint time periods, and maternal vs fetal and fetal vs maternal data in the same time period. With this evaluation, we intend to inspect the relation between the segments, studying their similarity and consequently their dependence and coupling. In this work, extended-alphabet finite-context models (xaFCM) are implemented to calculate C ( x || y ). Finite-context models have been useful in very different pattern recognition tasks, since they have the ability to create similarity/dissimilarity measures . This model conforms to the Markov property, as it estimates the probability of the next sequence of d > 0 symbols from the information source using the k > 0 symbols immediately preceding it . These parameters, called depth ( d ) and context order ( k ), must be adjusted depending on the type of data being processed, in order to improve compression performance. It is important to point out that the calculation of this measure requires a symbolic representation of the signals, so it is necessary to quantize the signal. A quantization method aims to convert real continuous signals into symbolic ones. Quantization methods make use of the statistical characteristics of the signals, allowing their representation in a symbolic space. The application of these methods entails some loss of information; however, this is minimized by the correct choice of method. The application of this compression method therefore requires several choices, notably the quantizer that best preserves the information contained in the signals, the size of the alphabet, and the parameters d and k .

Normalized Compression Distance (NCD) is a distance between two signals . The NCD can be obtained using Eq 4, as follows:

$$NCD(x, y) = \frac{C(xy) - \min\{C(x), C(y)\}}{\max\{C(x), C(y)\}} \quad (4)$$

Briefly, the signals are compressed with a certain fixed compressor, and the bit size of the compressed version of a file x is recorded as C ( x ). Each pair of FHR and MHR signals, that is, x and y , is concatenated into a single file xy and compressed, yielding a size C ( xy ). Subsequently, the difference between the length of the compressed concatenation and the minimum of the compressed lengths C ( x ), C ( y ) of the two signals is calculated. Finally, this difference is divided by the maximum of the compressed lengths C ( x ), C ( y ), in order to normalize the values between 0 and 1 so that relative comparison between instances is possible . The zlib compressor was chosen and the fetal and maternal signals were compressed; from the values obtained, it was possible to calculate the NCD. Considering the goal of understanding the bivariate relation between maternal and fetal data in the prediction and evaluation of acidemia, the features explored in this work were based on 3 different metrics—Compression Ratio, NRC and NCD.
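To illustrate, the following Python sketch computes the compression ratio of Eq 2 and the NCD of Eq 4 with the zlib compressor named in the text. The byte-level rounding used here to symbolise the toy signals is a simplification of ours, not the quantization used in this work.

```python
# Minimal sketch of the compression ratio (Eq 2) and the NCD (Eq 4)
# using zlib. The rounding-to-bytes step is only for illustration.
import zlib
import numpy as np

def to_bytes(signal):
    """Crude symbolisation: round each sample to an integer in 0-255."""
    return bytes(np.clip(np.round(signal), 0, 255).astype(np.uint8))

def C(b):
    """Compressed length, in bytes, of a byte string."""
    return len(zlib.compress(b, level=9))

def compression_ratio(b):
    return len(b) / C(b)                       # Eq (2): |x| / |x*|

def ncd(bx, by):
    cx, cy, cxy = C(bx), C(by), C(bx + by)     # C(xy): concatenated file
    return (cxy - min(cx, cy)) / max(cx, cy)   # Eq (4)

rng = np.random.default_rng(1)
fhr = to_bytes(140 + rng.normal(0, 2, 2400))   # toy 10-minute fetal segment
mhr = to_bytes(90 + rng.normal(0, 2, 2400))    # toy 10-minute maternal segment
print(compression_ratio(fhr), ncd(fhr, mhr))
```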
Firstly, maternal, fetal and maternal-fetal compression ratios were extracted. These were calculated through Eq 2, using the zlib compressor. Secondly, in order to apply the NRC, some important factors were chosen, namely the quantizer that best preserves the information contained in the signals, the size of the alphabet and the parameters d and k . After several tests, it was concluded that the parameters that optimize the performance of the NRC are the Lloyd-Max quantizer, an alphabet size of 20 letters and the parameters d = 6 and k = 6 (a simplified sketch of this symbolisation step is given at the end of this section). The NRC was calculated following Eq 3. As explained earlier, this measure provides information about the amount of data in x that cannot be described by y . Compression was performed using simultaneous maternal and fetal heart rate signal segments. The NRC is not symmetrical, so a different result is obtained depending on the direction of the calculation. Thus, the NRC value can be calculated in two ways, i.e., based on the fetal signal or based on the maternal signal. When the calculation is based on the fetal signal, we are inspecting the dependency of the mother on the fetus; conversely, when the NRC is calculated based on the maternal signal, we are evaluating the fetal dependency on the mother. For each 10-minute segment, the NRC value was calculated in both ways, using the simultaneous fetal and maternal signals. After calculating the measure in both ways, it was found that there was a higher occurrence of overfitting, i.e., values very close to 1, when calculating the NRC based on the fetal signal. Therefore, it was decided to use the NRC of the fetal signal given the sequence of the maternal signal. Thus, throughout this work, we take x as the fetal signal and y as the maternal signal, which empirically indicates the dependence of the fetus on the mother. It is assumed that, in healthy fetuses, dependence on the mother should diminish through time as delivery approaches .

In the analysis under study, the NRC was calculated with simultaneous maternal and fetal signal segments. However, it was also studied over time, both for the mother and the fetus, i.e., the NRC was calculated over time relative to the initial 10 minutes of signal. The main objective of studying the NRC over time for the mother and the fetus is to evaluate the evolution of each one, analyzing the similarity or dissimilarity of the fetus to itself and of the mother to herself over time. The NRC calculated over time through the FHR and MHR separately allows the time-dependence of each series to be described. Finally, the NCD was calculated for the maternal-fetal signal and, similarly to what was done before, it was also calculated for the mother and the fetus over time, separately. This work was conducted using the Python programming language.

Machine learning models make it possible to characterize the relations between data, finding relations, correlations, and patterns not accessible by traditional methods . Since machine learning models are able to describe high-dimensional data spaces, and considering that in clinical practice it is important to describe and interpret how the data relate to the prediction, a layer of feature engineering was introduced in this work, allowing the study of feature importance in the decision making.
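As an illustration of the symbolisation step mentioned above (the quantization required before computing the NRC), the sketch below maps a real-valued series onto a 20-letter alphabet. Note that it uses simple equal-frequency (quantile) binning as a stand-in of ours; the actual work uses a Lloyd-Max quantizer, whose implementation is not reproduced here.

```python
# Sketch of the symbolisation required before computing the NRC:
# equal-frequency binning into a 20-letter alphabet (|A| = 20, as in
# the text) as a simplified stand-in for the Lloyd-Max quantizer.
import string
import numpy as np

ALPHABET = string.ascii_letters[:20]

def quantize(signal, alphabet=ALPHABET):
    """Map a real-valued series to a symbolic string."""
    # Interior quantile edges define len(alphabet) equal-frequency bins
    edges = np.quantile(signal, np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    symbols = np.digitize(signal, edges)       # bin index 0..19 per sample
    return "".join(alphabet[s] for s in symbols)

rng = np.random.default_rng(2)
fhr_symbols = quantize(140 + rng.normal(0, 5, 2400))
print(fhr_symbols[:40])
```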
To try to distinguish acidemic from non-acidemic cases, three classifiers were tested: Random Forest (RF), XGBoost and Support Vector Machine (SVM). Each tree in a random forest depends on the values of a random vector that is sampled independently and with the same distribution for all the trees in the forest . This classifier was among those chosen to distinguish the two groups, as it is straightforward to interpret and self-explanatory. Extreme Gradient Boosting, known as XGBoost, is a scalable, distributed gradient-boosted decision tree machine learning library. It is a leading machine learning library for regression, classification, and ranking problems because it includes parallel tree boosting, which allows many data science problems to be solved quickly and accurately . Finally, the main objective of the Support Vector Machine (SVM) is to find an optimal separating hyperplane between classes, which correctly classifies the data points. The main advantage of this classifier is that it tries to find the decision boundary between classes without being overly concerned with the number of cases available for each class . Random Forest, XGBoost and SVM were selected based on their distinctive characteristics and suitability for the dataset. RF and XGBoost, both tree-based algorithms, are widely recognized for their ability to handle complex and non-linear relationships in data. In addition, these methods are inherently robust to unbalanced and underrepresented datasets due to the bagging (RF) and boosting (XGBoost) techniques they employ, which reduce bias and variance. In contrast, SVM was chosen to test a different approach, based on distance to a hyperplane, allowing us to assess whether the model structure influences performance. This selection ensures a comprehensive assessment of different algorithmic paradigms and their impact on the model results. Grid search was performed to identify the optimal hyperparameters for each model (a minimal sketch follows at the end of this section). Specifically, for the SVM model, the kernel used was the Radial Basis Function (RBF), with class_weight=‘balanced’ to handle class imbalance and probability = True to enable probability estimation.

In order to distinguish the two groups, the original database was used first. However, with only 7 pathological cases, the model was not able to define a strategy for data evaluation. Based on this evidence, it was decided to use the augmented database, which contains the original data together with the data from the artificially created acidemic cases. The mother-fetus relation was assessed on 10-minute intervals, and the inspected variables were the compression ratio, the NRC and the NCD. Since the relation is to be evaluated, the simultaneous metrics were calculated, but also the individual ones; the idea is to determine the impact of each in the acidemia evaluation process. It is important to mention that the metric values referring to the first 30 minutes were eliminated, because not all participants completed the 2 hours of signal, which causes inconsistencies in data length. In the feature selection step, we studied collinearity. As collinearity implies redundancy in the data, correlated features were reduced to one feature when the correlation was above 0.5. The strategy for training data selection impacts the performance of the classifier.
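The SVM configuration just described can be sketched as follows with scikit-learn. The RBF kernel, class_weight='balanced' and probability=True settings follow the text, while the hyperparameter grid and the scaling step are illustrative assumptions of ours.

```python
# Minimal sketch of the SVM classifier and grid search described above.
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipeline = make_pipeline(
    StandardScaler(),  # scaling is an assumption, often useful for RBF SVMs
    SVC(kernel="rbf", class_weight="balanced", probability=True),
)
grid = GridSearchCV(
    pipeline,
    param_grid={"svc__C": [0.1, 1, 10, 100],        # illustrative grid
                "svc__gamma": ["scale", 0.01, 0.1]},
    scoring="f1",   # the f1-score is the headline metric reported here
    cv=5,
)
# grid.fit(X_train, y_train); grid.best_estimator_ then predicts on test data
```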
To select the training and test sets, several approaches were tested, including leave-one-out, k-fold, stratified k-fold (number of splits = 20) and a specific validation designed to be more appropriate for this type of study data. Leave-one-out and k-fold cross-validation would be acceptable options; however, these methods do not guarantee the presence of both classes in both the training and test sets. Since this factor is essential, stratified k-fold was chosen. In this method, the folds are created by keeping track of the sample proportions of each class, thus ensuring that both the training and test sets contain both groups. However, all approaches were tested, and the high performance values obtained with the first 3 cross-validation schemes may be affected by the data augmentation, as we would be training and testing the model with data that may be replicas of each other. To rule out this possibility, we decided to test the model with 7 acidemic cases originating from only one original acidemic case, and with 7 non-acidemic cases chosen at random. Model training was performed with the remaining cases; that is, we trained the model with 89 cases (acidemic and non-acidemic) and tested it with 7 acidemic and 7 non-acidemic cases. In this way, we guarantee that the model has never been in contact with one of the original cases, nor with its replicas, increasing the reliability of the performance metrics obtained on the test set. Fig 1 shows a schematic of the selection of the test set, outlined by the black rectangles, and the training set, i.e., the remaining cases. The original acidemic cases are outlined by a gray rectangle. Those immediately below are the acidemic cases obtained using data augmentation (SMOTE), recalling that each of the original acidemic cases generated 6 new ones. Since this validation is more appropriate to the type of data available for the study, the results shown hereafter are based on it (a sketch of this selection scheme is given at the end of this section).

The proposed approach was based on a highly explainable design, in line with the need to understand the physiological process underlying the model. The process design and the model implementation were carefully planned to provide an extra level of knowledge about the maternal-fetal relation, enabling an improvement in pathology identification. The following characteristics were used in the exploration of the classifiers: fetal compression ratio, maternal compression ratio, maternal-fetal compression ratio, fetal NRC, maternal NRC, maternal-fetal NRC, fetal NCD, maternal NCD and maternal-fetal NCD. Each of these characteristics has two values calculated in each 10-minute segment, one for the trend and one for the residual. Nine variable sets were tested using the three classifiers. In the first eight sets, the input data were the features described below for the trend and for the residual, individually. Only the last test, test I, aggregates features from the trend and the residual at the same time as input data. The features used in each of the tests are described below: Test A contains all the features used in this paper. The remaining tests are variations of test A, in which different features are selected. In order to facilitate the study of the distribution of features, this analysis was carried out only for test A.
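Returning to the validation described above, the following sketch illustrates the case-level hold-out: an original acidemic case and all of its SMOTE replicas are kept together, so none of them can leak into the training set. The group identifiers and array sizes are placeholders of ours that match the counts reported in the text.

```python
# Sketch of the case-level hold-out: each original acidemic case and its
# six SMOTE replicas share one source id, so they are held out together.
import numpy as np

rng = np.random.default_rng(0)
# ids 0-6: the 7 acidemic source cases (7 samples each, original + replicas);
# ids 7-60: the 54 non-acidemic cases (1 sample each) -> 103 cases in total.
source_id = np.r_[np.repeat(np.arange(7), 7), np.arange(7, 61)]
is_acidemic = source_id < 7

held_out = 3                                    # one original acidemic case
neg_pool = np.flatnonzero(~is_acidemic)
test_idx = np.r_[np.flatnonzero(source_id == held_out),
                 rng.choice(neg_pool, size=7, replace=False)]
train_idx = np.setdiff1d(np.arange(source_id.size), test_idx)
# 89 training cases, 14 test cases (7 acidemic + 7 non-acidemic): the model
# never sees the held-out original case or any of its replicas.
```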
Table 3 shows the mean and standard deviation for the features that follow a normal distribution, and Table 4 shows the median and interquartile range for those that do not. The distribution of features was analyzed for all cases and also separately by class. The results of the class-based tests are presented in Tables 5 – 7 . While for non-acidemic cases some features follow a normal distribution and others do not, for acidemic cases no feature follows a normal distribution. This discrepancy can be explained by the fact that some acidemic cases were obtained through data augmentation, which may have reduced the diversity of the data.

The performance values obtained in all tests, using the three classifiers under study—Random Forest, XGBoost and SVM—showed that the one allowing the best distinction between groups is the SVM, as can be seen in Tables 8 – 10 . For this reason, the other two classifiers were discarded and the analysis and discussion will focus on the SVM classifier. Table 10 shows the performance metrics obtained for each of the variable sets. The first two tests use all extracted features as input data, for the trend and for the residual, respectively. From the results in Table 10 , it can be concluded that the features from the residual (test B) have a superior ability to distinguish the two groups compared with the same features extracted from the trend (test A). The same was verified for tests C and D, in which the input data contain only maternal and fetal metrics, separately. These results may indicate a high discriminative power of the fast fluctuations of the FHR and MHR time series, in comparison with the slow fluctuations. To understand the impact of the maternal-fetal relationship, maternal-fetal features were given as input data in tests E and F. As shown in Table 10 , the performance values for the trend are higher than in tests A and C. On the other hand, the performance values for the residual (test F) are lower than in the two previous residual tests (tests B and D). Contrary to what was observed in the previous tests, in tests E and F the trend signal has more discriminatory power than the same characteristics obtained from the residual signal, indicating that the slow variation is also an important part of the process. To evaluate the effect of the maternal-fetal compression ratio on classifier performance, the input data for tests G and H were the same as in the two previous tests, excluding the compression ratio. Comparing these tests with E and F, it is possible to conclude that the compression ratio plays an important role in distinguishing the two groups, since the performance values decreased: in the trend, the f1-score decreases from 0.699 to 0.133, and in the residual from 0.211 to 0. In the last test, it was decided to combine the maternal-fetal features (compression ratio, NRC and NCD) of the trend and the residual in the same input set. This was the set of variables that provided the best classifier performance (f1-score = 0.793), which indicates that both slow and fast fluctuations in the heart rate time series are important in the acidemia evaluation, as is the simultaneous evaluation of the maternal-fetal relation. Based on Table 10 , it can be concluded that maternal-fetal features play a fundamental role in the performance of the classifier. Fig 2 presents the ROC curve of test I.
Analysis of the ROC curve reveals that the SVM model achieved a true-positive rate higher than the false-positive rate over most of the curve. The dashed red line represents the performance of a random classifier, while the blue curve, which lies above that line, indicates that the model performs more accurate classifications than chance. In summary, the performance of the SVM model for the classification between acidemic and non-acidemic is moderate. Although the model shows an above-random discrimination capacity, demonstrating effectiveness in distinguishing the two classes, the presence of a significant false-positive rate at some points indicates the need for adjustments. Despite these limitations, the current performance of the model demonstrates promising potential. This study shows that using features from the mother together with those from the fetus improves the detection of fetal acidemia, which corroborates the study presented earlier .

Furthermore, it is extremely interesting to evaluate the recall (or sensitivity) and specificity values obtained. A system implemented in clinical practice is ideal when its specificity and sensitivity are 100%, so that there is no margin of error when a result is presented. However, this is not always possible. Cardiotocography has been seen as one of the main causes of the increase in the cesarean rate , so although the main objective is to correctly detect fetuses with fetal acidemia, assessed by sensitivity, it is important to also control the specificity so as not to waste resources. Test I has a recall of 1 and a specificity of 0.469, so we can infer that it has a high probability of detecting an acidemic fetus, but its specificity, i.e., the probability of correctly identifying a non-acidemic fetus, is low, which in this particular application may imply an increase in cesarean rates. There are few studies that incorporate both maternal and fetal signals. However, in 2016, Gonçalves et al. demonstrated the potential of bivariate analysis by exploring how maternal heart rate (MHR) and fetal heart rate (FHR) variability change during labor and predict fetal acidemia. By analyzing both signals together, the prediction of fetal acidemia improved, reaching a sensitivity of 100% and a specificity of 89.1%, with high predictive accuracy (auROC of 0.88 for pH < 7.15). Other studies, such as those by , have shown similar potential using only fetal signals. This study made it possible to explain and interpret physiological data that may support decision-making in acidemia evaluation. The use of maternal-fetal data introduces extra information that allows the condition to be discriminated more reliably, indicating that the maternal-fetal relation presents different patterns in pathological and non-pathological cases. The maternal-fetal compression ratio, maternal-fetal NRC and maternal-fetal NCD, obtained through trend and residual signals, presented good results, with an f1-score of 0.793. Based on this, it is important to highlight:

This study has several limitations that should be noted. The primary limitation is the small number of acidemic cases. While validating the results with a larger database might yield better outcomes, the low incidence of the disease could still pose a challenge . Another significant limitation is the use of synthetic data resampling to address class imbalance.
The pathological class had few representative cases, necessitating the use of the SMOTE technique to increase sampling, which could introduce biases into the model’s predictions . Additionally, the study used a pH cutoff value of 7.15. Although some research adopts this threshold, other studies use lower pH values. Due to the absence of severe acidemia cases in the database, 7.15 was chosen as the cutoff. A further limitation of this study is the lack of a standardized approach for selecting cutoff points. Future research should focus on validating these cutoffs with independent datasets and larger sample sizes to enhance their clinical applicability and enable meaningful comparisons across models. Future research should also primarily address the issue of class imbalance, developing methods that ensure fair decision-making without biasing the majority class. Suggested areas for future exploration include:

- Investigating the impact of varying time windows for segmenting signals (in this study, 10-minute non-overlapping intervals were used).
- Exploring the NRC calculation by adjusting the depth ( d ) and context order ( k ) values.

Additionally, other studies have shown that non-linear measures, such as entropy, can effectively discriminate between acidemic and non-acidemic cases . Therefore, it may be beneficial to develop a classifier that combines multiple non-linear features, like compression and entropy, with the linear features currently used in clinical practice.
PMC11694992
Wireless transmission networks have led to substantial advances in data networking and communications, as well as the establishment of integrated networks. The rapid progress of information and communication technologies (ICTs) has offered numerous benefits to system users, but these technologies also have various vulnerabilities that might be exploited by network adversaries . Cyberattacks such as malware attacks, classified data breaches, denial of service, phishing, and other security-related incidents have increased significantly in recent years. A cyberattack, or cyber threat, refers to any unauthorized event or trespassing that compromises the network and carries out diverse malicious operations such as identity theft, spoofing, exfiltration, or exploitation of sensitive data and network resources . A cyber-attack identification mechanism is a proactive approach that analyzes network traffic, identifies anomalies, and classifies cyber threats in the network .

Wi-Fi, or the IEEE 802.11 wireless local area networking (WLAN) standard, is crucial in daily life. IEEE 802.11 networks are at the forefront of the rapid shift to a wireless space due to their potential to provide high speed, enhanced mobility, usability, and cost-effective installation and maintenance . IEEE 802.11-based wireless networks are widely used in homes, businesses, and public places, but also in critical infrastructures such as hospitals or manufacturing facilities where their availability is vital. Wi-Fi’s success may be attributed to a variety of factors, including well-defined use cases, deployment and configuration flexibility, and the accessibility of inexpensive, highly interoperable hardware . As IEEE 802.11-based networks became more ubiquitous, so did the opportunity for hackers and other malicious actors to exploit them. Wi-Fi networks were initially open, with data moving over the unencrypted medium. Individuals connecting to their companies through public Wi-Fi networks, such as in coffee shops or libraries, were always vulnerable to security threats: anyone with a Wi-Fi receiver on the premises could access and interpret the sniffed data. Over the years, several approaches have been introduced to prevent security threats. Wired equivalent privacy (WEP) was the first scheme for the prevention of cyberattacks, but it had several flaws and soon became unreliable . Later, Wi-Fi protected access (WPA), WPA2, and WPA3 were introduced to secure wireless networks via authentication and encryption . However, these standards are also vulnerable to cyberattacks with compromised encryption keys, as authentication/association attacks remain a risk if the pre-shared key is compromised, even when protected management frames (PMF) are operative.

Conventional intrusion detection systems require skilled human expertise to analyze network traffic patterns for cyber-attack detection, and attackers are generally familiar with the working of these mechanisms [ 8 – 10 ], which leads to several challenges. With advancements in network environments and the use of transformative technologies, the nature of attacks is also changing. Therefore, contemporary intrusion detection systems leverage advanced technologies such as machine learning or deep learning for cyber-attack detection in specific environments, including Wi-Fi in an IoT setting. The amount of network data has risen significantly due to the increasing prevalence of connectivity, cloud services, and the Internet of Things (IoT) .
Given this huge volume of data transmitted through modern high-speed, high-bandwidth communication networks, manual cyber-attack detection has become inefficient, and in-depth automated monitoring of network traffic is required to identify distinct network attacks. Machine learning and deep learning approaches have the potential to revolutionize technology and operations as they address the problem of big data. Neural networks and various other deep learning techniques consistently achieve commendable results in classification problems . Incorporating these techniques allows network traffic to be analyzed intelligently, discovering useful insights and patterns to detect attacks or security threats . This could be the key to lightweight and cost-effective intrusion detection systems.

This domain’s main shortcoming is that most publicly available benchmark wireless traffic datasets are outdated and do not include recent attack scenarios such as key reinstallation (Krack) or unauthorized decryption (Kr00k) attacks. It is crucial to acknowledge that the AWID3 dataset stands out as an exception in this regard, as it encompasses a more comprehensive range of scenarios, including those involving Krack and Kr00k attacks. Therefore, it is imperative to highlight the significance of the AWID3 dataset, emphasizing its relevance. Existing wireless attack datasets do not include the enterprise version of the 802.11 protocols. Another essential, overlooked factor is the selection of appropriate performance metrics, as accuracy alone does not provide insight into the results [ 15 – 17 ]; however, this specific issue falls outside the scope of the current research. An effective system is therefore required that can indicate any data breach or vulnerability in the network before any major loss of sensitive data. Additionally, cyber-attack detection remains a major challenge, as evidenced by the ubiquity of successful cyberattacks publicized in the mainstream media.

While there are some impressive cyber-attack detection results, they frequently rely on particular datasets and cannot always work well in a variety of real-world settings. In other words, while these models can thrive on their training data, their performance on varied network traffic remains uncertain. This highlights the need for intrusion detection features that are successfully transferable across different 802.11 datasets. The concept of feature transferability is especially significant when obtaining labeled data is excessively expensive, time-consuming, or unattainable. Transferable features, in the context of deep learning or machine learning models, are those that demonstrate consistent performance and efficacy across a range of datasets or scenarios . If the features consistently maintain their performance across diverse datasets, it shows that the proposed cyber-attack detection model has real-world application potential under a variety of network environments. Conversely, if the transferability of the features is limited, it will prompt further investigation to refine the feature selection process or develop more flexible models for broader application in different network environments. While extensive research has been conducted to improve the security of Wi-Fi networks, a distinct focus on Krack and Kr00k attacks appears to be lacking.
The Krack attack exploits flaws in the four-way handshake protocol, allowing an attacker to reinstall a key that is already in use. This, in turn, can result in the decryption of Wi-Fi traffic, allowing unauthorized parties to discreetly intercept important information. Conversely, the Kr00k attack occurs when a device disconnects from a Wi-Fi network while still encrypting data; Kr00k exploits a weakness in this circumstance, revealing fragments of previously encrypted data. Given the rapid growth of cyber threats, this omission creates a crucial information gap in the attempts to adequately protect wireless networks. Additionally, it is evident that a substantial portion of prior research relies heavily on the AWID dataset. This dataset, however, has shown limitations over time, particularly because it does not include the most recent attack instances. This disparity is especially obvious in the case of protected management frames (PMF), a critical component of modern secure Wi-Fi networks. The absence of PMF in AWID is an important consideration when evaluating intrusion detection systems in the context of modern Wi-Fi security, because PMF plays a critical role in reinforcing the authentication and association process. Another shortcoming is that many previous studies have focused on home-based Wi-Fi environments and failed to recognize the necessity of testing their techniques and solutions in enterprise network environments. As network setups, protocols, and security needs change significantly in corporate settings, this omission limits the practical relevance of research findings. Additionally, the absence of any evaluation of the generalization and transferability of features, so that the features can be used across different network conditions, is a major shortcoming in the existing literature.

In this study, an innovative, lightweight cyber-attack detection model is proposed to identify existing attacks, including Krack, Kr00k, de-authentication, and disassociation attacks. In the proposed methodology, recursive feature elimination (RFE) was used to extract 8 out of 16 MAC-layer and physical-layer features, proposed by , and tested using several classifiers including decision trees (DT), random forest (RF), extra trees (ETs), light gradient boosting machine (LightGBM), multi-layer perceptron (MLP) and convolutional neural network (CNN); a minimal sketch of the RFE step is given at the end of this section. Moreover, the extracted features were used for analysis across different datasets to test whether the given features are transferable. The results of this research offer valuable information on how transferable and generalizable the retained features are. If the features consistently show effective performance across diverse datasets, it suggests that the proposed cyber-attack detection model can be successfully implemented in real-world scenarios with varying network conditions, making it more practical and valuable. The following are the main contributions of this work: The remainder of this paper is structured as follows: Section 2 sheds light on existing literature regarding cyber-attack detection. Section 3 discusses the pre-processing and feature selection process. Tree-based and MLP approaches for cyber-attack detection are reviewed in Section 4. Section 5 presents experimentation and results, including feature transferability. The work is concluded in Section 6, with future work.
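As an illustration of the feature-selection step outlined above, the following scikit-learn sketch applies RFE to reduce 16 candidate features to 8. The base estimator and its settings are illustrative assumptions of ours, not necessarily the exact configuration used in this work.

```python
# Minimal sketch of recursive feature elimination: keep 8 of the 16
# candidate MAC/PHY-layer features before training the classifiers.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rfe = RFE(
    estimator=RandomForestClassifier(n_estimators=100, random_state=0),
    n_features_to_select=8,   # retain 8 of the 16 candidate features
)
# rfe.fit(X_train, y_train)
# selected = X_train.columns[rfe.support_]   # if X_train is a DataFrame
# X_train_reduced = rfe.transform(X_train)   # reduced feature matrix
```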
The three primary methods for analyzing network traffic to detect attacks are signature detection, anomaly detection, and hybrid techniques that integrate both. Signature-based detection identifies cyberattacks using predetermined signatures stored in a signature database: whenever an attack occurs, its signatures are compared with the database, and an alert is generated if they match. The signature database must be updated constantly to keep up with new attacks; even so, this technique detects only those attacks present in the database and cannot detect zero-day attacks [20-22]. Anomaly detection is a dynamic approach that analyzes network traffic and raises a notification whenever there is anomalous variation or abnormal behaviour in the network. Although it can detect unknown attacks, it carries a greater risk of a high false-positive rate (FPR), as not every anomaly or variation in the network signals an intrusion. Conventional intrusion detection technology has been studied extensively over the past few years, and the integration of AI has transformed it, even though real-time detection performance is not always excellent. Researchers are increasingly focusing on machine learning (ML) and deep learning (DL) techniques, which have demonstrated considerable gains in accuracy and reductions in FPR. Several widely used, publicly available benchmark datasets, including NSL-KDD, CIC-IDS2017, AWID, and UNSW-NB15, are available for research purposes, and various ML and DL approaches have been proposed to improve the efficiency and reduce the execution time of intrusion detection mechanisms. In one study, multiple supervised learning techniques, including artificial neural networks (ANN), decision trees, and random forests, and unsupervised techniques, including K-means, self-organizing maps (SOM), and expectation maximization (EM), were applied to CIC-IDS2017; some algorithms demonstrated high accuracy, while others, such as SOM and EM, failed to detect the targeted attacks. A novel deep belief network (DBN) structure based on an artificial fish swarm algorithm (AFSA), genetic algorithm (GA), and particle swarm optimization (PSO) was proposed to detect network intrusions in NSL-KDD; although this model attained 98% accuracy, a higher number of layers can increase computational costs. Another work proposed a hybrid technique based on feature selection and classification using the UNB ISCX 2012 and CIC-IDS2018 datasets in the Apache Spark environment, where a stacked auto-encoder (SAE) performed feature selection and a support vector machine (SVM) performed intrusion detection; the results demonstrated 90.2% accuracy with reduced training time. A hybrid technique combining K-means clustering with RF, CNN, and long short-term memory (LSTM) was also applied in the Apache Spark environment, with adaptive synthetic sampling used to address dataset imbalance; the results showed 85% accuracy on NSL-KDD and 99.9% accuracy on CIC-IDS2017. Elsewhere, principal component analysis (PCA) and mutual information (MI) with LSTM were implemented for dimensionality reduction and classification of cyber-attacks, with LSTM-PCA achieving the highest accuracy of 99.36%.
Three feature selection techniques comprising an autoencoder (AE), a stacked autoencoder (SAE), and a deep autoencoder (DAE), each combined with a DNN, were applied to detect data breaches in CIC-IDS2018 and NSL-KDD, with DAE-DNN attaining the highest accuracy. A DAE for feature selection and recurrent neural networks (RNN) for classification were implemented on CIC-IDS2018 and Bot-IoT; the highest accuracy on Bot-IoT, 98.39%, was obtained with the DAE, while the best result on CIC-IDS2018, 97.38% accuracy, was obtained with the RNN. The major shortcoming of that work was the lack of detail on the actual experimentation. In another work, the BAT optimal feature selection method was used to identify the most relevant features, and a support vector machine (SVM) classifier was tested on the KDD99 benchmark dataset to evaluate intrusion detection accuracy; compared with alternative machine learning algorithms, this approach outperformed the others with a detection accuracy of 99%. The Perceptual Pigeon Galvanized Optimization (PPGO) approach was used to choose the best parameters for intrusion detection on the NSL-KDD, CICIDS, and Bot-IoT datasets, after which a Likelihood Naive Bayes (LNB) classification method outperformed previous models with a remarkable accuracy of 99%. Another study introduced a novel feature selection method based on the Capuchin Search Algorithm (CapSA); CNN-CapSA was evaluated on four IoT-cloud datasets, NSL-KDD, BoT-IoT, KDD99, and CIC2017, and surpassed other state-of-the-art methods with approximately 99% accuracy. A further study proposed HetIoT-CNN IDS, an advanced intrusion detection system (IDS) using a convolutional neural network (CNN) built for the heterogeneous Internet of Things (HetIoT) environment, which achieved accuracies of 99.75% for binary classification, 99.95% for 8-class classification, and 99.99% for 13-class classification. The significance of intrusion detection for securing networks has drawn the attention of numerous researchers, and many publications have proposed novel intrusion detection methodologies for wireless sensor and Wi-Fi networks. Technologies such as wireless networks, 4G, and the IoT transmit substantial amounts of data and are predisposed to various cyberattacks and security risks that might jeopardize the reliability and confidentiality of information or services. Wi-Fi networks are nearly universally used in businesses nowadays to give employees access to the Internet, and business stakeholders have become increasingly concerned about Wi-Fi network and operational security. As technology and attack strategies evolve, an IDS must be scalable and adaptable to counter new attacks. Several techniques have been proposed to detect cyberattacks on wireless networks. In one, two models were introduced to draw out additional features using an SAE; these features were combined with the original features based on the mutual information shared between the features and class labels, and merged with a radial basis function classifier (RBFC) for evaluation on the AWID dataset, where the RBFC achieved 98% accuracy with 7 optimal features. A novel system, KTRACKER, was proposed to detect novel cyber threats such as key reinstallation (Krack) attacks on Wi-Fi Protected Access (WPA2); it grouped handshake packets and used traffic analysis to find Krack, and CatBoost attained the highest accuracy among the three machine learning models tested, XGBoost, Light Gradient Boosting Machine (LightGBM), and CatBoost.
In a recent study, a feed-forward deep neural network (FFDNN) wireless IDS using a wrapper-based feature extraction unit (WFEU) was introduced. The WFEU approach employed the extra trees algorithm to extract an optimal feature selection, and the model's proficiency was examined on the UNSW-NB15 and AWID intrusion detection datasets, where it achieved higher detection accuracy than existing techniques: 99.66% and 99.77% for binary and multiclass classification with 26 features on AWID, and 87% and 77% with 22 features on UNSW-NB15, respectively. A novel conditional deep belief network (CDBN) technique was proposed to detect wireless network intrusions in real time, with a stacked contractive auto-encoder (SCAE) used to reduce data dimensionality and mitigate the effects of imbalance and redundancy on detection accuracy; the experiments showed improved detection accuracy and speed, with an average detection time of 1.14 ms and 97.4% detection accuracy. Most modern IDSs utilize machine learning approaches that suffer performance deterioration when facing an adversary and fail to balance accuracy against the false-positive rate (FPR). Because of the open-sharing nature of wireless technology, organizations continue to have serious concerns about Wi-Fi security. In previous studies, a significant number of impersonation attacks were misclassified as injection attacks. To overcome this limitation, a dual-stage, machine-learning-based Wi-Fi network intrusion detection system (WNIDS) was proposed to increase detection accuracy for injection and impersonation threats. In the first stage, RF outperformed other models in classifying traffic from the AWID-CLS-R test dataset into three classes: normal, flooding, and unified impersonation-or-injection attacks. In the second stage, NB outperformed other models by correctly separating the unified attack instances into impersonation and injection attacks with an accuracy of 99.42%. To prevent overfitting, a feature separation approach based on word embedding was developed to speed up calculations, and a dual/limited attention mechanism was proposed in place of global attention; applied to the UNSW-NB15 and AWID datasets, a gated recurrent unit (GRU) attained the highest accuracy of 93.47% on AWID and an RNN attained 94.96% on UNSW-NB15. However, accuracy was the only evaluation metric used, even though accuracy alone is not reliable. Another novel system, the Wi-Fi intrusion detection system (WIDS), proposed an anomalous behaviour analysis technique to identify assaults on Wi-Fi networks with high accuracy and a reduced false alarm rate. In this technique, n-grams were used to represent the normal behaviour of the Wi-Fi protocol, and several machine learning models were used to label Wi-Fi traffic as normal or malicious; evaluated on numerous datasets gathered locally at the University of Arizona and on the AWID dataset, it classified all Wi-Fi protocol assaults with low false positives and a variable, low rate of false negatives across attacks. One further study classified DoS attacks using an ensemble technique.
In that work, recursive feature elimination (RFE) was used for feature selection, followed by an ensemble classifier combining RF, SVM, and Swell with 10-fold cross-validation on the AWID-CLS-Test dataset; the outcome was a precision of 99.98% and an FPR of 0.12. For wireless intrusion detection, a feature selection technique based on Fuzzy C-Means (FCM) was introduced, which used the distance between the FCM centre point and each data point to distinguish the normal and attack centre distances and then used those distances to pick features; tested on the AWID dataset, it proved quite accurate in attack detection. Researchers have lately deemed the 5G network environment significant, owing to advances in network communication and the growing number of users, so the security of 5G wireless networks has become a crucial concern. One study made two major advances in the detection of network assaults: numerous ML and DL approaches, including multi-class neural networks, multi-class decision jungle, decision trees, KNN, and multi-class decision forest, were used to construct an intelligent system that classifies traffic as normal or abnormal. Using the AWID3 dataset, performance was evaluated with the Omnet++ simulator tool, which retrieved a subset of the packet transmission performance dataset for a run time of 20 seconds; the network attained 99% accuracy, but accuracy was the only evaluation metric used, and 'frame.time.epoch' is a time-series feature that should have been preprocessed accordingly. Another study proposed an intrusion detection technique for wireless sensor networks based on graph neural networks and Lyapunov optimization: the AWID dataset was used with a GNN and a Lyapunov optimization loss function, and the results improved on previous SVM-based work, although no confusion matrix or false alarm rate was reported. By resampling training data and redefining rewards in reinforcement learning, a further study created an environmental agent that improves intrusion detection; in a multi-classification experiment, the resulting system, AE-SAC, achieved excellent performance, with an accuracy of 84.15% and an F1-score of 83.97% on the NSL-KDD dataset and an accuracy and F1-score exceeding 98.9% on the AWID dataset. Related work with critical analysis is presented in Table 1. In the extant literature, most research studies did not include modern Wi-Fi traffic. Many studies used publicly available datasets, including the outdated KDD99 and NSL-KDD datasets, launched in 1999 and 2009 with 42 features, which do not reflect modern attack scenarios. Other widely used datasets likewise lack the latest Wi-Fi attack scenarios: ISCX 2012, for example, is based on emulated traffic with 82 features and does not reflect the conditions of a practical network environment; it comprises over 2 million traffic packets, with attacks representing only 2% of the traffic. UNSW-NB15 is based on a simulated network and consists of 49 features, 175,341 normal instances, and 82,332 anomalous instances, making it highly imbalanced. In 2017, CIC-IDS2017 was introduced, followed later by CIC-IDS2018.
These datasets contain various recent cyberattacks, such as brute-force attacks on FTP and SSH servers, denial-of-service (DoS) attacks, Heartbleed attacks, and other online attacks such as XSS, SQL injection, and brute force. They include attacks that were absent from earlier datasets, such as infiltration, botnets, and DDoS attacks. Another benefit is that the normal traffic generated in these datasets is based on network protocols such as HTTP, HTTPS, FTP, SSH, and e-mail, which is closer to a real network environment than the previous datasets. The major shortcoming of research based on these datasets is that they do not include Wi-Fi attack scenarios; all of them are based on wired networks. The Aegean Wi-Fi Intrusion Detection Dataset (AWID) is the only benchmark dataset consisting of attacks on wireless intrusion networks, providing a freely accessible collection of legitimate and malicious traffic directed against 802.11 networks. It was the first dataset to include 802.11 attacks, but it still does not include Krack and Kr00k attacks. The focus of this work is to extract the most meaningful features for a secure Wi-Fi system. Wrapper approaches such as RFE use machine learning algorithms to gauge the performance of selected features and frequently outperform filter methods in predictive accuracy. Furthermore, the existing literature has not extracted and analyzed generalized features for each attack, including Krack, Kr00k, and the authentication attacks comprising de-authentication and disassociation. The AWID3 benchmark dataset includes these attacks and focuses on enterprise adaptations of the protocol; it is thus considered more challenging than AWID while providing stronger security mechanisms. Another significant shortcoming in current research is the lack of evaluation of how trained models generalize to other datasets, which raises uncertainty about the transferability and generalizability of both features and models. Without such evaluation, it remains unclear whether a proposed cyber-attack detection model will perform well and provide accurate results in real-world scenarios with varying network conditions. Consequently, additional research and testing are crucial to ascertain whether retained features can be used successfully across various datasets, ensuring their dependability and usability in a wider spectrum of network environments. Corporate Wi-Fi networks are vital for both businesses and public administrations, offering a highly adaptable and secure infrastructure, yet access points face vulnerabilities such as de-authentication and disassociation attacks and the Krack attack, which exploits the four-way handshake. More recently, the Kr00k attack has emerged as a critical threat specifically targeting Wi-Fi chips. These risks demand vigilant security measures to protect wireless networks and devices; Fig 1 shows the framework for a secure Wi-Fi network. One of the prime objectives of this research is to improve the attack detection rate with fewer features, and Fig 2 outlines the proposed methodology. To conduct the proposed strategy and experimentation, the AWID3 dataset is utilized. Because datasets generally contain missing values, special characters, and mixed data types, preprocessing of the dataset is performed in the second step.
In the third step, the feature selection algorithm is applied to obtain a minimized set of features using recursive feature elimination, and several classification algorithms, including DT, RF, ET, LightGBM, MLP, and CNN, are used for classification. In the next step, DT-RFE is used to derive features for each attack, and classification is performed to analyze the accuracy of these features. In the last step, it is analyzed whether the features for each attack are transferable. It is worth noting that while deep learning algorithms are known for automatically learning hierarchical features and can be powerful, they come with increased computational demands; such models often require large amounts of training data, and their effectiveness is typically observed on more extensive datasets. As the AWID3 subset used here is relatively small and lacks the complexity that would benefit deep learning, simpler models such as decision trees performed better. The purpose of feature selection is to minimize the time and space complexity of the model: detecting attacks from reduced features, with less processing time and no delay, leads to an efficient, lightweight IDS. RFE is a feature selection method that uses a classifier to build the model: a machine learning model is trained and assessed on various feature subsets to determine the optimal subset that yields improved performance. The RFE process begins by training a machine learning model on the entire set of features and ranking the features by their significance to the model's performance; the least significant feature is then eliminated, and the model is retrained and assessed on the smaller set. This is repeated iteratively until a predetermined number of features, or a desired level of performance, is reached. A feature importance score is computed for each feature (Eq 1), and the feature with the lowest value is removed from the subset. Wrapper-based RFE differs from other feature selection approaches, such as filter-based or embedded methods, in that it evaluates the impact of feature subsets on the specific machine learning model being used, rather than merely measuring the correlation between features and the target variable. The decision tree is a supervised learning technique for tackling classification problems. A DT's components include leaves, branches, and nodes: the branches indicate the combinations of features that lead to the class labels, while the leaves represent the labels of each class, and both discrete and continuous data can be handled. The DT approach partitions the samples into two or more homogeneous sets; classification proceeds top-down, and an optimal conclusion is reached when the proper category of the leaf node is found. However, decision trees are prone to overfitting. At each node, a decision tree separates the data using a splitting criterion such as the Gini impurity of the node's data D over the C classes in the dataset, where p_i is the probability of an instance in D belonging to class i, as shown in Eq 2. For each feature and candidate value, the algorithm examines the splitting criterion and chooses the split that minimizes it, splitting the data recursively into child nodes until a halting criterion is reached.
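To make this procedure concrete, the sketch below pairs a small Gini-impurity function (Eq 2) with scikit-learn's wrapper-based RFE around a decision tree. The choice of 8 retained features mirrors the setup described in this paper, but the synthetic feature matrix is an illustrative placeholder rather than the actual AWID3 schema.

```python
# Minimal sketch: Gini impurity (Eq 2) and DT-based recursive feature
# elimination with scikit-learn. Synthetic data stands in for AWID3.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import RFE

def gini_impurity(labels: np.ndarray) -> float:
    """Gini(D) = 1 - sum_i p_i^2, where p_i is the share of class i in D."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))              # 16 candidate MAC/PHY-layer features
y = (X[:, 0] + X[:, 3] > 0).astype(int)      # toy labels, for illustration only

# RFE repeatedly fits the tree, ranks features by importance, and drops the
# weakest one until 8 features remain (the subset size used in this study).
selector = RFE(DecisionTreeClassifier(random_state=0),
               n_features_to_select=8, step=1).fit(X, y)
print("retained feature indices:", np.flatnonzero(selector.support_))
print("Gini of the full label set:", gini_impurity(y))
```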
When a leaf node is reached, the tree assigns a class label or a numerical value based on the majority class or the mean value at that leaf; Algorithm 1 demonstrates the decision tree process. The RF method, an ensemble learning technique, is used for classification and regression problems. Unlike a single decision tree, the random forest classifier uses numerous DTs to classify a given dataset: these trees calculate the entropy of features and split the samples layer by layer, so the dataset instances are divided according to the target column, and random forest thereby overcomes the overfitting problem of individual decision trees. Formally, let (X, Y) represent the dataset, with X a feature matrix of shape (n, p) containing n instances and p features, and Y the target variable. A bootstrap sample (X_b, Y_b) of the same size as the original dataset is created by randomly picking n instances from (X, Y) with replacement. For each tree, a subset of m features is chosen at random from the p available features, and a decision tree T is built on (X_b, Y_b) using those m features. Repeating this procedure N times yields a random forest {T_1, T_2, ..., T_N}, and for classification the most prevalent class is determined by a majority vote among the tree predictions, as in Eq 3. The algorithm thus fits randomized decision trees on distinct sub-samples of the dataset and uses averaging to improve accuracy and efficiency while counteracting overfitting. The extra trees algorithm uses the standard top-down approach to generate a sequence of unpruned decision or regression trees. ETs are distinct from conventional tree-based ensemble algorithms in that they separate nodes at randomly drawn cut points and construct each tree on the entire learning sample. Extra trees are similar to random forests, but they can be trained more quickly because the split thresholds are chosen at random and there is no need to search for the best thresholds. The LightGBM method incorporates two innovative techniques: gradient-based one-side sampling (GOSS) and exclusive feature bundling (EFB). XGBoost is a gradient-boosting algorithm used for regression and classification tasks. For classification, the goal is to minimize the log loss over the n data points, where y_i is the true class label and p_i is the predicted class probability for data point i, as given in Eq 4. It builds an ensemble of decision trees, called boosted trees, to make predictions; the objective function uses two regularization terms, L1 (lasso) and L2 (ridge), to prevent overfitting and maximize gain scores, and the ensemble's predictions are weighted according to performance. The multi-layer perceptron (MLP) is one of the most widely used feed-forward neural networks, with neurons linked in a one-way, one-directional manner. The MLP design is as follows: the first layer, which feeds the network with input variables, is the input layer; the last layer is the output layer; and all layers in between are hidden layers. A hidden layer of m neurons computes a weighted sum of its inputs and, for the j-th neuron, passes it through an activation function, as shown in Eq 5.
Here Z_j is the weighted sum for neuron j, W_ij is the weight connecting the i-th input neuron to the j-th hidden neuron, and b_j is the bias of neuron j. The weighted sum of an output neuron can be expressed as in Eq 6, where Z_k is the k-th output neuron's weighted sum, c_k is the bias of the k-th output neuron, and V_jk is the weight connecting the j-th hidden neuron to the k-th output neuron. The MLP offers the necessary structural flexibility and representational capability, given access to a sufficiently diverse set of data samples. CNNs are designed to learn spatial and temporal patterns in data; in the context of intrusion detection, they can learn patterns in the independent features of the dataset. The convolutional layer is the fundamental component of a CNN, in which a series of filters is applied to the input to generate feature maps. Mathematically (Eq 7), Z_ij^(l) is the value at position (i, j) of the l-th feature map, W_mn^(l) is the weight assigned to the l-th layer's filter at position (m, n), and X_(i+m, j+n)^(l-1) is the value at position (i+m, j+n) in the (l-1)-th layer's feature map. Following feature extraction, pooling layers reduce the spatial dimensions of the feature maps, where P_ij^(l) is the pooled value at position (i, j) and A_(2i, 2j)^(l) and its neighbours are the values at the corresponding positions of the activated feature map, as given in Eq 8. Fully connected layers then produce the final prediction, and the weights of the filters and fully connected layers are learned by training the network on a labeled dataset, as demonstrated in Algorithm 2. For example, if the independent features of an intrusion detection dataset are network packets, a CNN can learn patterns in the packet header fields, such as source and destination IP addresses, port numbers, and protocol type; by learning these patterns, the CNN can detect anomalies in the network traffic that may indicate an intrusion. The effectiveness of intrusion detection systems (IDS) is assessed using consolidated metrics such as precision, recall, F1-score, and AUC. As anticipated, all IDS models achieve remarkably high F1-scores, ranging from 0.98 to 1, and AUC scores, ranging from 0.97 to 0.99, when trained and tested on individual datasets such as AWID and AWID3. These findings are consistent with prior research on IDSs applied to publicly available datasets and underscore the models' efficacy in their specific contexts. However, transferring these high-performing models to an unseen dataset leads to diverse outcomes: performance may vary, indicating that exceptional results on a specific dataset do not automatically guarantee generalization to novel, unseen datasets collected under distinctive network environments. A pivotal question therefore arises: can the chosen set of features be seamlessly transferred across datasets? To probe this, the model trained with the retained features undergoes comprehensive testing on unseen network traffic, encompassing real-time scenarios and diverse network environments, allowing researchers to gauge the enduring efficacy and broad applicability of the retained features beyond the confines of the original dataset.
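As a concrete illustration of the deep models described above, the sketch below builds a small 1D CNN in the TensorFlow Keras framework used in this study. The layer widths and kernel size are illustrative assumptions; only the Adam optimizer with a 0.001 learning rate, ReLU activations, softmax output, dropout, and the two-epoch early-stopping patience follow settings reported elsewhere in the text.

```python
# Minimal 1D-CNN sketch for tabular intrusion features; layer sizes are
# assumptions, while optimizer, activations, dropout, and early stopping
# mirror the settings stated in the text.
import tensorflow as tf

def build_cnn(n_features: int, n_classes: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features, 1)),
        # Convolution learns local patterns across adjacent feature values (Eq 7).
        tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
        # Pooling reduces the spatial dimension of the feature maps (Eq 8).
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.5),                       # mitigates overfitting
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping: allow 2 further epochs without improvement before halting.
early_stop = tf.keras.callbacks.EarlyStopping(patience=2,
                                              restore_best_weights=True)
# model = build_cnn(n_features=8, n_classes=5)
# model.fit(X_train[..., None], y_train, validation_split=0.1,
#           epochs=30, callbacks=[early_stop])
```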
Such cross-dataset evaluation assumes paramount significance, as it validates the feasibility and versatility of the proposed cyber-attack detection model under dynamic and varied network conditions. In contrast to the first Aegean Wi-Fi Intrusion Detection Dataset (AWID), the AWID3 dataset focuses on enterprise adaptations of the protocol and is thus considered more challenging than AWID, providing stronger security mechanisms such as protected management frames (PMF), introduced with the 802.11w amendment, and support for various network designs. AWID3 is a publicly accessible database of Wi-Fi network traffic that includes actual traces of both legitimate and unwanted 802.11 activity, capturing numerous attacks launched against the IEEE 802.1X extensible authentication protocol (EAP) system; it focuses primarily on 802.11-related and higher-layer attacks, and new 802.11-specific attacks such as Krack and Kr00k have been included for analysis. In this research, a reduced version of the dataset is used, consisting of four attack types (de-authentication, disassociation, Krack, and Kr00k) plus benign traffic. The features' data types include timestamps, numerals, hexadecimal digits, and strings. The AWID3 dataset has an unbalanced distribution of records, and for this research its imbalance is not altered; Fig 3 shows the distribution of the number of instances in the dataset. For the feature selection phase, time-based features such as frame.time_delta and frame.time_delta_displayed were discarded, since time-based analysis is not the focus. Only cherry-picked MAC and physical layer features, proposed in earlier work, were selected; these features were chosen for their potential to serve as a solid foundation for a reliable, easy-to-handle, and economical 802.11 cyber-threat detection system, and they are presented with their descriptions in Table 2. The AWID3 dataset contains a few missing values, so records containing any missing value were dropped. The wlan.fc.ds field holds hexadecimal strings, which were converted to numerical values using label encoding. There are also significant differences between feature value ranges: radiotap.channel.freq takes values in the thousands, while the maximum value of radiotap.length in this dataset is approximately 100. Because gradient-descent step sizes depend on feature magnitudes, such differences in range would cause the updates to progress unevenly across features. The data therefore need to be normalized before being fed to the model, so that gradient descent progresses evenly towards the local minimum and the update steps advance at the same pace for all features, as demonstrated in Eq 9. For the trained classification algorithm to work properly, the primary data must first undergo normalization because of its high level of irregularity; without normalization, variables on a larger scale would dominate the model and degrade its efficiency. The min-max scaling method given in Eq 10 is used to rationalize the diverse data.
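A minimal sketch of these preprocessing steps, assuming a pandas DataFrame loaded from a CSV export of the capture and a label column named "Label" (both illustrative placeholders), could read:

```python
# Sketch of the preprocessing described above: drop missing records,
# label-encode the hexadecimal wlan.fc.ds field, and min-max scale (Eq 10).
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna().copy()                   # drop records with missing values
    # wlan.fc.ds holds hexadecimal strings; map them to integer codes.
    df["wlan.fc.ds"] = LabelEncoder().fit_transform(df["wlan.fc.ds"].astype(str))
    # Min-max scaling: x' = (x - min) / (max - min), so features with very
    # different ranges (e.g. radiotap.channel.freq vs radiotap.length)
    # contribute comparably to gradient-descent updates.
    feature_cols = [c for c in df.columns if c != "Label"]
    df[feature_cols] = MinMaxScaler().fit_transform(df[feature_cols])
    return df
```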
For the decision tree, the maximum number of leaf nodes was set to 200, and the minimum number of samples per leaf was raised to 2 to compel each leaf to capture pertinent information. In addition, minimal cost-complexity pruning was applied through the ccp_alpha complexity parameter, acting as a form of regularization: starting from the minimum value, pruning iteratively locates the node with the "weakest link", defined by its effective alpha, and the nodes with the lowest effective alpha are removed first. The same parameter values used for the DT were evaluated for RF, with favourable results. For ET, the maximum number of leaf nodes was set to 500 and the maximum depth to 300, with n_estimators set to 200; Table 3 lists the parameters of the tree-based algorithms. A multi-layer perceptron was applied to detect network attacks, with adaptive moment estimation (Adam) as the optimizer and a learning rate of 0.0001. As shown in Table 4, the convolutional neural network used a 0.001 learning rate with the Adam optimizer. Early stopping for both models was configured to run two further epochs before halting, to avoid overfitting; the activation function was the rectified linear unit, the output activation was softmax, and dropout layers were used together with early stopping to lessen overfitting. The main distinction between AWID (here referring to AWID2) and AWID3 lies in their emphases and contexts. Although both datasets pertain to Wi-Fi intrusion detection, AWID3 is specifically tailored to enterprise applications of the protocol, which often entail more robust security features. The key differences can be summarized as follows: AWID (AWID2) centres on conventional Wi-Fi intrusion detection scenarios, while AWID3 is geared towards business implementations of the protocol; AWID3 incorporates the enhanced security measures prevalent in business settings, including protected management frames (PMF), introduced with the 802.11w amendment, which strengthen Wi-Fi network security; and AWID3 reflects the wide range of network architectures commonly observed in commercial organizations, so the dataset covers the more intricate network configurations unique to business Wi-Fi deployments. In short, while both AWID (AWID2) and AWID3 are pertinent to Wi-Fi intrusion detection, AWID3 provides a more specialized dataset tailored to detecting intrusions in enterprise Wi-Fi environments, and its emphasis on stronger security mechanisms and diverse network designs enhances its relevance for real-world corporate applications. The primary objective of testing feature transferability is to identify the most robust and advantageous features that can be applied and generalized across diverse network environments, especially between a general Wi-Fi network setting and an enterprise network setting. This research aims to uncover features that retain their effectiveness when transferred from a general Wi-Fi network environment to a corporate one, with a specific focus on AWID3 and the enterprise versions of the protocol; the stronger security measures and varying network topologies of the corporate context may necessitate specific features for efficient intrusion detection. Both datasets share common instances of de-authentication attacks.
The training set encompasses AWID2-CLS-R, containing only the Normal and Flooding classes, while the test set comprises AWID3 Deauth.pcap, featuring only Normal and de-authentication traffic. This section discusses the feature selection and classification process. The model was built on Google Colab with a T4 GPU using the open-source TensorFlow Keras framework. Data preprocessing was performed to remove inconsistencies: missing information was handled, data formats were standardized, and the required transformations were applied to ensure consistency and correctness. Machine learning models were then trained and assessed, which entailed dividing the dataset into training and testing subsets, employing cross-validation techniques, and using suitable evaluation metrics to measure model performance. The ethical aspects of deploying automated wireless intrusion detection methods were also addressed: the data collected and analysed by these technologies were used solely to improve network security and combat cyber-attacks. Ethical considerations require that the data be treated responsibly, with strict measures in place to protect sensitive information from unauthorized access or misuse; by following these standards, automated intrusion detection systems can contribute positively to network security while respecting individual privacy rights and sustaining trust in technology. The AWID3 dataset used in these experiments has 5 classes: normal, de-authentication, disassociation, Krack, and Kr00k. Intrusion detection datasets are generally highly imbalanced, as attack traffic is always significantly scarcer than normal traffic; in this case, balancing the data with oversampling techniques would not be appropriate, so stratified 10-fold cross-validation (CV) was implemented to neutralize the imbalanced character of the dataset. The following evaluation metrics are appropriate for detecting cyberattacks. The efficiency of the proposed methodology was evaluated using a confusion matrix consisting of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), from which Accuracy, Precision, Recall, and F1-score were derived as assessment metrics. Accuracy is the proportion of cases in the dataset that the model identifies correctly; the higher the accuracy, the better the model. Precision is the ratio of correctly classified positive instances to all instances classified as positive (true positives plus false positives). Recall is the ratio of true positive instances correctly labeled as positive to all actual positive instances, so recall is low when the FN rate is high. The F1-score is the harmonic mean of precision and recall and is regarded as a useful assessment criterion for unbalanced data.
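A minimal sketch of this stratified evaluation protocol, assuming scikit-learn's cross-validation utilities, weighted F1, and one-vs-rest weighted AUC as approximations of the imbalance-aware metrics just defined, could read:

```python
# Sketch of stratified 10-fold evaluation with imbalance-aware metrics;
# synthetic data stands in for the preprocessed AWID3 feature matrix.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                                  # 8 retained features
y = rng.choice(5, size=2000, p=[0.8, 0.05, 0.05, 0.05, 0.05])   # imbalanced classes

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(
    DecisionTreeClassifier(max_leaf_nodes=200, min_samples_leaf=2),
    X, y, cv=cv,
    scoring={"f1": "f1_weighted", "auc": "roc_auc_ovr_weighted"},
)
print("mean weighted F1 :", scores["test_f1"].mean())
print("mean weighted AUC:", scores["test_auc"].mean())
```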
The process of feature selection presented various limitations throughout the investigation. One notable limitation was the potential presence of irrelevant or redundant features within the dataset: the extensive feature set, while informative, complicated the task of discerning the most pertinent features. Ensuring the transferability of the chosen features across diverse network environments was a further persistent challenge, and the ever-changing landscape of wireless networks compounded the intricacies of the feature selection procedure. Despite the application of rigorous methodologies, striking a balance between a lightweight model and the requisite detection accuracy remained a formidable constraint. The wrapper method ranks feature subsets according to how well they classify the objects using the learning machine. Fundamentally, recursive feature elimination prioritizes features based on a relevance metric: RFE acts as a greedy algorithm, ranking features by recursively identifying reliant, collinear features while removing the weak ones. In this approach, RFE with a decision tree (DT-RFE) was implemented, and the eight most relevant features were extracted, reducing the dimensionality of the data and providing an ideal basis for a cost-effective, lightweight IDS. DT-RFE can identify powerful predictors of a given outcome without assuming anything about the model's internal mechanism, selecting the most meaningful features based on their ranks; Table 5 shows the most relevant selected features. Apart from these 8 features, one additional feature, wlan_radio.signal_dbm, which represents the broadcasting device's signal strength, was used for classification. Combined with radiotap.dbm_antsignal, it can help pinpoint flooding and impersonation attacks, and its use alongside the other features reduced the false-positive rate. Indeed, wlan_radio.signal_dbm is central to the feature selection process for cyberattack detection: it captures the signal strength of the wireless network and thus provides important information about the reliability and quality of the Wi-Fi connection. When wlan_radio.signal_dbm was incorporated alongside the subset of eight other relevant attributes, the number of false positives during cyberattack detection fell significantly, suggesting that it enriches the selected subset with insightful contextual information about the wireless network environment. This integration improves the intrusion detection system's overall accuracy and effectiveness by offering additional insight into network behaviour and potential security threats. The classification results are shown in Fig 4. Since the dataset is imbalanced, the results should be read in terms of the F1-score and the area under the curve (AUC) score. From Table 5, it is observed that CNN and DT attained the best results in terms of accuracy, precision, recall, F1-score, and AUC score. The decision tree, however, achieved slightly better figures, with an F1-score of 99.82%, an AUC score of 99.90%, and a processing time of 20.2 s, and it also attained the highest recall of 99.82%. A high recall value is essential in wireless networks: if an attack instance is incorrectly classified as regular network traffic, it can cause a major loss of data in real-world businesses. Regarding the deep learning architectures, the validation loss is lower than the training loss because of the dropout layers used in the models; Figs 5 and 6 show the average loss across all folds. For training the MLP and CNN architectures, the number of samples proved insufficient: these architectures reached their best loss values within four to five epochs.
The models were therefore trained to quite a small loss, with the loss decreasing by a trivial 0.001 after each epoch. Summing up, machine learning models can be trained for an economical and time-saving cyber-attack detection mechanism that would also suit small and medium enterprises (SMEs), whereas for large-scale data, DNN models are preferred. The confusion matrices in Figs 7-12 present the average results of the classifier analysis. Normal, de-authentication, and Krack attacks were detected with high accuracy, with fewer than 100 instances falsely classified by the tree-based models, whereas the LightGBM and MLP classifiers struggled to differentiate between Kr00k and disassociation attacks, with approximately 400 instances of these attacks falsely classified even with the 16-feature set. Fig 13 shows that DT and CNN have the fewest misclassified instances. These results demonstrate that the DT-RFE method can reduce the features from 16 to 8 for detecting these attacks using a DT, which could be helpful for attack detection in an enterprise network: with 8 core features plus one additional feature out of the original 16, the proposed method detects Wi-Fi cyber-attacks with little processing time and a high detection rate. Furthermore, stratified cross-validation proved effective at alleviating the effects of overfitting and class imbalance: despite the de-authentication and Krack classes being under-represented, with only 38,942 and 49,990 samples respectively, the detection rate remained high. According to a previous analysis, feature sets of 30 and 27 features were not transferable, whereas sets of 13 and 5 features were transferable, although the results obtained needed improvement. To better understand this outcome, radiotap.channel.flags.cck and radiotap.dbm_antsignal were excluded here; radiotap.channel.freq, radiotap.flags.type.cck, and similar features fall into the same category. The fact that AWID2 and AWID3 were recorded on different radio channels influenced this decision, and neither of the two flag-based features provided insightful information for identifying flooding attacks. Only the best models from the ML and DL techniques, DT and CNN, were used to evaluate the transferability of the features; the evaluation results are given in Fig 14, and the confusion matrices of DT and CNN with these features appear in Figs 15 and 16. The decision tree (DT) model shows zero instances of normal traffic misclassified as flooding, hence no false positives; however, it suffers a significant false-negative problem, with around 84,000 instances of de-authentication attacks wrongly classified as normal. The convolutional neural network (CNN), by contrast, exhibits notably low numbers of both false positives and false negatives, making it the superior choice for feature transferability; its stronger accuracy, precision, recall, and F1-score indicate its effectiveness on this classification task compared with the DT. For this specific problem, then, the CNN is the more suitable choice, providing higher overall predictive capability with better precision and recall. The results demonstrated that these 6 features are transferable, with DT and CNN achieving F1-scores of 90% and 97%, respectively. DT-RFE was then applied to 4 separate datasets (de-authentication/normal, disassociation/normal, Krack/normal, and Kr00k/normal).
A different subset of features was extracted for each attack, comprising the three most useful of the 16 candidate features. Stratified 5-fold cross-validation was used to avoid overfitting and to handle the data imbalance. For each attack, a different subset of features was identified, and these subsets could classify incoming cyberattacks based on the 3 top-ranking features; Table 6 shows the feature combination for each attack, and Table 7 reports the performance of the algorithms on each attack. The three best-performing classifiers, DT, RF, and ET, were used for testing on the AWID3 dataset. Random forest attained superior results, with the maximum AUC and F1-scores for each attack. The decision tree completed the analysis in 2 seconds for de-authentication attacks, but for the remaining attacks it was prone to overfitting. The confusion matrices in Figs 17-25 show the average results over all folds of the machine learning analysis; with ET, only around 150 instances were misclassified. A more condensed collection of features also makes the model's results easier to interpret, since the contribution of each feature to classifying the various attacks becomes more visible. By choosing distinct feature subsets for classifying cyberattacks based on the top-ranking features, the modelling process can be streamlined, model performance improved, and a better understanding gained of the essential features driving attack classification; this is a useful strategy when working with complicated, high-dimensional datasets. Generalization refers to the capability of a classification model to adapt to previously unseen data. In this experiment, attack classification was performed to evaluate the generalization of the features extracted for each attack: the reduced feature subsets for de-authentication and disassociation attacks were used to perform the analysis on the AWID dataset. Table 8 reports the performance of this feature generalization on AWID. AWID contains two of the attacks present in AWID3, namely de-authentication and disassociation. The three most relevant features for each of these attacks, derived from AWID3, were tested on the AWID dataset with tree-based models: RF, DT, and ET for de-authentication, and RF and ET for disassociation. The results confirm the feasibility of feature generalization and demonstrate that the features extracted for each attack type can be applied effectively to another dataset, under different network conditions, for attack classification. The proposed models exhibit impressive resilience against several cyberattacks, including de-authentication, Krack, and Kr00k: through extensive testing and assessment, they have demonstrated the ability to identify and counteract these attacks efficiently, protecting the security and integrity of wireless networks. Even amid the complexity of real-world network environments, these models can recognize patterns and abnormalities indicative of the targeted attacks by leveraging machine learning and deep learning approaches. This robustness underscores the dependability and effectiveness of the proposed approach in defending against a variety of cyber-attacks, offering enhanced security to network administrators. Since the imbalanced nature of the data was maintained, the F1-score, rather than accuracy, should be considered for evaluation.
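A minimal sketch of this cross-dataset generalization test, training on an AWID3-derived feature subset and evaluating on another capture, might look as follows; the file names, the label column, and the particular three-feature subset are illustrative placeholders, not the exact artefacts of this study.

```python
# Sketch: train on one dataset and evaluate the same reduced feature
# subset on another. Paths, label column, and subset are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

SUBSET = ["radiotap.length", "radiotap.channel.freq", "wlan.fc.ds"]  # hypothetical top-3

train = pd.read_csv("awid3_deauth_train.csv")   # e.g. AWID3 de-authentication split
test = pd.read_csv("awid_deauth_test.csv")      # e.g. AWID (AWID2) counterpart

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train[SUBSET], train["Label"])
pred = clf.predict(test[SUBSET])
print("weighted F1 on the unseen dataset:",
      f1_score(test["Label"], pred, average="weighted"))
```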
The execution time of the proposed methods was lower than that of previous state-of-the-art techniques. Table 9 compares the performance measures of the proposed work with the state of the art; only the weighted values of the evaluation measures were considered. Table 10 presents the outcomes of the various models in terms of accuracy, precision, recall, and F1-score, assessed over different feature sets to determine whether the features can be applied across different network situations. The results suggest that the proposed CNN model achieved a notable level of accuracy and well-balanced precision, recall, and F1-scores even with a smaller feature set, while the performance of the decision tree (DT) models varied with the number of features used, underscoring the influence of feature selection on model effectiveness. An adversary can access a victim's critical details by launching a series of attacks on the network, and intelligent machine and deep learning based cyberattack detection mechanisms have gained popularity for their efficiency and automation. This study aimed to develop a Wi-Fi-based attack detection system. A decision tree with recursive feature elimination was used to extract the most meaningful features for a cost-effective, lightweight, and time-efficient system to detect cyberattacks: of the original 16 features, eight were retained, and adding wlan_radio.signal_dbm to these eight significantly reduced false positives. Several algorithms, comprising the tree-based models decision tree, random forest, LightGBM, and extra trees together with MLP and CNN, were used to detect four types of cyber-attacks (de-authentication, disassociation, Krack, and Kr00k) from the AWID3 dataset. In terms of accuracy, precision, recall, F1-score, and AUC, both decision trees (DT) and convolutional neural networks (CNN) perform exceptionally well, obtaining 99.82% accuracy on the five-class classification problem, comparable to or slightly better than the other state-of-the-art models, such as LightGBM, extra trees (ET), and the multilayer perceptron (MLP). The proposed approach showed that tree-based machine learning models are appropriate for a lightweight IDS, requiring fewer computations and minimal execution time while classifying attacks well, whereas MLP and CNN can be employed for handling large and complex data. Furthermore, the evaluation of the various metrics across the extracted feature sets highlights the transferability of the features in diverse network contexts: the CNN model showed impressive accuracy and balanced performance with fewer features, while the effectiveness of the DT models varied with feature quantity, emphasizing the crucial role of feature selection. The features for each attack were extracted using DT-RFE, and the three best models, DT, RF, and ET, were used to evaluate them; RF and ET achieved excellent results across all performance metrics, including accuracy, precision, recall, F1-score, and execution time for attack detection. During this evaluation, overfitting occurred with the DT for the disassociation, Krack, and Kr00k attacks. Finally, the features were applied to the AWID dataset to determine whether the extracted features are generic: both de-authentication and disassociation attacks in AWID were evaluated, with RF and ET achieving high AUC and F1-scores.
This research provides both theoretical and practical implications for secure Wi-Fi communication in enterprise networks. The study's practicality extends beyond specific datasets because the transferability of the proposed features and models to other network contexts and situations is taken into account. While the experimentation was carried out on benchmark Wi-Fi datasets, the absence of other benchmark datasets restricted the examination of further Wi-Fi network situations, and the proposed features are designed for Wi-Fi-based network setups, which may limit their use in wired network settings. Despite these restrictions, the study provides valuable insights into Wi-Fi network security and lays the groundwork for future research to improve detection capabilities across a variety of network scenarios. A further shortcoming of this research is the unavailability of a proprietary dataset, attributable to resource constraints, including the budget, time, and infrastructure required to collect and compile data for experimentation. For holistic security, future work can consider developing newer datasets containing attack and benign traffic from both enterprise and industrial networks, and can apply different feature extraction and evaluation techniques for comparison. The development of diversified datasets, the exploration of different feature extraction methods, and rigorous evaluation can enhance the quality and applicability of intrusion detection systems in real-world network security, and collaboration and resource-sharing within the research community can also play a vital role in addressing these challenges.
Anemia is a widespread global health concern that has been consistently linked to various negative outcomes in recent years. It is characterized by an imbalance between the production and destruction of red blood cells (RBCs), resulting in inadequate oxygen delivery to vital tissues such as the brain and heart. The diagnosis of anemia typically relies on the hemoglobin level in the blood. The prevalence and incidence of anemia have risen significantly, attributed to a combination of increased nutrient deficiencies, chronic diseases, inherited hemoglobin disorders, and the use of specific medications. Anemia can have detrimental effects on cognitive and physical function, leading to decreased economic productivity and higher morbidity and mortality rates, and therefore presents a significant health challenge in modern society. Early recognition of anemia offers an opportunity to delay or prevent the onset of the disease and to enhance treatment outcomes. Consequently, identifying new indicators closely associated with anemia is of great significance for developing more effective anemia prevention strategies. Oxidative stress, defined as an imbalance between the production of reactive oxygen species (ROS) and antioxidant defense mechanisms, is acknowledged as a major factor in various conditions such as inflammation, aging, cancer, and cardiovascular disease. Numerous studies have emphasized a significant link between oxidative stress and the onset of anemia. Diet is a modifiable factor for reducing oxidative stress: by consuming dietary antioxidants, individuals can alleviate or prevent oxidative stress-related diseases through neutralization of the harmful effects of ROS, and consistent antioxidant consumption can reduce oxidative stress levels and enhance the body's ability to withstand it. Halima et al. found that antioxidants exert a protective effect against anemia and provide significant alternative benefits for RBC function, achieved by preventing lipid peroxidation in RBCs, increasing levels of reduced glutathione (GSH), and reducing RBC permeability. As research enhances our understanding of nutrition and oxidative stress, interest is growing in the role of antioxidant-rich diets in the prevention of anemia. The composite dietary antioxidant index (CDAI) was developed by Wright et al. as a tool for assessing antioxidant intake, serving as a composite score of dietary antioxidant consumption; this index considers key nutrients such as vitamins A, C, and E, zinc, selenium, and carotenoids. Previous studies have mainly examined the relationship between CDAI and other diseases: Wang et al. reported that a higher CDAI was associated with a lower prevalence of chronic kidney disease among American adults; Yu et al. found that elevated CDAI scores were linked to a decreased risk of colorectal cancer (CRC) and concluded that food-based antioxidants might help lower CRC risk in the general population; and another study indicated that a higher intake of dietary antioxidants, assessed through the Dietary Antioxidant Quality Score (DAQS) and the CDAI, was associated with a reduced risk of all-cause and cardiovascular disease mortality in adults with diabetes. However, the specific connection between CDAI and anemia has yet to be conclusively determined.
A cross-sectional study was conducted using data from the National Health and Nutrition Examination Survey (NHANES) to assess the relationship between CDAI and anemia. The hypothesis was that higher CDAI scores would correlate with a lower prevalence of anemia. The analysis is based on data from the NHANES, a cross-sectional survey conducted by the Centers for Disease Control and Prevention (CDC) and the National Center for Health Statistics (NCHS). The survey’s objective is to evaluate the health and nutrition status of a representative, non-institutionalized population. Since 1999, NHANES has been an ongoing study that releases data every two years. The database is available for free download from the official website. Approval for the study was obtained from the NCHS Research Ethics Review Board, all participants provided written informed consent, and no external ethics approval was required for this study. This study utilized NHANES data spanning from 2003 to 2018, encompassing a total of 80,312 participants. After excluding individuals without hemoglobin data (N = 14,320), those lacking CDAI data (N = 4,656), and pregnant participants (N = 1,051), we further removed cases with missing covariate data. Ultimately, we obtained a final sample size of 33,914 participants. A flow chart depicting the exclusion criteria is presented in Fig 1. According to World Health Organization (WHO) guidelines, anemia is defined as a hemoglobin (HB) level below 12 g/dL for women and below 13 g/dL for men. Data on dietary antioxidant intake were obtained from the average of two 24-hour dietary recall interviews conducted as part of the NHANES. The initial dietary recall was performed at a mobile examination center (MEC) by trained interviewers who adhered to a standardized protocol. During this face-to-face interview, detailed information regarding all food and beverages consumed by participants over the past 24 hours was collected. This was followed by a second interview conducted via telephone 3 to 10 days later. Utilizing average dietary intake data from two non-consecutive days is deemed more accurate than relying solely on data from a single day. To evaluate the overall exposure to dietary antioxidants, the intake of six key antioxidants (vitamin A, vitamin C, vitamin E, zinc, selenium, and carotene) was quantified and used to calculate the CDAI. This involved standardizing the intake of each antioxidant by subtracting the mean and dividing by the standard deviation, and then summing the standardized values: \( \mathrm{CDAI} = \sum_{i=1}^{n} \frac{x_i - \mu_i}{s_i} \), where \(x_i\) is the individual’s intake of antioxidant \(i\), and \(\mu_i\) and \(s_i\) are the corresponding mean and standard deviation. Potential confounders considered in this study included age, sex, race, education level, body mass index (BMI), poverty income ratio (PIR), smoking status, and alcohol consumption, as well as comorbidities such as hypertension, diabetes, cancer, and hyperlipidemia. Age was dichotomized at 40 years; ethnicity was categorized into non-Hispanic White, non-Hispanic Black, Mexican American, and others; and education level was categorized into less than high school, high school, and more than high school. Marital status was divided into married/living with partner or widowed/divorced/separated/never married. PIR was categorized as low (≤ 2.14) and high (> 2.14). BMI was categorized as thin/normal (≤ 18.5 kg/m², 18.6–24.9 kg/m²), overweight (25.0–29.9 kg/m²), and obese (≥ 30 kg/m²).
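The CDAI calculation above is straightforward to implement. The following is a minimal R sketch (R being the software used for the study’s analyses); the data frame and column names are illustrative assumptions, not the study’s actual variables.

```r
# Minimal sketch of the CDAI computation, assuming `intake` is a data
# frame with one row per participant and one column per antioxidant.
antioxidants <- c("vit_a", "vit_c", "vit_e", "zinc", "selenium", "carotene")

compute_cdai <- function(intake) {
  z <- scale(intake[, antioxidants])  # column-wise (x - mean) / sd
  rowSums(z)                          # sum of the six standardized intakes
}

# Example with simulated intakes:
set.seed(1)
intake <- as.data.frame(matrix(rlnorm(600), ncol = 6,
                               dimnames = list(NULL, antioxidants)))
summary(compute_cdai(intake))
```

In practice the means and standard deviations are taken over the analytic sample, so the CDAI is a relative score within the study population rather than an absolute measure.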
Smoking status was categorized into two groups: smokers, defined as those who had smoked at least 100 cigarettes in their lifetime (coded as ‘yes’), and non-smokers (coded as ‘no’). Alcohol consumption was classified as ‘yes’ for those who had consumed at least 12 alcoholic drinks in the past year, and ‘no’ for others. Participants were considered to have a history of diabetes, hypertension, or hyperlipidemia if they reported a physician’s diagnosis of these conditions. The diagnosis of cancer required meeting two criteria: (1) a positive response to ‘ever been told they have cancer or a malignancy of any kind’ (variable mcq220); and (2) providing details for the inquiry ‘what kind of cancer?’ (variable mcq230A). Participants were categorized into four groups based on CDAI quartiles. Continuous variables were presented as mean ± standard deviation (mean ± SD), while categorical variables were expressed as percentages. The associations between baseline characteristics and CDAI quartiles were evaluated using chi-square or t tests. A weighted logistic regression model was employed to investigate the relationship between CDAI and anemia, with results reported as adjusted odds ratios (ORs) and 95% confidence intervals (CIs). Model 1 had no adjustments; model 2 included adjustments for gender, age, education level, and race; and model 3 further adjusted for PIR, BMI, smoking, drinking, hypertension, diabetes, and hyperlipidemia in addition to the covariates in model 2. CDAI was analyzed both as a continuous and as a categorical variable to explore correlations. All statistical analyses were conducted using R 4.3.3 with appropriate weights, and statistical significance was defined as P < 0.05. The survey procedures are in accordance with the standards outlined in the Declaration of Helsinki. All information from the NHANES program is freely available to the public; therefore, approval by a medical ethics committee was not required. This study involved 33,914 patients who met strict inclusion and exclusion criteria, of whom 3,416 (10.07%) were diagnosed with anemia. The average age of the participants was 47.35 ± 0.22 years. Patients were categorized into four groups based on CDAI quartiles, and their baseline characteristics are presented in Table 1. Higher CDAI levels were associated with certain characteristics, such as younger age; being a non-Hispanic White, married female; non-drinking; non-smoking; higher BMI; higher PIR; higher education level; absence of diabetes, cancer, hypertension, and anemia; but presence of hyperlipidemia. Univariable and multivariable logistic analyses showed a significant link between CDAI and anemia (P < 0.001) (S1 Table). Table 2 presents the results of the weighted logistic regression analysis investigating the relationship between anemia and CDAI. The analysis demonstrated a robust negative correlation between CDAI and anemia when considering CDAI as a continuous variable in model 1 (OR: 0.94; 95% CI: 0.93–0.96), model 2 (OR: 0.96; 95% CI: 0.95–0.97), and model 3 (OR: 0.97; 95% CI: 0.95–0.98). Particularly noteworthy is the significant trend in model 3 (P for trend < 0.001) when CDAI was categorized into quartiles, indicating that higher CDAI scores were linked to a lower likelihood of anemia. To further confirm this association, restricted cubic spline (RCS) analysis with 3 knots was performed, revealing a linear negative correlation between CDAI and anemia (P for nonlinearity = 0.619), as depicted in Fig 2.
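For readers reproducing this kind of NHANES analysis, the survey-weighted logistic regression can be fitted with the R survey package. The sketch below corresponds to model 3; the design variables follow standard NHANES usage, but the data frame and covariate names are illustrative assumptions rather than the study’s actual code.

```r
# Hedged sketch of the survey-weighted logistic regression (model 3).
# `nhanes` is assumed to hold the merged analytic data; SDMVPSU and
# SDMVSTRA are the standard NHANES design variables, and `diet_wt` a
# dietary sample weight rescaled for the pooled 2003-2018 cycles.
library(survey)

des <- svydesign(ids = ~SDMVPSU, strata = ~SDMVSTRA,
                 weights = ~diet_wt, nest = TRUE, data = nhanes)

fit <- svyglm(anemia ~ cdai + gender + age_group + education + race +
                pir + bmi_group + smoking + drinking +
                hypertension + diabetes + hyperlipidemia,
              design = des, family = quasibinomial())

# OR and 95% CI for CDAI as a continuous exposure:
exp(cbind(OR = coef(fit), confint(fit)))["cdai", ]
```

Treating CDAI as a categorical exposure is the same call with a quartile factor in place of `cdai`, and a linear-trend test is obtained by entering the quartile index as a numeric term.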
The relationship between the six antioxidant components of CDAI and anemia was further examined (Table 3). Vitamin A was linked to anemia in model 1 (OR: 1.00; 95% CI: 1.00–1.00, P = 0.010), and vitamin C showed an association with anemia in models 1 and 2 (OR: 1.00; 95% CI: 1.00–1.00, P = 0.030 and P = 0.002, respectively). After adjusting for all variables, zinc (OR: 0.97; 95% CI: 0.97–0.98, P < 0.001), vitamin E (OR: 0.98; 95% CI: 0.98–0.99, P < 0.001), carotene (OR: 1.00; 95% CI: 1.00–1.00, P < 0.001), and selenium (OR: 1.00; 95% CI: 1.00–1.00, P = 0.010) were identified as independent factors associated with anemia. To further investigate the influence of CDAI on anemia risk within specific subgroups, stratified analysis was performed based on gender, age, race, PIR, BMI, hypertension, diabetes, and hyperlipidemia. The results indicated that, except for sex, smoking status, and diabetes, there were no significant interactions between CDAI and anemia risk, indicating that the protective effect of CDAI against anemia is particularly pronounced in male non-smokers and non-diabetic individuals. This study is the first to investigate the relationship between CDAI and the prevalence of anemia based on data from NHANES. Upon adjusting for potential confounders, a negative association between CDAI and anemia in American adults was observed, with a linear trend established through dose-response analysis. Additionally, subgroup analysis indicated that the protective effect of CDAI against anemia was particularly significant among male nonsmokers and nondiabetic individuals. Clinically, these results suggest that anemia may be effectively prevented, diagnosed, and treated through appropriate dietary modifications rich in antioxidants. Diet plays a crucial role in managing the body’s oxidative stress levels, with dietary antioxidants being essential in lowering the risk of aging, cancer, diabetes, inflammation, liver disease, and cardiovascular disease. The fast-paced lifestyle of modern society leads to an increase in free radicals in the body, which can damage cells, tissues, and organs, ultimately impacting longevity. Antioxidants are key in fighting free radicals, thereby contributing to the prevention of both short-term and long-term diseases. In this study, we focused on the CDAI for two primary reasons. Firstly, CDAI serves as a crucial indicator of dietary antioxidant capacity. Unlike single dietary antioxidant indicators, CDAI encompasses key antioxidant nutrients, including vitamin A, vitamin C, vitamin E, zinc, selenium, and carotene, thereby facilitating a more comprehensive evaluation of overall diet quality. Secondly, research in related fields has garnered significant attention. A study within NHANES demonstrated a negative linear correlation between CDAI and hypertension. Its results also propose that increasing CDAI levels through a diet rich in antioxidant nutrients could potentially reduce the occurrence of metabolic syndrome. Similarly, Ma et al, in their study on the correlation between CDAI and coronary heart disease, reported a negative correlation, indicating that CDAI is inversely related to the incidence of coronary heart disease (Q4 vs Q1, OR: 0.65, 95% CI: 0.51–0.82, P < 0.001). Research on the relationship between CDAI and anemia is currently limited; our study extends these findings by showing an inverse relationship between CDAI levels and anemia in American adults (OR: 0.97, 95% CI: 0.96–0.98). Specifically, higher CDAI scores were linked to a lower prevalence of anemia.
Our findings align with those of Zhang et al, who reported that among 5,880 participants, a higher CDAI was associated with a reduced likelihood of renal anemia (adjusted OR: 0.96, 95% CI: 0.94–0.98). In addition, studies focusing on specific anemic populations, such as patients with β-thalassemia, provide further insight into the relationship between anemia severity and antioxidant defenses. Allen et al suggest that the severity of anemia in these patients is linked to the depletion of antioxidant defenses, indicating that antioxidant supplementation may be beneficial. Andrea et al summarize the intricate interactions between antioxidant-rich foods and gut microbiota, inflammation, and obesity, positing that plant-based antioxidant foods, such as vegetables, fruits, and nuts, are essential for health maintenance. Meanwhile, Jacques et al investigated the impact of various nutrients, including antioxidants such as vitamin A and vitamin C, on iron metabolism to mitigate the risk of anemia. These reports align with our results, which show that higher CDAI scores are associated with a lower prevalence of anemia, thereby emphasizing the potential therapeutic role of antioxidants. Therefore, a balanced diet, such as the Mediterranean diet, along with specific foods like fish, fresh vegetables, and fruits, is recommended for individuals with low CDAI scores. These foods are rich in essential components, including fiber, minerals, vitamins, and antioxidants, which can help prevent the occurrence of anemia. Subsequently, this study explored the correlation between the individual antioxidant components and anemia, discovering that vitamin E, zinc, carotene, and selenium were independently linked to anemia after adjusting for all potential confounders. Vitamin E is recognized as a crucial lipophilic antioxidant in biological membranes, capable of scavenging free radicals and acting as a chain-breaking antioxidant. Severe vitamin E deficiency can lead to an impaired immune response and hemolytic anemia caused by free radicals. Sue et al highlighted that deficiency in multiple trace element biomarkers (iron, zinc, selenium) was positively associated with anemia, while Lisa et al showed that low zinc is an independent risk factor for anemia in school-age children and mediates the effect of low selenium on hemoglobin levels. Carotene exerts antioxidant effects through direct interactions with free radicals, which occur via electron or hydrogen atom transfer. Additionally, carotene can be converted into vitamin A within the body, promoting iron absorption and metabolism and thereby exerting a preventive effect against anemia. However, these studies focused solely on specific antioxidants and did not examine potential synergies among various antioxidant nutrients. Given that our dietary intake typically includes multiple antioxidants, the CDAI appears to capture a comprehensive effect on individuals’ pro-antioxidant status and underscores the advantages of thorough assessments of antioxidant exposure. This study has several strengths. Firstly, it is the first cross-sectional survey to explore the relationship between CDAI and anemia, emphasizing the potential impact of dietary antioxidants in reducing anemia prevalence. Secondly, the findings indicate that the consumption of antioxidant-rich foods could potentially decrease the risk of anemia and offer dietary guidance to boost antioxidant intake for individuals with anemia. However, this study also presents several limitations.
Firstly, the dietary data in NHANES are based on self-reporting, which may introduce recall bias, and such errors are unavoidable. Secondly, the cross-sectional design complicates the establishment of a causal relationship between CDAI and anemia, indicating that prospective multicenter studies will be necessary to validate our findings. Thirdly, the anemia data utilized in this study were derived from laboratory diagnoses and may not have accounted for the potential contributions of a history of anemia or biological differences, such as smoking or an increased incidence of thalassemia. Lastly, although this study accounted for numerous clinical variables, potential unmeasured confounders, such as medication use and laboratory indicators, including liver and kidney function, iron levels, and vitamins, may still influence CDAI levels and their correlation with outcomes. Consequently, future research should aim to explore the specific associations between these variables more comprehensively. This study revealed an inverse relationship between CDAI and the prevalence of anemia, with a more pronounced effect observed in male nonsmokers and nondiabetic individuals. These findings offer valuable insights for healthcare providers to develop more targeted anemia screening and prevention strategies. Furthermore, our study suggests a potential role for dietary modification, particularly for individuals with lower CDAI scores, including the adoption of antioxidant-rich dietary patterns such as plant-based diets, alongside iron supplementation where indicated, to help reduce the prevalence of anemia. Additionally, further large-scale prospective studies are necessary to confirm these findings.
|
Review
|
biomedical
|
en
| 0.999997 |
PMC11694998
|
Monitoring core temperature is important for patients under anesthesia. Temperature control and monitoring are critical, especially in cases of hypothermia or malignant hyperthermia in the operating room. Core temperature can be measured at various sites, including the pulmonary artery, esophagus, tympanic membrane, and nasopharynx. The esophagus is a commonly used site for temperature measurement in patients under general anesthesia. However, in cases where monitoring core temperature at the esophagus is challenging, such as in surgeries involving the head and neck or esophagus, alternative methods may need to be considered. An endotracheal tube (ETT) with a built-in thermometer allows for the measurement of tracheal temperature, which can be a suitable alternative to esophageal temperature measurement. This method eliminates the need for a separate esophageal thermometer, providing a convenient and reliable means of monitoring core temperature during anesthesia. The measurement of pulmonary artery blood temperature using a pulmonary artery catheter (PAC) is the gold standard for monitoring core temperature. In this study, we compared the tracheal temperature measured using an ETT thermometer with the blood temperature measured using a PAC. The study aimed to assess the clinical reliability and accuracy of the ETT thermometer compared with core temperature measured using a PAC. This observational cohort study was approved by the Institutional Review Board of Pusan National University Yangsan Hospital and registered at ClinicalTrials.gov. The study was conducted in accordance with the Declaration of Helsinki. A total of 12 patients aged >18 years who underwent coronary artery bypass graft (CABG) surgery with PAC insertion were enrolled, and written informed consent was obtained. Patients with unstable vital signs who underwent emergency surgery were excluded. After enrolment, patients with an inappropriately positioned PAC or a malfunctioning thermometer were excluded from the study. The CONSORT flow diagram is presented in Fig 1. All patients underwent standard monitoring in the operating room, which included non-invasive blood pressure measurement, electrocardiography, and pulse oximetry. After the induction of general anesthesia using 1–2 mg/kg of 1% propofol, 0.8 mg/kg of rocuronium, and remifentanil, anesthesia was maintained with sevoflurane and an O₂-air mixture. The patients were intubated using an ETT equipped with a thermometer on the cuff. Following central catheterization of the right internal jugular vein, a PAC was inserted, and its placement was confirmed using transesophageal echocardiography (TEE). Subsequently, tracheal temperature (T_T) was measured using the ETT thermometer, and blood temperature (T_P) was obtained using the PAC. Temperature measurements were taken at 5-minute intervals for 1 hour before starting cardiopulmonary bypass (CPB). Follow-up observations were not required, and the study concluded on the same day. The primary outcome of the study was to determine whether the tracheal temperature, measured using the ETT thermometer, accurately reflected the core temperature measured using the PAC. We assessed the reliability and accuracy of the ETT thermometer as an alternative for core temperature monitoring and analyzed the agreement and correlation between these two temperature measurements.
Agreement was investigated using the Bland–Altman plot with multiple measurements per subject, and correlation was evaluated using the concordance correlation coefficient (CCC). The sample size was calculated based on methods used in previous studies. We assumed that the mean difference between the tracheal and core temperatures was approximately 0.25°C, that the standard deviation of the difference was approximately 0.1°C, and that the maximum allowed difference between methods was 0.5°C. Assuming a two-sided α of 0.05 and a β of 0.1 (power of 0.9), the minimum required number of pairs was calculated to be 111. Since 13 pairs of data were collected per participant (data collected every 5 minutes for 1 hour), dividing 111 pairs by 13 gave approximately 9 (8.5) participants. Allowing for an attrition rate of 20%, 12 patients were recruited. All data are expressed as numbers (proportions), mean ± standard deviation, or median (interquartile range), unless otherwise specified. Statistical analyses were performed using MedCalc® Statistical Software, version 22.007 (MedCalc Software Ltd., Ostend, Belgium). A total of 12 patients were enrolled from September 20, 2022, to November 21, 2022. One patient was later excluded because of missing data resulting from the start of CPB. Therefore, 11 patients were finally included, yielding a total of 143 pairs of data for analysis. The patient characteristics are summarized in Table 1, and all measured data are presented in S1 Table. For the primary outcome, the agreement between the tracheal and pulmonary artery temperatures measured using the ETT thermometer and the PAC, respectively, was good. The mean difference between the tracheal and pulmonary artery temperatures was −0.10°C. The 95% limits of agreement (LoA), calculated as the mean difference ± 1.96 standard deviations, ranged from −0.35°C to 0.15°C. The 95% confidence intervals for the lower and upper LoA were −0.51°C to −0.27°C and 0.07°C to 0.31°C, respectively (Table 2). These values indicate the range within which most temperature differences between the two methods fall. Additionally, the maximum allowed difference (Δ) was set at 0.5°C. The majority of temperature differences fell within the LoA and were well below the maximum allowed difference, suggesting good agreement between the two measurement methods. Furthermore, the CCC was 0.95, indicating a substantial strength of agreement. In this study, the agreement between the tracheal and pulmonary artery temperature measurements obtained with the ETT thermometer and the PAC, respectively, was found to be clinically reliable and accurate, which indicates that the tracheal temperature measurement can effectively represent the core temperature of the patients. Consequently, the use of an ETT equipped with a thermometer on the cuff can be considered a viable alternative for measuring core temperature. This method offers practicality and accuracy, particularly in situations where the placement of a PAC or an esophageal thermometer may not be feasible or preferred. Another study involving patients who underwent living donor liver transplantation reported results similar to ours, demonstrating the reliability of tracheal temperature monitoring with a percentage error of −0.15%. In our study, the percentage error was −0.28%. A low percentage error suggests that the tracheal temperature monitoring method provides accurate and precise measurements.
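For illustration, the core agreement statistics reported above (bias, 95% LoA, and Lin's CCC) can be computed in a few lines. The sketch below uses R with simulated paired temperatures; it is a simplified version that ignores the repeated-measures correction applied in the actual Bland–Altman analysis, and all variable names are assumptions.

```r
# Simplified sketch of the agreement analysis: bias, 95% limits of
# agreement, and Lin's concordance correlation coefficient (CCC).
agreement <- function(t_trach, t_pa) {
  d    <- t_trach - t_pa
  bias <- mean(d)
  loa  <- bias + c(-1.96, 1.96) * sd(d)           # 95% limits of agreement
  ccc  <- 2 * cov(t_trach, t_pa) /
          (var(t_trach) + var(t_pa) + (mean(t_trach) - mean(t_pa))^2)
  list(bias = bias, loa = loa, ccc = ccc)
}

# Simulated example shaped like the study data (143 pairs):
set.seed(7)
t_pa    <- rnorm(143, mean = 36.3, sd = 0.4)       # pulmonary artery temp
t_trach <- t_pa - 0.10 + rnorm(143, sd = 0.13)     # tracheal temp with bias
agreement(t_trach, t_pa)
```

A full analysis with repeated measurements per patient would partition within- and between-subject variance, slightly widening the LoA, which is why dedicated procedures (as in MedCalc) were used in the study.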
The ETT used in our study had a thermometer on the cuff; therefore, temperature measurements were less affected by a breathing circuit with a heated-wire humidifier. The ETT cuff pressure, checked with a manometer, was kept consistent for all participants. No complications due to the cuff thermometer were observed. Pulmonary artery temperature measurement is less commonly used than esophageal temperature monitoring, which is the usual method in patients under general anesthesia. However, there are certain situations where esophageal temperature measurement may not be applicable or feasible. These include patients with esophageal diseases, such as severe varices, as well as surgeries involving the head and neck, thorax, or esophagus. In particular, the use of TEE during cardiac and transplant surgeries can pose challenges for core temperature monitoring with esophageal temperature probes. The TEE probe can interfere with the proper positioning and functioning of esophageal temperature probes, leading to inaccurate temperature measurements. In such cases, alternative methods for monitoring core temperature need to be considered. Tracheal temperature monitoring can serve as a reliable and independent method for measuring core temperature. Nevertheless, our study has several limitations. It was conducted at a single center, and the sample size was relatively small. However, other studies have reported similar results, supporting our findings. This consistency in the literature enhances the overall reliability and validity of the results, despite the limitations. In conclusion, the agreement between the tracheal and pulmonary artery temperature measurements obtained with the ETT thermometer and the PAC, respectively, was found to be clinically reliable and accurate. These findings indicate that the tracheal temperature measurement can effectively represent the core temperature of the patients. The use of an ETT equipped with a thermometer on the cuff can be a reliable and independent method for measuring core temperature.
|
Other
|
biomedical
|
en
| 0.999995 |
PMC11695002
|
Ticks are important arthropod vectors of various pathogens and have been associated with serious medical and veterinary health problems. Many tick-borne bacterial agents, e.g., spotted fever group rickettsiae (SFGRs), Anaplasma spp., and Coxiella spp., are significant causes of morbidity and mortality in humans and domestic animals and are of great public health importance. The majority of tick-borne infections are zoonotic, and their incidence and distribution are steadily increasing worldwide. The distribution of various tick species and tick-borne agents in China has been studied for a long time. To date, at least 124 tick species from 9 genera have been reported across 1134 counties in China. It was reported that more than 3,500 human cases of infection with tick-borne pathogens (Borrelia spp., Anaplasma spp., Babesia spp., and SFGRs) had been confirmed across the west, north, and northeast of China, including 32 cases of unexplained fever caused by rickettsioses. Rickettsia spp. and Anaplasma spp., belonging to the order Rickettsiales, are common tick-borne bacterial pathogens of particular concern in China. The SFGRs are obligate intracellular, gram-negative bacteria, generally associated with ticks, which cause spotted fever worldwide [ 8 – 11 ]. More than 20 species of SFGRs are causative agents of human diseases characterized by various clinical features, including fever, headache, rash, and cervical lymphadenopathy, which can be fatal in severe cases [ 12 – 15 ]. In mainland China, the emergence of SFGRs and the corresponding rickettsial human cases have been reported predominantly in northeastern and central regions, where climatic conditions and human activities such as farming and livestock rearing favor the proliferation of tick populations, whereas in other areas, rickettsiosis is sporadic [ 16 – 20 ]. Several cases of rickettsial disease associated with acute fever and lymphadenopathy have been reported in the Henan and Xinjiang regions of China. In addition to humans, SFGR infections have been detected in a variety of animal reservoirs, including ungulates (sheep/goat/horse), rodents, and wild birds. Meanwhile, it is important to note that significant differences exist among tick species and the rickettsiae they carry. Anaplasma species have also been frequently detected in ticks and animals from multiple provinces in China. Domestic animals, including sheep, cattle, and goats, are often infected, causing weight loss, reduced milk production, or even death, leading to great economic losses in animal husbandry annually. Human infections with these pathogens, such as A. phagocytophilum, A. capra, and A. bovis, have been found in Inner Mongolia, Heilongjiang, and Anhui provinces of China, but not as often as rickettsial cases [ 2 , 28 – 31 ]. The diverse manifestations of these diseases can make their clinical diagnosis rather difficult. With the aid of molecular techniques, recent studies have expanded our knowledge of the diversity of tick-associated bacteria, and many novel species are being discovered globally with increasing frequency. Some tick-borne bacterial species that were previously not considered pathogenic to humans are now proven pathogens. Diagnosis of rickettsial infections is commonly achieved by employing molecular biology-based analyses, specifically polymerase chain reaction (PCR) and nucleotide sequencing of DNA extracted from the patient.
Many different genes have been used for Rickettsia and Anaplasma phylogenetic systematics, including 16S rRNA, gltA, 17kDa, ompA, sca4, groEL, msp2, and complete genomic sequences, using conventional, nested, and real-time PCR techniques. However, relatively few investigations of tick-borne agents have been reported in the underdeveloped regions of northwestern China. Effective surveillance helps to determine tick populations, pathogen presence, and seasonal activity, which is critical to implementing control measures. Specifically, investigations of the tick-borne bacteria circulating within ticks in the environment are of great medical and public health significance. Located in northwestern China, Ningxia covers an area of 66,400 km². The large topographic drop and varied landforms create complex local ecological landscapes and diverse vegetation types. Combined with extensive livestock production, which is the major source of income for rural households, these conditions create a favorable environment for tick growth and development. Meanwhile, the frequent contact between villagers and domestic animals makes it possible for tick-borne pathogens and related diseases to be transmitted easily from animals or ticks to humans. The abundance of tick species varies substantially across diverse biogeographic zones defined by climatic and ecological characteristics. This study focuses on Ningxia, a region with distinct ecological and socio-economic characteristics that may promote tick development and tick-borne pathogen transmission. However, data on the prevalence and diversity of tick-borne bacteria in this area are limited, and relevant case studies are equally lacking. There have been no reports of SFGR and Anaplasma species, or indeed other pathogens, directly detected in ticks from this region. In this study, we collected ticks from five locations in Ningxia and analyzed the presence, prevalence, and genetic characteristics of SFGRs and Anaplasma spp. in ticks using PCR and multi-locus sequence typing (MLST). Nucleotide sequence analysis and phylogenetic relationships are helpful for pathogen identification. Here we report the first finding of ticks harbouring SFGRs and Anaplasma spp. in Ningxia. The results of this study will be valuable in creating effective control measures to prevent zoonotic pathogens from spreading in this underexplored region. The collection of ticks from the body surface of host animals in this study was verbally consented to by the animal owners and approved by the Animal Experiment Committee of the Laboratory Animal Center, Academy of Military Medical Sciences, China. The animal ethics approval number is IACUC-DWZX-027-20. Ticks were collected from livestock using forceps, and from vegetation surrounding the farms or the living areas of animals by dragging white flags, between March and May of the tick-active period in 2022–2023 in all cities of Ningxia (38°27’58.9”N, 106°16’41.4”E), including Guyuan, Shizuishan, Wuzhong, Yinchuan, and Zhongwei. Only adult ticks were identified and classified, based on morphological criteria, by an entomologist (Y.S.). Ticks were frozen and stored individually at −80°C until DNA extraction. We employed ArcGIS v10.8.2 to create detailed maps illustrating the geographical distribution of tick species across the Ningxia region. The basemap shapefiles were downloaded from the Chinese Resource and Environmental Science Data Platform.
These visualizations provide a clear spatial context for the prevalence of different tick species. DNA extraction, amplification, and PCR product detection were carried out in separate rooms to prevent cross-contamination. Ticks were washed in distilled water for 10 min, dried on sterile filter paper, and homogenized individually (a single tick per Eppendorf tube). Following the manufacturer’s instructions, the TaKaRa RNA/DNA Extraction Kit (TaKaRa, Dalian, China) was used for DNA extraction from the homogenized ticks. The total DNA obtained was stored at −80°C. Ticks were examined for the presence of SFGRs and Anaplasma spp. by qualitative PCR (semi-nested and nested PCR) amplifying fragments of the 16S rRNA (rrs), outer membrane protein A (ompA), citrate synthase (gltA), and heat shock protein (groEL) genes. Additionally, the 17kDa gene was recovered from the putative novel Rickettsia species by nested PCR. PCR conditions comprised initial denaturation at 94°C for 3 min followed by 35 cycles of denaturation at 94°C for 30 sec, annealing for 30 sec at the temperatures specified in S1 Table, and elongation at 72°C for 1.5 min. PCR primer sequences and conditions are listed in S1 Table. DNA of R. raoultii and A. ovis was used as positive controls, whereas ddH₂O served as the negative control. PCR reactions were performed using a Veriti 96-Well Thermal Cycler (Applied Biosystems, Waltham, USA), and the PCR amplicons were subjected to Sanger sequencing in both directions after showing high-intensity bands on 1.5% agarose gel electrophoresis. All the obtained nucleotide sequences were proofread, edited, and assembled with CLC Main Workbench 5.0 (Qiagen, Redwood City, CA, USA). Samples that tested positive for all three genes of SFGRs or Anaplasma spp. were considered positive. Each rrs, gltA, ompA, or groEL sequence from the PCR products was compared with the sequences in GenBank using the nucleotide Basic Local Alignment Search Tool (BLAST). Individual gene sequences and concatenated sequences were used for phylogenetic analysis. The assembled gene sequences were concatenated in the order rrs, ompA (for Rickettsia)/groEL (for Anaplasma), and gltA. Additionally, reference sequences of the different genes from various strains were obtained from GenBank, from which the amplified regions were extracted and concatenated in the same order (S2 Table). These sequences were aligned using MAFFT v7.505 and trimmed using trimAl. All phylogenetic trees were constructed by maximum likelihood (ML) in IQ-TREE v2.2.0.3 with 1000 bootstrap replicates. To further validate the evolutionary positions of the gene sequences of the newly discovered Rickettsia species, separate phylogenetic trees were constructed for the rrs, ompA, and gltA genes of Rickettsia species. The concatenated trees were annotated and visualized using the Tree Visualization By One Table online software. The data obtained in this study were analyzed to estimate the proportion or percentage of SFGRs in different tick species with 95% confidence intervals (95% CIs) including continuity correction, with each tick treated as one sample. Pearson’s chi-square (χ²) test or Fisher’s exact test was used to examine differences in positive rates among tick species. Statistical significance was determined using GraphPad Prism 8 (GraphPad Software Inc., San Diego, California, USA); a P-value less than 0.05 was considered to indicate statistical significance.
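As an illustration of these prevalence estimates and group comparisons, the sketch below reproduces the style of analysis in R rather than GraphPad Prism; the counts are taken from the R. raoultii results reported below (98/186 Dermacentor, 4/124 Haemaphysalis, 0/101 Hyalomma, 0/14 Argas), and the workflow is a minimal assumed one, not the authors' actual code.

```r
# 95% CI with continuity correction for a single infection proportion
# (e.g., 210 SFGR-positive ticks out of 425):
prop.test(x = 210, n = 425, correct = TRUE)$conf.int

# Chi-square test of R. raoultii positive rates across tick genera;
# rows are genera, columns are positive/negative counts.
counts <- matrix(c(98,  88,    # Dermacentor
                    4, 120,    # Haemaphysalis
                    0, 101,    # Hyalomma
                    0,  14),   # Argas
                 ncol = 2, byrow = TRUE,
                 dimnames = list(c("Dermacentor", "Haemaphysalis",
                                   "Hyalomma", "Argas"),
                                 c("positive", "negative")))
chisq.test(counts)          # fisher.test(counts) if expected cells are sparse
```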
We used R, an open-source statistical programming platform, and the circlize package v0.4.16 to create chord diagrams visualizing the associations between tick species and the prevalence of Rickettsia species. A total of 425 adult ticks were collected in four cities of Ningxia. After morphological identification, the ticks were classified into 4 genera and 9 species: Argas vulgaris (14, 3.29%), Dermacentor nuttalli (121, 28.47%), Dermacentor silvarum (65, 15.29%), Haemaphysalis concinna (10, 2.35%), Haemaphysalis japonica (36, 8.47%), Haemaphysalis longicornis (42, 9.88%), Haemaphysalis qinghaiensis (36, 8.47%), Hyalomma asiaticum (24, 5.65%), and Hyalomma scupense (77, 18.12%). The distribution of tick species varied across the regions of Ningxia. Ticks were collected in each city except Yinchuan (S3 Table). Dermacentor ticks (186/425, 43.8%) made up the majority of the sampled ticks and were notably widespread, spanning the eastern, central, and southern regions of Ningxia. We tested the 425 adult ticks by nested or semi-nested PCR for the presence of SFGRs. In total, rickettsial DNA was confirmed by Sanger sequencing in 210 (49.4%) of the 425 ticks. The gene sequences were subjected to BLAST analysis for a preliminary verification of their identity; BLAST results are shown in S4 Table. Phylogenetic trees based on the sequences of the rrs, gltA, and ompA gene fragments constructed with the ML method are shown in Fig 2. In the phylogenetic analysis based on the three genes, most rickettsial sequences from this study clustered with R. raoultii and R. aeschlimannii isolates, while some sequences clustered with R. sibirica, R. slovaca, R. heilongjiangensis, and Ca. R. hongyuanensis, respectively. In addition, a few rickettsial sequences were most closely related to different SFGRs, even falling into separate clusters on these phylogenetic trees; for example, one sample (TIGMIC125) clustered with R. sibirica, R. africae, and R. raoultii in the rrs, gltA, and ompA phylogenetic trees, respectively. For further characterization of the detected bacterial strains, the sequences of ticks positive for all three genes (rrs, ompA, and gltA) were concatenated and aligned for rickettsial phylogenetic analysis. The concatenated sequences also included validated Rickettsia species available in GenBank (S2 Table). In total, 210 tick DNA samples tested positive for Rickettsia and were available for analysis. Based on the concatenated tree (disregarding clade credibility values), the validated Rickettsia sequences were delineated into eight SFGR species, including six known species: R. raoultii, R. aeschlimannii, R. sibirica, R. slovaca, R. heilongjiangensis, and Ca. Rickettsia hongyuanensis. Among the ticks infected by Rickettsia species, the R. raoultii sequences obtained in this study showed greater than 97.6% nucleotide (nt) identity with those of the previously reported R. raoultii str. Khabarovsk isolated from D. silvarum in Russia and the R. raoultii isolate Datong-Dn-1 from D. nuttalli in China. R. aeschlimannii sequences had 99.7%–100% nt identity with multiple reference sequences (R. aeschlimannii isolates Baiyin-Ha-14, Baiyin-Hm-150, and RH15). The R. sibirica and R. slovaca strains detected in this study clustered together, comprising twelve and four positive samples, respectively. Sample sequences of R. sibirica shared more than 98.5% nt identity with R. sibirica str. RH05 isolated from Hya. truncatum in Senegal.
Four sequences of R. slovaca identified in this study showed greater than 99.2% nt identity with R. slovaca strains 13-B and D-CWPP. R. heilongjiangensis was detected in only one sample (Hae. qinghaiensis), showing 99.3% nt identity with sequences from R. heilongjiangensis isolate XY-1 from Hae. longicornis in China. Based on the concatenated phylogenetic analysis, 22 tick samples carried two novel SFGR genotypes with identical rrs, gltA, and ompA gene sequences, which we designated Rickettsia sp. Av11 (TIGMIC196–206) and Rickettsia sp. DH11 (TIGMIC185–195). The sequences of the 11 samples of Rickettsia sp. Av11 constituted an independent cluster on the concatenated tree, as a lone taxon between R. vini and Ca. R. jingxinensis, while showing more than 98.8% nt identity with the closest sequence, from R. vini str. Boshoek1. Rickettsia sp. DH11 and Ca. R. jingxinensis comprised a separate cluster that appears most closely related to R. vini, R. japonica, and R. heilongjiangensis. The rrs, ompA, and gltA sequences of SFGRs amplified from the tick samples were submitted to GenBank and assigned the accession numbers PP110549–PP110758 (rrs), PP117689–PP117898 (ompA), and PP150126–PP150335 (gltA) (S5 Table). In this study, we preliminarily observed two possible new Rickettsia genotypes based on the phylogenetic tree constructed from concatenated sequences. To further describe the genetic characteristics of these new genotypes, single-gene phylogenetic trees were constructed based on the rrs, gltA, and ompA sequences of the new Rickettsia genotypes, and the results were analyzed comprehensively in conjunction with the concatenated-sequence phylogenetic tree. Systematic analysis based on the rrs gene revealed small genetic distances between branch sequences, possibly due to the highly conserved nature of the rrs gene, which limits its effectiveness for species differentiation. Therefore, we primarily referred to the phylogenetic trees based on the ompA and gltA genes and compared the two. The sequences of Rickettsia sp. Av11 (TIGMIC196–206) in this study were closest to R. vini on the ompA phylogenetic tree, showing 99.2%–99.4% identity, but formed an independent cluster on the gltA phylogenetic tree. A total of 11 sequences from Rickettsia sp. DH11 (TIGMIC185–195) were grouped together with Ca. R. jingxinensis on the ompA phylogenetic tree, sharing 100% identity, whereas they formed a separate branch on the gltA phylogenetic tree. The 17kDa gene was additionally amplified from the samples of the possible new Rickettsia genotypes and compared with sequences available in GenBank. For the SFG Rickettsia sp. Av11, although its rrs gene was 100% identical to R. japonica str. LA16/2015, its 17kDa, ompA, and gltA genes show 99.7%, 99.0%, and 99.2% nucleotide similarity to R. heilongjiangensis isolate 2022-Tick251 from Hae. flava, R. vini str. Boshoek1 from Ixodes arboricola, and uncultured Rickettsia sp. isolate S6 from D. reticulatus, respectively. In the phylogenetic trees, its rrs and ompA genes were apparently divergent from other SFG Rickettsia species. Notably, its gltA gene occupies a basal location in the phylogenetic tree but is far from the SFG Rickettsia species. According to the gene sequence-based criteria for the taxonomic classification of new Rickettsia isolates, a Candidatus status could be assigned to Av11, so we named this species Candidatus Rickettsia vulgarisii. For the Rickettsia sp.
DH11, its rrs, ompA, gltA, and 17kDa genes all showed above 99.5% nucleotide similarity to Ca. R. jingxinensis. Given that phylogenetic analysis of both the rrs and ompA gene sequences showed the Rickettsia sp. DH11 strains clustering with Ca. R. jingxinensis, this SFG species was identified as Ca. R. jingxinensis. To screen for Anaplasma infection in ticks, we also amplified the rrs, groEL, and gltA gene segments using nested PCR and found 98 samples positive for all three genes. BLAST results are shown in S6 Table, and phylogenetic trees based on the single genes are shown in S3 Fig. As above, a phylogenetic tree for Anaplasma species was also constructed from the concatenated sequences of the three genes. Among all positive sequences, 97 samples (TIGMIC001–097) showed 97.1%–100% identity to the Anaplasma ovis isolate TC249-5 detected in D. nuttalli from China, while the single remaining sample (TIGMIC098) fell between the two branches of A. ovis and A. capra, with 86.2% and 89.3% identity, respectively. Specific gene segments for A. ovis and A. capra were amplified again from this sample, and they aligned only with A. ovis, showing 99.6% identity; this positive sample was therefore identified as A. ovis. The nucleotide sequences of the Anaplasma rrs, groEL, and gltA genes amplified from tick samples were submitted to GenBank with accession numbers PP106263–PP106360 (rrs), PP117399–PP117496 (groEL), and PP117094–PP117191 (gltA) (S5 Table). We screened 425 adult ticks from the Ningxia region, and Rickettsia and Anaplasma DNA were detected in 210 and 98 ticks, respectively. The distribution of bacterial species and their numbers is shown in S4 Fig. The greatest species diversity and the highest number of SFGR-positive ticks were found in Guyuan city (65/210), while the highest positive rate was in Zhongwei city (68.8%, 53/77); the highest positive number and positive rate of Anaplasma species were in Guyuan city (50/98) and Wuzhong city (40.3%, 27/67), respectively. We detected seven described species of Rickettsia and one novel candidate Rickettsia species. Sequencing of the Anaplasma amplicons determined the presence of one Anaplasma species in D. nuttalli, D. silvarum, Hae. longicornis, and Hae. qinghaiensis. Of the eight Rickettsia species identified, R. raoultii exhibited the highest infection rate (24.0%, 102/425), and R. heilongjiangensis had the lowest (0.2%, 1/425), detected in only one sample of Hae. qinghaiensis (Table 1). The prevalence of R. raoultii, the most abundant species, in Dermacentor ticks (52.7%, 98/186) was significantly higher than that in Haemaphysalis (3.2%, 4/124), Hyalomma (0/101), and Argas (0/14) ticks (χ² = 149.6, df = 3, P < 0.001). R. aeschlimannii was exclusively detected in Hyalomma ticks (64.4%, 65/101); R. sibirica and R. slovaca were identified solely in Dermacentor ticks; Ca. R. hongyuanensis was identified in Haemaphysalis ticks; and Ca. R. jingxinensis was detected in both Dermacentor and Haemaphysalis ticks. The infection rate of the newly discovered SFGR was relatively high in Argas ticks (78.6%, 11/14). Furthermore, 98 samples tested positive for Anaplasma species, an overall positivity rate of 23.1% (95% CI: 19.1–27.1), all of which were identified as A. ovis (Table 2). Anaplasma infection was detected only in Dermacentor and Haemaphysalis ticks (51.1%, 95/186 and 2.4%, 3/124, respectively), with no evidence in other tick species. The infection rate of A.
ovis in Dermacentor ticks was significantly higher than that in Haemaphysalis ticks and the other tick genera (χ² = 146.5, df = 3, P < 0.001). We identified the greatest richness of bacterial strains in D. nuttalli. Co-infection of Rickettsia and Anaplasma within individual ticks was observed in 14.6% (62/425) of the ticks tested. Co-infections of R. raoultii + A. ovis occurred in 11.5% (49/425), R. sibirica + A. ovis in 2.4% (10/425), and R. slovaca + A. ovis in 0.7% (3/425). All co-infected samples were Dermacentor ticks, with co-infection rates of 38.0% (46/121) in D. nuttalli and 24.6% (16/65) in D. silvarum (Table 3). The above results revealed differences in the parasitic adaptability of the various Rickettsia species to tick hosts. To further describe this phenomenon, a species correlation analysis was conducted on all the samples and Rickettsia species included in this study. As illustrated in the chord diagram, these Rickettsia species exhibited specificity to tick genera or species. R. raoultii demonstrated relatively high adaptability to various tick species, being detected in three Haemaphysalis species and two Dermacentor species, with a positive rate reaching 60.3% in D. nuttalli, followed by D. silvarum (38.5%). R. aeschlimannii was found in both Hya. asiaticum and Hya. scupense, with positive rates of 50.0% and 68.8%, respectively (Table 1), and was not detected in tick genera other than Hyalomma. R. sibirica and R. slovaca were found exclusively in Dermacentor ticks, Ca. R. vulgarisii was detected exclusively in Ar. vulgaris, and Ca. R. hongyuanensis was found solely in three species of Haemaphysalis ticks. While Ca. R. jingxinensis displayed a lower positive rate than R. raoultii, it was also detected in different tick species. These findings suggest distinct host preferences and adaptation ranges among the different Rickettsia species. Rickettsia species exhibit broad pathogenicity, with some posing a lethal threat to humans, and ticks are critical vectors in the transmission cycle of these pathogens, affecting both human and animal health. Understanding tick-host-pathogen interactions is essential for developing effective strategies to mitigate the impact of tick-borne diseases. Our results document the first detection of SFGRs and Anaplasma spp. in ticks collected from Ningxia, China. In total, one Anaplasma species and eight SFGR species were identified in nine tick species. The sequence data provided evidence for the presence of A. ovis, R. raoultii, R. aeschlimannii, R. sibirica, R. slovaca, R. heilongjiangensis, Ca. R. hongyuanensis, Ca. R. jingxinensis, and one novel Rickettsia species, which we named “Ca. R. vulgarisii”. SFGRs compose an important group within the genus Rickettsia. They are common tick-borne pathogens and have long been considered causative agents of various zoonotic diseases. In this study, a combination of single genes and concatenated genes (rrs, ompA, and gltA) was used to infer the evolutionary topology of Rickettsia using DNA samples obtained from ticks. Eight Rickettsia species, notably including a novel SFGR species named Ca. R. vulgarisii, were identified in ticks. Since its characterization in 2008, R. raoultii, a causative agent of human tick-borne lymphadenitis, has been reported in ticks from Romania, India, and China. The second SFGR identified in this study was R. aeschlimannii, which has been found in Hya.
marginatum collected from southern Europe and Africa. R. sibirica, R. slovaca, and R. heilongjiangensis are recognized human pathogens that can cause a mild rash associated with fever and eschars. Ca. R. hongyuanensis and Ca. R. jingxinensis, belonging to the SFG Rickettsia, were first identified in Hae. longicornis in southwest and northeast China. These findings indicate a risk of human infection with important SFGR tick-borne diseases in the region. Furthermore, we detected an incompletely described Rickettsia in Argas ticks. This Rickettsia species was detected at a high positive rate in Zhongwei city and was provisionally named Rickettsia sp. Av11. The analysis of the rrs, ompA, and gltA genes and the concatenated sequences confirmed the presence of this SFGR in Ningxia. As stated by de Sousa et al. when they reported the detection of this bacterium, other gene sequences were required to establish its identity correctly according to the genetic guidelines published by Fournier et al. Genetic analyses indicate that its rrs, ompA, gltA, and 17kDa genes have their highest identities to different validated species, and these strains were apparently divergent from other SFG Rickettsia species in the different trees. Considering these criteria, we propose to give this strain Candidatus status and name it “Ca. R. vulgarisii”, with reference to Ar. vulgaris, the tick species in which this Rickettsia was detected. Argas ticks can easily transmit bacteria to, and acquire bacteria from, different hosts during their life cycle because they feed multiple times within a given developmental stage. Wild birds, the common hosts of Argas ticks, may be a non-negligible route for introducing ticks and related pathogens into new environments owing to their migratory behavior. The role of Ar. vulgaris in the transmission of SFGRs should be considered further, and investigations of the human pathogenicity of these Rickettsia species are still needed. This discovery suggests that a potential threat of the novel species to humans and animals cannot be excluded, although its transmissibility and potential pathogenicity remain to be studied. A. ovis is recognized as a tick-borne obligate intraerythrocytic pathogen that mainly infects ovine and caprine erythrocytes. This pathogen can cause ovine anaplasmosis, characterised by subclinical signs such as weakness, anorexia, weight loss, and anaemia. A. ovis has been reported on all continents and has a widespread distribution in China, France, and Italy. In this study, we observed remarkably high positivity rates of A. ovis in D. nuttalli (58.9%) and D. silvarum (45.7%). Hae. longicornis and Hae. qinghaiensis also tested positive for this A. ovis strain. Although the possibility cannot be ruled out that the A. ovis DNA came from the blood meal of the ticks, we suspect that Dermacentor ticks play an important role in the transmission of A. ovis among domestic animals. The existence of A. ovis in Ningxia may indicate a risk of human infection and highlights the importance of surveillance in local populations, although the pathogenicity of these strains to humans remains to be determined. Evaluation of the prevalence of SFGRs and Anaplasma spp. at the tick level was possible through phylogenetic inference. The infection rates of SFGRs and Anaplasma spp.
in ticks from the Ningxia region were 49.4% and 23.1%, respectively, in the present study, and the positive rates differed significantly among tick species. The ticks were collected during their peak activity period, and Dermacentor ticks were the dominant genus, suggesting their important role in the distribution and transmission of rickettsiae in this region. The positive sequences of SFGRs and A. ovis in this study were mostly derived from ticks feeding on animal hosts (sheep and goats) rather than from free-living ticks. These pathogens were found in regions of high biodiversity where migratory birds may reside. Combined with the extensive livestock husbandry in Ningxia, they might spread to other hosts through these domestic animals. Previous studies have shown that mammals, including sheep and goats, and particularly rodents and other small mammals, are often key reservoirs for tick-borne pathogens [ 2 , 21 – 23 ]. Meanwhile, studies have shown that a higher diversity of mammalian hosts can influence the transmission dynamics of these pathogens, reducing the transmission rates of pathogens such as A. phagocytophilum; this is due to the presence of non-susceptible species that can disrupt the cycle of transmission among more susceptible hosts. The ecological and economic conditions of the Ningxia region have produced a rich array of animal resources. Therefore, in keeping with the One Health concept, it is not only human infections that are of concern; infections in wild animals and livestock are also major health issues that deserve attention. Our results also showed that co-infection with A. ovis and SFGRs (R. raoultii, R. sibirica, and R. slovaca) existed in Dermacentor ticks, suggesting potential interactions between these pathogens within tick vectors. These results coincide with those previously described in Thailand, where co-infection with Rickettsia spp. and Anaplasma spp. was reported in Dermacentor ticks. It has been reported that persistent infection with A. ovis can be accompanied by cyclical fluctuations in rickettsial levels, which may markedly alter the vector infection rate and thus alter transmission. Previous research has also indicated that co-infection often leads to a broader spectrum of clinical symptoms, prolongs disease duration, and exacerbates disease severity. For instance, co-infection with both Borrelia burgdorferi and A. phagocytophilum, as opposed to single infection with either one, leads to more profound impairment of endothelial barrier function. The higher co-infection rate in Dermacentor ticks, particularly D. nuttalli, suggests potential synergistic or competitive dynamics between these two genera of pathogens. The interactions between co-infecting pathogens within ticks and the impact of tick-borne transmission on human health still require further study. Owing to their intracellular lifestyle, rickettsiae are highly dependent on their primary tick vectors and tend to be selective about the tick species they infect. In this study, most bacteria were detected in Dermacentor ticks, while some were also detected in Haemaphysalis, Hyalomma, and Argas ticks. Further analysis of host preferences among the different Rickettsia species highlighted differences in adaptability to various tick species. In this study, R. raoultii displayed relatively high adaptability across five tick species, with infections in Dermacentor ticks accounting for as much as 96.1% (98/102) of the R. raoultii-positive samples.
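For readers who wish to reproduce the tick–Rickettsia chord diagram described above, the following is a minimal circlize sketch in R; the association matrix is condensed and purely illustrative (approximate counts derived from the positive rates reported earlier), not the study’s full dataset.

```r
# Minimal sketch of a tick-species x Rickettsia chord diagram with
# circlize; counts below are illustrative approximations only.
library(circlize)

mat <- matrix(c(73, 12, 4,  0,
                25,  0, 0,  0,
                 4,  0, 0,  0,
                 0,  0, 0, 65),
              nrow = 4, byrow = TRUE,
              dimnames = list(c("D. nuttalli", "D. silvarum",
                                "Haemaphysalis spp.", "Hyalomma spp."),
                              c("R. raoultii", "R. sibirica",
                                "R. slovaca", "R. aeschlimannii")))

chordDiagram(mat, annotationTrack = c("name", "grid"))
circos.clear()
```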
In contrast, other Rickettsia species exhibited more specific host preferences. R. aeschlimannii showed a preference for the two species of Hyalomma ticks. Among the newly discovered Rickettsia, Ca. R. vulgarisii was detected only in Ar. vulgaris, further emphasizing the genus-specific adaptation of Rickettsia species. Additionally, the detection of Anaplasma species in ticks also indicated varying adaptability to different tick species. Beyond the Rickettsia and Anaplasma species studied herein, ticks generally engage in complex ecological interactions with their microbiota, in which the presence of specific pathogens contributes to the stability of tick microbial communities and influences their population dynamics and metabolic functions. The adaptability of Rickettsia species to specific hosts holds significant value in maintaining the structure and function of pathogen-vector microbial communities, suggesting that research on the mechanisms of co-evolution between ticks and their microbiota will enhance efforts to control the risk of rickettsial disease transmission. The findings presented here are of epidemiological importance. We have characterized a novel SFGR named “Ca. R. vulgarisii”, which enhances our understanding of the diversity of SFGRs. Although the human pathogenicity of this novel bacterium, detected from a limited source of Argas ticks, is still undetermined, more attention should be paid to the risk of human infection and the possible circulation of these pathogens in the local population. Thus, further studies are needed to explore its implications for human and/or animal diseases in Ningxia by extending ecological surveys to an increased number of tick species and specimens. The host range, distribution, and pathogenicity of “Ca. R. vulgarisii” also merit further investigation. In conclusion, the present study provides the first molecular detection of Rickettsia spp. and Anaplasma spp. in ticks from Ningxia, northwestern China. One Anaplasma species (A. ovis) and eight SFGRs (R. raoultii, R. aeschlimannii, R. sibirica, R. slovaca, R. heilongjiangensis, Ca. R. hongyuanensis, Ca. R. jingxinensis, and Ca. R. vulgarisii), including a novel species, were detected and characterized. Our findings reveal the presence of diverse SFGRs, including previously unidentified agents, within tick species in Ningxia. Further investigations should focus on expanding the sampling range, examining the effects of ecological and seasonal factors on ticks and pathogens, and ascertaining the pathogenicity of newly emerged SFGRs in humans. Moreover, the variation in host adaptability among different Rickettsia species highlights the complexity of the transmission and dissemination of tick-borne diseases. Therefore, there is a compelling need for intensified monitoring of ticks and tick-borne pathogens in subsequent research.
|
Study
|
biomedical
|
en
| 0.999998 |
PMC11695004
|
Development of chemotactic and phagocytic defects, along with a decline in neutrophil count, is common with anti-cancer treatment. Febrile neutropenia (FN) is a medical disorder that develops in cancer patients receiving intensive anticancer chemotherapy. FN is defined as a single episode of fever higher than 38.3°C with an absolute neutrophil count (ANC) of <1.0 × 10⁹/L that is expected to fall to <0.5 × 10⁹/L. The threshold for a profound ANC nadir is 0.1 × 10⁹/L, whereas ANC recovery time is the time from anticancer treatment until the patient's ANC rises to 2 × 10⁹/L after the expected nadir. FN can cause severe adverse effects, and rapid medical attention is essential because of the patient's decreased immunity. Pus, abscesses, and infiltrates on chest X-ray are the distinctive signs of infection, and all of these subside with recovery of the neutrophil count. FN can be managed effectively by initiating empiric therapy with broad-spectrum antibiotics and supportive care as soon as fever appears. Even with optimal management, however, the death rate among FN patients who develop infections is 10%. Hence, FN is a foremost threat to patients given anticancer therapy, adversely affecting quality of life and carrying a higher risk of infection and even death. Chemotherapy-induced FN may also cause dose interruptions and sometimes discontinuation of chemotherapy, which adversely affects treatment outcomes. The link between neutropenia and increased infection risk in cancer patients was first documented in the mid-1960s. In one study, patients with an ANC below 1.0 × 10⁹/L for 7 days had a more than 50% probability of developing an infection, and the risk approached 100% as the neutropenia was prolonged. ANC severity is graded as follows: Grade 1, ANC of 1500 cells/mm³; Grade 2, ANC between 1000 and 1500 cells/mm³; Grade 3, ANC between 500 and 1000 cells/mm³; and Grade 4, ANC less than 500 cells/mm³. Granulocyte colony stimulating factor (GCSF), together with granulocyte-macrophage colony stimulating factor (GM-CSF), increases the proliferation and differentiation of granulocyte precursors. Growth factors have been shown to reduce the duration of neutropenia and FN. GCSF has been recommended for preventing FN by several regulatory and professional clinical bodies. However, its efficacy for treating established FN is debatable, because different clinical trials and meta-analyses have reported contradictory results: two studies found no significant reduction in hospital stay, whereas another showed a 1-day reduction. Furthermore, improvements in clinical outcomes, such as a shorter duration of neutropenia and faster neutrophil recovery, have not been consistent. Consequently, unless a patient is at higher risk of infection or shows poor prognostic factors predisposing them to bad clinical outcomes (i.e., lengthy hospital stay or death), existing guidelines do not endorse routine use of adjunctive GCSF. Different clinical predictive models have been developed to identify such patients. In light of these facts, the present study was designed to estimate the effectiveness of adjunctive GCSF in treating FN caused by anticancer chemotherapy. Likewise, identifying the types of patients who would benefit from concomitant GCSF will allow oncologists to make informed decisions about GCSF therapy. This was a prospective cohort study conducted at the Hayatabad Medical Complex (HMC), Peshawar, Pakistan, from January 2023 to January 2024.
This study was approved by the Committee for Ethics in Research, Department of Pharmacy, University of Peshawar, and was endorsed by the Ethical Review Board of Hayatabad Medical Complex (HMC), Peshawar, Pakistan. All patients were briefed about the study, and a written consent form was signed by each patient or their attendant before inception of the study. Fig 1 shows a schematic presentation of the study design. Adult cancer patients of both genders suffering from carcinoma, receiving anticancer chemotherapy, and having developed FN were included in the study. The definition of FN given in the National Comprehensive Cancer Network (NCCN) guidelines was used for patient inclusion. Prophylactic GCSF was prescribed in accordance with the existing myeloid growth factor guidelines laid down by the American Society of Clinical Oncology (ASCO) and the NCCN: GCSF is recommended for all patients receiving high-risk FN chemotherapeutic regimens (>20%) and some patients receiving intermediate-risk regimens (10–20%). The oncologist had sole authority to prescribe adjunctive GCSF for the treatment of FN. In our hospital, adjunctive GCSF commences within the first 24 h of chemotherapy induction. Patients with multi-organ failure, prophylactic antimicrobial therapy during the initial 48 h of admission, or fever due to transfusion of blood or its components were excluded. Demographic (age, gender, and ethnicity) and medical information was collected using a data collection form; the patients themselves, the hospital information support system, and the pharmacy prescription database were the data sources. Baseline hematological and biochemical tests were conducted at the designated hospital, and blood samples were collected for culture and sensitivity tests. Upon completion of the treatment protocol, follow-up hematological and biochemical tests were carried out to assess the treatment's effectiveness, and the clinical safety of the treatment was evaluated by tracking any associated adverse effects. Participants were assigned to study groups based on the presence or absence of GCSF therapy, and their treatment responses were assessed. Before starting antimicrobial therapy during hospitalization, a baseline evaluation of hematological and biochemical parameters was conducted. This included white blood cell (WBC) counts, red blood cell (RBC) counts, hematocrit (HCT), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC), platelet count, and differential white cell counts (neutrophils, monocytes, eosinophils, and basophils). Additionally, red cell distribution width (RDW), mean platelet volume (MPV), platelet distribution width (PDW), alkaline phosphatase, serum glutamic pyruvic transaminase (SGPT), bilirubin, blood urea, and serum creatinine were measured. Follow-up assessments were performed after completion of the antimicrobial and GCSF treatment, with both hematological and biochemical parameters reassessed after 5–7 days to evaluate the effects of the respective therapeutic intervention. Descriptive statistics were used for demographic and medical information, the independent-samples t-test was used for comparison of continuous variables, and odds ratios were calculated for statistically significant model variables.
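To make the analysis plan concrete, the short Python sketch below shows how the two core computations described above (an independent-samples t-test and an odds ratio from a 2×2 table) can be reproduced with scipy; the counts and values shown are purely illustrative placeholders, not the study's data, which were analyzed in SPSS.

```python
import numpy as np
from scipy import stats

# Hypothetical follow-up neutrophil counts (x10^9/L) for the two cohorts
gcsf_group = np.array([2.1, 1.8, 2.5, 3.0, 2.2, 1.9])
no_gcsf_group = np.array([0.9, 1.2, 0.7, 1.5, 1.1, 0.8])

# Independent-samples t-test for a continuous outcome
t_stat, p_value = stats.ttest_ind(gcsf_group, no_gcsf_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05

# Odds ratio from a 2x2 table (illustrative counts):
# rows = GCSF yes/no, columns = infection yes/no
table = np.array([[2, 62], [31, 14]])
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"Odds ratio = {odds_ratio:.3f}")
```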
P values were considered statistically significant when p < 0.05. The statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 20. In the current study, all patients diagnosed with a solid tumor (n = 120) during January 2022 to January 2023 were enrolled. A total of 109 patients were included in the final analysis, while 11 patients were excluded because of incomplete data. The proportion of female patients (69/109; 63.3%) was higher than that of male patients (40/109; 36.7%), as depicted in Table 1. The majority of patients (97.6%) were Pakistani (Pashtun) and the remainder (2.4%) were Afghan nationals. The average age of the patients was 43.9 years. The age group below 30 years contained 28 patients, of whom 15 were male (13.76% of all patients) and 13 were female (11.93%). The age group of 31–40 years had 19 patients (8 male and 11 female). The age group with the highest number of patients was 51–60 years (33/109; 30.28%), while the group aged 61 years and above had the lowest number (3/109; 2.75%). Older patients are more susceptible to developing FN because of a fall in bone marrow reserve and a decline in immune function; they need aggressive management and benefit more from GCSF prophylaxis. Correlation of the statistical parameters showed a higher tendency of the 51–60 years age group to develop FN; the values of all parameters (p value, odds ratio, and BH value) pointed to a higher likelihood in this age group compared with the others. Our findings were in line with reported data [ 13 – 15 ] that the chance of FN increases with age. Most patients were treated for gastric cancer (Ca gastro; 52/109; 47.71%), while the smallest number was treated for thyroid cancer (2/109; 1.83%), as shown in Table 2. Initial clinical screening of the patients revealed various comorbidities, summarized in Table 3. Some of these comorbidities may benefit from adjunctive GCSF therapy, and comparison of the comorbidity results showed a significantly better treatment outcome with therapy containing GCSF. Bacterial infection was found in 85.32% of patients, while 11.01% had fungal infections. There was a notable reduction in bacterial infections among patients receiving GCSF therapy (8.22%) compared with those not receiving it (69.23%), suggesting that GCSF therapy may be associated with a reduced risk of bacterial infection. Recent studies indicate that GCSF can reduce the incidence of FN and related infections by improving the recovery of neutrophils and their function; by shortening the duration of neutropenia, GCSF decreases the risk of bacterial infection. The percentage of fungal infections appeared higher in patients receiving GCSF therapy (2/12; 16.67%) than in those not receiving it (0.00%); however, this is based on a very small sample size. Some studies suggest that GCSF may not significantly affect fungal infections, and the increased risk of fungal infection in neutropenic patients is more often related to the severity and duration of neutropenia than to GCSF use alone. Fever was significantly lower in GCSF-treated patients (0%) than in non-GCSF-treated patients (39.13%), suggesting that GCSF therapy is effective in reducing fever (a sign of FN) by decreasing the duration of neutropenia and improving neutrophil function.
This aligns with findings from studies demonstrating reduced FN and associated symptoms with GCSF prophylaxis. The incidence of pneumonia was higher among GCSF-treated patients (50%), but this is based on only two cases. Pneumonia risk is generally linked to the severity of neutropenia and the overall health of the patient rather than to GCSF use; the small number of cases in this dataset limits the ability to draw strong conclusions, but larger studies suggest that GCSF does not significantly increase pneumonia risk. Hypotension was lower in GCSF-treated patients (33.33%) than in non-GCSF-treated patients (100%). GCSF use is not typically associated with hypotension, and there is no strong evidence linking the two; the observed difference might be incidental, due to other factors, or influenced by the small sample size. The primary parameters used to evaluate study outcomes were the rate, duration, and incidence of FN; secondary endpoints were also assessed. A total of 109 patients completed the study, of whom 64 (58.72%) received adjunctive GCSF therapy and 45 (41.28%) did not. Details of cancer type, chemotherapy regimen, and GCSF are listed in Table 4. Patients receiving GCSF therapy across various cancer types and chemotherapy regimens generally showed a reduction in FN, whereas FN was observed in all patients not receiving GCSF. Numerous studies have demonstrated that GCSF reduces the incidence of FN by promoting faster recovery of neutrophil counts and reducing the duration of neutropenia. High-intensity chemotherapy regimens (e.g., adriamycin and cyclophosphamide; doxorubicin, cisplatin, and paclitaxel) are associated with a higher FN risk, and GCSF prophylaxis is particularly beneficial with such regimens. The efficacy of GCSF in reducing FN has been well documented across a range of cancers; GCSF helps prevent FN in patients undergoing high-risk chemotherapy regimens and has shown statistically significant results in different clinical trials. The reduction in FN with GCSF therapy has been observed with different chemotherapy regimens, suggesting that GCSF is effective across different drugs and their combinations, although the choice of GCSF dose and timing may vary with the chemotherapy regimen and patient factors. Prophylactic administration of GCSF is used to boost the production and function of neutrophils in cancer patients at high risk of developing FN due to chemotherapy. GCSF primarily affects neutrophils but can also have some impact on other types of white blood cells (WBCs). GCSF is specifically designed to stimulate the production and release of neutrophils from the bone marrow and, as a result, significantly increases neutrophil counts in the blood. While GCSF primarily targets neutrophils, it can also affect other granulocytes, such as eosinophils and basophils, as well as monocytes: eosinophil and basophil counts may decrease temporarily as more neutrophils are produced and released into the bloodstream, and monocyte counts may increase slightly because of GCSF's effects on the bone marrow. GCSF's primary effect is on granulocytes (neutrophils, eosinophils, and basophils), and it typically does not have a significant impact on lymphocytes; in some cases there may be a mild reduction in lymphocyte counts during GCSF treatment, but this effect is generally far less pronounced than the increase in neutrophils.
It is important to note that the changes in blood cell counts seen with GCSF are generally well tolerated and do not typically lead to clinical problems. The main objective of GCSF is to reduce the risk of FN, which can be life-threatening in patients undergoing anticancer chemotherapy. The specific impact on blood cell counts can vary with individual patient factors, the dosing and duration of GCSF treatment, and the underlying cancer and chemotherapy regimen; healthcare providers closely monitor these parameters during treatment to ensure patient safety and the effectiveness of prophylaxis. Although GCSF primarily influences neutrophil counts and function, it does not have substantial effects on other blood cell types, including red blood cells and platelets. Any changes in MCH, MCV, RDW, HCT, MCHC, or MPV during GCSF prophylaxis are therefore likely related to factors other than the GCSF treatment itself, such as the patient's overall health, underlying medical conditions, or the effects of chemotherapy. FN did not develop in any patient who received GCSF, whereas FN was observed in all patients without GCSF. The neutrophil count of most patients remained within normal limits with GCSF therapy; about 19% of patients had a neutrophil count below the normal limit but still above the threshold for FN, with no signs or symptoms of FN. About 98% of the patients were infection-free; infection was observed in 2% of patients, both of whom were infected with methicillin-resistant Staphylococcus aureus (MRSA). In the majority of patients the neutrophil count rose above the normal level, and fever was not observed in these 98% of patients. Filgrastim (FIL; Neupogen®) received US FDA approval in 1991 and has since been used for the treatment of FN. Different short-acting GCSF products (such as lenograstim and tbo-filgrastim) and long-acting products (pegfilgrastim (PEG-F) and lipegfilgrastim) have been developed and are used effectively in different carcinomas. Long-acting GCSF has the advantage of avoiding repeated injections and is therefore generally preferred over short-acting GCSF by oncologists. Hematological evaluation included determination of RBCs, WBCs, platelet count, neutrophil count, HCT value, hemoglobin level, renal function tests, and liver function tests; all parameters were determined according to established protocols, and the results are summarized in Table 5. In all patients taking anticancer chemotherapy, rapidly growing cells were affected. The smaller number of patients with abnormal WBC counts suggests GCSF's efficacy in managing and preventing FN. GCSF does not affect hemoglobin, RBCs, HCT, or platelet count, which can be managed with other supportive therapies, nor does it influence elevated liver enzymes or kidney function markers. It was observed that the specific changes in complete blood count (CBC) results varied with the patient's overall health, the severity of neutropenia, the type of infection, and the effectiveness of antibiotic therapy; the CBC is just one tool that healthcare providers use to monitor the progress of neutropenic cancer patients and their response to treatment. Details of the hematological evaluation before and after therapy are presented in Table 6. The results indicate that WBC and neutrophil counts were improved effectively by GCSF, in accordance with its clinical use for FN prevention and management; any changes in other blood parameters may reflect the overall influence of cancer treatments and patient health rather than direct effects of GCSF.
Along with the hematological evaluation, the measured clinical outcomes showed that patients receiving adjunctive GCSF had better treatment outcomes than patients without GCSF, as shown in Table 7. The benefits were clinically evident: patients showed significant improvement in symptoms, with better control of fever, quicker recovery of neutrophil count, decreased use of antibiotics, shorter hospital stay, and resolution of infection. FN presents a significant challenge and makes patients prone to severe, life-threatening infections. These infections can prolong hospitalization, necessitate antibiotic treatment, potentially reduce or delay crucial chemotherapy, and adversely affect quality of life. Pyrexia is a key indicator for diagnosing FN, as it often results from infection in patients with compromised immune defenses. In this study, every patient was assessed for microbial presence, with blood samples collected and analyzed for pathogenic organisms. Microbes were identified in all patients who did not receive GCSF (Group 2). The difference in microbial presence between male and female patients was not statistically significant (p = 0.09); among the positive cases, 53.33% (24/45) were male and 46.67% (21/45) were female, indicating a minimal gender difference. Notably, MRSA and Salmonella typhi were found only in patients who received GCSF, with none detected in those who did not receive the treatment; all patients in the GCSF group who developed infections had these organisms. Both MRSA and Salmonella typhi are known for their resistance to multiple antibiotics and their potential to cause serious infections in patients with weakened immune systems. The fact that these pathogens were isolated exclusively in patients receiving GCSF suggests that these individuals may have been more susceptible to such infections, possibly because of severe immunosuppression or changes in immune function following GCSF therapy. While GCSF stimulates the production of neutrophils and can increase their count, it does not fully restore immune function; patients undergoing GCSF treatment may remain at high risk for infection by resistant pathogens if their overall immune system is compromised. This finding is consistent with research indicating that while GCSF can reduce the incidence of FN, it does not entirely eliminate the risk of infection, particularly with resistant organisms. Both E. coli and Klebsiella pneumoniae were isolated only in the non-GCSF group, with no cases in the GCSF group. The absence of these pathogens in the GCSF group could indicate that GCSF therapy is associated with a lower incidence of these specific bacteria, possibly through more effective management of neutropenia and reduced infection rates; alternatively, the finding might reflect the small sample size or variation in the patient population. Research suggests that GCSF can help reduce the incidence of infection by managing neutropenia effectively, and the lack of these bacteria in the GCSF group could reflect the therapy's success in controlling neutropenia, thereby reducing susceptibility to common pathogens such as E. coli and Klebsiella; however, more comprehensive data are required to confirm this hypothesis. Fig 2 shows images of blood culture and urine samples. The mean value decreased from baseline to follow-up, indicating a reduction in the measured parameter.
The correlation is close to zero, suggesting a minimal relationship between baseline and follow-up values. The p value is close to 0.05, suggesting a trend toward significance without strictly meeting it; the t value supports this trend but likewise indicates that the result is not statistically significant at the 0.05 level. This result indicates that GCSF treatment showed improvement, but without a statistically significant change in this specific parameter. Studies generally support the effectiveness of GCSF in reducing FN by stimulating neutrophil recovery; the trend here, while not statistically significant, aligns with evidence that GCSF helps manage neutropenia, although the degree of impact can vary. The noteworthy reduction in the mean value from baseline to follow-up in the chemotherapy-without-GCSF group suggests a large effect. The correlation is very low, indicating a minimal relationship between baseline and follow-up values, the p value is extremely small, and the very high t value confirms that the difference is statistically significant, indicating a strong treatment effect with a substantial change in the measured parameter during follow-up. This could reflect a situation in which the lack of GCSF leads to more pronounced neutropenia or FN during chemotherapy; research often shows that, without GCSF, patients are at higher risk of FN owing to delayed neutrophil recovery. The significant change here underscores the effectiveness of GCSF in preventing FN and supporting neutrophil recovery. In conclusion, this study showed that adjunctive GCSF therapy has clinical benefits in patients with solid tumors: those receiving adjunctive GCSF had better treatment outcomes than those without, with better control of fever, quicker recovery of neutrophil count, decreased antibiotic use, shorter hospital stay, and resolution of infection.
|
Review
|
biomedical
|
en
| 0.999998 |
PMC11695005
|
The meaning of happiness varies from person to person, but it plays a significant role in our lives and can greatly shape how we live. The Oxford English Dictionary defines happiness as a state of feeling or showing pleasure. Happiness can be characterized as a persistent mental state encompassing not just emotions such as joy, contentment, and other positive feelings, but also a perception that one's life possesses significance and worth. Happiness has four dimensions: pleasure, engagement, meaning, and a balanced life. Research has indicated that happiness confers a wide range of benefits. Physicians who experience happiness are inclined to reach quicker and more precise diagnoses, as suggested by multiple studies, and students' learning ability also increases when they are happy during the learning period. In addition to these individual advantages, people who experience greater happiness enjoy improved well-being and extended lifespans, exhibit safer driving behavior that reduces the likelihood of accidents, and contribute positively to society as a whole. Recognizing the significance of happiness, the United Nations Sustainable Development Solutions Network has released World Happiness Reports (WHRs), ranking countries by their happiness levels since 2012. The happiness index, also known as the Happiness Score (HS), ladder score, or well-being index, is a measurement used to assess and quantify the level of happiness or well-being within a population or country. These reports explore happiness by considering factors such as Gross Domestic Product (GDP) per capita, social support, life expectancy, perception of corruption, freedom to make life choices, and generosity. The index provides a comparative measure of happiness across regions or nations and is often used to guide policy decisions and prioritize well-being initiatives. Comparing the pre-pandemic and post-pandemic WHR rankings reveals a combination of consistency and change. While eight of the top 10 countries in 2023 were also in the top 10 in 2018, the specific rankings shifted: Finland emerged as the leader, closely followed by Denmark and Iceland, with average scores between 7.78 and 7.52. Norway fell markedly, from third place in 2018 to eighth in 2023. Luxembourg and Israel improved their Happiness Index scores substantially, entering the top ten, whereas Canada and Austria failed to retain their places there. Since predicting the HS of countries is an important task for researchers as well as for countries themselves, various techniques have been used to predict world HS over the past decades. Statistical, econometric, Machine Learning (ML), and Deep Learning (DL) approaches have been utilized to model the intrinsic complexity of the prediction. Most studies used Deep Neural Networks (DNN) [ 11 – 13 ], Decision Trees (DT) [ 13 – 15 ], and Support Vector Machines (SVM) to predict the HS for different countries. Unsupervised learning techniques such as clustering have also been used to predict this score, and particular attention is increasingly being paid to DL methods such as the Multilayer Perceptron (MLP) and DNNs for predicting happiness.
Although those techniques have had a great impact on happiness score prediction, the COVID-19 pandemic caused the values of individual features to change in irregular patterns, creating problems for analyzing the HS with existing models. The coronavirus, having infected enormous numbers of individuals, escalated into a pandemic; reports suggest that the total number of deaths during the COVID-19 pandemic in 2020 could exceed 3 million, surpassing the official count by 1.2 million. Amid the COVID era, many people endured job losses, which indirectly contributed to feelings of sadness. Presently, people prioritize more than just financial stability; they also value aspects such as quality of life and mental well-being. Programs like "Art of Living" have gained traction as happiness has become a prominent topic. The COVID-19 period proved mentally exhausting for many individuals, and there is a strong inclination to understand the factors contributing to the significant decline in mental health during this time. As mentioned earlier, feature values changed dramatically during COVID-19, affecting model performance, so the story behind this change needs to be analyzed. This motivated us to employ advanced techniques such as Explainable Artificial Intelligence (XAI) to reveal the story behind black-box ML and DL models. By leveraging XAI models, we can gain insight into the factors that significantly affect mental well-being during a pandemic. While many previous studies focused solely on predicting national HS, our study takes a novel approach by applying XAI techniques to identify the factors that affect national HS. This approach provides a deeper understanding of how these factors interact to influence the HS of countries, thereby contributing to a more comprehensive analysis of happiness determinants. Additionally, to obtain better predictions, our study proposes two ensemble models capable of predicting the score more accurately than existing models. The core contributions of this study are given below. The rest of the paper is organized as follows: the Literature Review section surveys related work; the Materials and Methods section details the methodology; the findings appear in the Results and Discussion section; and the Conclusion section presents conclusions and future research directions. This section provides an overview of prior research on HS prediction, outlining existing methodologies and recent advances. Current research suggests that happiness serves as a vital indicator of societal well-being, aligning with Bentham's assertion that the best society prioritizes citizen happiness. Numerous studies have explored the positive role of happiness in policy making. However, manual analysis of the factors influencing happiness is both labor-intensive and costly, so there is a growing need for automatic analysis to address the complexity and expense of this task. A notable 2020 paper, "Well-being is more than happiness and life satisfaction", analyzes the relationships between happiness and well-being across 21 countries, distinguishing happiness from components such as competence, emotional stability, positive emotions, and engagement.
Unlike previous studies, it identifies separate dimensions of positivity and also finds that while GDP and similar factors are related to happiness, they are not entirely correlated with it. In 2011, Louise Millard employed various ML techniques to analyze global happiness, utilizing Principal Component Analysis (PCA) to assess gender equality and life satisfaction, and DTs for feature selection and life satisfaction prediction, with life expectancy, income distribution, and freedom identified as key determinants through permutation testing and bootstrapping. Khder et al. conducted a study aiming to classify the critical variables influencing life HS using ML techniques; they employed supervised learning, using NN training for classification and OneR models for feature selection, and their analysis revealed 'GDP per capita' as the primary indicator and 'healthy life expectancy' as the second most significant factor affecting life HS. In 2020, a study on global happiness employed network learning approaches to gain deeper insight into the factors influencing happiness and their interconnections; the analysis revealed intriguing relationships, such as the lack of direct correlation between GDP per capita and generosity, and the connection between confidence in national government and freedom in life choices. Predictive modeling and Bayesian Networks were utilized to analyze historical happiness index data, with General Regression Neural Networks (GRNN) addressing the predictive challenges. Prashanthi et al. conducted a study predicting countries' future emotional status to understand economic well-being, using the Happiness Index and SVM kernels; they introduced a supervised ML model to forecast life satisfaction scores based on parameters such as environment, jobs, health, and governance, with the SVM performing the classification task and a meta-ML model enhancing prediction accuracy. Another study used ML to predict workplace happiness from office survey data, achieving 87.66% accuracy with K-Nearest Neighbors (KNN), DT, Naive Bayes, and MLP models, along with oversampling and under-sampling techniques. In a study exploring trends in the WHR, 'GDP', 'Social Support', and 'Healthy Life Expectancy' were found to be the most significant factors; instability affects social support systems, impacting life expectancy and trade, and thus GDP, and multiple linear regression produced a Root Mean Squared Error (RMSE) of 0.67 and a Mean Absolute Error (MAE) of 0.50. In 2021, a study analyzed the world happiness dataset, extracted insights, and predicted HS using ML algorithms including Linear Regression (LR), KNN, DT, and Random Forest (RF), with RF achieving the best RMSE of 0.05, indicating its superior prediction accuracy. On data from 2015 to 2019, a study utilized the Happiness Index and SVM kernels to predict countries' future emotional status and understand their economic well-being, achieving 56.25% classification accuracy; happiness was determined from life expectancy, experienced well-being, and ecological footprint, with the SVM effectively handling data scarcity, and by combining regression models through stacking, the predictive accuracy reached approximately 90% for predicting a country's life satisfaction score. Rajkumar investigated how the six cultural dimensions conceptualized by Hofstede and his team relate to subjective happiness ratings across 78 countries.
Data were drawn from the latest WHRs, capturing responses both before and during the pandemic. Gunjan Anand examined the causal links between economic factors such as GDP per capita, the Consumer Price Index (CPI), the unemployment rate, and government expenditure and the happiness index; the results suggest that GDP per capita and government expenditure influence happiness, whereas CPI and the unemployment rate do not, indicating the former's significance in predicting happiness. A detailed comparison of the related literature is given in Table 1. Our study endeavors to predict HS using a variety of ML models, including RF, Extreme Gradient Boosting (XGB), KNN, Gaussian Process (GP), Ridge Regression (RR), and Polynomial Regression (PR); DL models including MLP, Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), and Gated Recurrent Unit (GRU); and hybrid models, namely a stacking ensemble (LRGR) and a blending ensemble (RGMLL). Additionally, we aim to identify the factors influencing these scores through XAI techniques such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and Eli5. Subsequent sections introduce each model and XAI technique employed in our study. In this section, we provide the details of the materials and methods of this work: first we demonstrate the working process with an overview diagram, then describe the individual elements in detail. We aim to predict the HS of different countries based on WHR data. This study constitutes a regression problem, and we design the prediction system following the traditional process of ML, DL, and ensemble model development. We collect data from the WHR and form the final dataset by merging the reports of six consecutive years, preprocessing the whole dataset to make it more suitable for ML training. We handle missing values and evaluate the models by dividing the dataset into 80:20, 70:30, and 50:50 training and testing partitions to check the stability of the algorithms. We train the RF, XGB, KNN, GP, RR, and PR ML algorithms and the MLP, LSTM, BiLSTM, and GRU DL algorithms. We also propose two ensemble models, a stacking LRGR and a blending RGMLL, and evaluate the performance of all models using R-squared ($R^2$), MSE, and RMSE. Additionally, for greater transparency, to uncover the hidden story of happiness, and to explain the black-box algorithms, we employ XAI techniques such as LIME, SHAP, and ELI5 to determine global and local explainability. An overview of the proposed method is shown in Fig 1. Details of the algorithms, performance measures, and XAI techniques are given in the subsections below. The WHR, a pivotal annual survey initiated in 2012, provides insight into the global state of happiness, ranking countries by their citizens' perceived happiness levels; our analysis utilizes modified online data from the report. The data for the WHR are sourced from the Gallup World Poll (GWP), which compiles survey responses from 156 countries. This extensive dataset tracks the evolution of key happiness indicators and their underlying factors over time; however, it is worth noting that although the survey covers 156 countries, data availability in the earlier years is limited to only a few nations. We compiled data from the past six years, encompassing all countries, and divided them into three categories: pre-pandemic, during-pandemic, and post-pandemic.
Table 2 provides a comprehensive description of all features used for constructing the models. The target variable for prediction is the HS of countries, also known as the Life Ladder. According to the WHR, this score is derived from the average responses to the Cantril ladder life-evaluation question in the GWP: respondents are prompted to envision a ladder, where 10 represents the best possible life and 0 the worst, and then rate their own current lives on this 0–10 scale. To effectively apply a Bayesian Network (BN) to the dataset, it needed to be discretized; the study by Beuzen provides a thorough comparison of discretization schemes for this purpose. Missing values in features are addressed by imputing the mean of observations from other years, grouped by country for that specific feature. For instance, features such as 'healthy life expectancy' have some missing values for the year 2023, which are substituted with the country-wise means calculated for the years 2018–2022. As a final measure, countries that have missing values for all years for a specific feature are filled using the mean calculated across all countries. After completing these preprocessing steps, the data are partitioned into training and testing datasets at different ratios. The training dataset is used to train the ML models, while the testing dataset is kept separate to evaluate model performance on unseen data; this division ensures that performance can be accurately assessed and generalized to new data. In this section, we provide the details of the ML algorithms one by one, with the necessary equations. RF is a regression technique that uses the outputs of multiple DT algorithms to classify or predict the value of a variable [ 33 – 35 ]. That is, when RF receives an input vector $x$ containing the values of the many evidentiary features examined for a specific training region, it constructs a number $K$ of regression trees and averages their results. After growing $K$ such trees $\{T_k(x)\}_{k=1}^{K}$, the RF regression predictor is

$$\hat{f}_{rf}^{K}(x) = \frac{1}{K} \sum_{k=1}^{K} T_k(x)$$

By allowing the trees to develop from several training data subsets produced by a process known as bagging, RF promotes diversity among the trees and reduces correlation between them; this yields greater stability, increasing prediction accuracy while strengthening the system's resistance to even small changes in the input data. XGB is a more recently invented ML algorithm now widely employed across multiple fields, and many applications benefit from its organization, portability, and flexibility. By integrating Cause Based DT (CBDT) and the Gradient Boosting Machine (GBM) into a single, efficient method, XGB improves the tree-boosting approach's ability to handle almost all data types rapidly and reliably; it also provides effective and efficient solutions to new optimization problems, particularly where accuracy-efficiency trade-offs are taken into account.
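To make the pipeline concrete, here is a minimal Python sketch of the country-wise mean imputation described above and of fitting the strongest single model (RF) on the 80:20 split, assuming scikit-learn and a merged WHR file; the file name and column names (including 'Country') are hypothetical stand-ins for the actual dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical merged WHR file; feature names follow Table 2
df = pd.read_csv("whr_2018_2023.csv")
features = ["GDP per capita", "Social support", "Healthy life expectancy",
            "Freedom to make life choices", "Generosity",
            "Perceptions of corruption"]

# Country-wise mean imputation for missing values, as described above
df[features] = df.groupby("Country")[features].transform(
    lambda s: s.fillna(s.mean()))
X, y = df[features], df["Happiness score"]

# 80:20 split, matching the first evaluation setting
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

rf = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("R2:", r2_score(y_te, pred))
print("MSE:", mean_squared_error(y_te, pred))
```

Swapping in an XGB regressor (e.g., from the xgboost package) follows the same fit/predict pattern.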
KNN regression is a non-parametric technique that approximates the relationship between the independent variables and the continuous outcome using the average of observations within the same neighborhood. The neighborhood is determined by measuring the distance between data points; the most popular measure is the Euclidean distance, which for two points $x_i$ and $x_j$ in a $d$-dimensional space is

$$d(x_i, x_j) = \sqrt{\sum_{k=1}^{d} (x_{ik} - x_{jk})^2}$$

where $x_{ik}$ and $x_{jk}$ are the $k$-th features of points $x_i$ and $x_j$, respectively, and $d$ is the number of dimensions (features) in the dataset. The GP is a probabilistic, non-parametric approach predicated on the idea that the function to be learned is drawn from a Gaussian Process. This assumption allows the model to produce predictions with well-defined uncertainty, which is helpful for tasks such as active learning and uncertainty-aware decision-making. Mathematically,

$$f(x) \sim \mathcal{GP}(m(x), k(x, x'))$$

where $m(x)$ is the mean function and $k(x, x')$ is the covariance function, which determines the correlation between points $x$ and $x'$. RR is a basic regularization technique in which a penalty term is added to the ordinary least squares (OLS) objective function to address multicollinearity problems and regularize the model. The ridge regression objective is

$$\min_{\beta} \; \|Y - X\beta\|_2^2 + \gamma \|\beta\|_2^2$$

where $Y$ is the vector of observed target values, $X$ is the matrix of input features, $\beta$ is the vector of regression coefficients to be estimated, $\gamma$ is the regularization parameter, and $\|\cdot\|_2^2$ denotes the squared L2 norm. The goal of RR is to find the coefficient vector $\beta$ that minimizes the sum of the residual sum of squares (RSS) and the regularization penalty. PR is a form of LR in which an $n$-degree polynomial represents the relationship between the independent variable $x$ and the dependent variable $y$. With a polynomial of degree $n$, the PR equation is

$$y(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \dots + \beta_n x^n + \epsilon$$

where $y$ is the dependent variable (target), $x$ is the independent variable (feature), $\beta_0, \beta_1, \dots, \beta_n$ are the coefficients of the polynomial terms, and $\epsilon$ is the error term; the aim of PR is to find the coefficients that most closely fit the observed data points. MLP: the MLP is a type of artificial neural network composed of an input layer, one or more hidden layers, and an output layer. Each layer contains neurons that compute outputs using weights $W$, biases $b$, and activation functions $\phi$. For an input vector $x$, the output of each neuron in a layer is calculated as

$$z = W \cdot x + b, \qquad a = \phi(z)$$

where $W$ is the weight matrix, $b$ is the bias vector, and $a$ is the activation. This process is repeated for each layer, passing the activations to the subsequent layer until the final output layer. The MLP uses backpropagation for training, adjusting weights and biases to minimize the error between predicted and actual outputs. LSTM is a type of recurrent neural network (RNN) designed to handle sequences of data and long-term dependencies. It consists of memory cells, each with a cell state $c_t$ and hidden state $h_t$. The LSTM cell uses input ($i_t$), forget ($f_t$), and output ($o_t$) gates, along with a candidate cell state ($\tilde{c}_t$). The equations governing these components are

$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$$
$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$$
$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$$
$$\tilde{c}_t = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c)$$
$$c_t = f_t * c_{t-1} + i_t * \tilde{c}_t$$
$$h_t = o_t * \tanh(c_t)$$

where $\sigma$ is the sigmoid function, $W$ and $b$ are weights and biases, respectively, and $x_t$ is the input at time step $t$.
These gates regulate the flow of information, allowing LSTMs to capture long-term dependencies and mitigate the vanishing gradient problem. The BiLSTM network is an extension of the LSTM that processes data in both forward and backward directions to capture context from both past and future states. Each BiLSTM cell consists of two LSTM layers: one processing the input sequence forward, $\overrightarrow{h_t} = o_t * \tanh(\overrightarrow{c_t})$, and another processing it backward, $\overleftarrow{h_t} = o_t * \tanh(\overleftarrow{c_t})$. The final output at each time step $t$ is the concatenation of the two:

$$h_t = [\overrightarrow{h_t}, \overleftarrow{h_t}]$$

This structure allows BiLSTMs to leverage context from both directions, improving performance on tasks where understanding the full sequence is crucial. The GRU network is a type of RNN that efficiently captures dependencies in sequence data using gating mechanisms. Unlike the LSTM, the GRU has fewer gates, making it simpler and computationally less expensive. Each GRU cell has a reset gate $r_t$ and an update gate $z_t$, which control the flow of information. The governing equations are

$$z_t = \sigma(W_z \cdot [h_{t-1}, x_t] + b_z)$$
$$r_t = \sigma(W_r \cdot [h_{t-1}, x_t] + b_r)$$
$$\tilde{h}_t = \tanh(W_h \cdot [r_t * h_{t-1}, x_t] + b_h)$$
$$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t$$

where $\sigma$ is the sigmoid function, $\tanh$ is the hyperbolic tangent, $W$ and $b$ are weights and biases, respectively, and $x_t$ is the input at time step $t$. The update gate $z_t$ determines how much of the previous state $h_{t-1}$ to keep, while the reset gate $r_t$ controls how much of the previous state to forget when computing the candidate activation $\tilde{h}_t$. This gating mechanism allows GRUs to efficiently manage long-term dependencies and mitigate the vanishing gradient problem.
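For illustration, the recurrent models described above can be prototyped in a few lines of Keras; the layer sizes and dummy data below are illustrative rather than the tuned configurations in Table 4, each country-year row is treated as a single-timestep sequence, and exact argument placement can vary across TensorFlow/Keras versions.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GRU, Dense, Bidirectional

# Dummy data shaped (samples, timesteps, features); WHR rows are tabular,
# so a single timestep per country-year is assumed here.
n_features = 6
X_seq = np.random.rand(200, 1, n_features)
y = np.random.rand(200)

def build_rnn(recurrent_layer):
    """One recurrent layer followed by a linear head for regression."""
    model = Sequential([recurrent_layer, Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    return model

lstm_model = build_rnn(LSTM(32, input_shape=(1, n_features)))
bilstm_model = build_rnn(Bidirectional(LSTM(32), input_shape=(1, n_features)))
gru_model = build_rnn(GRU(32, input_shape=(1, n_features)))

lstm_model.fit(X_seq, y, epochs=10, batch_size=16, verbose=0)
print(lstm_model.predict(X_seq[:3]).ravel())  # sample predictions
```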
Stacking ensemble learning leverages the complementary strengths of various base models to boost performance and generalization. It involves two phases: training the base models and training a meta-model. First, the original data are split into a training set and a testing set. The training set undergoes k-fold cross-validation: it is divided into k parts, and each part is predicted by a model trained on the remaining k-1 parts, generating out-of-fold predictions. These predictions form a new training set for the meta-model. In the second phase, the predictions from the base models on the testing set are combined to create the meta-model's testing set, and the meta-model is trained on this new dataset. In stacking ensemble learning, selecting appropriate base and meta models is essential. In this study, three types of base models are used: a general regression model (LR), a tree-based model (RF), and a boosting model (GB); a regularization technique (RR) is chosen as the meta model. The workflow is given in Fig 2. The process begins with training the base learners on the given dataset $D = \{(x_i, y_i)\}_{i=1}^{n}$, where $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$. Each base learner generates its predictions:

$$\hat{y}_{LR,i} = x_i^\top \beta, \qquad \hat{y}_{RF,i} = RF(x_i), \qquad \hat{y}_{GB,i} = GB(x_i)$$

These predictions are then used to construct a new meta-training dataset $D' = \{(\hat{\mathbf{y}}_i, y_i)\}_{i=1}^{n}$, where each instance $\hat{\mathbf{y}}_i = (\hat{y}_{LR,i}, \hat{y}_{RF,i}, \hat{y}_{GB,i})^\top$ is a vector of the base learners' predictions. The meta learner, RR, is trained on this new dataset and predicts the final output as

$$\hat{y}_i = \theta^\top \hat{\mathbf{y}}_i$$

The parameters $\theta$ are optimized by minimizing the regularized loss

$$\min_{\theta} \; \sum_{i=1}^{n} (y_i - \theta^\top \hat{\mathbf{y}}_i)^2 + \lambda \|\theta\|^2$$

where $\lambda$ is the regularization parameter. For a new input $x^*$, the final prediction $\hat{y}_{final}$ is computed by combining the base learners' predictions through the trained meta learner:

$$\hat{y}_{final} = \theta^\top \big(\hat{y}_{LR}(x^*), \hat{y}_{RF}(x^*), \hat{y}_{GB}(x^*)\big)^\top$$

This combination of the base learners' outputs through a meta learner harnesses the strengths of the individual models, leading to improved predictive performance.

Algorithm 1: Stacking LRGR
1: Input: training dataset $D = \{(x_i, y_i)\}_{i=1}^{n}$, where $x_i \in \mathbb{R}^d$, $y_i \in Y$
2: Output: an ensemble regressor $R$
3: Step 1: Learn base regressors
4: for $k \leftarrow 1$ to $K$ do
5:   Learn a base regressor $r_k$ on $D$
6: end for
7: Step 2: Construct new datasets
8: for $i \leftarrow 1$ to $n$ do
9:   Construct a new data point $\{x_i', y_i\}$, where $x_i' = \{r_1(x_i), r_2(x_i), \dots, r_K(x_i)\}$
10: end for
11: Step 3: Learn a meta regressor
12: Learn a new regressor $r'$ on $\{x_i', y_i\}$
13: Return: $R(x) = r'(r_1(x), r_2(x), \dots, r_K(x))$

In this study, we also develop the RGMLL regression model, a blending ensemble learning approach combining five algorithms: RF, GBM, MLP, LSTM, and LR. Blending, unlike stacking, uses a small validation set (10–15% of the training data), rather than out-of-fold predictions, to train the meta model. This ensemble method integrates the strengths of its individual models, enhancing predictive accuracy and performance; ensembles also reduce prediction variability, improving reliability and overall predictive capability. The workflow is given in Fig 3. Our proposed ensemble model incorporates these five learners. Initially, we train the base models on the training dataset $D = \{(x_i, y_i)\}_{i=1}^{n}$, where $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$. Each base model generates its respective predictions:

$$\hat{y}_{RF,i} = RF(x_i), \quad \hat{y}_{GB,i} = GB(x_i), \quad \hat{y}_{MLP,i} = MLP(x_i), \quad \hat{y}_{LSTM,i} = LSTM(x_i)$$

To train the meta model, we construct a validation dataset $V = \{(x_j, y_j)\}_{j=1}^{m}$. The base models generate predictions on this validation set to create a new dataset $D' = \{(\hat{\mathbf{y}}_j, y_j)\}_{j=1}^{m}$ for the meta model, where each instance $\hat{\mathbf{y}}_j = (\hat{y}_{RF,j}, \hat{y}_{GB,j}, \hat{y}_{MLP,j}, \hat{y}_{LSTM,j})^\top$ is a vector of the base-model predictions. We then train the meta model, LR, on this newly constructed dataset to predict the final output $\hat{y}_j = \beta^\top \hat{\mathbf{y}}_j$. The coefficients $\beta$ of the LR model are optimized by minimizing the MSE:

$$\min_{\beta} \; \sum_{j=1}^{m} (y_j - \beta^\top \hat{\mathbf{y}}_j)^2$$

For a new input $x^*$, the final prediction $\hat{y}_{final}$ is obtained by aggregating the base-model predictions through the trained LR model:

$$\hat{y}_{final} = \beta^\top \big(\hat{y}_{RF}(x^*), \hat{y}_{GB}(x^*), \hat{y}_{MLP}(x^*), \hat{y}_{LSTM}(x^*)\big)^\top$$

This blending method effectively leverages the strengths of the diverse base models, enhancing predictive performance through the integration of their outputs by the LR meta model.
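Both ensembles can be prototyped with scikit-learn. The sketch below mirrors the stacking LRGR design (LR, RF, and GB bases with a Ridge meta-learner, formalized in Algorithm 1 above); the blending RGMLL variant, formalized in Algorithm 2 below, would instead fit the meta-learner on a held-out 10–15% validation split and add MLP and LSTM base learners. Hyperparameters here are illustrative, and the split is reused from the earlier sketch.

```python
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.linear_model import LinearRegression, Ridge

# Base learners mirroring the LRGR design: general regression, tree, boosting
base_learners = [
    ("lr", LinearRegression()),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=42)),
    ("gb", GradientBoostingRegressor(random_state=42)),
]

# Ridge meta-learner trained on 5-fold out-of-fold base predictions
stack = StackingRegressor(estimators=base_learners,
                          final_estimator=Ridge(alpha=1.0), cv=5)
stack.fit(X_tr, y_tr)  # split reused from the earlier sketch
print("Stacked R2 on test split:", stack.score(X_te, y_te))
```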
Algorithm 2: Blending RGMLL
1: Input: training dataset $D = \{(x_i, y_i)\}_{i=1}^{n}$, where $x_i \in \mathbb{R}^d$, $y_i \in Y$; validation dataset $V = \{(x_j, y_j)\}_{j=1}^{m}$
2: Output: an ensemble regressor $R$
3: Step 1: Learn base regressors
4: for $k \leftarrow 1$ to $K$ do
5:   Learn a base regressor $r_k$ on $D$
6: end for
7: Step 2: Predict on the validation set
8: for $j \leftarrow 1$ to $m$ do
9:   Construct a new validation data point $\{x_j', y_j\}$, where $x_j' = \{r_1(x_j), r_2(x_j), \dots, r_K(x_j)\}$
10: end for
11: Step 3: Learn a meta regressor
12: Learn a new regressor $r'$ on the newly constructed validation data
13: Step 4: Combine predictions
14: For a new instance $x$, obtain predictions from the base regressors $\{r_1(x), r_2(x), \dots, r_K(x)\}$
15: Predict the final output using the meta regressor: $R(x) = r'(r_1(x), r_2(x), \dots, r_K(x))$
16: Return: $R$

Here, $y_i$ represents the predicted value and $x_i$ the observed value; the regression method predicts $y_i$ for the corresponding $x_i$ of the dataset. Coefficient of determination ($R^2$): the coefficient of determination can be interpreted as the proportion of the variance in the dependent variable that is predictable from the independent variables; it ranges from 0 to 1:

$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - x_i)^2}{\sum_{i=1}^{n} (\bar{x} - x_i)^2}$$

MSE: defined as the mean, or average, of the squared differences between actual and estimated values:

$$MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{x}_i)^2$$

RMSE: the underlying assumption when presenting the RMSE is that the errors are unbiased and follow a normal distribution:

$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{x}_i)^2}$$

Local surrogate models that explain individual black-box ML model predictions are called LIME. Rather than building a global surrogate, LIME explains why the black-box model makes a particular prediction for a data instance by training local surrogate models. LIME perturbs samples and obtains the black-box model's predictions for them; this perturbed dataset is then used to train an interpretable model, such as Lasso or a DT, with proximity-based weights. Local surrogate models approximate the black-box model's predictions for individual instances rather than providing a strong global approximation. In mathematical terms, local surrogate models, while adhering to the requirement of interpretability, can be represented as

$$\underset{g \in G}{\text{argmin}} \; L(f, g, \pi_x) + \Omega(g)$$

Here, the explanation for instance $x$ is the model $g$ (e.g., an LR model) that minimizes the loss $L$, which evaluates how well the explanation matches the predictions of the black-box model $f$ (e.g., an XGB model), while keeping the model complexity $\Omega(g)$ low (e.g., few features). $G$ is the family of plausible explanations, for example all possible LR models, and the proximity measure $\pi_x$ defines the size of the neighborhood around instance $x$ considered for the explanation. LIME lets users set the complexity, such as the maximum number of features of the LR model, to optimize the loss. LIME supports tabular, text, and image data, unlike some other techniques. Inconsistent LIME explanations are problematic: two nearby data points may receive distinct explanations, according to simulations, and LIME explanations can be manipulated by data scientists to hide biases. Lundberg and Lee proposed SHAP to explain individual predictions. It employs the game-theoretically optimal Shapley values: the explanation approach computes, using coalitional game theory, the Shapley value of each feature's contribution to the prediction, dividing the prediction fairly among the feature values as coalition members. SHAP links LIME and Shapley values via a novel additive feature attribution mechanism resembling a linear model, describing both methods and unifying interpretable ML.
SHAP's explanation model is

$$g(z') = \phi_0 + \sum_{j=1}^{M} \phi_j z_j'$$

where the explanation model $g$ uses coalition vectors $z'$ (binary 0/1) to indicate feature presence, and Shapley values $\phi_j$ for the feature attributions. The author of Interpretable ML suggests that the name "SHAP" reflects the fact that, for image data, it attributes feature superpixels rather than individual pixels. Coalition vectors $z'$ describe features as "present" (1) or "absent" (0), analogous to Shapley values, which compute contributions by features "playing" or "not playing". The $\phi_j$ can be interpreted through a linear coalition model: for the instance of interest $x$, the coalition vector $x'$ consists entirely of 1's, indicating that all feature values are "present", which simplifies the formula to

$$g(x') = \phi_0 + \sum_{j=1}^{M} \phi_j$$

The author of Interpretable ML attributes SHAP's popularity to its fast implementation for tree-based models, since slow computation has been the largest barrier to Shapley value adoption. According to Slack et al., SHAP may also be used to deliberately falsify interpretations and hide biases. The Eli5 package aims to debug and explain ML classifiers, supporting scikit-learn algorithms. It clarifies feature weights and predictions, displays DTs, shows feature importance, and explains predictions made by DTs and tree-based ensembles. Its functions include 'show_weights()' for global model interpretation, providing explanations for classifier parameters, and 'show_prediction()' for local interpretation, explaining individual classifier predictions. Global interpretation helps comprehend a model's logic across all potential outcomes; such methods have broader applications at the population scale, addressing issues like drug consumption trends, climate change, and the global public health challenge of suicide. Global interpretation techniques were employed to assess the significance of each feature in the algorithms' outputs, while local interpretation focused on explaining the rationale behind individual predictions. In sentiment analysis exploration, for example, text samples were scrutinized to uncover why the model categorized instances as positive or negative, with each word carrying a weight that influences classification; leveraging the Eli5 library enabled pinpointing the specific features driving the classification of individual samples.
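As a minimal sketch of how the three tools are typically invoked (API names follow the shap, lime, and eli5 packages, though versions vary), using the random forest and data split from the earlier sketch:

```python
import shap
import eli5
from eli5.sklearn import PermutationImportance
from lime.lime_tabular import LimeTabularExplainer

# Global view: SHAP values for the fitted random forest
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)  # beeswarm/bar views as in Figs 7-8

# Local view: LIME explanation for a single country-year row
lime_explainer = LimeTabularExplainer(X_tr.values, feature_names=features,
                                      mode="regression")
exp = lime_explainer.explain_instance(X_te.values[0], rf.predict)
print(exp.as_list())  # per-feature contributions for this instance

# Permutation-based importances, rendered with ELI5's show_weights()
perm = PermutationImportance(rf, random_state=42).fit(X_te, y_te)
eli5.show_weights(perm, feature_names=features)
```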
In this section, we present all the obtained results according to the list of contributions. We first show the exploratory data analysis and descriptive statistics in detail, then report the performance of the ML, DL, and proposed ensemble models, and finally present the explainability results to reveal the hidden story of the impact of COVID-19 on happiness. Table 3 presents descriptive statistics of the features used in this study, containing the mean, variance, standard deviation, minimum, maximum, and quartiles (25th, 50th, and 75th percentiles) for each feature. The mean of the feature 'freedom to make life choices' is 0.62, with a variance of approximately 0.05, indicating moderate variability around the mean. The distribution of this feature appears relatively symmetric, with a median of 0.63; the range spans from 0.00 to 0.97, with the majority of values falling between 0.47 and 0.80. The feature 'GDP per capita' emerged as positively skewed, as its mean (5.14) is greater than its median (1.95), with a standard deviation of 4.26 indicating large variability over the range 0.00 to 11.66. Similarly, 'generosity', 'happiness score', and 'social support' show relatively symmetric distributions, as their means and medians are approximately equal with small variability. The features 'healthy life expectancy' and 'perceptions of corruption' reveal some positive skewness, as their means are greater than their medians. The correlation heatmap in Fig 4 shows the Pearson correlation coefficients among the features. The heatmap indicates that the highest correlation is between 'healthy life expectancy' and 'GDP per capita' (0.99). 'GDP per capita' also shows a higher level of correlation with 'freedom to make life choices' and 'perceptions of corruption'; 'healthy life expectancy' is moderately correlated with 'freedom to make life choices' and 'perceptions of corruption'; and 'freedom to make life choices' and 'perceptions of corruption' are moderately related to each other. A hyperparameter in ML is a parameter whose value is set before the learning process begins; such parameters influence the learning process itself rather than being learned from the training data. The importance of hyperparameters lies in their ability to significantly impact the performance of ML models: by tuning them appropriately, it is possible to improve a model's ability to generalize to unseen data, thus enhancing its predictive accuracy and efficiency. We performed a GridSearchCV procedure to find suitable hyperparameters for the algorithms and then trained on the dataset; the hyperparameters of each model, with suitable values, are given in Table 4. We employ various ML models to predict the HS from the six years of data. We train the different ML algorithms on 80%, 70%, and 50% of the data under the same experimental setup and assess the performance of each model on the corresponding test split. Table 5 summarizes the performance of both the standard and the best-performing models using an 80:20 ratio. The analysis reveals that RF exhibits superior performance compared to the other single models, achieving an R² of 0.83, MSE of 0.19, and RMSE of 0.44. Following closely, GP demonstrated notable performance with an R² of 0.81, MSE of 0.20, and RMSE of 0.45; additionally, XGB delivered satisfactory results with an R² of 0.80, MSE of 0.22, and RMSE of 0.47 when predicting HS on the test data. However, the stacking LRGR model outperformed all the single models, with an R² of 0.84, MSE of 0.18, and RMSE of 0.43, suggesting that the combined synergy of multiple models can surpass the performance of any single model. The R² values of the models are also presented in Fig 5, where the Y-axis represents the R² value and the X-axis the different models. Table 6 summarizes the outputs of the ML models when applied to a training and test ratio of 70:30. The analysis revealed that RF exhibited superior performance compared to the other models, achieving an R² of 0.82, MSE of 0.21, and RMSE of 0.46. Following closely, XGB demonstrated notable performance with an R² of 0.81, MSE of 0.21, and RMSE of 0.47; additionally, GP delivered satisfactory results with an R² of 0.80, MSE of 0.23, and RMSE of 0.47 when predicting HS on the test data. When applying the ML models to the dataset divided into a train-test ratio of 50:50, RF again displayed superior performance compared to the other models, achieving an R² of 0.78, MSE of 0.25, and RMSE of 0.50. Following closely, PR demonstrated notable performance with an R² of 0.77, MSE of 0.26, and RMSE of 0.51; additionally, XGB delivered satisfactory results with an R² of 0.76, MSE of 0.27, and RMSE of 0.52 when predicting HS on the test data. These results are displayed in Table 7.
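As a sketch of the GridSearchCV tuning step mentioned above, the following shows how a hyperparameter grid might be searched for the RF regressor before comparing models across the three splits. The grid values and synthetic training data are illustrative assumptions; the paper's actual search spaces are those listed in Table 4.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Stand-in training split (e.g., the 80% portion of an 80:20 split).
X_train, y_train = make_regression(n_samples=400, n_features=6, noise=10.0, random_state=0)

# Hypothetical grid; Table 4 lists the values actually searched in the study.
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 2, 4],
}

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    scoring="r2",   # rank candidate settings by cross-validated R²
    cv=5,
    n_jobs=-1,
)
search.fit(X_train, y_train)

print("Best hyperparameters:", search.best_params_)
print("Best CV R²:", search.best_score_)
```

The same pattern applies to each base algorithm; only the estimator and its grid change.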
The performance of the various DL models and the blending RGMLL model is shown in Table 8, which shows that the predictive performance of the blending RGMLL model is superior to that of the other models used in this analysis, with an R² of 0.85, MSE of 0.15, and RMSE of 0.38, followed by MLP, LSTM, and BiLSTM, respectively; LSTM and BiLSTM performed almost identically. MLP performs better than the other single DL models, with an R² of 0.74, MSE of 0.23, and RMSE of 0.48, and GRU is the worst performer among the DL models when predicting HS on the test data. The R² values of the models are also presented in Fig 6, where the Y-axis represents the R² value and the X-axis the different models. In this section, we show the results of the XAI techniques used to analyze the predictions made by the optimal algorithm across three distinct time categories: pre-COVID, during-COVID, and post-COVID. The aim is to identify and understand the factors that influenced happiness during these different temporal periods, as well as the impact of the pandemic on the HS. We applied SHAP as a global explainer for the predictions generated by the superior model during the pre-COVID period: both the summary plot and the bar plot indicate that 'GDP per capita' emerged as the most significant factor influencing HS, followed by 'social support' and 'healthy life expectancy'. The global explainability is shown in Fig 7 as a SHAP value plot and in Fig 8 as a bar plot. These findings are further corroborated by permutation-based feature importance analysis conducted using ELI5, which displays the average importance and standard deviation of each feature, as depicted in Fig 9. The waterfall plot in SHAP is a visualization tool that illustrates the contribution of each feature to the predicted output of an ML model, focusing on a specific instance or observation. The waterfall plots in Fig 10(a), 10(b), and 10(c) show the top-, mid-, and last-ranked countries during the pre-COVID period, respectively. They reveal that, for the top-ranked country, each feature makes a positive contribution to its top ranking. Conversely, for the mid-ranked country, the key factor exhibits a negative contribution, while the other features contribute positively. In contrast, for the last-ranked country, all factors demonstrate a negative contribution, leading to its lowest ranking. Local explainability using LIME indicates that, for the top-ranked country, the order of the key features remains the same and each feature contributed positively when predicting the happiness score. For the last-ranked country, however, the order changed: 'healthy life expectancy' emerged as the key influencer, followed by 'social support', and these contributed negatively to the prediction, as shown in Fig 11. During the pandemic, 'social support' emerged as the primary factor influencing the HS of countries, as indicated by both the SHAP summary plot and the bar plot depicted in Fig 12(a) and 12(b), respectively. Furthermore, 'healthy life expectancy' and 'GDP per capita' also emerged as significant factors following social support during the COVID-19 period. These results are supported by permutation-based feature importance analysis conducted using ELI5, as depicted in Fig 13.
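A minimal sketch of how per-country SHAP waterfall plots like those discussed here can be produced with the shap library follows; the model, synthetic data, and the row indices chosen for the top-, mid-, and last-ranked countries are assumptions for demonstration.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["GDP per capita", "Social support", "Healthy life expectancy",
                 "Freedom to make life choices", "Generosity", "Perceptions of corruption"]
X = rng.random((150, 6))                            # stand-in country-level features
y = X @ rng.random(6) + rng.normal(0, 0.05, 150)    # stand-in happiness scores
model = RandomForestRegressor(random_state=0).fit(X, y)

# Modern shap API: the Explanation object supports per-instance plots directly.
explainer = shap.Explainer(model, X, feature_names=feature_names)
explanation = explainer(X)

# Hypothetical indices standing in for the top-, mid-, and last-ranked countries.
for idx in (0, 75, 149):
    # Positive bars push the predicted score up; negative bars pull it down.
    shap.plots.waterfall(explanation[idx])
```

Each waterfall decomposes one country's predicted score into the signed contribution of each feature, which is exactly the reading applied to the ranked countries below.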
Fig 14 exhibits the waterfall plots illustrating the top-, mid-, and last-ranked countries. They indicate that, for the top-ranked country, every feature makes a positive contribution except for generosity. In contrast, for the mid-ranked country, 'GDP per capita' and 'perceptions of corruption' contribute negatively, while the remaining features contribute positively. Finally, for the last-ranked country, each feature contributes negatively when predicting the HS. For the top-ranked country during the pandemic, LIME identifies 'social support' as the key factor, followed by 'healthy life expectancy', 'GDP per capita', and 'perceptions of corruption', with every factor contributing positively to the prediction. For the last-ranked country, the key factor changed to 'healthy life expectancy', and only 'perceptions of corruption' contributed positively, as represented in Fig 15. Following the pandemic, 'GDP per capita' reclaimed its top position in influencing HS, with 'social support' and 'freedom to make life choices' emerging as the second and third influencing factors, respectively. These findings are highlighted by both the SHAP summary plot and the bar plot depicted in Fig 16(a) and 16(b), respectively; permutation-based feature importance analysis conducted using ELI5, as illustrated in Fig 17, also corroborates these results. The waterfall plots displayed in Fig 18 illustrate the feature importance for individual countries, including the top-, mid-, and last-ranked countries. For the top-ranked country, all factors except 'generosity' made positive contributions. In contrast, for the mid-ranked country, 'freedom to make life choices' and 'healthy life expectancy' contributed positively, while the rest had negative contributions. Conversely, after the pandemic period, all features exhibit negative contributions for the last-ranked country. For the top-ranked country after the pandemic, LIME identifies 'GDP per capita' as the key factor, followed by 'social support' and 'healthy life expectancy', with every factor contributing positively to the prediction. For the last-ranked country, the order of the factors was the same, but every feature contributed negatively, as represented in Fig 19. Analyzing the performance of the models predicting the HS of countries reveals that the blending ensemble RGMLL model, which combines RF, GBM, MLP, LSTM, and LR, demonstrates the best performance across all indicators, with an R² of 0.85, MSE of 0.15, and RMSE of 0.38. Overall, ensemble models such as LRGR and RGMLL outperform individual ML and DL models, suggesting that the synergy from combining different models surpasses the performance of individual models. This finding aligns with previous research indicating the superiority of existing ensemble models. Among the single models, the RF and GP regressors exhibit the best performance. Although ensemble models composed of various model combinations outperform single models, the performance of an ensemble varies depending on the specific algorithms combined. The contribution of each variable to the prediction was measured in the final model using several XAI methods, including SHAP, LIME, and ELI5. Before the COVID pandemic, SHAP and ELI5 reveal that the order of features contributing to the world happiness ranking was 'GDP per capita', 'social support', 'healthy life expectancy', 'freedom to make life choices', 'generosity', and 'perceptions of corruption'. During the pandemic, the order changed to 'social support', 'healthy life expectancy', 'GDP per capita', 'freedom to make life choices', 'perceptions of corruption', and 'generosity'. This clearly indicates that 'social support' was the crucial factor for happiness during the pandemic.
After the COVID period, the order largely reverted to its previous ranking: 'GDP per capita', 'social support', 'freedom to make life choices', 'healthy life expectancy', 'perceptions of corruption', and 'generosity'. However, local explanations indicate that this order can vary for low-ranked countries. Finland continues to hold the top spot in the rankings for the seventh year in a row by ensuring strong social support, an effective healthcare system, and a balanced work-life environment. Countries with lower rankings can use the system proposed in our study to pinpoint the factors they need to enhance in order to climb higher in the happiness rankings. This research introduces two ensemble models, one using the stacking method and the other the blending method, to improve the predictive accuracy of national HS using data from the WHR spanning 2018-2023. Four types of models are employed for prediction: ML, DL, stacking ensemble, and blending ensemble models. The findings reveal that individual ML and DL models perform adequately as base models for the ensembles, with the RF and MLP models outperforming the other single ML and DL models, respectively. To enhance generalizability and performance, the study combines the outputs of these individual models as inputs to the proposed ensemble models. Three evaluation criteria are used to assess model performance. The results indicate that the blending ensemble model, which combines both DL and ML models, generally achieves better predictive results than the stacking ensemble model, which combines only single ML models. Furthermore, comparing the proposed blending model with traditional stacking ensemble models demonstrates the former's significant superiority and improved generalization capability. The XAI models are used to capture logical insights by deriving information from the trained models. This paper aims to identify the factors influencing countries' happiness rankings based on the input values of the criteria in the dataset, and to determine whether there was any change in the order of contributing factors due to the COVID pandemic. The XAI models LIME, SHAP, and ELI5 are employed to identify the major contributing factors from the given set of features, and they reveal that there was a change in the order of factors influencing the happiness rankings of countries: during the pandemic the feature ranking changed, and social support became the strongest foundation of happiness. Consequently, the proposed blending ensemble model is capable of generating accurate and reliable predictions for HS. However, despite its promising performance, the proposed model has certain limitations. Specifically, its superior predictive performance can be compromised by the inclusion of underperforming models in the ensemble. Thus, there is a need to develop more effective methods for combining these models to mitigate this issue in future work.
|
Review
|
other
|
en
| 0.999997 |
PMC11695008
|
Sexual health education provides young people with the knowledge and skills needed to make informed sexual health decisions to prevent sexually transmitted infections (STIs), including HIV, and unwanted pregnancy. In Texas, the adolescent birth rate is 43% higher than the national average, and Texas ranks first in repeat births among adolescents at 18%. Meanwhile, chlamydia and gonorrhea rates have increased by 25% and HIV rates by 4% over the last decade among adolescents and young adults between 15 and 24 years. Currently, Texas is one of 22 states that do not mandate sexual health education and one of 33 states that do not require the sex education provided to be medically accurate. Further, if a Texas public school chooses to provide sex education to students, the curriculum must emphasize abstinence. Although abstinence is the most effective way to prevent pregnancy and STIs, nearly 43% of Texas high school students have engaged in sexual activity at least once and 23% are currently sexually active. Further, among students who are sexually active, 50% report not using a condom the last time they had sex and 20% report not using any pregnancy prevention method. The lack of contraception use among Texas youth, combined with elevated STI and unwanted pregnancy rates, highlights the need to consider effective sexual health promotion strategies. Comprehensive sexuality education (CSE) programs are widely endorsed by national and global health organizations, and research has shown that they are more effective in reducing poor sexual health outcomes than abstinence-based programs [7–10]. CSE is a medically accurate curriculum that provides age-appropriate information related to the physical, mental, emotional, and social dimensions of human sexuality. CSE is taught from early elementary through high school and consists of seven key topics: consent and healthy relationships; anatomy and physiology; puberty and adolescent development; gender identity and expression; sexual orientation and identity; sexual health; and interpersonal violence. Compared to abstinence-based programs, CSE is more effective in delaying sexual debut, increasing use of contraception, and reducing STI/HIV and adolescent pregnancy rates [8–10, 13, 14]. Additionally, CSE has been shown to reduce domestic and intimate partner violence, child sexual abuse, and homophobia and homophobic bullying. Although the effectiveness of CSE over abstinence-based programs is well supported, few Texas schools provide CSE. To understand where and how Texas sex education policy can be improved, it is essential to learn what the current policies and standards are and who decides them. The state legislature plays a powerful role in sex education policy by passing laws related to sex education, which are included in the Texas Education Code. The Texas State Board of Education (SBOE) must adhere to all state laws when deciding the standards of the Texas Essential Knowledge and Skills (TEKS). According to the Texas Education Code, all sex education curricula must "present abstinence from sexual activity as the preferred choice of behavior in relationship to all sexual activity for unmarried persons of school age" and "devote more attention to abstinence from sexual activity than to any other behavior".
Moreover, Texas schools must teach condom effectiveness from the perspective of "human use reality rates" as opposed to "theoretical laboratory rates", meaning the condom failure rate provided by schools reflects incorrect or inconsistent condom use. Through this mandate, students may be led to believe condoms are less effective than they actually are when used correctly and consistently. The Texas Education Code also defines the process that schools must abide by when selecting sex education curricula and proposing curriculum changes. Prior to November 2020, all students were automatically enrolled in sex education unless their parents signed a permission slip prohibiting their child from attending. In November 2020, Texas became one of five states to enforce a new policy requiring parents to sign a permission slip opting their child in to receiving sex education at school. Additionally, the state legislature voted to implement new policies related to Student Health Advisory Committees (SHACs). SHACs play an integral role in sex education, as they have the ability to work with community members to ensure the sex education provided represents the values of the community. Every school district is required to have a SHAC, primarily consisting of parents who are not employed by the school district, that meets at least four times per academic year. The role of the SBOE is to select course materials and set the health curriculum standards, the TEKS. According to the TEKS, all Texas public schools must, at minimum, provide a health education course, which includes a chapter dedicated to abstinence-based sexual health information, to elementary (4th-5th grade) and middle school (7th-8th grade) students. For high school students, the health course is optional and does not have to be offered by public high schools. In November 2020, the SBOE revised the TEKS standards for the first time in over 20 years, resulting in the inclusion of topics such as contraception, sexually transmitted infections (STIs), and characteristics of healthy relationships for middle school students. Conversely, the SBOE voted against requiring schools to discuss topics such as consent and human sexuality. Sex education is instrumental in providing adolescents and young adults with the knowledge and skills to engage in healthy sexual decision-making and relationships. High unintended pregnancy and STI/HIV rates among Texas youth highlight the need to examine current sex education policy and curriculum standards and to emphasize sexual health prevention. In the United States, CSE has been proven to reduce STI/HIV and pregnancy rates among adolescents when compared to abstinence-based sex education. Thus, the purpose of this study was to identify barriers and facilitators to implementing CSE at local and state levels in Texas. For this study, we used a qualitative design to explore barriers and facilitators to implementing CSE policy and curriculum at local and state levels in Texas. A qualitative approach was a suitable method for this study, as it allowed us to explore stakeholders' beliefs and views about barriers to policy implementation in the state of Texas. Qualitative designs are particularly good, argues Sofaer, at "illuminating the experience and interpretation of events by actors with widely differing stakes and roles". This study consisted of ten interviews with eleven key informants who were currently working or had previously worked in the field of sex education (Table 1).
For one interview, two key informants were interviewed together, while the other nine interviews each had one key informant. We recruited key informants who were involved in sex education policy and/or research, or who had contributed to the development or implementation of various forms of sex education curricula, in order to gain a comprehensive understanding of existing barriers and facilitators to policy or curriculum change at local and state levels. Additionally, key informants represented various regions, counties, and school districts in Texas. Semi-structured interviews lasting 30-60 minutes were conducted. A semi-structured interview format was appropriate for this study, as it facilitated data collection while still allowing participants to freely discuss thoughts or ideas that might not have been considered by the research team. Interview questions were drafted by the research team, who specialize in policy and/or sexual and reproductive health, to ensure content validity. The research team analyzed the Texas Education Code and TEKS standards to develop interview questions aimed at identifying the individual, community, and societal-level factors that influence CSE policy and how these levels interact with each other. Additionally, the interview questions were developed to apply to participants with various professions pertaining to sex education policy. Further, participant input contributed to the development of additional questions that were asked in subsequent interviews. Key questions that guided the interviews are presented in Table 2. Initially, this study used purposive sampling to select participants based on the literature on CSE and personal knowledge of key actors in sex education policy. Purposive sampling was then supplemented by snowball sampling, as initially selected participants were asked to provide referrals for additional potential participants. All selected participants were recruited via email from July 16, 2021 to October 25, 2021, and were provided information regarding the study. Once a participant agreed to the interview, they were provided with a written consent form that was signed before the interview. A total of ten one-time, semi-structured interviews were conducted with a total of eleven participants via Zoom after obtaining informed consent. By the fourth interview, five themes (three barriers and two facilitators) had begun to emerge, and these were solidified by the eighth interview. By the tenth interview, data saturation had been achieved. All ten interviews were conducted by one member of the research team (XX), who has training in qualitative research methods. To provide context for data analysis, field notes were written immediately after each interview to capture the interviewer's thoughts, feelings, and perceptions. All interviews were audio-recorded and transcribed verbatim by the interviewer, and then independently coded using NVivo 12 software by two coders (LH, SJ). Thematic analysis was used to identify barriers and facilitators to implementing comprehensive sex education. Each coder familiarized themselves with the data before identifying emerging themes, in an effort to develop a codebook that would guide the data analysis process. Each coder independently analyzed the data, and the two coders met intermittently throughout the coding process to discuss emerging themes and resolve any discrepancies. Once all interviews were coded, the data from each coder were merged into one file to identify major themes and subthemes.
It is important to acknowledge the research team members' relationship to sex education policy. One member of the research team (XX) xxxxxxxxxxxxxxx where they are focused on xxxxxxxxxxxxxxxxxx. Another member (XX) is xxxxxxxxxxxxxxxxxx where they are focused on xxxxxxxxxxxxxxxx. One member of the research team (XX) is a xxxxxxxxxxxxxxx who focuses on xxxxxxxxxxxx. This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the Duke University Institutional Review Board. We identified three key barriers to implementing CSE curricula and policy: 1) ideological opposition to CSE, 2) discrimination against lesbian, gay, bisexual, transgender, queer+ (LGBTQ+) people, and 3) myths and misconceptions about CSE. Ideological opposition to CSE was found to be a major barrier to implementing CSE at the local and state level. Several participants stated that opposition from parents and advocacy groups was particularly problematic for school officials and policymakers. Two key informants (KIs) provided further insight into the conflict and difficulties faced by those who advocate for students to have an informative and inclusive sex education. Specifically, informants reported that resistance from parents was the most difficult barrier for school administrators. One interviewee (KI 4), a physician and associate professor who works closely with school administrators, stated, "But also to administrators, they're not so much afraid of the subject as they are parents," and went on to cite a conversation with a school administrator who reportedly told her, "I don't want any more headaches. I don't want to have parents calling me." A different KI said one reason for parental pushback is the belief that sex education should remain in the home. Implementing an LGBTQ+ inclusive sex education curriculum in public schools is a contentious subject in Texas, particularly among parents, advocacy groups, and the state legislature. For example, KI 3 said: "And most of what they were opposing was the LGBTQ issue. That was, that's kind of become the big flashpoint. It's almost like contraception is maybe like in the background a little bit now. And now the big thing is, like, gender identity." Although very few Texas youth receive a sex education that covers LGBTQ+ sexual health topics or issues, two informants reported that students continued to ask questions about LGBTQ+ sexual health and relationships in class. Further, KI 4 stressed that providing LGBTQ+ inclusive sex education is important because LGBTQ+ students are not being provided with sexual health information that could reduce their risk of STIs. She provided an example from her own experience as a sex education provider: "…I was teaching STDs [sexually transmitted diseases], and one girl said, 'Well, that's okay. I can't get them because I'm gay.' And I said, 'No, if your partner's infected, you can get them.'" Our study found that discrimination against LGBTQ+ individuals is perpetuated by misconceptions about LGBTQ+ inclusive curricula. Informants reported that opponents of CSE often falsify and sensationalize the LGBTQ+ sexual health information that would be provided to students in the classroom. KI 7, a textbook manufacturer, stated, "…the way that they have opponents attack is they have to misportray what's being taught in the classroom… that we're indoctrinating students into the 'homosexual lifestyle', and we are confusing kids about gender."
The same participant further elaborated on the misconceptions and general lack of understanding that elected officials have regarding the realities and hardships faced by LGBTQ+ youth. Findings from our study showed that the sex education landscape is riddled with myths and misconceptions that prevent the implementation of sex education curricula that are medically accurate, informative, and inclusive. One misconception, in particular, is that CSE encourages sexual activity among youth. KI 2 stated, "If you do these things to make kids healthier, parents perceive and the community perceives that it's a free pass to have sex, which is obviously not the case." Similarly, KI 3 argued, "I think that there is a, you know, persistent belief that providing sex education to kids sends them the message that it's okay to have premarital sex, and will somehow make them more likely to have premarital sex. And the research tells us that that's not true. But that's the persistent belief. We see this in contraceptive access too." The fear that sex education promotes sexual activity extends to the topic of consent, which is not included in the state-required health education course. Three study participants discussed why CSE opponents and a member of the SBOE did not want the topic of consent included in the sex education curriculum. Ultimately, the SBOE voted to exclude consent from the state-required health education curriculum. As an alternative, the SBOE shifted its focus to "respecting the boundaries of other people" (KI 3), because the SBOE "felt much safer with the word 'boundaries' than the word consent" (KI 5). KI 3 provided further insight into the shift from the word "consent" to "boundaries". We identified two key facilitators to creating policy and curriculum change: 1) sex education champions and 2) collaboration with community stakeholders. All research participants emphasized the importance of having sex education "champions" who advocate for expanding sex education curriculum and policy to be more medically accurate, informative, and inclusive at local and state levels. In particular, support from healthcare professionals, parents, and youth was identified as essential to initiating sex education policy and curriculum change. For example, KI 4 shared how healthcare professionals played a role in making sex education curricula medically accurate and more informative. Two other participants discussed the importance of having healthcare professionals champion the expansion of sex education curricula. KI 6 stated, "But in addition to that, I would like to have folks hear from medical practitioners a little bit more. I think that our family planning doctors and our pediatricians are the people who see the fallout of inaccurate sexual health teaching in our schools. And I think physicians also tend to be trusted voices in our communities. So, I think that kind of effort is important." Similarly, KI 11 stated, "I think that helps to have physicians on board saying this is important." Additionally, parents and youth play a critical role in advocating for CSE. In recent years, youth have also been using their voices to advocate for more informative and inclusive sex education curricula by testifying in front of the SBOE. Key informants emphasized the importance of collaborating with community stakeholders when facilitating conversations about curriculum with opponents of CSE.
While a CSE curriculum that meets all twelve National Sexuality Education guidelines is ideal, it is not always possible to convince everyone to agree on every guideline. Thus, to move towards a medically accurate, informative, and inclusive sex education curriculum, collaboration is essential. For example, KI 7 said it is important to "find that balance that everybody feels heard and respected". Two other KIs echoed the importance of striking a balance and collaborating with community stakeholders. Specifically, KIs 4 and 5, who work together, were successful in working closely with a local faith-based organization to implement a more informative sex education curriculum in their local school district. Curriculum topics were divided between a university medical school and the faith-based group: the medical school focused on science-based topics such as anatomy, contraception, and STIs, while the faith-based group focused on emotions and engaging in healthy relationships. They further discussed the importance and value of compromise. This study adds to the current body of knowledge on CSE and Texas sex education policy by providing insight into barriers and facilitators to changing sex education policy and curriculum in Texas to be more inclusive, informative, and comprehensive. Our study identified three main barriers to policy change: ideological opposition to CSE, discrimination against LGBTQ+ people, and myths and misconceptions about CSE. The study also identified two key facilitators: sex education champions and collaboration with community stakeholders. Findings from our study were similar to those of the few other studies that have specifically examined sex education in Texas. One major finding from our study was the fear of parental backlash faced by school administrators who wish to improve the sex education curriculum in their schools. A similar observation was made in a study examining barriers faced by instructors in delivering sex education in West Texas. Such findings highlight the importance of working closely with parents and other community stakeholders to ensure their voices feel heard throughout the development and implementation process. Additionally, our study found that there are pervasive myths and misconceptions about sex education that fuel resistance to change. Similarly, a 2012 study examining sex education materials from 990 Texas school districts found that myths and misconceptions about the consequences of sexual activity were common in curricular materials. In particular, the materials commonly used shame-based and scare tactics, which the researchers categorized into three types: "1) exaggerating negative consequences of sexual behavior; 2) demonizing sexually active youth; and 3) cultivating shame and guilt to discourage sexual activity". These findings show that sexual health misinformation in curricular materials is not a new phenomenon and serves as a major barrier to improving sex education curricula. To dispel myths and break long-standing sexual health misconceptions, it is essential for public schools to provide students with medically accurate and informative sex education, and for sex education advocates to engage with community stakeholders. Our findings showed that community involvement, initiative, and support are essential to implementing CSE. Similar findings have been demonstrated at an international level, where supportive school and community environments have been identified as facilitators of CSE implementation.
Concurrently, our study showed that a lack of community support for CSE was often attributed to LGBTQ+ discrimination. Community backlash is a major barrier for schools implementing LGBTQ+ inclusive protocols or sex education curricula in the United States. More research is needed to understand how to engage CSE opponents in meaningful conversation and provide education about the relationship between LGBTQ+ inclusivity and health outcomes. One strength of this study is the professional diversity of the key informants, who work in various fields related to sex education. By interviewing a variety of experts, we were able to develop a more complete picture of the current state of sex education in Texas and of how policy change can occur. Another strength is the use of a semi-structured interview format, which allowed participants to share facts, personal experiences, and opinions on Texas sex education policy while still answering interview questions developed to address this study's research question. Along with its strengths, this study has two key limitations. First, the study sample mainly included viewpoints and insights from individuals who support implementing a medically accurate, informative, and mostly comprehensive sex education; it did not include the perspectives of those who are against changing current school-based sex education curricula or policy. Second, while we interviewed individuals who work with parents and students who advocate for sex education policy and curriculum change, we did not speak directly to the parents and students themselves. Findings from this study provide insight into the opposition faced by sex education advocates, which often stems from myths and misperceptions about CSE content and the stigmatization of sexual and gender minoritized groups. Parents, youth, medical professionals, and academic researchers who support CSE are essential to dispelling sex education myths and misperceptions, and they can move CSE up local and state policy agendas by advocating to their local school boards and state officials. Further, our findings highlight the importance of developing relationships and working closely with community stakeholders to gain a better understanding of overall community values. Working closely and compassionately with community stakeholders can increase local support for schools to implement a sex education curriculum that is more informative, accurate, and comprehensive than previously implemented curricula. Healthcare professionals and academic researchers are respected community members who can draw on their medical knowledge, research, and work experience to dispel sex education myths, correct misunderstandings, and address the concerns of sex education opponents. This study serves as a call to action for medical professionals and academic researchers to advocate for a medically accurate and more comprehensive school sex education. Healthcare professionals and academic researchers can provide insight into important topics such as consent and LGBTQ+ sexual health to help reduce sexual assault, social stigma, adverse mental health outcomes, and sexual health disparities. While there are several obstacles to implementing CSE in Texas schools, measures can be taken to gradually expand sex education curriculum and policy to be more informative, inclusive, and effective. CSE advocates play a critical role in eliminating barriers by engaging with community stakeholders and getting involved with their local SHACs.
Additionally, medical professionals and academic researchers who support CSE could play a key role in dispelling sex education myths and misconceptions. As Texas adolescents continue to be plagued by STIs, HIV, and unwanted pregnancy, it is important to shine a light on current sex education practices, identify areas for improvement, and implement changes to policy that benefit the health and well-being of Texas youth.
|
Other
|
biomedical
|
en
| 0.999998 |
PMC11695009
|
Breast cancer is the leading cause of cancer death in women worldwide. Although many treatment methods have been developed and survival rates have increased, breast cancer remains a complex and heterogeneous disease with various clinical challenges, especially in determining prognosis and assessing responses to treatment. Patients with luminal-type breast cancer (hormone receptor-positive, HER2/neu-negative) have a good prognosis, with a relapse rate significantly lower than that of patients with other breast cancer subtypes. Nevertheless, recurrence still occurs in patients with luminal-type breast cancer, and it is difficult to predict both early (i.e., within 5 years after initial treatment) and late (i.e., more than 5 years after initial treatment) recurrence. A deeper understanding of the properties of individual tumors is therefore required to identify more precise prognostic factors. In vivo proton magnetic resonance spectroscopy (MRS) measures shifts of particular nuclei in magnetic fields, allowing noninvasive molecular analysis. The spectra produced by MRS represent all detectable metabolites, with their individual chemical profiles, in the region of interest. The presence of a compound resonance around 3.23 ppm has been associated with several chemical compounds, including phosphoethanolamine, choline, phosphocholine, and glycerophosphocholine, with the latter three referred to as total choline (tCho). Increased tCho levels due to increased cellular membrane turnover have been observed in malignant tumors, suggesting that increased tCho may therefore serve as an indicator of malignancy. Additionally, increased tCho levels have been associated with overexpression of the HER-2/neu gene and with aggressive phenotypes, such as triple-negative breast cancer (TNBC). MRS measurements of tCho levels have also been used to assess axillary lymph node metastases, to evaluate responses to neoadjuvant chemotherapy or radiation therapy, and to evaluate the relationship of tCho with pathologic prognostic factors in primary breast cancer. Despite these findings, however, in vivo proton MRS remains inadequate for clinical applications, and further related research has been limited. A recent study of the relationship between ERα and choline metabolism found that ERα directly regulates the gene encoding choline phosphotransferase 1 (CHPT1), an enzyme necessary for estrogen to affect choline metabolism, including increased phosphatidylcholine synthesis. This finding suggests that choline metabolism is involved in the development and/or progression of hormone receptor (HR)-positive, relatively non-aggressive breast cancer, as opposed to the more aggressive subtypes of breast cancer studied previously. The present study therefore evaluated whether tCho, as measured by in vivo proton MRS, could predict 10-year survival, and in particular late recurrence, in patients with HR-positive, HER2-negative early breast cancer.
Single-voxel proton MRS was performed using a 3.0-T MRI scanner (Skyra and Verio, Siemens Medical Solutions, Erlangen, Germany) with a dedicated bilateral receive-only phased-array four-element, two-channel coil (one channel per breast). The protocol for bilateral breast imaging consisted of an axial STIR sequence; a 3D T1-weighted FLASH dynamic gradient-echo sequence (TR/TE, 4.5/1.6; flip angle, 10°; 0.9-mm thickness without an interslice gap; 0.9 × 0.9 × 0.9 mm³ isotropic voxel; one unenhanced and four contrast-enhanced acquisitions with a temporal resolution of 192 seconds); and, for dynamic contrast enhancement, the injection of 0.1 mmol/kg body weight gadobutrol (Gadovist; Schering AG, Berlin, Germany), followed by a 20-mL saline flush and an axial 3D delayed contrast-enhanced turbo spin-echo pulse sequence (TR/TE, 5.6/2.5; FOV, 380 × 380 mm²; matrix size, 384 × 384; slice thickness, 1.5 mm) to evaluate the supraclavicular and axillary lymph nodes. Lesion size was determined by measuring the largest diameter obtained from the first or second subtracted axial images and their sagittal and coronal reconstructions. The technical parameters for the proton MRS sequence were TR/TE, 7.6/3.53; flip angle, 20°; and spectral width, 890 Hz. The total time to acquire MRS per lesion, including scan time and shimming, usually ranged from 8 to 10 minutes. MRS raw data were postprocessed on a remote workstation using software supplied by the manufacturer (SW Numaris 4; Siemens Healthcare). For the water- and fat-suppressed spectra used to measure the peak of tCho-containing compounds, postprocessing was systematically performed to correct every signal with the zero-order phase of its residual water peak after Fourier transformation. Baseline corrections were performed to exclude the ranges for lipids (0–2.8 ppm), water (4.0–6.0 ppm), and tCho (3.18–3.28 ppm). Curve fitting with a Gaussian function over the tCho range (3.18–3.28 ppm) was finally applied to calculate the tCho peak integral. The tCho-containing compound peak integral (tChoi) was expressed in arbitrary units (AU). Baseline oscillations with multiple peaks between 3.18 and 3.28 ppm, each with an integral lower than 0.1 arbitrary units, were excluded from the study as measurement errors. The volume of interest (VOI) was a rectangular box positioned by a radiologist with 10 years of experience in breast MRI, based on axial, coronal, and sagittal subtraction images. The VOI positions were chosen to lie within each enhancing lesion as much as possible, with the goal of minimizing the inclusion of nonenhancing glandular tissue or surrounding fat. VOI size was defined as a single voxel of 1.0 × 1.0 × 1.0 cm, equivalent to 1 mL in volume, with the tChoi normalized by dividing by the VOI (AU/1 mL). Example spectra obtained by MRS, together with patient data, are shown in Fig 1.
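As an illustration of the curve-fitting step described above, the following is a minimal sketch, assuming a simple spectrum stored as NumPy arrays, of fitting a Gaussian over the 3.18–3.28 ppm window and integrating it to obtain a tChoi-like value. It is a generic Python stand-in, not the vendor's Numaris implementation used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(ppm, amplitude, center, sigma):
    """Gaussian line shape used to model the tCho resonance."""
    return amplitude * np.exp(-((ppm - center) ** 2) / (2.0 * sigma ** 2))

# Stand-in spectrum: chemical-shift axis (ppm) and signal intensities.
ppm = np.linspace(2.8, 4.0, 600)
signal = gaussian(ppm, 1.2, 3.23, 0.015) + np.random.default_rng(0).normal(0, 0.02, ppm.size)

# Restrict the fit to the tCho window (3.18-3.28 ppm), as in the text.
window = (ppm >= 3.18) & (ppm <= 3.28)
popt, _ = curve_fit(gaussian, ppm[window], signal[window], p0=[1.0, 3.23, 0.02])

amplitude, center, sigma = popt
# Analytic integral of a Gaussian: amplitude * sigma * sqrt(2*pi), in arbitrary units (AU).
tchoi = amplitude * sigma * np.sqrt(2.0 * np.pi)

# Peaks with integrals below 0.1 AU were treated as measurement error in the study.
print(f"fitted center = {center:.3f} ppm, tChoi = {tchoi:.3f} AU")
```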
Tumors were staged pathologically in accordance with the American Joint Committee on Cancer (AJCC) staging system, with all tumors reclassified based on the TNM 8th edition. Pathological and MRI tumor sizes were compared using a 2-cm threshold, with lymph node metastasis classified as positive (N1 or N2) or negative (N0). The levels of expression of the estrogen receptor (ER) and/or progesterone receptor (PR) were scored immunohistochemically (IHC) using the Allred score (AS), with a maximum score of 8; an AS ≥3 was considered positive. HER2/neu status was measured using IHC and silver-enhanced in situ hybridization (SISH). HER2/neu positivity was defined as an intensity of 3+ by IHC or 2+ by SISH. Other prognostic factors analyzed included age, Ki-67 index, lymphovascular invasion (LVI), histologic grade, nuclear grade, extensive intraductal component (EIC), and clinical risk score. Age ≤50 years was classified as high risk, and age >50 years as low risk. The cutoff for the Ki-67 index was set at 20%, which can distinguish between the Luminal A and Luminal B breast cancer subtypes. Tumor grade (histologic or nuclear) was determined using the Nottingham method (range, 1 to 3), with grade 3 considered high risk and grades 1 and 2 considered low risk. EIC was considered predictive of local recurrence after breast-conserving surgery and radiotherapy; ductal carcinoma in situ (DCIS) occupying >25% of the cancer area or extending beyond the edge of the cancer was classified as tumorous. Clinical risk score was determined using the modified Adjuvant! Online tool, as reported in the MINDACT trial. The cutoff for tChoi was set at 15, approximating the mean tChoi, measured as a continuous variable, in patients with HR+/HER2- breast cancer. This cutoff demonstrated significance in the subsequent Kaplan-Meier analysis and was therefore used in the subsequent multivariable survival analyses (Table 1). Normally distributed continuous variables, as determined by Kolmogorov-Smirnov tests, were presented as means and standard deviations, whereas categorical variables were presented as frequencies and percentages. Because the Kolmogorov-Smirnov test for normality indicated that tumor size (as determined by pathology and MRI), age, and tChoi were normally distributed (p<0.05 each), they were compared by independent-sample t-tests. Ten-year disease-free survival (DFS) and overall survival (OS) were analyzed by the Kaplan-Meier method, with subgroups compared by log-rank test. Multivariable Cox regression analysis was performed to determine factors significantly associated with patient survival, with Harrell's C-index used to evaluate tChoi as a prognostic factor in combination with other known prognostic factors. All statistical analyses were performed using SPSS for Windows (version 23.0), with all tests being two-tailed and p-values <0.05 considered statistically significant. This study involved human participants as well as human data or tissues, with a focus on patients undergoing surgical treatment for breast cancer. On the day prior to surgery, as part of the process of signing the surgical consent form, the medical institution requires all patients to also provide consent for the donation and research use of human-derived materials. There were no minor participants, and informed consent was obtained from all participants. The recruitment period was from March 1, 2011 to July 30, 2014, and the data were accessed for research purposes on August 1, 2023. The authors had access to anonymized individual participant data. The study was approved by the institutional review board of Gachon University Gil Medical Center (GMC). This was a single-center retrospective cohort study in which all breast cancer patients from March 2011 to July 2014 underwent initial MRI along with in vivo proton MRS for research and diagnostic purposes.
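A minimal sketch of the survival analyses described above, using the Python lifelines package as an illustrative stand-in for the SPSS workflow actually used; the synthetic cohort columns and group labels are assumptions for demonstration.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Synthetic stand-in for the cohort: months of follow-up, event indicator,
# dichotomized tChoi, Ki-67 index, and lymphovascular invasion.
rng = np.random.default_rng(0)
n = 261
df = pd.DataFrame({
    "months": rng.exponential(110, n).clip(1, 120),
    "event": rng.integers(0, 2, n),
    "tchoi_ge_15": rng.integers(0, 2, n),
    "ki67_ge_20": rng.integers(0, 2, n),
    "lvi": rng.integers(0, 2, n),
})

# Kaplan-Meier estimates by tChoi group, compared with a log-rank test.
kmf = KaplanMeierFitter()
for grp, sub in df.groupby("tchoi_ge_15"):
    kmf.fit(sub["months"], sub["event"], label=f"tChoi >= 15: {bool(grp)}")
    print(kmf.median_survival_time_)
low, high = df[df.tchoi_ge_15 == 0], df[df.tchoi_ge_15 == 1]
lr = logrank_test(low["months"], high["months"], low["event"], high["event"])
print("log-rank p =", lr.p_value)

# Multivariable Cox proportional hazards regression with a concordance index.
cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs
print("Harrell's C-index:", cph.concordance_index_)
```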
Of the 815 patients with MRS measurements, 148 were excluded based on the exclusion criteria, and an additional 107 were excluded due to criteria for rejecting spectra, namely the presence of artifacts in the region of metabolites containing tCho compounds (range, 2.8–4.0 ppm) or failure to suppress the water and fat signals. Of the 560 early breast cancer patients who underwent in vivo proton MRS, the study group (HR+/HER2-) consisted of 261 patients. Of these 261 HR+/HER2- patients, 54.8% and 45.3% had a Ki-67 index <20 and >20, respectively, and 62.1% were negative for lymphovascular invasion. Of these patients, 54.8% were histologic grade II and 67.4% were nuclear grade II, with 71.3% being negative for EIC. Assessment of clinical risk showed that 47.9% of these patients were low risk and 52.1% high risk (Table 2). The mean tChoi in the study group was 15.47 (range, 0.13–55.7). Based on the mean tChoi in the HR+/HER2- study group, the cutoff for tChoi was set at 15 for the subsequent survival analyses (Table 1). Although mean tChoi was not significantly associated with other prognostic factors in these patients, a higher mean tChoi tended to be associated with higher histologic grade (14.83 vs 17.93, p = 0.115), nuclear grade (14.82 vs 18.02, p = 0.091), and clinical risk (14.43 vs 16.42, p = 0.207) in the high-risk group (Table 3). Kaplan-Meier analysis of patients with HR+/HER2- breast cancer showed that 10-year DFS differed significantly between patients with tChoi <15 and ≥15, with mean DFS times of 119.11 months and 115.09 months, respectively (log-rank p = 0.017). The p-value for early recurrence (0–5 years) was 0.323, whereas the p-value for late recurrence (6–10 years) was 0.020. Ten-year OS rates did not differ significantly between HR+/HER2- breast cancer patients with tChoi <15 and ≥15 (Tables 4 and 5). Multivariable Cox proportional hazards regression analysis showed that Ki-67 index (HR 3.28, 95% CI 1.04–10.32, p = 0.042), lymphovascular invasion (HR 2.97, 95% CI 1.03–8.6, p = 0.044), and tChoi (HR 2.69, 95% CI 1.02–7.09, p = 0.046) were significantly predictive of 10-year DFS in patients with HR+/HER2- breast cancer. Histologic grade tended to be associated with early (0–5 years) recurrence (HR 6.07, p = 0.07), whereas age (HR 0.28, p = 0.07) and tChoi (HR 4.36, p = 0.066) tended to be associated with late (5–10 years) recurrence (Table 6). The clinical significance of tChoi was additionally analyzed using Harrell's C-index to compare areas under receiver operating characteristic (ROC) curves (AUCs). When analyzed independently, tChoi yielded an AUC of 0.506 (95% CI 0.439–0.573). The predictive accuracy of tChoi improved when assessed in conjunction with lymphovascular invasion (AUC 0.553) and Ki-67 index (AUC 0.606), with the three factors together having an AUC of 0.622 (95% CI 0.555–0.689). The difference in AUC between tChoi alone and the combination of the three variables was statistically significant (p = 0.014), indicating that incorporating lymphovascular invasion and Ki-67 index with tChoi improved predictive ability (Table 7). Although mean tChoi was not significantly associated with other prognostic factors in the present study, mean tChoi tended to be higher in more aggressive tumors (Table 3). Similarly, higher phosphocholine (PCho) levels have been reported to be associated with more aggressive histologic grade 3 tumors.
Increased levels of PCho in breast cancer cells have been attributed to the oncogenic activity of choline kinase (CK), the enzyme that converts choline into PCho, with CK activity also reported to be strongly associated with high histologic grade. The present study also found that tChoi levels were significantly associated with 10-year DFS, particularly late recurrence, in patients with HR+/HER2- tumors. Moreover, the predictive power of tChoi was enhanced when combined with other factors, including lymphovascular invasion and Ki-67 index. Previous studies have also reported an association between CHPT1 and ERα. For example, the CHPT1 gene, which is directly regulated by ERα, was required for estrogen-induced effects on choline metabolism, including increased phosphatidylcholine (PtdCho) synthesis. Additionally, immunohistochemical (IHC) analysis showed that the expression of CHPT1 is higher in breast cancer cells than in normal breast tissue, and higher in ER-positive than in ER-negative breast cancer. The present study showed that survival, in particular late recurrence, was associated with tCho levels. CHPT1 also plays a role in the invasion of tamoxifen-resistant breast cancer cells, with tamoxifen-resistant LCC2 cells exhibiting greater invasiveness than tamoxifen-sensitive MCF7 cells under CHPT1 expression. Moreover, CHPT1 depletion resulted in stronger suppression of invasion and metastasis in LCC2 cells than in MCF7 cells. ER-positive breast cancer has often been associated with late recurrence, and extended hormone therapy is administered to many patients with ER-positive breast cancer to reduce the risk of recurrence. Adherence to hormone therapy, however, often decreases after 5 years, with many patients not receiving extended hormone therapy. This environment of minimal residual disease (MRD) may function similarly to that of tamoxifen-resistant breast cancer cells, potentially increasing CHPT1 activity. The present study confirmed that levels of tCho are higher in more aggressive breast cancer phenotypes, including tumors with HER-2/neu overexpression and TNBC, with similar findings observed in HR+/HER2- luminal-type breast cancer. Although studies investigating choline metabolism in breast cancer and agents targeting these pathways are ongoing, comprehensive biological evidence for the underlying mechanisms is still lacking, indicating the need for additional research. This study had several limitations. First, its design was retrospective, although patients were enrolled consecutively. The follow-up observation period was set by our medical center at 10 years; however, due to the long study duration, many patients were lost to follow-up. Furthermore, treatment approaches may have differed due to changes in guidelines and variations in patient compliance. Large prospective studies are therefore required to confirm these results. Second, the tChoi of the total mass was not quantified. Several methods are available to quantify tCho, including absolute tChoi, SNR, and internal/external reference approaches [24–26]. The latter method requires additional time to collect data from an internal or external reference, to correct partial volume effects, and to carefully calibrate differences in relaxation times between the tissue tCho-containing compound signal and the references.
The present study focused on the qualitative analysis of tCho after positioning a single-sized voxel within each tumor, and it aimed to determine an appropriate cutoff value for analyzing differences in survival. However, absolute quantification of tCho concentrations may be more desirable for assessing cancer lesions. Third, the present study included tumors <1 cm in diameter. A single VOI measuring 1.0 × 1.0 × 1.0 cm was positioned regardless of tumor size, and tCho was measured. Measurements may therefore be lower than the actual values for tumors <1 cm in size during postprocessing, especially during water-fat suppression, due to the increased proportion of water and fat around the tumor. In some patients, however, an ideal choline peak was detected even in smaller tumors; because these tumors constituted <10% of the total, they were included in the study results, which may have affected the overall outcomes. Fourth, this study also included patients with multifocal/multicentric breast cancer. In these patients, the tCho of the largest main lesion was measured, although choline specificity may vary among individual tumors; this variation may have affected the study results. MRS parameters may play a role as biomarkers for predicting late recurrence of HR+/HER2- early breast cancer. AUC analysis showed that tChoi had greater predictive ability when combined with previously identified prognostic factors than when considered alone. Although further prospective studies in larger patient cohorts are necessary, tChoi measured by in vivo MRS can serve as a valuable, non-invasive tool for predicting prognosis when combined with other established prognostic factors.
|
Review
|
biomedical
|
en
| 0.999995 |
PMC11695011
|
Schistosomiasis is a parasitic disease caused by intravascular parasites of the genus Schistosoma. Schistosomiasis is a neglected tropical disease (NTD) that affects over 240 million people globally, with over 700 million people at risk of infection. The disease is endemic in 78 countries worldwide and seriously impacts developing countries, especially in sub-Saharan Africa. It is estimated that 3.3 million Disability-Adjusted Life Years (DALYs) were lost in 2010 due to urogenital or intestinal schistosomiasis. The majority of intestinal schistosomiasis cases are caused by Schistosoma mansoni and its intermediate freshwater snail host, Biomphalaria. East Africa is a well-known regional hotspot for schistosomiasis transmission, with the disease being as prevalent as 2–18% in Kenya, 22–86% in Tanzania and 7–88% in Uganda. The high prevalence of S. mansoni infection in East Africa is due to the large number of freshwater environments that Biomphalaria snails can inhabit, with the largest source of freshwater being Lake Victoria. These favourable habitats, combined with poor water hygiene and sanitation standards, make the shoreline of Lake Victoria a hotspot for intestinal schistosomiasis. Biomphalaria snails are notoriously invasive and capable of rapidly expanding their territory due to their high fecundity and ability to self-fertilise. This rapid expansion can lead to outbreaks of schistosomiasis, as self-fertilisation and inbreeding lead to genetically homogeneous populations at the expense of schistosome resistance (as seen predominantly in B. pfeifferi). However, the distribution of S. mansoni is dependent on the ecological requirements of its intermediate host, with the availability of suitable freshwater habitats limiting the potential geographical reach of the parasite. Two Biomphalaria species, B. choanomphala and B. sudanica, have been reported to inhabit Lake Victoria and its surrounding shoreline [13–17]. Previous studies described these species as having either a "lacustrine" (B. choanomphala) or "non-lacustrine" (B. sudanica) morphology, because B. choanomphala is commonly found in the deeper parts of the lake and B. sudanica in the swamps adjacent to the shoreline [7, 18–22]. However, using molecular methods, both Standley et al. and Zhang et al. demonstrated that the B. sudanica-like snails in Lake Victoria were genetically more similar to B. choanomphala than to other B. sudanica populations found in Africa. This suggested that the B. sudanica-like snails from Lake Victoria were instead an ecophenotype (ecological phenotype) of B. choanomphala, with the morphological differences between the two snails being the result of individual populations adapting their shell morphology to fit their environment. This was expanded upon further by Andrus et al., who found the B. sudanica-like snails in Lake Victoria to be both genetically and morphologically separate from the B. sudanica snails found at Lake Albert. In this study, we follow Standley et al., Zhang et al. and Andrus et al. and refer to these B. sudanica-like snails as B. choanomphala. Webbe and Prentice et al. were the first to document that B. choanomphala snails at Lake Victoria were capable of transmitting Schistosoma mansoni. Subsequent parasitological surveys have consistently reported B. choanomphala snails (including B. sudanica-like snails from Lake Victoria) as having a lower prevalence of S.
Schistosoma mansoni infection can be determined using both traditional cercarial shedding methods and molecular infection detection methods . However, molecular detection methods are underutilised in detecting schistosome infection in intermediate snail hosts collected from the field . Biomphalaria populations are known to be sensitive to a variety of abiotic factors in their habitat, which limits the environments they can inhabit . Likewise, biotic factors such as the genetic diversity of a snail-host population can affect host susceptibility and parasite infectivity . The ‘Red Queen’ hypothesis proposes that there is a continuous cycle of adaptation and counter-adaptation between populations of species that frequently interact with one another, such as parasites and hosts . These reciprocal adaptive changes from host-parasite interactions can explain the genetic variability observed in host susceptibility and parasite infectivity . Although the exact genetic mechanisms have not been fully identified in snail-schistosome systems, resistance and susceptibility in snails and infectivity of schistosomes have been shown to be heritable and maintained through cost-benefit trade-offs . In this study, using molecular xenomonitoring, we investigate the prevalence of S. mansoni infection in B. choanomphala snails collected during a set of five extensive expeditions sampling the Kenyan, Tanzanian and Ugandan shorelines of Lake Victoria. We examined the habitat type, water depth, turbulence, temperature, conductivity, total dissolved solids, salinity and pH level of the lake, as well as the abundance and genetic diversity of B. choanomphala snail populations. This was done to determine the effect abiotic and biotic factors have on the infection prevalence of S. mansoni in B. choanomphala populations across Lake Victoria. This study is the largest and only lake-wide collection of B. choanomphala snails from Lake Victoria that investigates the factors influencing the prevalence of S. mansoni infection. Sampling was undertaken from 2008 to 2011 at 170 sites from the Kenyan ( n = 35), Tanzanian ( n = 82) and Ugandan ( n = 53) shorelines of Lake Victoria , as part of malacological surveillance of the EU Framework 6 project entitled EU-CONTRAST . Sampling sites were chosen opportunistically based upon accessibility, favouring common freshwater snail habitats such as marshes and stretches of the lake edge close to human settlements. Sampling was undertaken during the wet season, with samples collected during five field expeditions that took place over a two-year period . Sites were within 10 meters of the lakeshore and were approximately 10–20 km apart to ensure an even spread of sites along the Lake Victoria shoreline across Kenya, Tanzania and Uganda. The Lake Victoria shoreline is approximately 3,450 km (Kenya: 16%, Tanzania: 33%, Uganda: 51%) , and in total, we surveyed approximately 3,175 km of the shoreline. However, ~275 km of the western Tanzanian shoreline could not be surveyed due to inaccessibility. At each site, the location was georeferenced using a handheld GPS device (Garmin GPS V, Garmin Ltd., Kansas City, USA) and the date, time and weather conditions were documented.
Next, qualitative measurements of the site (habitat type, water depth, water turbulence, B. choanomphala abundance, and which shell morphotypes were present) were recorded in situ . The habitat type of a site was categorised as being either marshland (type-a), lake edge (type-b) or other (type-c) . If the site featured a combination of several habitat types, no more than three sub-categories were assigned. Water depth was assessed as being either shallow (<10 cm), moderately-shallow (10–30 cm), moderate (30–50 cm), moderately-deep (50–70 cm) or deep (>70 cm). Likewise, water turbulence was classified as being low (still, no movement), medium (some movement but limited wave action) or high (high disturbance, white caps were present). Biomphalaria choanomphala snails were sampled semi-quantitatively at each of the sites, with two collectors looking for snails for 15 minutes, using either a short- or long-handled metal mesh scoop. Biomphalaria choanomphala abundance was measured as either absent (zero snails), low (<10), medium (10–30) or high (>30). When found, B. choanomphala snails were collected and placed into jars filled with lake water for later processing. Biomphalaria choanomphala snails were identified using conchological identification methods as described by Mandahl-Barth and Brown . Collected snails had their shell morphologies recorded to determine whether they exhibited non-lacustrine (morphotype-A) or lacustrine (morphotype-B) shell morphologies as described by Standley et al. and Andrus et al. . After processing, all snail samples were stored immediately in 100% ethanol for later DNA extraction. Biomphalaria collections were undertaken by Standley et al. (field expeditions 1–4) and Rowel et al. (field expedition 5); further information on these collections can be found in Standley et al. and Rowel et al. . Alongside the qualitative measurements, certain environmental factors such as water temperature (°C), conductivity (μS), total dissolved solids (g/L), salinity (g/L) and pH were measured at each site using a HI9813 handheld portable water meter (Hanna Instruments, Inc., Woonsocket, USA). These measurements were performed on site by taking the mean values from two separate measurements using two different water meters. Additionally, 15 ml of water was taken from each site and frozen for later detailed compositional analysis at the Natural History Museum, London. The concentrations of anions (fluoride, chloride, nitrate, phosphate and sulphate) in the water samples were determined using Reagent-Free Ion Chromatography (RFIC-EG), while the concentrations of cations (calcium, potassium, magnesium and sodium) were determined by Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES). The equipment used to measure anion and cation concentrations was an ICS-3000 system (Dionex Inc., Sunnyvale, USA) and a Varian Vista Pro axially viewed ICP-AES with a CCD detector and Varian SPS-5 autosampler (Varian, Inc., Palo Alto, USA), respectively. Each water sample was tested twice and the mean value for each measurement was used. Further information about the abiotic data collection protocols can be found in Standley et al. . Following collection, B. choanomphala snails were stored immediately in 100% ethanol. DNA was extracted using a modified CTAB (Cetyltrimethylammonium Bromide) extraction method . Where numbers permitted, 12 B. choanomphala snails were extracted per site, while sites with fewer than 12 individuals had all of their snails extracted (for further information, please see S1 Table ).
After extraction, samples were resuspended in 100–200 µl of TE (10 mM Tris-HCl, pH 8.0, 0.1 mM EDTA) buffer and DNA yields were measured using a NanoPhotometer N50 (Implen, München, Germany). All DNA extracts were stored at −20 °C and were archived at the Liverpool School of Tropical Medicine until use. All of the extracted B. choanomphala samples were tested for S. mansoni infection between 2021 and 2022, using two different infection detection primer sets as described in Andrus et al. . The first primer set used was Sm F/R, designed by Sandoval et al. (SM F/R -F: 5’-GAG ATC AAG TGT GAC AGT TTT GC-3’ and SM F/R -R: 5’-ACA GTG CGC GCG TCG TAA GC-3’). If the sample was positive, it was then tested using the ND5 primer set designed by Lu et al. (ND5-F: 5’-ATT AGA GGC AAT GCG TGC TC-3’ and ND5-R: 5’-ATT GAA CCA ACC CCA AAT CA-3’) to determine whether the infection present in the snail was caused by S. mansoni or its closely-related sister species, S. rodhaini . The PCR reaction mixture and cycling conditions for the Sm F/R and the ND5 primers were followed precisely as described by Sandoval et al. and Lu et al. , respectively. The applicability of these methods for S. mansoni detection in Biomphalaria snails is discussed in Joof et al., 2020 . Alongside the B. choanomphala samples, two negative controls (water and uninfected B. glabrata DNA) and two positive controls (pure S. mansoni DNA and infected B. glabrata DNA) were also included. These controls were provided by Professor Mike Doenhoff, School of Biology, University of Nottingham. Additionally, all DNA extracts were tested using the LSU-1iii/LSU-3iii primers (LSU-1iii: 5’-TGC GAG AAT TAA TGT GAA TTG C-3’ and LSU-3iii: 5’-ACG GTA CTT GTC CGC TAT CG-3’) to ensure that the DNA was not degraded and was still amplifiable. The PCR cycling conditions for these primers were an initial denaturation at 96 °C for 2 min, followed by 35 cycles of 94 °C for 30 sec, 45 °C for 1 min and 72 °C for 2 min, and a final extension step at 72 °C for 5 min. All PCR reactions were performed using Promega GoTaq G2 Master Mix. All PCR products were run on a 2% agarose gel containing ethidium bromide and amplicons were observed under UV light. Schistosoma mansoni infection was confirmed based on whether a diagnostic band was present for both the Sm F/R (~350 bp) and ND5 (~302 bp) primer sets. Genomic DNA samples from 27 sites across Lake Victoria were selected to measure the genetic diversity and population structure of the B. choanomphala snails found across the lake. Selected sites were evenly distributed along the lakeshore and had a minimum of ten individuals. Population genetic analysis was done using 16S and COI genotyping, which used the 16Sarm/16Sbrm primer set (16Sarm: 5’-CTT CTC GAC TGT TTA TCA AAA ACA-3’ and 16Sbrm: 5’-GCC GGT CTG AAC TCA GAT CAT-3’) and the universal COI primers designed by Folmer et al. . All PCR reactions were performed using a 25 μl reaction volume containing 24 μl of PCR master mix (1 U TAQ, 0.2 μM primers, 200 μM dNTP, 1.5 mM MgCl 2 ) and 1 µl of DNA template. The PCR cycling conditions used for both the 16S and COI primer sets were identical, with an initial denaturation at 96 °C for 1 min, followed by 34 cycles of 94 °C for 1 min, 50 °C for 1 min and 72 °C for 1 min, and a final extension at 72 °C for 10 min. PCR products were run on a 2% agarose gel containing ethidium bromide and observed under UV light, with PCR products purified and sequenced by either the Natural History Museum, London or using Macrogen’s EZ Seq service.
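The two-step decision rule described above (SmF/R screen, then ND5 confirmation) can be sketched as follows. The band sizes follow the text, while the numeric tolerance and the idea of recording observed band sizes as values are assumptions made purely for illustration; in practice bands are judged on the gel image.

```python
# Sketch of the two-step infection call: a sample counts as S. mansoni-
# positive only if the SmF/R screen (~350 bp) is positive AND the ND5
# confirmation (~302 bp) rules out S. rodhaini. Tolerance is assumed.

EXPECTED_SMFR_BP = 350   # diagnostic band for the SmF/R primer set
EXPECTED_ND5_BP = 302    # diagnostic band for the ND5 primer set
TOLERANCE_BP = 20        # assumed tolerance for judging band size

def band_present(observed_bp, expected_bp, tol=TOLERANCE_BP):
    """True if any observed band falls within `tol` bp of the expected size."""
    return any(abs(b - expected_bp) <= tol for b in observed_bp)

def classify_sample(smfr_bands, nd5_bands, lsu_ok):
    """Return an infection call for one snail DNA extract."""
    if not lsu_ok:                       # LSU-1iii/LSU-3iii amplification failed
        return "degraded DNA - retest"
    if not band_present(smfr_bands, EXPECTED_SMFR_BP):
        return "negative"
    if band_present(nd5_bands, EXPECTED_ND5_BP):
        return "S. mansoni"
    return "Schistosoma sp. (possibly S. rodhaini)"

# Example: one extract with both diagnostic bands and amplifiable DNA.
print(classify_sample(smfr_bands=[348], nd5_bands=[303], lsu_ok=True))
```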
Biomphalaria choanomphala sequences were aligned using the MUSCLE (Multiple Sequence Comparison by Log-Expectation) algorithm in the program Seaview v5 , with misaligned sections of the 16S and the COI fixed by hand and sites for tree building selected using the Gblocks program . Phylogenetic trees were constructed using the Maximum Likelihood method, using a General Time Reversible model incorporating gamma correction (GTR+Γ) in the program MEGA v11 , with bootstrap analysis undertaken using 1000 replicates. DNASP v6 was used to determine haplotype (gene) diversity scores (Hd), nucleotide diversity (π) and to examine population structure among populations between countries using Wright’s F-statistics (F st ). Pairwise distances were calculated with MEGA 11 using the Maximum Composite Likelihood method . Due to the non-normal distribution of our dataset, several non-parametric tests were performed using SPSS v26 (IBM, Armonk, USA) . A two-tailed bivariate Spearman’s rank correlation analysis was performed to determine the relationships between B. choanomphala abundance, snail host haplotype diversity, prevalence of S. mansoni infection, and the abiotic factors of Lake Victoria. Similarly, a Kruskal–Wallis test (followed by a post-hoc Dunn’s test), a Mann–Whitney U test, and a Pearson’s chi-squared (X 2 ) test (with Yates’ correction) were performed to compare the abundance of B. choanomphala , haplotype diversity, prevalence of infection and abiotic factors between the Kenyan, Tanzanian and Ugandan shorelines of Lake Victoria. All of the 16S and COI sequences of B. choanomphala used in this study are available on GenBank. The B. choanomphala 16S gene sequences are available in accession numbers HM768950-HM769131 and OQ924869-OQ924928. Likewise, the B. choanomphala COI gene sequences are available in accession numbers HM769132-HM769258 and OQ849937-OQ849996. For further information, see S2 Table . Biomphalaria choanomphala snails were present at 107 of the 170 sites surveyed at Lake Victoria . Of these 107 sites, 44 had a low abundance (<10) of B. choanomphala , 25 had a medium abundance (10–30), and 38 had a high abundance (>30; Table 1 ). The Ugandan sites had the highest abundance of B. choanomphala snails, followed by the Tanzanian and Kenyan sites ( Table 1 ). When categorised by morphotype, we found 52 sites had morphotype-A and 44 sites had morphotype-B ( Table 1 ). Only 11 sites had both morphotypes present, with the majority of these sites being lake-marsh ecosystems on the Ugandan and the Tanzanian shorelines ( Table 1 ). We found morphotype-A snails were more prevalent than morphotype-B snails at the Kenyan and Tanzanian sites, while morphotype-B snails were more prevalent at the Ugandan sites ( Table 1 ). When categorised by habitat type, B. choanomphala snails were present at 40 marshland sites (a), 50 lake edge sites (b) and the remaining 17 sites were a mixture of different ecosystems (c) such as canals, rice paddies, ponds bordering the lake, and other hybrid environments . Of the 107 sites with B. choanomphala present, we found S. mansoni infection at 35.5% of sites (38 of 107 sites) . All of our Sm F/R positive Biomphalaria samples were confirmed to be infected with S. mansoni as every sample gave a diagnostic band length of ~302 bp when tested with the ND5 primer set. The Tanzanian shoreline had the highest number of infected sites, with 40% of the sites with B. choanomphala snails present (18/45) infected with S. mansoni . 
This was followed closely by the Ugandan shoreline, with 39% (16/41) of sites with B. choanomphala infected, and the Kenyan shoreline, with only 19% (4/21) of sites with B. choanomphala infected . A Kruskal–Wallis test found there was no significant difference in the number of infected sites across the Kenyan, Tanzanian and Ugandan shorelines, X²(2, n = 107) = 3.07, p = .215. The sites with the highest number of infected B. choanomphala snails were T027b (7 of 10 snails) and T033a (4 of 10 snails), with the remaining sites having a maximum of two (or fewer) infected snails ( S1 Table ). Of the 40 marshland sites with B. choanomphala snails (a), 14 had infection present (35%), while 16 of the 50 lake edge sites with B. choanomphala snails (b) had infection present (32%) and the mixed sites (c) had infection present at 8 of the 17 sites (47%) with B. choanomphala snails. A Kruskal–Wallis test found there was no significant difference in the number of infected sites across the three ecosystems (X²(2, n = 635) = 1.25, p = .535). When partitioned by number of infected snails, the overall prevalence of S. mansoni infection for Lake Victoria was 9.3%, with 59 of the 635 B. choanomphala snails testing positive for S. mansoni infection. The Tanzanian shoreline of Lake Victoria had the highest mean prevalence of infection, with 13.1% (31/237) of snails infected, followed by the Ugandan shoreline with 8.2% (22/269) and the Kenyan shoreline with 4.7% (6/129). Likewise, a Kruskal–Wallis test indicated that there was no significant difference in the number of infected snails found at the Kenyan, Tanzanian and Ugandan shorelines, X²(2, n = 635) = 4.05, p = .132. Lastly, when categorised by morphotype, the morphotype-A form of B. choanomphala had an infection prevalence of 7.8% (27/347), while the morphotype-B form had an infection prevalence of 10.8% (32/288). A chi-square test of independence was performed to examine the relation between shell morphotype and infection. There was no significant association between the two B. choanomphala morphotypes and the prevalence of S. mansoni infection (X²(1, n = 635) = 1.69, p = .193).
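As an illustration of the tests used above, the following sketch reproduces the morphotype-by-infection contingency test from the published counts and shows the form of the Kruskal–Wallis comparison; the per-site count lists are placeholders, not the study's data.

```python
# Sketch reproducing the morphotype-by-infection test reported above
# (27/347 infected morphotype-A vs 32/288 infected morphotype-B), using a
# chi-squared test with Yates' continuity correction, as in the study.
from scipy.stats import chi2_contingency, kruskal

table = [[27, 347 - 27],   # morphotype-A: infected, uninfected
         [32, 288 - 32]]   # morphotype-B: infected, uninfected
chi2, p, dof, _ = chi2_contingency(table, correction=True)
print(f"X^2({dof}, n = 635) = {chi2:.2f}, p = {p:.3f}")

# Kruskal-Wallis comparison of per-site infected-snail counts across the
# three shorelines; the count lists here are illustrative placeholders.
kenya, tanzania, uganda = [0, 1, 0, 2], [1, 0, 7, 4], [2, 0, 1, 1]
h, p_kw = kruskal(kenya, tanzania, uganda)
print(f"H(2) = {h:.2f}, p = {p_kw:.3f}")
```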
Of the 27 sites selected for host snail population genetic analysis, 166 unique 16S haplotypes ( n = 306) and 114 unique COI haplotypes ( n = 313) were found . The mean haplotype diversity (Hd) scores of the 27 sites were 0.845 (± 0.16) for 16S, and 0.787 (± 0.17) for COI. The mean nucleotide diversity (π) value for all Lake Victoria sites was 0.015 (± 0.009) for 16S, and 0.008 (± 0.005) for COI. The overall range of pairwise distances of the 27 sites genotyped at Lake Victoria was 0.0–3.5% for the 16S and 0.0–4% for the COI. When categorised by morphotype, we found 12 of the 16S (H9, H17, H48, H51, H60, H65, H69, H79, H110, H111, H160, H183) and 12 of the COI (H1, H5, H7, H8, H23, H27, H32, H42, H46, H48, H61, H64) haplotypes commonly found throughout Lake Victoria exhibited both morphotype-A and morphotype-B morphologies ( S2 Table ). When partitioned by shoreline, we found the Ugandan shoreline had the highest number of haplotypes, with 99 16S haplotypes ( n = 151) and 60 COI haplotypes ( n = 156). Of the 12 Ugandan sites sampled, the mean Hd score was 0.883 (±0.19) for 16S and 0.747 (±0.19) for COI. The mean nucleotide diversity value for Ugandan sites was 0.018 (±0.01) for 16S, and 0.007 (±0.005) for COI. Pairwise distances for the Ugandan sites were 0.0–3.1% for 16S and 0.0–3.9% for COI. The Tanzanian shoreline had the second highest number of haplotypes, with 50 16S haplotypes ( n = 93) and 38 COI haplotypes ( n = 93). Of the nine Tanzanian sites sampled, the 16S and COI had a mean Hd score of 0.863 (±0.09) and 0.802 (±0.16), respectively. The mean nucleotide diversity value for Tanzanian sites was 0.016 (±0.008) for 16S, and 0.008 (±0.005) for COI. Pairwise distances for the Tanzanian sites were 0.0–3.4% for 16S and 0.0–3.8% for COI. Lastly, the Kenyan shoreline had the lowest number of haplotypes, with 24 16S haplotypes ( n = 62) and 24 COI haplotypes ( n = 64). Of the six Kenyan sites sampled, the mean Hd score was 0.748 (± 0.16) for 16S and 0.844 (± 0.07) for COI. The mean nucleotide diversity value for Kenyan sites was 0.007 (± 0.003) for 16S, and 0.009 (± 0.006) for COI. Pairwise distances for the Kenyan sites were 0.0–1.9% for 16S and 0.0–1.2% for COI. When comparing the amount of haplotype diversity (Hd) at the 13 sites found with infection against the 14 sites found without infection, we found sites with infection had a higher mean Hd score than sites without infection ( S3 Table ). The mean Hd score of the 13 B. choanomphala collection sites with infection was 0.881 (± 0.1) for 16S and 0.841 (± 0.11) for COI, while the mean Hd score of the 14 collection sites with no infection was 0.814 (± 0.2) for 16S and 0.737 (± 0.19) for COI ( Table 2 ). However, a Mann–Whitney U test found this difference in mean Hd score was not significant for either the 16S (U = 93.5, p = 0.903) or the COI (U = 118.5, p = 0.182). When partitioned by country, we found that not all countries shared this trend of sites with infection having a higher mean Hd score than sites without infection. For example, the mean Hd score of the 16S was higher for sites found without infection (0.933) than for sites found with infection (0.828) on the Tanzanian shoreline ( Table 2 ). Likewise, the mean Hd score of the COI was higher for sites found without infection (0.879) than for sites found with infection (0.774) on the Kenyan shoreline ( Table 2 ). A Spearman’s rank correlation test found that there was a positive correlation between haplotype diversity scores and the prevalence of S. mansoni infection for both the 16S ( R s = 0.003) and COI ( R s = 0.229). However, the correlations between haplotype diversity scores and the prevalence of S. mansoni infection were not statistically significant for either the 16S ( p = 0.989) or the COI ( p = 0.251). When measuring the population structure (F st ) between B. choanomphala populations using the 16S, we found the population structure was highest among the Kenyan and Ugandan populations (0.305), followed by the Tanzanian and Ugandan populations (0.242), while the Kenyan and Tanzanian populations had the lowest amount of structure (0.098) ( Table 3 ). Likewise for the COI, we found the population structure was highest among the Kenyan and Ugandan populations (0.195). However, the second highest F st value was between the Kenyan and Tanzanian populations (0.118), followed by the Tanzanian and Ugandan populations (0.067) ( Table 3 ). Next, we measured the population structure between B. choanomphala populations found with S. mansoni infection between countries. We found the F st values were highest between the Kenyan and Ugandan sites for the 16S (0.490), followed by the Tanzanian and Ugandan sites (0.369), and lastly, the Kenyan and Tanzanian sites (0.192) had the lowest values ( Table 3 ). However, when using the COI, we found the Kenyan and Tanzanian sites (0.210) had the highest values, followed by the Kenyan and Ugandan sites (0.123), and lastly, the Tanzanian and Ugandan sites (0.104) had the lowest value ( Table 3 ). Likewise, when we measured the population structure between B. choanomphala populations found without S. mansoni infection between countries, we found the F st values were the highest between the Kenyan and Ugandan sites (16S: 0.474; COI: 0.452), followed by the Tanzanian and Ugandan sites (16S: 0.340; COI: 0.287) and the Kenyan and Tanzanian sites (16S: 0.137; COI: 0.152) ( Table 3 ).
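As an illustration of the diversity indices reported above (computed in the study with DnaSP), the following is a minimal sketch of Nei's haplotype diversity and of nucleotide diversity for one population of aligned sequences; the toy sequences are placeholders, not the study's 16S/COI data.

```python
# Sketch: haplotype diversity Hd = n/(n-1) * (1 - sum(p_i^2)) and
# nucleotide diversity (pi) as the mean pairwise proportion of
# differing sites across all sequence pairs in a population.
from collections import Counter
from itertools import combinations

def haplotype_diversity(seqs):
    n = len(seqs)
    counts = Counter(seqs)                      # identical strings = one haplotype
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_p2)

def nucleotide_diversity(seqs):
    L = len(seqs[0])                            # alignment length
    diffs = [sum(a != b for a, b in zip(s1, s2)) / L
             for s1, s2 in combinations(seqs, 2)]
    return sum(diffs) / len(diffs)

pop = ["ACGTACGT", "ACGTACGA", "ACGTACGT", "ACCTACGA"]  # 4 aligned sequences
print(f"Hd = {haplotype_diversity(pop):.3f}")
print(f"pi = {nucleotide_diversity(pop):.4f}")
```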
Lastly, we mapped the distribution of private and shared 16S and COI haplotypes of B. choanomphala throughout Lake Victoria . The mean percentage of private haplotypes found within each B. choanomphala population was 46.7% for 16S haplotypes and 29.5% for COI haplotypes, while the mean percentage of shared haplotypes was higher for both the 16S (53.3%) and COI (70.5%). We found the Kenyan sites had the highest mean percentage of shared haplotypes, with 71% for the 16S and 71.9% for the COI. Next were the Tanzanian sites, which had the second highest mean percentage of shared haplotypes, with 58% for the 16S and 70.5% for the COI. The Ugandan sites had the lowest mean percentage of shared haplotypes, with 43% for the 16S and 69.9% for the COI. When comparing the rate of shared haplotypes between B. choanomphala populations found with and without S. mansoni infection, we found sites with infection had more shared haplotypes for both the 16S (58.6%) and COI (78.9%) than sites found without infection (16S: 47.2%; COI: 61.1%) . However, on average the uninfected Kenyan sites had more shared 16S haplotypes (75.6%) than the infected sites (61.9%), although the COI showed the opposite pattern (70% for uninfected vs 72.7% for infected sites). Conversely, on average the infected Tanzanian sites had more shared haplotypes for both the 16S (61.9%) and COI (78.2%) than uninfected sites (16S: 50%; COI: 54.8%). Likewise, on average the infected Ugandan sites had more shared haplotypes for both the 16S (55%) and COI (81.7%) than uninfected sites (16S: 29.6%; COI: 56.8%). As mentioned previously, there was no significant difference in the prevalence of infection between the three shorelines of Lake Victoria. However, when testing the abundance of B. choanomphala snails between the Kenyan, Tanzanian and Ugandan shorelines, a Kruskal–Wallis test found there was a significant difference between the three shorelines, X²(2, n = 170) = 11.41, p = .003. Similarly, there was a significant difference in the abundance of both morphotype-A (X²(2, n = 170) = 6.62, p = .036) and morphotype-B (X²(2, n = 170) = 59.13, p < .001) B. choanomphala snails across the three shorelines. When testing the abiotic factors between the Kenyan, Tanzanian and Ugandan shorelines, a Kruskal–Wallis test found that the water temperature (X²(2, n = 165) = 18.07, p < .001), conductivity (X²(2, n = 166) = 42.19, p < .001), total dissolved solids (X²(2, n = 164) = 42.16, p < .001), salinity (X²(2, n = 154) = 26.87, p < .001), fluoride (X²(2, n = 141) = 57.99, p < .001), phosphate (X²(2, n = 141) = 8.27, p = .016), sulphate (X²(2, n = 141) = 39.12, p < .001), sodium (X²(2, n = 140) = 23.58, p < .001) and potassium (X²(2, n = 139) = 6.75, p = .034) levels were significantly different at each of the three shorelines.
Conversely, the remaining abiotic factors, such as water pH ( p = .354), chloride ( p = .486), nitrate ( p = .067), magnesium ( p = .879) and calcium ( p = .461) levels, showed no significant difference between the three shorelines of Lake Victoria. The output of the Dunn's post-hoc test (using a Bonferroni correction) for each Kruskal–Wallis test performed can be found in Table 4 . When comparing the abundance of B. choanomphala snails between the three shorelines, we found the Ugandan shoreline had a significantly higher abundance of B. choanomphala snails when compared to the Kenyan and Tanzanian sites ( Table 4 ). When comparing the abundance of each morphotype, we found the abundance of morphotype-A snails was significantly higher only at the Kenyan shoreline when compared to the Ugandan shoreline ( Table 4 ). However, the abundance of morphotype-B snails was significantly higher at the Ugandan shoreline when compared to both the Kenyan and Tanzanian shorelines ( Table 4 ). When comparing the abiotic factors collected at each shoreline, we found the Kenyan and Tanzanian shorelines were the most similar. The only significant differences between the Kenyan and Tanzanian shorelines were that the Kenyan sites had higher median water temperature, fluoride, sulphate and sodium levels than the Tanzanian sites ( Table 4 and S4 Table ). When comparing the Kenyan and Ugandan shorelines, the median water conductivity, TDS, salinity, fluoride, phosphate, sulphate and sodium levels were significantly higher at the Kenyan sites than at the Ugandan sites ( Table 4 ). Lastly, when comparing the Tanzanian and Ugandan shorelines, the median conductivity, TDS, salinity, fluoride, phosphate, sulphate and sodium levels were significantly higher at the Tanzanian sites than at the Ugandan sites ( Table 4 ). However, the median water temperature and potassium levels were significantly higher at the Ugandan sites than at the Tanzanian sites ( Table 4 ). A Spearman’s rank correlation analysis found B. choanomphala abundance had several significant relationships with chloride (0.354), magnesium (0.322), phosphate (0.319), potassium (0.316), pH (−0.311), calcium (0.238), nitrate (0.215) and water turbulence (−0.214) ( Table 5 ). Likewise, morphotype-A abundance had a significant negative relationship with morphotype-B abundance (−0.177), and vice versa ( Table 5 ). This relationship suggests that the two morphotypes favour opposing environmental conditions. For example, morphotype-A abundance had a significant positive relationship with sulphate (0.508), water conductivity (0.421), nitrate (0.404), sodium (0.402), calcium (0.398), phosphate (0.394), chloride (0.379), TDS (0.336), magnesium (0.307), salinity (0.252) and potassium (0.241), whereas morphotype-B abundance had a significant negative relationship with sulphate (−0.359), water conductivity (−0.363), nitrate (−0.181), sodium (−0.316), TDS (−0.379), salinity (−0.256) and fluoride (−0.391) ( Table 5 ). Moreover, morphotype-A abundance had a significant negative relationship with water turbulence (−0.447), pH (−0.386) and water depth (−0.170), while morphotype-B abundance had a significant positive relationship with water turbulence (0.269) and water depth (0.161) ( Table 5 ). Lastly, there were several significant relationships with prevalence of infection, such as B. choanomphala abundance (0.445), morphotype-B abundance (0.306), morphotype-A abundance (0.271), pH (−0.199), calcium (0.184) and magnesium (0.175) concentrations ( Table 5 ).
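The correlation screen behind Table 5 can be sketched as follows; the file and column names are hypothetical stand-ins for the per-site dataset, not the study's actual variables.

```python
# Sketch: Spearman rank correlations between snail abundance and the
# abiotic site measurements, flagging the nominally significant ones.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("sites.csv")  # hypothetical table, one row per site
abiotic = ["pH", "chloride", "magnesium", "phosphate", "potassium",
           "calcium", "nitrate", "turbulence"]

for var in abiotic:
    rho, p = spearmanr(df["abundance"], df[var], nan_policy="omit")
    flag = "*" if p < 0.05 else ""
    print(f"abundance vs {var:<10} Rs = {rho:+.3f}, p = {p:.3f} {flag}")
```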
Our molecular xenomonitoring study, with multivariate analyses of the available abiotic and biotic data, investigated the prevalence of S. mansoni infection in B. choanomphala snails across Lake Victoria. Specifically, we addressed whether genetic diversity and abiotic factors (temperature, pH, physiochemical parameters, etc.) had an effect on schistosome infection prevalence. As previously mentioned in the introduction, we collectively refer to the B. choanomphala and B. sudanica -like snails at Lake Victoria as a single species that expresses two distinct ecophenotypes [ 22 – 24 ]. Standley et al. reported that B. choanomphala snails found at Lake Victoria had high levels of genetic diversity, high levels of both inter- and intra-population diversity, low levels of gene flow between populations and low levels of inbreeding. They theorised that this high level of genetic diversity could be caused by several factors relating to the environment (homogeneous habitats), human activity (mass treatment and snail control programs) and S. mansoni infection. However, Standley et al. were unable to examine whether S. mansoni infection prevalence was influenced by B. choanomphala population structure due to the lack of data on whether a snail was infected or not. Our study provides this missing infection data and incorporates it with snail host genetic diversity and abundance data, as well as data on abiotic characteristics. Our study is currently the only survey of S. mansoni infection in B. choanomphala snails that encompasses all three shorelines of Lake Victoria. Our study found a mean prevalence of S. mansoni infection of 9.3% in B. choanomphala snails at Lake Victoria, with the highest prevalence of infection observed on the Tanzanian shoreline, followed by the Ugandan and Kenyan shorelines, respectively. Our study found a higher mean prevalence of S. mansoni infection when compared to previous parasitological studies. Previously, Gouvras et al. reported 1.2% of snails on the Tanzanian shoreline were shedding cercariae, while 1.8–2.1% were shedding on the Ugandan shoreline and 0.7–1.5% were shedding on the Kenyan shoreline . The reason for this increase in infection prevalence is most likely attributable to the use of molecular methods to detect infection in the present study, rather than the traditional cercarial shedding method. Molecular detection methods tend to show a higher number of infected snails, as they are able to detect infection in both prepatent and actively shedding snails and are thus less likely to give false negative results . When categorised by morphotype, we found the morphotype-B form of B. choanomphala had a higher mean infection prevalence than the morphotype-A form. Similarly, we found morphotype-B variants of B. choanomphala had a stronger relationship with S. mansoni infection than morphotype-A snails. However, this difference in infection prevalence was not statistically significant. Consistent with our findings, Mutuku et al. reported that S. mansoni infection and cercarial production were significantly higher in B. choanomphala (morphotype-B) snails than in the B. sudanica -like (morphotype-A) snails found at Lake Victoria, regardless of miracidium dosage or whether the eggs came from allopatric or sympatric sources. However, Rowel et al. and Gouvras et al. found the opposite trend, with the B. sudanica -like (morphotype-A) snails at Lake Victoria having a higher S. mansoni infection prevalence than the B. choanomphala (morphotype-B) snails.
A Spearman’s rank correlation test found that B. choanomphala abundance positively correlated with calcium, chloride, magnesium, nitrate, phosphate and potassium levels, but negatively correlated with high water turbulence and increasing pH (abundance decreased with increasing alkalinity). When comparing sites, the Ugandan sites had significantly more B. choanomphala snails than the Kenyan and Tanzanian sites. When categorised by morphotype, the majority of the B. choanomphala snails collected from the Ugandan shoreline were morphotype-B, while the majority of those collected from the Kenyan and Tanzanian shorelines were morphotype-A. This difference in morphology could be explained by the difference in abiotic factors between the Kenyan, Tanzanian and Ugandan sites, as the Kenyan and Tanzanian sites had significantly higher levels of nitrate, potassium, salinity, sodium, sulphate, TDS and water conductivity than the Ugandan sites. Likewise, morphotype-A abundance had a positive relationship with higher levels of nitrate, potassium, salinity, sodium, sulphate, TDS and water conductivity, while morphotype-B abundance had a negative relationship with higher levels of nitrate, salinity, sodium, sulphate, TDS and water conductivity. Conversely, morphotype-A abundance had a negative relationship with water depth and water turbulence, while morphotype-B abundance had a positive relationship with water depth and water turbulence. The morphotype-A form of B. choanomphala was predominantly found in shallow, lentic (still) environments, while the morphotype-B form was predominantly found in deep, lotic (flowing) environments. Dillon found that the American planorbid species Helisoma trivolvis also exhibits different ecological phenotypes depending on whether it inhabits shallow, lentic waters or deep, lotic waters. Dillon hypothesised that these two contrasting shell morphologies help the snails adapt to their environment, as the morphotype found in lentic waters uses its shell to trap air in order to regulate its buoyancy and reach floating vegetation. Conversely, the morphotype found in lotic waters uses its wide aperture/foot to grip onto rocks while grazing in flowing water. This functionality could be analogous to that of the B. choanomphala ecophenotypes found in Lake Victoria. A Spearman’s rank correlation test found S. mansoni infection in B. choanomphala snails had a significant positive relationship with B. choanomphala abundance, calcium levels and magnesium levels. Conversely, infection prevalence had a significant negative correlation with the pH level of the lake water (the S. mansoni infection rate decreased with increasing alkalinity). Rowel et al. also observed this trend, with S. mansoni infection rates having a significant positive relationship with Biomphalaria abundance and a significant negative relationship with alkaline pH levels. Previous studies have also found that calcium and magnesium levels correlated with Biomphalaria abundance . However, B. choanomphala abundance itself also has a significant positive relationship with calcium and magnesium levels, as well as a significant negative relationship with alkaline pH levels. Therefore, it is likely that B. choanomphala abundance is the only direct factor influencing infection prevalence, with the other factors affecting infection indirectly via their effect on Biomphalaria abundance.
The highest number of infected snails was found on the Tanzanian shoreline of Lake Victoria. However, no significant differences were found in the number of infected snails across the three shorelines. It is important to state that other socio-economic, behavioural or ecological factors not examined in this study may contribute to the higher prevalence of infected B. choanomphala when comparing different sites along the shoreline of Lake Victoria. It is also important to note that this analysis does not incorporate human-related data for the sites investigated, which are important when trying to observe patterns of transmission. Recently, the genomes of both Biomphalaria pfeifferi and B. sudanica -like snails from the Kenyan shoreline of Lake Victoria were sequenced and published. Despite the importance of these species as intermediate hosts of intestinal schistosomiasis at Lake Victoria and in the Afrotropical region as a whole, the small number of genomic studies on African Biomphalaria species (and other important snail vector species) emphasizes how under-researched these vectors are . When we looked at levels of genetic diversity, we found the B. choanomphala populations found on the Ugandan shoreline had the highest genetic diversity, followed by the Tanzanian and Kenyan populations. However, the Kenyan populations likely had the lowest number of 16S and COI haplotypes because Kenya has the smallest shoreline when compared to the Ugandan and Tanzanian shorelines. When we compared the level of genetic diversity at sites with and without S. mansoni infection, we found sites with infection had a higher mean haplotype diversity score than sites without infection. A Spearman’s rank correlation test found both the 16S and COI Hd scores correlated positively with infection prevalence, but this relationship between haplotype diversity and infection was not statistically significant. When we mapped the distribution of the 16S and COI haplotypes across Lake Victoria, we found infected B. choanomphala populations on average had fewer private and more shared 16S and COI haplotypes than B. choanomphala populations without infection , indicating that greater gene flow occurs among infected sites than among uninfected sites. Our findings contradict previous studies that found a link between lower genetic diversity within a host population and increased susceptibility to parasite infection . One possible explanation for this contradiction is the higher gene flow mentioned above: the migration of B. choanomphala snails between sites helps to maintain a high amount of genetic diversity (via gene flow) and can introduce infection into new areas through the arrival of new snails carrying the parasite. This possible movement of snails could explain why sites with infection had higher genetic diversity than sites without infection. An alternative explanation involves the ‘co-evolution selective sweep’ phenomenon, whereby a host-parasite relationship results in selective sweeps of host resistance adaptations and parasite counter-adaptations . This causes a reduction in genetic diversity, as individuals without a given adaptation (e.g., S. mansoni resistance) are less successful than those who have it. Populations with high genetic diversity and high S. mansoni infection prevalence may not have undergone this selective sweep, while populations with low genetic diversity and low infection prevalence could have.
Another explanation concerns whether or not non-random mating behaviour is being exhibited. Non-random mating behaviour is involved in maintaining resistance to S. mansoni , and ultimately reduces the genetic diversity of a population . Populations with high genetic diversity and high infection levels may not exhibit non-random mating behaviour, favouring random mating as it promotes genetic diversity over S. mansoni resistance, resulting in a longer life span, higher fecundity and more successful offspring . Conversely, populations with low genetic diversity and low infection levels may exhibit this non-random mating behaviour, favouring resistance over genetic diversity. Overall, the relationship between S. mansoni and Biomphalaria snails is complex and can depend on many factors, such as the genetic constitution of the snails, the environment in which they live, and the prevalence and virulence of S. mansoni within an area. In conclusion, the Lake Victoria region is still significantly understudied in terms of the environmental epidemiology of intestinal schistosomiasis. The addition of molecular xenomonitoring infection prevalence data represents a new development and establishes a better ‘infection baseline’ for future snail surveillance studies aimed at understanding S. mansoni transmission. Further research is needed to fully explore the relationships between African Biomphalaria species and to elucidate the complex relationship between Biomphalaria snails and S. mansoni across Lake Victoria.
Preterm premature rupture of membranes (PPROM) refers to the rupture of fetal membranes prior to 37 gestational weeks (GW) . It occurs in 2–3% of pregnancies and is responsible for one-third of preterm deliveries and around 20% of perinatal mortality . The main risk factors for PPROM are a history of premature delivery, preconceptional cervical abnormalities, vaginal bleeding, cervical shortening during pregnancy, genital infections and intrauterine infection. However, in most cases, PPROM occurs in the absence of any risk factor . In addition to prematurity, PPROM exposes the fetus and its adnexa to serious complications, including placental abruption, umbilical cord compression and intrauterine infection, also known as chorioamnionitis . The onset of chorioamnionitis significantly increases the risk of neonatal complications, including perinatal death, early neonatal sepsis, pneumonia, and long-term disability . The diagnosis of chorioamnionitis is based primarily on clinical symptoms, with a combination of maternal or fetal tachycardia, maternal fever, uterine tenderness or elevated white blood cell count. However, the presence of one or more of these signs does not necessarily confirm the presence of chorioamnionitis. Amniocentesis may suggest or confirm prenatal diagnosis, but is not considered a standard of care in cases of PPROM because it is a risky procedure . Anatomopathological tests are often conclusive in the diagnosis of chorioamnionitis, but are performed after the infant is delivered. It is therefore crucial to develop an early marker of chorioamnionitis that is specific, non-invasive, and easily accessible in routine clinical practice, in order to improve the neonatal prognosis of PPROM. More specifically, it would enable a rapid and reliable online analysis to predict chorioamnionitis and help clinicians make a decision on delivery criteria. In obstetrics, electronic fetal monitoring (EFM) or cardiotocography (CTG) has been widely used to monitor uterine contractions and fetal heart rate (FHR) with the aim of assessing fetal well-being during the intrapartum period . FHR assessment has been shown to be effective in reducing the risk of preventable intrapartum fetal death, and in alerting to a large proportion of neonatal complications such as fetal hypoxia, fetal acidemia, neonatal encephalopathy, and cerebral palsy . However, studies carried out on chorioamnionitis have shown contradictory results or an absence of association between FHR and chorioamnionitis [ 10 – 15 ]. The FHR reflects the behavior of the cardiovascular system and is modulated by the fetal brain and its autonomous nervous system (ANS) . The variability of the FHR (FHRV) is correlated with ANS function via sympathetic and parasympathetic tones. Studies in the literature have shown the importance of assessing HRV to detect infections in adults and infants, and particularly premature babies [ 17 – 19 ]. The study of FHRV features therefore appears to be a potential, as yet unexploited, avenue of research for the early detection of chorioamnionitis. It could enable a non-invasive approach to biological rhythms, offering a new diagnostic and/or prognostic approach. The aim of our study is to observe the evolution of the features of the FHR in a population of PPROM approaching delivery, according to the presence or absence of chorioamnionitis. 
After recording and preprocessing the FHR data, we extracted features directly from the computerized FHR recordings (cFHR) as well as features from HRV analysis in the time, frequency and nonlinear domains. We were interested in observing the feature dynamics over the last three days of the PPROM latency period (the last 72 hours before birth) to better characterize the period of onset of chorioamnionitis. The preceding days were not taken into account, as the presence of chorioamnionitis during that period is not certain. Multiple factor analysis (MFA) was used to study each set of features described by the three days before birth. MFA provides partial projections from the feature set of each day separately and a global projection from the concatenation of the features of all days. ROC analyses were then performed on the distances computed between the partial and global projections to distinguish subjects with chorioamnionitis from those without. To our knowledge, there is no equivalent study in the literature. In the following paragraphs, we describe the study design, the enrolled population, the data recorded, the preprocessing and the data analysis procedure. The whole pipeline is visualized in Fig 1 and described in the following sections. We followed the TREND reporting guidelines; the completed checklist is provided in S1 Checklist . This is a multicenter prospective case-control study led by Rennes University Hospital (CHU of Rennes) to characterize the pattern of the FHR based on the presence or absence of chorioamnionitis in a population of women with PPROM . It was conducted in accordance with the requirements of clinical practice in Europe, the French public health code and the ethical principles of the Declaration of Helsinki. Patients were orally and fully informed of the objectives of the study, of their right to refuse to participate, and of the possibility of withdrawing consent at any time, including for their newborns. All these details were included in an information and no-objection letter given to patients. Patients gave verbal informed consent, and if they objected to participating in the study, they were asked to submit a written refusal. The study was approved by the local research Ethics Review committee of the CHU of Rennes (Notice number 14.76). The study protocol of the clinical trial can be found in S1 and S2 Files, for the French and English versions, respectively. A pilot study by the CHU of Rennes suggested that a sample of 60 analyzable patients has a power of 95% to detect a significant difference between the chorioamnionitis and non-chorioamnionitis groups. Given that 50% of PPROMs deliver within 24–48 hours after the rupture , a sample of 120 patients was required. Thus, 120 pregnant women were enrolled in 4 French hospitals located in Rennes, Angers, Nantes and Poitiers between 09 December 2014 and 16 December 2021. All participants were adults with a singleton pregnancy, hospitalized for PPROM between 26 and 34 GW. Exclusion criteria were: pregnant women with pre-existing or gestational diabetes, multifetal gestation, fetal malformation on ante/post-natal examination, active maternal smoking, neonatal hypotrophy (birth weight <10 th AUDIPOG percentile), and presence of maternal pathology (heart disease, pulmonary embolism, hypertension, chronic renal failure, chronic obstructive pulmonary disease or autoimmune disease).
PPROM was diagnosed either by frank discharge of amniotic fluid during speculum examination, or by a vaginal diagnostic test for Insulin-like Growth Factor Binding Protein 1 (IGFBP-1) in case of doubt on clinical examination. According to French guidelines for PPROM, all patients received nifedipine or atosiban for tocolysis, amoxicillin intravenous injections as antibiotic prophylaxis and betamethasone injections as corticosteroid therapy . The name of the medicine, the method of administration, and the dates and duration of treatment were collected. All patients with PPROM were closely monitored from the onset of PPROM until after birth. During the monitoring period, two types of data were collected from patients: clinical and physiological data. The data were accessed for research purposes in November 2022. Only one of the authors had access to information that could identify individual participants in one of the centers, by virtue of her role as the healthcare professional responsible for data collection at the CHU of Rennes. Processing was done in accordance with the French application of the EU regulation on the protection of personal data, data processing, data files and individual liberties. All data were pseudonymized after monitoring by the medical staff. The FHR recordings were preprocessed to correct for outliers and ectopic beats, as shown in Fig 1B . First, all FHR data were visually inspected to identify and eliminate periods with false heart rate values at the beginning and end of the recordings. Next, all ectopic beats and outliers were identified and corrected. Ectopic beats are defined as any beat that differs from the previous one by more than 8 × the 75 th percentile of values, or by more than 25 beats per minute (bpm). All values that lie between two consecutive ectopic beats of opposite directions are considered an outlier period. Outlier beats are all beats with a value > 1.2 × the 75 th percentile of values or a value < 0.8 × the 25 th percentile of values. Periods with ectopic or outlier values are replaced by spline interpolation if they last less than 30 s; otherwise they are considered periods of data loss and replaced by blanks. No corrections were made during periods of data loss, in order to preserve as much of the variation in the original data as possible. Finally, recordings are filtered using zero-phase digital filtering to eliminate baseline drift.
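The correction rules just described can be summarized in a short sketch. This is a minimal, simplified illustration rather than the authors' Matlab implementation: the sampling rate FS is an assumption, and only the 25-bpm jump rule and percentile bounds are implemented.

```python
# Sketch of the FHR cleaning step: flag outlier samples against percentile
# bounds and large beat-to-beat jumps, spline-interpolate gaps shorter than
# 30 s, and leave longer gaps as missing (NaN).
import numpy as np
from scipy.interpolate import CubicSpline

FS = 4.0                      # assumed FHR sampling rate (Hz)
MAX_GAP_S = 30.0              # gaps longer than this stay as NaN

def _runs(indices):
    """Yield (start, stop) bounds of consecutive-index runs."""
    if len(indices) == 0:
        return
    breaks = np.flatnonzero(np.diff(indices) > 1)
    starts = np.concatenate([[indices[0]], indices[breaks + 1]])
    stops = np.concatenate([indices[breaks] + 1, [indices[-1] + 1]])
    yield from zip(starts, stops)

def clean_fhr(fhr):
    """fhr: 1-D FHR trace in bpm; returns the cleaned trace."""
    fhr = fhr.astype(float)
    p25, p75 = np.percentile(fhr[~np.isnan(fhr)], [25, 75])
    bad = (fhr > 1.2 * p75) | (fhr < 0.8 * p25)              # outlier samples
    bad |= np.concatenate([[False], np.abs(np.diff(fhr)) > 25])  # ectopic jumps
    fhr[bad] = np.nan

    isnan = np.isnan(fhr)
    good = np.flatnonzero(~isnan)
    spline = CubicSpline(good, fhr[good])
    for start, stop in _runs(np.flatnonzero(isnan)):
        if (stop - start) / FS < MAX_GAP_S:                  # short gap only
            fhr[start:stop] = spline(np.arange(start, stop))
    return fhr
```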
A range of features were extracted based on the computerized analysis of the FHR and on HRV analysis, using custom-made Matlab codes . FHR recordings were transformed into RR intervals before extracting HRV features. All features, further detailed in the following paragraphs, were calculated for each measurement session. A list of the extracted features can be found in Table 1 . The cFHR features follow the definitions of Jones et al. . They include the baseline FHR in beats per minute (bpm), accelerations and decelerations, long- and short-term variations, and episodes of low and high FHR variability . Accelerations are defined as an increase in the FHR of 10 or 15 bpm for more than 15 s. Decelerations are defined as a decrease in the FHR of 10 bpm for more than 60 s, or of 20 bpm for more than 30 s. Accelerations and decelerations were characterized by their relative number per minute, their average durations, and their relative cumulative sizes over the recordings. Long-term variation (LTV) over each minute is the difference in ms between the highest and lowest value of 16 sections of the minute. The overall LTV is the average of the LTV computed for each minute. The short-term variation (STV) corresponds to the measurement of micro-fluctuations in the FHR. It is computed as the 1/16-min epoch-to-epoch variation, averaged over each minute and over the whole recording. Episodes of high (low) variation are any part of the recording where the variation over one minute is greater than 32 ms (less than 30 ms, respectively) for at least 5 of 6 consecutive minutes. These episodes are characterized by their relative number and relative durations. From the RR intervals, we computed the mean, standard deviation (SD), maximum, minimum, root mean square of successive differences (RMSSD), standard deviation of successive differences (SDSD), skewness, and kurtosis . HRV analysis in the spectral domain aims to describe the power distribution (power spectral density—PSD) of RR intervals over frequency bands. The Lomb-Scargle method was used to estimate the PSD because RR intervals are by nature unevenly sampled and may contain missing data due to removed long outlier periods . Frequency-domain features included very low frequency power (VLF: 0.001–0.02 Hz), low-frequency power (LF: 0.02–0.2 Hz), high-frequency power (HF: 0.2–1.5 Hz), the ratio of low-frequency power to high-frequency power (LF/HF), normalized low-frequency power (LFnu = LF/(LF + HF)), normalized high-frequency power (HFnu = HF/(LF + HF)), and the total power calculated over all frequency bands (TtlPwr = VLF + LF + HF) . Nonlinear features are used to describe the complexity and the unpredictability of RR intervals, which result from the complex interactions of the many mechanisms modulating cardiac variability. These parameters are the approximate entropy (ApEn), sample entropy (SampEn), the short- and long-term fluctuation indexes (SD1 and SD2) from the Poincaré plot, short- and long-term correlations from the detrended fluctuation analysis, and the acceleration and deceleration capacities .
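A minimal sketch of the band-power computation described above follows, assuming beat times in seconds and mean-removed RR intervals in ms. The band edges follow the text, while the frequency grid and the use of SciPy's Lomb-Scargle routine are illustrative choices, not the authors' Matlab implementation.

```python
# Sketch: Lomb-Scargle periodogram of an unevenly sampled RR series,
# integrated over the VLF/LF/HF bands to obtain the spectral features.
import numpy as np
from scipy.signal import lombscargle

def band_powers(t, rr):
    """t: beat times (s); rr: RR intervals (ms). Returns band powers."""
    rr = rr - rr.mean()                           # remove the mean level
    freqs = np.linspace(0.001, 1.5, 3000)         # Hz, covers VLF to HF
    pxx = lombscargle(t, rr, 2 * np.pi * freqs)   # expects angular freqs
    df = freqs[1] - freqs[0]
    bands = {"VLF": (0.001, 0.02), "LF": (0.02, 0.2), "HF": (0.2, 1.5)}
    power = {name: pxx[(freqs >= lo) & (freqs < hi)].sum() * df
             for name, (lo, hi) in bands.items()}
    power["LF/HF"] = power["LF"] / power["HF"]
    power["LFnu"] = power["LF"] / (power["LF"] + power["HF"])
    power["HFnu"] = power["HF"] / (power["LF"] + power["HF"])
    power["TtlPwr"] = power["VLF"] + power["LF"] + power["HF"]
    return power
```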
Multiple Factor Analysis (MFA) analyzes a set of observations described by several groups of variables, providing an integrated picture of the observations and of the relationships between the groups of variables . In this particular study, we found it interesting to investigate the relationships, and thus the dynamics, between the last three days of recordings before birth for a given type of features (i.e. cFHR, time-FHRV, frequency-FHRV and nonlinear-FHRV). The originality of MFA lies in the fact that it allows the impact of groups of variables on an observation to be studied by simultaneously visualizing the observation described by all the variables (global projection) and by each group of variables (partial projection). For each feature type, the features are sorted into three matrices X, X[-1], X[-2] of size (N, M) each, denoting the datasets at days 0, -1 and -2. Days 0, -1 and -2 correspond, respectively, to the last 24 hours before birth, 24 to 48 hours before birth, and 48 to 72 hours before birth. N = 39 is the number of subjects and M is the number of features in dataset X[k]. In our study, Mc = 13, Mt = 8, Mf = 7 and Mnl = 8, respectively, for the computerized analysis of the FHR, time-FHRV, frequency-FHRV and nonlinear-FHRV. If a subject has several recordings per day, the median value of these recordings is considered. Matrices were centered and normalized according to each column. MFA consists of two steps, as shown in Eq (1). First, a principal component analysis (PCA) is performed for each group of variables (datasets) and expressed via its singular value decomposition (SVD) . The weight of each matrix is obtained as the inverse of the first squared singular value of its PCA and is used to normalize the datasets X, X[-1], X[-2]. Second, the normalized datasets are combined to form a unique matrix X = [X, X[-1], X[-2]] and a global PCA is performed on X. General factor scores and factor loadings are obtained as the projections of the observations and of the variables, respectively, onto the new principal components or dimensions. In addition, partial factor scores for each dataset X[k] can be obtained as its projection onto the global space:

\[
\mathbf{X} \;=\; \Big[\;\underbrace{X}_{\substack{\downarrow \\ \mathrm{PCA}\,\times\,\mathrm{weight}}}\;\;\underbrace{X^{[-1]}}_{\substack{\downarrow \\ \mathrm{PCA}^{[-1]}\,\times\,\mathrm{weight}^{[-1]}}}\;\;\underbrace{X^{[-2]}}_{\substack{\downarrow \\ \mathrm{PCA}^{[-2]}\,\times\,\mathrm{weight}^{[-2]}}}\;\Big] \;\longrightarrow\; \text{Global PCA,} \tag{1}
\]

where each X^{[k]} is the N × M matrix of entries x^{[k]}_{n,m}. It is worth noting that the general factor scores matrix is the centroid or barycenter of the partial factor scores . The observations can be visualized in 2- or 3-dimensional spaces using the general factor scores alone or by adding the partial factor scores. The importance of each dimension is measured by its eigenvalue and its percentage of variance, i.e. the amount of information it carries. In addition, each dimension is influenced either by observations, variables or groups of variables; this influence is determined by the contribution of an observation, a variable or a set of variables to a dimension . The analyses were performed using the FactoMineR package in RStudio . To better understand the variation in the data and to take the evolution over time into account, we calculated the distances between the partial factor scores and the barycenter (global factor scores) of the MFA. This enables us to summarize the dispersion between groups of variables (days before birth) for a given type of parameter. Distances are calculated on the first four principal dimensions and are denoted d1-4, where d is the 4-dimensional Euclidean distance function. Parameters are expressed as mean ± standard deviation. A Kolmogorov-Smirnov or Mann-Whitney statistical test was performed, as appropriate, to compare data. A p-value < 0.05 was considered statistically significant. ROC analysis was performed on the computed distances to distinguish between subjects with and without chorioamnionitis using RStudio. Non-chorioamnionitis subjects were all cases with a negative anatomopathological test (stage 0), whereas the chorioamnionitis population included cases whose chorioamnionitis had been diagnosed at stages 1, 2 or 3. We performed the ROC analysis by first considering the whole chorioamnionitis population, then considering stages 2 and 3, or only stage 3 subjects. The analysis is performed using the 5-fold cross-validation method repeated 10 times.
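The two-step procedure of Eq (1) and the distance computation can be sketched in NumPy as follows. This is an illustrative re-implementation of what the FactoMineR MFA routine provides, under the assumption that each block has already been column-centered and normalized.

```python
# Sketch of MFA: weight each day's block by 1/(first singular value), run a
# global PCA (SVD) on the concatenation, recover global and partial factor
# scores, then take per-subject distances on the first four dimensions.
import numpy as np

def mfa(blocks, n_dims=4):
    """blocks: list of (N, M_k) column-standardized arrays (one per day)."""
    K = len(blocks)
    # Step 1: divide each block by its first singular value so that each
    # block's leading eigenvalue equals 1 (the MFA weighting of Eq (1)).
    weighted = [X / np.linalg.svd(X, compute_uv=False)[0] for X in blocks]

    # Step 2: global PCA of the concatenated weighted blocks.
    Z = np.hstack(weighted)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    F = U[:, :n_dims] * S[:n_dims]              # global factor scores

    # Partial factor scores: project each block onto the global axes;
    # their average recovers the global scores (barycentric property).
    partial, col = [], 0
    for Xw in weighted:
        mk = Xw.shape[1]
        Vk = Vt[:n_dims, col:col + mk].T        # loadings of this block
        partial.append(K * Xw @ Vk)
        col += mk
    return F, partial

def partial_to_global_distances(F, partial):
    """Per-subject Euclidean distance of each day's partial projection
    to the global barycenter, on the retained dimensions."""
    return np.stack([np.linalg.norm(P - F, axis=1) for P in partial], axis=1)
```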
Of the 120 pregnant women with PPROM initially included in the study, 12 were excluded at initial selection due to an unrecorded date of PPROM or the presence of gestational diabetes, 31 were excluded due to the absence of anatomopathological results, and 38 were excluded for one of the following reasons: 1) date of birth not recorded, 2) error in the date and time of recordings due to a bug in the monitoring device, 3) no FHR recording for at least one day in the 3 days prior to delivery. Consequently, data from only 39 pregnant women were considered for the analysis: 25 were diagnosed with chorioamnionitis, while 14 were not. The mean age of the patients was 30 ± 6 years. Table 2 summarizes the clinical characteristics of the 39 pregnant women included in this study. The features presenting the most significant differences between the chorioamnionitis and non-chorioamnionitis populations are shown in Table 3, along with the p-value of the statistical test. The comparison for all features can be found in S1 Table . As previously explained, the MFA was performed between the last three days before birth on features extracted from cFHR, time-FHRV, frequency-FHRV and nonlinear-FHRV. The analyses showed that time-FHRV and nonlinear-FHRV seem to highlight the most distinctions between populations. Consequently, and for the sake of clarity, we have chosen to present in this paragraph an example of the MFA results performed on the HRV features extracted in the nonlinear domain only, in Fig 3 . The first dimension of the global MFA (λ1 = 2.07) explains 30.35% of the total inertia and the second dimension (λ2 = 1.8) explains 26.4% of the inertia, representing 56.75% of the information in the first factorial plane. The projection of individuals in the first factorial plane (Dim1, Dim2), presented in Fig 3(a) , shows that non-chorioamnionitis subjects are more spatially centered than infected subjects. Nevertheless, the distinction between different stages of chorioamnionitis is unclear. The average individual factor map in Fig 3(b) presents the barycenters of each stage of chorioamnionitis from the global MFA, superimposing the three partial MFAs. The global barycenters are thus plotted according to their different variable groups (nonlinear-FHRV D, D[-1] and D[-2]). The figure shows that the partial barycenters of stage 0 (non-chorioamnionitis) are more tightly clustered around the global barycenter than those of stages 1, 2 and 3 (chorioamnionitis). The groups of the MFA are represented on each dimension by the weighted cumulative inertia of the groups' features in Fig 3(c) . The coordinates of the groups show that the first dimension is mainly due to Day[-1] and that Day and Day[-1] both contribute to the second dimension. The proximity of these two groups can be interpreted as a similarity between the variables for days 0 and -1 before birth. The correlation circle in Fig 3(d) shows that the first dimension is correlated with Poincaré and acceleration/deceleration features. This dimension is associated with inter-beat variability. The second dimension is correlated with entropy measures and DFA features, which are related to the complexity of RR intervals. After performing the MFA on all the types of features (cFHR, time-FHRV, frequency-FHRV and nonlinear-FHRV), the 4-dimensional Euclidean distances between the global and partial projections are computed for each individual and represented in boxplots by infection stage (0, 1, 2 and 3), as shown in Fig 4 .
Distances are compared using the Mann-Whitney test or the Kolmogorov-Smirnov test, depending on whether or not the data follow a normal distribution. Significant differences (p-value < 0.05) are identified on the boxplots by horizontal lines. The boxplots show that distances generally increase with the stage of infection. There is no significant difference between distances across stages for the MFA of cFHR features, while there is a significant difference between stage 3 and stage 0 for the distance computed on the MFAs of time-FHRV and nonlinear-FHRV. Other significant differences were observed for the MFA of frequency-FHRV (stage 1 vs stage 3) and nonlinear-FHRV (stage 2 vs stage 3). Table 4 shows the performance of the ROC analysis performed on the distances calculated for each type of features. Firstly, the ROC analysis was performed to distinguish non-chorioamnionitis from all stages of chorioamnionitis (stages 1, 2 and 3). The results show a low area under the curve (AUC) for all types of features, with the highest AUC obtained for time-FHRV (AUC = 62.5% [59.3–65.8] CI). ROC results increased when stage 1 patients were excluded from the analysis, and became higher still when only stage 3 patients were considered. The highest AUCs were obtained for nonlinear-FHRV features, with AUC = 90.4% [88.2–92.6] CI, and for time-FHRV features, with AUC = 84.4% [81.8–87.1] CI. This paper presented a prospective study to monitor pregnant women experiencing PPROM between 26 and 34 GW. The aim was to assess the FHR through computerized analysis of FHR and HRV analysis to detect the presence of chorioamnionitis. There are very few studies in the literature that evaluate the FHR to detect chorioamnionitis, and their findings are ambiguous. To our knowledge, this is the first study in the literature to investigate fetal heart rate variability for this purpose. Moreover, the innovation of this study lies in the assessment of the day-to-day relationship using multiple factor analysis. The diagnosis of chorioamnionitis remains unclear to this day, leading to confusion and complicating clinical decision-making . As described previously, there is a list of symptoms which help in the clinical diagnosis of chorioamnionitis but do not have a high predictive value. FHR monitoring is increasingly used in assessing fetal well-being during the intrapartum period, particularly in cases of hypoxia, acidemia and cerebral palsy . However, the relationship between chorioamnionitis and FHR patterns is not yet well established, and previous studies have shown contradictory results. Salafia et al . reported that the FHR showed abnormal patterns in cases of chorioamnionitis . In a previous study of chorioamnionitis cases, recordings during labour showed an increased baseline FHR, reduced variability and a loss of accelerations compared with recordings at admission . Our study compared the features of the last three days before labor between chorioamnionitis and non-chorioamnionitis cases, as in . In accordance with , we showed significantly lower accelerations and variability in chorioamnionitis cases, but no significant difference was shown in the baseline FHR. On the other hand, the study by Kyozuka et al . showed no association between FHR patterns and chorioamnionitis . Other studies have also reported that FHR abnormalities are not useful in predicting intra-amniotic infection and chorioamnionitis [ 11 – 13 ].
Despite significant differences in certain cFHR features, our study showed that it was difficult to distinguish chorioamnionitis from non-chorioamnionitis cases, with low AUC values obtained from the ROC analysis. In this study, we therefore went beyond standard cFHR features to extract HRV-related features and assess their ability to detect chorioamnionitis. HRV analysis has been shown to have a high predictive value in the diagnosis of several cardiovascular and pulmonary diseases, both in infants and adults [ 17 , 19 , 31 – 33 ]. The results of our paper suggest that HRV analysis, particularly in the time and nonlinear domains, is useful and provides additional information on fetal health status. The results of the MFA performed on the FHRV analysis in the nonlinear domain, presented in this paper, showed that there are homogeneous zones for infected and non-infected patients, with non-chorioamnionitis patients mostly projected in the right part of the first factorial plane, although overlaps between classes persist. This indicates that non-chorioamnionitis patients have better short-term HRV adaptation. Similar trends were also observed for the features of time-FHRV and frequency-FHRV. This paper also showed that the day-to-day dispersion is higher in subjects with chorioamnionitis than in non-chorioamnionitis subjects. This was highlighted by the 4-dimensional Euclidean distances computed between the global projection and the partial projections of the MFAs for each type of features. This dispersion was more pronounced in subjects with stage 3 chorioamnionitis. The ROC analysis performed on the distances computed from the nonlinear HRV analysis showed that it was possible to achieve an AUC of 90% when distinguishing subjects with stage 3 chorioamnionitis from those without. By grouping stages 2 and 3, or stages 1, 2 and 3, in the infected population, distinguishing them from non-chorioamnionitis subjects became more difficult: the AUC decreased to around 65% and 60%, respectively. The same trends were observed for the other sets of parameters, and no improvement was observed when combining all the parameters. The main limitation of our study is the low number of subjects analyzed and compared in the chorioamnionitis and non-chorioamnionitis groups. Despite the high number of inclusions made during the clinical study, a very small number of subjects were considered analyzable and retained for data analysis, particularly subjects classified as infected with stage 3 chorioamnionitis. Given the study population and the length of follow-up, this study is an initial step toward a more generalizable clinical trial. The next step would be to include a larger stage 3 chorioamnionitis population to confirm the promising results observed relative to control subjects. This study reveals three main findings. Firstly, HRV features showed more promising results than cFHR features. Secondly, the assessment of patients' own dynamics over time seems to be an interesting method for detecting chorioamnionitis. Thirdly, the differentiation is clear between cases of non-chorioamnionitis and stage 3 chorioamnionitis, whereas it is more subtle for stages 1 and 2. Future work could include extending the study to other hospital centers, thereby enrolling a larger number of pregnant women with PPROM to improve the generalizability of our findings.
It would also be interesting to carry out further analysis to improve the diagnostic performance of the features extracted from cFHR, noting that these features are directly and easily accessible during FHR recordings.
The growing global energy demand, along with the need for clean and sustainable energy sources, has led to a significant increase in solar energy projects worldwide. However, one of the major challenges facing the solar industry is the unpredictability of solar energy production, which is highly dependent on weather conditions such as cloud cover, rainfall, and sunlight intensity. Therefore, developing accurate and reliable models to forecast the power output of solar energy projects is essential for the effective management of energy systems. In fact, solar energy is expected to play a key role in the global transition to clean and renewable energy sources. The International Energy Agency (IEA) estimates that solar energy could provide up to 30% of the world's electricity by 2050 . This forecast highlights the need for robust solar forecasting models that can support the effective integration of solar energy into the grid and the optimization of energy systems. Furthermore, the need for solar energy forecasting is especially urgent in developing countries like Vietnam, where solar energy projects are on the rise and energy demand is growing rapidly. According to Vietnam's National Power Development Plan, the country's electricity demand is expected to rise significantly with strong economic growth, reaching around 124,000 MW by 2030 . However, the country is currently heavily dependent on fossil fuels, which not only contributes to greenhouse gas emissions but also exposes the country to fluctuations in global oil prices. Therefore, there is an increasing need to diversify the country's energy structure, with solar energy being a promising alternative. Vietnam Electricity Group (EVN) reported that in April 2022 the entire system's electricity production reached 22.62 billion kWh, an increase of 1.9% over the same period the previous year. Cumulatively, in the first 4 months of the year, the total electricity output of the entire system reached 85.65 billion kWh, an increase of 6.2% over the same period in 2021. Notably, renewable energy, including wind power, solar energy, and biomass power, reached 13.15 billion kWh, accounting for 15.4% of the total electricity produced in the entire system. In recent years, Vietnam has made significant strides in promoting solar energy, with the government implementing policies to encourage the development of solar energy projects. In 2019, Vietnam started construction of the largest solar energy plant in Southeast Asia, with a capacity of 688 MW. The plant is expected to produce about 1.2 billion kWh of electricity annually, enough to power 1.3 million households and reduce carbon emissions by 1.2 million tons each year. The success of this project highlights the potential of solar energy in Vietnam and the need for accurate forecasting models to support the effective management of energy systems. According to Draft Power Plan VIII, the installed capacity of solar energy is expected to increase from 17 GW to about 20 GW . Solar energy is expected to account for 17% , and later 14% , of the power source structure. In Vietnam, the technology, techniques and capability to develop solar energy projects are still heavily dependent on foreign countries, so large-scale solar energy deployment faces many difficulties, especially regarding cost. This makes it difficult for solar energy to compete with other traditional power sources. The most important application of solar energy, today and in the future, is still electricity production.
Two types of solar energy production technology are widely developed: photovoltaic technology (SPV, solar photovoltaic) and concentrated solar power technology (CSP). The most popular SPV technology today is crystalline solar cells (about 90% market share); the rest are thin-film solar cells (about 10% market share). Currently, solar energy investment projects in Vietnam mostly use solar photovoltaic technology, as described in Fig 1 . However, the evaluation and design of solar photovoltaic systems in Vietnam still has many limitations, as it relies mainly on foreign consulting units. A preliminary assessment of solar cell energy output would therefore be very valuable. Implementing solar energy as a significant energy resource presents challenges due to the inherent uncertainty in electricity production, which is highly dependent on weather conditions. To maximize efficiency, it is essential to connect solar plants to the central electricity transmission grid. Accurate forecasting of solar energy production at specific plants is crucial to managing this uncertainty and ensuring smooth electricity transmission . Extensive research has been conducted on solar photovoltaic power forecasting. Machine learning (ML) has emerged as a powerful tool across various scientific disciplines, enabling accurate predictions, efficient optimization, and deeper insights into complex phenomena. In material science, Jain et al. conducted a comparative analysis of ML techniques to predict the wear and friction properties of MWCNT-reinforced PMMA nanocomposites, demonstrating the effectiveness of these models in material property evaluation. Similarly, Jain et al. applied ML to optimize terahertz metamaterial absorbers, showcasing its potential in enhancing design efficiency. The versatility of ML extends to electrical characterization, as illustrated by Vaja et al. , who used it to evaluate the electrical properties of methylene blue solutions via AC/DC conductivity. Furthermore, Prakash et al. reviewed ML's transformative role in advancing functional materials, particularly single-crystal perovskite halides, from crystal growth to device applications. Beyond material science, ML's utility is evident in other domains. In hydrology, Hayder et al. employed NARX neural networks and LSTM-based deep learning to achieve multi-step-ahead river flow predictions, emphasizing its ability to model dynamic natural systems. In geotechnical engineering, Solihin et al. utilized stacking ensemble ML techniques for landslide susceptibility mapping, underscoring its significance in environmental risk management. Additionally, Solihin et al. applied stacked ensemble ML models to calibrate spectroscopy data, revealing its importance in refining analytical techniques. Together, these studies highlight the extensive applicability of ML in addressing challenges across diverse fields, including material science, electronics, environmental engineering, and analytical spectroscopy, paving the way for more efficient and innovative solutions. These ML models utilize sophisticated algorithms to analyze various factors, including weather conditions, solar panel efficiency, and geographical location . By harnessing historical data alongside real-time weather information, machine learning models can deliver precise and dependable predictions of solar energy output.
This capability empowers energy managers to optimize energy systems effectively, leading to reduced operating costs and enhanced efficiency. The use of ML in solar forecasting has attracted significant attention in recent years, with several studies demonstrating the potential of ML-based models to improve the accuracy and reliability of solar forecasts. Adaptive agent decision models based on deep reinforcement learning and autonomous learning have been developed to address complex decision problems such as solar energy forecasting . For example, these models have been applied to neurophysiological data, demonstrating their applicability to real-world solar energy forecasting challenges . Lorenz et al. provided a comprehensive overview of the field, while Raza et al. highlighted recent advancements. Many studies in this area focus on predicting irradiance or leveraging historical power output data. For instance, Yang et al. used exponential smoothing to improve predictions of horizontal irradiance, and Gueymard examined irradiance forecasting for surfaces at various angles. Lorenz et al. utilized regional weather data to predict irradiance, which was then converted into power forecasts. Several studies have explored the use of weather data and historical power output to predict both irradiance and power output . Daily mean solar irradiance is a critical factor in determining the size of solar energy generation units. Accurate forecasting of solar irradiation at specific locations aids in predicting the electricity output of solar panels, which is vital for calculating system size, return on investment (ROI), and load measurements. Various regression algorithms have been applied in conjunction with solar irradiance parameters to improve the accuracy of these forecasts . Gonzalez et al. proposed an ML-based forecasting model that uses machine learning algorithms to predict the hourly electricity output of photovoltaic systems. The ML model achieved 94.9% accuracy in predicting solar energy output, outperforming traditional forecasting methods . Ortiz et al. proposed an ML-based model using deep learning algorithms to forecast the power output of solar energy plants. The model leverages real-time weather data, historical solar output data, and plant operating data to provide accurate and reliable forecasts of solar output. It achieved over 90% accuracy in predicting solar energy output, demonstrating the potential of ML-based models in improving the efficiency and reliability of energy systems. The variability of solar radiation often leads to a mismatch between energy demand and supply, highlighting the need for efficient thermal energy storage systems. These systems are crucial for bridging the gap and enabling solar thermal power plants to provide uninterrupted power generation to meet both current and future energy needs. Anand et al. employed popular machine learning models, including K-nearest neighbors (KNN) and extreme gradient boosting (XGBoost), to evaluate the performance of a packed-bed thermal energy storage system. Aksoy and Genc used three boosting models, XGBoost, Light Gradient Boosting (LightGBM) and CatBoost, for forecasting the power to be generated by solar energy plants. Krishnan et al. used gradient boosting (GB) for forecasting solar radiation in various climatic zones.
However, the use of machine learning models or artificial intelligence (AI) models in solar energy forecasting is still in its infancy, with limited research and practical applications in Vietnam. Therefore, five machine learning models (XGBoost, LightGBM, GB, CatBoost and KNN) are introduced here to predict solar cell power output from six input variables: Humidity, Ambient temperature, Wind Speed, Visibility, Pressure, and Cloud Ceiling. The primary objective of this study is to develop and evaluate the performance of these five models for predicting solar energy output. By incorporating six key weather-related input variables (humidity, ambient temperature, wind speed, visibility, pressure, and cloud ceiling), the study aims to improve forecasting accuracy, facilitating effective energy management and the integration of solar power into the electricity grid. This study contributes to the growing body of research on solar energy forecasting by demonstrating the application and comparative performance of five machine learning models in predicting solar power generation, with CatBoost emerging as the best-performing model. The dataset used to build the machine learning models consists of 21,045 samples of solar energy derived from the investigation of Williams and Wagner . The dataset includes six input variables and one output variable. The input variables are Humidity, Ambient temperature, Wind Speed, Visibility, Pressure, and Cloud Ceiling. The output variable is the amount of energy generated by the solar panels in each sample. Statistical values for these variables include the mean, minimum, maximum, standard deviation, and median; detailed values can be found in Table 1 . The output variable serves as the target of prediction or analysis within the machine learning model. Specifically, in the realm of solar energy, it typically denotes the quantity of energy produced by solar panels over a certain period. This power output is commonly quantified in units of Watts (W) or kilowatt-hours (kWh), providing insights into the effectiveness and efficiency of solar energy generation systems. Understanding and accurately predicting this energy output is vital for various applications, including optimizing solar panel placement, assessing system performance, and facilitating energy management strategies .
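As a brief, hedged illustration, the summary statistics reported in Table 1 could be reproduced with a few lines of pandas; the file name below is a placeholder, since the dataset file itself is not named in the text.

```python
import pandas as pd

df = pd.read_csv("solar_dataset.csv")   # placeholder file name
cols = ["Humidity", "Ambient temperature", "Wind Speed",
        "Visibility", "Pressure", "Cloud Ceiling", "Power output"]
# mean, std, min, max and quartiles (the 50% row is the median), cf. Table 1
print(df[cols].describe())
```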
The data used in this study pertains to a specific set of input variables: humidity, ambient temperature, wind speed, visibility, cloud ceiling, and pressure. Humidity alters the path of incoming sunlight through phenomena such as refraction, diffraction, and reflection. These optical processes can scatter and disperse sunlight, potentially reducing the intensity of solar radiation reaching the solar panels. Consequently, variations in humidity levels can directly influence the amount of solar energy available for conversion by photovoltaic cells . Humidity also has an indirect impact on panel efficiency, by contributing to the formation of dew. When water vapor in the air condenses on the surface of solar panels as dew, it can enhance the coagulation of dust particles. This increased dust accumulation on the panels can diminish their effectiveness by obstructing sunlight absorption and reducing overall energy output. Therefore, humidity indirectly influences solar panel maintenance requirements and long-term performance . Understanding the interplay between humidity and solar energy generation is essential for optimizing the design, operation, and maintenance of solar energy systems. Incorporating this knowledge into predictive models can help improve the accuracy of energy production forecasts and inform strategic decisions for maximizing solar energy utilization. Elevated ambient temperature can induce thermal stress within the materials of solar panels, potentially leading to material degradation and reduced performance over time. Thermal expansion and contraction cycles can cause mechanical stress on the solar cells and interconnects, compromising their structural integrity and electrical performance . Understanding the impact of temperature on the electrical performance of solar panels is essential for optimizing the design, operation, and maintenance of solar energy systems. Strategies such as proper panel orientation, ventilation, and thermal management techniques can help mitigate the adverse effects of temperature and maximize the overall energy yield of solar installations. Visibility also plays a significant role in determining the performance and output of solar energy systems, with high visibility generally correlating with improved energy production and low visibility indicating potential challenges for solar energy generation. In some cases, atmospheric pressure might influence the cooling efficiency of solar panels: lower atmospheric pressure at higher altitudes can lead to less effective convective cooling, which might increase the operating temperature of the panels and reduce their efficiency. In the context of solar energy, cloud ceiling refers to the height of clouds measured at a weather station or in a specific area. Information about cloud ceiling can be used to predict the intensity of sunlight and the impact of clouds on solar energy generation in a particular area. The detailed distributions of the input variables and the output variable are described using histograms in Fig 2 . By plotting histograms of the input variables and the output variable, we can compare their distributions and understand the relationship between them. This provides an overall view of the data and prepares us for building machine learning models. Fig 3 depicts the linear correlation among the variables of the dataset. It can be observed that the input variables exhibit very little correlation with one another; the highest correlation value, 0.58, is observed for Ambient temperature. To complete the correlation analysis, Table 2 summarizes the variance inflation factor (VIF) values for the six variables, providing insights into multicollinearity within the dataset. All variables, namely Humidity (2.360), Ambient Temperature (1.570), Wind Speed (1.020), Visibility (1.117), Cloud Ceiling (1.401), and Pressure (1.364), exhibit VIF values well below the common thresholds of concern (5 or 10). This indicates minimal multicollinearity, ensuring that no variable excessively influences the others. Consequently, the dataset is suitable for regression analysis without requiring corrective measures for multicollinearity.
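A hedged sketch of how such VIF values can be computed with statsmodels follows (the DataFrame df comes from the loading sketch above; a constant term is often added before computing VIFs, which is omitted here for brevity):

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = df[["Humidity", "Ambient temperature", "Wind Speed",
        "Visibility", "Pressure", "Cloud Ceiling"]]
vif = pd.DataFrame({
    "feature": X.columns,
    "VIF": [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
})
print(vif)  # values below ~5 indicate minimal multicollinearity (cf. Table 2)
```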
Gradient Boosting is a powerful machine learning algorithm that enhances predictive performance by combining the outputs of multiple weak learners, typically decision trees, to create a single strong model. It is an iterative algorithm in which each subsequent model is trained to correct the errors made by the previous models . Gradient Boosting is particularly effective in scenarios with complex data patterns and when high predictive accuracy is required. It can handle both regression and classification tasks and is known for its ability to reduce bias and variance, leading to robust models. The process begins by fitting the first model to the data, which is usually a simple decision tree. The predictions from this model are then compared to the actual values, and the difference, the residuals, is calculated. The next model is trained on these residuals, with the aim of reducing the error made by the first model. This process is repeated for a specified number of iterations, with each model learning to improve upon the errors of the combined model from the previous iteration. The core idea behind Gradient Boosting is to minimize a loss function, which measures the difference between the actual and predicted values, by sequentially fitting models to the residual errors of the combined model. One of the key aspects of Gradient Boosting is its use of gradient descent, a numerical optimization technique, to minimize the loss function. In each iteration, the algorithm calculates the gradient of the loss function with respect to the predictions and updates the model in the direction that reduces the loss. This approach allows the model to progressively "boost" its performance by focusing on the areas where it is weakest. However, it is computationally intensive and can be prone to overfitting if not properly tuned, requiring careful management of parameters like the learning rate, the number of trees, and the tree depth. The XGBoost model operates by constructing a sequence of decision trees, where each tree learns from the errors of its predecessors . During this process, XGBoost computes gradients of the objective function (typically the loss function) and utilizes these gradients to update the decision values. This iterative process continues until a specified number of iterations or stopping conditions are met. XGBoost is a powerful and widely used algorithm within the machine learning community. It builds upon and extends previous algorithms such as the Gradient Boosting Machine (GBM) and is particularly suited to predicting complex structured data, especially in areas like ensemble learning, time series forecasting, and natural language processing . Some advantages of the XGBoost model include high performance, scalability to large datasets, flexibility, and fine-grained parameter tuning.
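To make the residual-fitting loop described above concrete, here is a toy sketch of gradient boosting for squared loss. It is a didactic illustration only, not the library implementations (XGBoost, LightGBM, CatBoost) used in this study.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, n_trees=100, lr=0.1, max_depth=3):
    """Each tree fits the residuals (negative gradients of the squared loss)
    of the current ensemble prediction."""
    base = y.mean()                      # initial constant model
    pred = np.full(len(y), base)
    trees = []
    for _ in range(n_trees):
        residuals = y - pred             # negative gradient of 1/2 (y - pred)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        pred += lr * tree.predict(X)     # shrink each correction (learning rate)
        trees.append(tree)
    return base, trees

def gradient_boost_predict(X, base, trees, lr=0.1):
    return base + lr * sum(t.predict(X) for t in trees)
```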
The k-Nearest Neighbor (KNN) algorithm is a widely used supervised learning method, notable for its simplicity and effectiveness . It is often listed among the top data mining algorithms due to its intuitive approach to classification and regression tasks. KNN creates a decision boundary that closely follows the distribution of the data, which helps in achieving high accuracy when the dataset is large and representative. KNN is a nonparametric algorithm, meaning it does not assume any specific form for the underlying data distribution. This characteristic makes it particularly suitable for real-world datasets that may not adhere to theoretical distributions like Gaussian mixtures or linear separability . Nonparametric methods like KNN can handle a variety of data distributions more effectively. Unlike other algorithms that build a model during the training phase, KNN has a minimal training phase and a more intensive testing phase. During training, KNN simply stores the dataset, while during testing, it classifies new data points by examining the 'k' nearest neighbors from the stored dataset. This means that while the training process is fast, the algorithm requires access to the entire training dataset (or a significant portion of it) during the prediction phase, making the testing phase more computationally demanding . The Light Gradient Boosting Machine (LightGBM) is an advanced machine learning algorithm that has gained popularity for its efficiency and high performance in both classification and regression tasks . Developed by Microsoft, LightGBM is designed to be highly efficient and scalable, capable of handling large datasets with substantial numbers of features while maintaining rapid training and prediction times. A distinctive feature of LightGBM is its leaf-wise (or best-first) tree growth strategy, as opposed to the level-wise approach used by many other gradient boosting algorithms. In the leaf-wise method, LightGBM grows trees by splitting the leaf with the highest loss, which can result in deeper trees with fewer splits, leading to better accuracy and efficiency. Overall, LightGBM stands out due to its speed, scalability, and accuracy, making it a preferred choice for many machine learning practitioners dealing with large-scale datasets and complex predictive tasks . CatBoost, short for Categorical Boosting, is a high-performance machine learning algorithm developed by Yandex . It excels in both classification and regression tasks. CatBoost is designed to handle categorical data without extensive preprocessing, making it a powerful tool for real-world applications where such data is prevalent. One of the standout features of CatBoost is its ability to directly incorporate categorical features into the model. While traditional gradient boosting algorithms often require categorical data to be converted into numerical format (e.g., one-hot encoding), CatBoost can directly process categorical variables, maintaining their inherent information and relationships. This is achieved through a process called target-based encoding, where the algorithm replaces categorical values with statistics computed from the target variable. CatBoost also implements efficient processing techniques to speed up training and inference, including sophisticated algorithms for efficient memory and computational resource usage, making it suitable for large-scale datasets . During the evaluation of the machine learning models, evaluation metrics such as R², mean absolute error (MAE) and root mean square error (RMSE) offer detailed insights into a model's predictive performance across both the training and validation datasets. The R², or coefficient of determination, quantifies how much of the variation in the dependent variable can be explained by the independent variables. It typically ranges from 0 to 1, with a value of 1 representing a perfect fit.
$$R^2 \;=\; 1 \;-\; \frac{\displaystyle\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\displaystyle\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2} \tag{1}$$

where $y_i$ represents the actual value, $\hat{y}_i$ the value predicted by the model for the i-th sample, $n$ the number of samples in the test dataset, and $\bar{y}$ the mean of the actual values $y_i$. The mean absolute error (MAE) represents the average of the absolute differences between predicted values and actual values. It provides a straightforward measure of prediction accuracy by calculating the average magnitude of errors in a set of predictions, without considering their direction. Lower MAE values indicate better predictive accuracy, as they signify smaller errors between the predicted and actual values. The root mean square error (RMSE) measures the average magnitude of the errors between predicted and actual values in a dataset. A lower RMSE indicates better predictive accuracy, while a higher RMSE suggests larger prediction errors.
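For completeness, the MAE and RMSE described above correspond to the standard definitions (these formulas are not numbered in the source):

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$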
Additionally, cross-validation is an important technique for evaluating model generality. By dividing the data into subsets, the model is trained on one subset and evaluated on the remaining subset. The results provide insight into the model's average performance across multiple test sets, supporting the assessment of the model's overall robustness and stability. Fig 4 ("Methodology flow chart") describes the four main steps of this investigation using ML models to forecast the solar energy generated by photovoltaic panels. The ML models of this investigation were implemented in Python with the Sklearn library . The dataset utilized in this study comprises a total of 21,045 samples, each containing 6 input features (humidity, ambient temperature, wind speed, visibility, cloud ceiling, and pressure) and 1 output feature, "solar energy". The input features represent various environmental conditions, while the output feature corresponds to the power generation capacity of solar cells. To ensure that the data is in a suitable form for model training, comprehensive data preprocessing is carried out across the entire dataset. This preprocessing includes tasks such as data cleaning and the normalization or standardization of features using the Sklearn library . Once the dataset is fully preprocessed, it is divided into two subsets: training data and testing data. The training data, which constitutes 70% of the entire dataset, is used for building and fine-tuning the machine learning models. The remaining 30% of the dataset is reserved for testing and validating the performance of the models. This split ensures that the models are trained on a large portion of the data while still leaving a significant amount of data for unbiased evaluation. In the model training phase, a systematic approach is taken to optimize the performance of the selected machine learning models. A parameter grid is established for each model, specifying a range of hyperparameters to be tuned to find the optimal configuration. This process is critical, as the choice of hyperparameters can significantly impact the performance of the models. To ensure that the models generalize well to unseen data, cross-validation is employed during training. Specifically, 10-fold cross-validation (CV = 10) is used, meaning that the training data is divided into 10 subsets. Each model is trained 10 times, with each iteration using a different subset as validation data while the remaining subsets are used for training. This method helps to mitigate overfitting and provides a more reliable estimate of model performance. The machine learning models under consideration are Gradient Boosting (GB), XGBoost, K-Nearest Neighbors (KNN), LightGBM (LGBM), and CatBoost (CB). After the cross-validation process is complete, the model with the best performance for each algorithm is identified based on the cross-validation results. The best parameters identified during this process are then used to retrain the model on the entire training dataset, ensuring that the model is optimized before final evaluation. Once the models have been trained and retrained with the best parameters, they are subjected to a thorough evaluation process to assess their predictive performance. This evaluation is carried out using a set of commonly used metrics that provide insights into different aspects of model performance. The coefficient of determination (R²) is used to measure how well the model's predictions match the actual values; a higher R² indicates better predictive accuracy. Additionally, the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are calculated to evaluate the models' accuracy in predicting the output. MAE provides a measure of the average magnitude of prediction errors, while RMSE gives more weight to larger errors, making it a useful metric when large deviations from actual values are of particular concern. To complement these numerical evaluations, various visualizations are generated to provide a more intuitive understanding of the models' performance. Scatter plots are used to compare predicted values against actual values, highlighting the accuracy and potential biases in the predictions. Histograms are used to visualize the distribution of errors, allowing for a deeper understanding of how the models perform across different ranges of the data. After the initial evaluation, the best-performing machine learning models undergo a further, more detailed analysis using SHAP (SHapley Additive exPlanations). SHAP is a powerful method for interpreting complex models by breaking down the prediction of each sample into contributions from each feature. This analysis helps in understanding how each input feature influences the model's predictions, providing insights into the relationships between input features and the output. The SHAP values are visualized through various plots, such as SHAP summary plots and dependence plots, which help in interpreting the model's behavior. By understanding the contribution of each feature, it becomes possible to explain the model's decisions, identify key drivers of the output, and gain confidence in the model's predictions. This interpretability is crucial, especially in applications like solar energy forecasting, where understanding the factors influencing predictions can lead to better decision-making and model trustworthiness. The dataset is divided into a training set and a validation set with a split ratio of 70%/30%. The validation set aids the algorithm in building a machine learning model based on the hyperparameter values available in the Sklearn library.
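As a hedged sketch of this tuning pipeline, the following Python code mirrors the described workflow; the grid values are placeholders rather than the study's actual search space, whose tuned results appear in Table 3.

```python
from catboost import CatBoostRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# X, y: feature matrix and solar power target from the preprocessing step
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)
param_grid = {"depth": [4, 6, 10],               # placeholder grid values
              "learning_rate": [0.001, 0.01, 0.1],
              "n_estimators": [100, 500]}
search = GridSearchCV(CatBoostRegressor(verbose=0), param_grid,
                      cv=10, scoring="r2")       # 10-fold cross-validation
search.fit(X_train, y_train)
print(search.best_params_)
print("test R2:", search.score(X_test, y_test))
```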
Table 3 summarizes the optimal hyperparameters for the five machine learning models tuned using GridSearchCV and Bayesian Optimization. For Gradient Boosting (GB) and LightGBM, both approaches converged to similar n_estimators (500) but differed in learning rates, with Bayesian Optimization favoring finer adjustments (learning_rate: 0.001). CatBoost demonstrated a notable difference in depth (depth: 10 for GridSearchCV vs. 4 for Bayesian Optimization). KNN showed slight variation in n_neighbors, favoring 21 under Bayesian Optimization. Table 4 evaluates the models' predictive performance. CatBoost consistently achieved the best R², lowest MAE, and lowest RMSE across training and testing, highlighting its robustness. Gradient Boosting and LightGBM showed improved R² and RMSE with Bayesian Optimization, confirming the benefit of fine-tuning. However, KNN maintained stable performance between both methods. Notably, XGBoost showed no significant improvement in R², indicating potential limitations in its hyperparameter exploration. Upon reviewing the performance metrics, the GridSearchCV results indicate that CatBoost achieved the highest testing R² of 0.546, outperforming Bayesian Optimization's R² of 0.538. This suggests that the hyperparameters derived from GridSearchCV were better tuned for the model on this dataset. For testing MAE and RMSE, CatBoost also displayed strong results under GridSearchCV, with an MAE of 3.583 W and an RMSE of 4.748 W, both comparable to the Bayesian Optimization results. This emphasizes CatBoost's capability to maintain high predictive accuracy across various evaluation metrics. Fig 5 illustrates the correlation between the true values and the predicted values of solar energy using the CatBoost model for (a) the training dataset and (b) the testing dataset. The red line represents the best-fit line, while the shaded regions depict the 80% Confidence Interval (CI) and the 80% Prediction Interval (PI). For the training dataset, the number of points within the 80% CI is 12,199 and the number of points within the 80% PI is 13,743. For the testing dataset, the number of points within the 80% CI is 5,166 and the number of points within the 80% PI is 5,882. The left plot (training dataset) shows a higher R² value (0.608) than the right plot (testing dataset), which has an R² value of 0.546. This indicates that the model performs slightly better on the training dataset than on the testing dataset. Both plots highlight the model's ability to predict solar energy with varying confidence and prediction intervals. The predictive performance of the CatBoost model is not fully robust, suggesting a significant influence of data quality and quantity on the predicted values; this is discussed further in the following sections. Fig 6A and 6B illustrate the comparison between predicted and actual solar panel energy values, primarily distributed within the ±10% error range, with a significant number of points lying along the y = x line. The prediction errors between the model-predicted and actual solar panel energy for the training dataset, as depicted in Fig 6 , are thus mainly distributed within a range of ±10 W. To gain a deeper understanding of the CatBoost model's ability to forecast solar energy, the SHAP method is employed to interpret the influence of the input variables on the predicted solar energy; the findings of this analysis are presented in the following section.
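A minimal sketch of this SHAP workflow follows, assuming the fitted CatBoost model and data names from the pipeline sketch above; the plot calls follow the shap package API.

```python
import shap

explainer = shap.TreeExplainer(search.best_estimator_)    # tree-model explainer
shap_values = explainer.shap_values(X_test)

shap.summary_plot(shap_values, X_test, plot_type="bar")   # mean |SHAP| (cf. Fig 7A)
shap.summary_plot(shap_values, X_test)                    # signed effects (cf. Fig 7B)
shap.dependence_plot("Ambient temperature",               # feature name assumed
                     shap_values, X_test)                 # cf. Fig 8
```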
The SHAP (SHapley Additive exPlanations) framework, introduced by Lundberg and Lee and rooted in cooperative game theory, builds on Shapley values initially designed to quantify individual contributions in cooperative games. Since then, SHAP has evolved into a powerful tool for interpreting machine learning model predictions . By integrating various existing interpretability methods, SHAP offers an intuitive, theoretically sound approach for explaining model outputs, representing a major advance in the field of model interpretation. Central to this framework are SHAP values, which provide detailed insights into the magnitude and direction (positive or negative) of feature influences on model predictions. These values are essential for understanding the relative importance of different features in shaping model outcomes. One of the key visualization tools in SHAP is the summary plot, which combines both feature importance and feature effects on predictions. Each point on the plot corresponds to a SHAP value for a particular feature and instance, with the y-axis denoting the feature and the x-axis representing the SHAP value. The color gradient of the points reflects the feature values, from low to high. To address overlapping data points, jittering is applied along the y-axis, visually representing the distribution of Shapley values for each feature. Features are typically ordered by importance, making it easier to identify the key drivers of model predictions. While the summary plot provides an initial understanding of the relationship between feature values and their predictive impact, more detailed insights can be gained from SHAP dependence plots, which are discussed further below. Fig 7 presents the SHAP global interpretation of feature importance for each input variable's effect on the predicted value of solar energy, with (a) showing the absolute mean SHAP values and (b) displaying the global SHAP values. The results in Fig 7A indicate that Ambient temperature has the most significant impact on the accuracy of the solar energy predictions made by CatBoost, followed by Humidity, Cloud ceiling, Pressure, Wind speed, and Visibility. This ranking reflects the relative importance of each feature in explaining the variability in the model's predictions. Meanwhile, the SHAP values shown in Fig 7B illustrate the specific influence of each input variable on the predicted solar energy value. Specifically, higher ambient temperatures lead to an increase in predicted solar energy production, with an impact of approximately ±6 W. Lower humidity levels are favorable for generating solar energy from photovoltaic panels, with an influence ranging from −4 W to +2 W. Similarly, higher values of Cloud ceiling, Pressure, Wind speed, and Visibility generally enhance solar energy production. However, Visibility has a relatively minor effect on the variation of the global SHAP value compared to the other factors. To provide a more detailed quantitative assessment of the influence of each input variable on the predicted solar energy value, the partial SHAP dependence values for each input variable are described in the following section. In particular, SHAP local value analysis helps analyze errors and improve the accuracy of solar energy prediction models. By examining the partial SHAP dependence plots for these features, shown in Fig 8 , we can gain a deeper understanding of how changes in their values affect the model's predictions. This analysis helps uncover the exact form of the relationship between each feature and the model output, including any non-linearities or interactions with other features. The results of the partial SHAP dependence analysis shown in Fig 8 reveal some interesting insights into the impact of the six input variables on solar energy production.
The statistical analysis indicates that the average value for the 21,045 solar energy samples is around 13 W, which corresponds to the intersection of the E(input variable) and f(x)|(input variable) lines. The partial dependence plots align with the global SHAP values depicted in Fig 7B . Specifically, the partial SHAP dependence curves for Cloud Ceiling, Pressure and Visibility show relatively small changes, hovering close to the average solar energy value, indicating that these three variables have minimal influence on solar energy production. In contrast, the impact curves for Wind Speed, Humidity, and Ambient Temperature on solar energy are non-linear, particularly for Ambient Temperature. Solar energy values increase almost linearly with Wind Speed, demonstrating a more significant and direct influence on energy production. Fig 9 illustrates the specific impact of each input variable on the predicted solar energy values using SHAP local value analysis. This analysis is applied to two actual solar energy values, 18.33 W and 12.42 W, with corresponding predicted values of 16.04 W and 12.58 W. The prediction errors depicted in Fig 6 can be partly explained by the results presented in Fig 9 . In this study, six input variables contribute to the solar energy predictions. Among these, Ambient temperature and Humidity have the most significant influence on the model's accuracy and prediction capabilities. Consequently, the prediction errors are primarily attributed to the variability in these two input variables. Additionally, solar energy output depends on the technical specifications of the photovoltaic panels, which were not included in the database used for model development ( S1 Data ). The absence of these panel-specific variables across the 21,045 samples from different locations contributes to the prediction errors observed in this study. However, using the six weather-related input variables still allows for a simplified preliminary feasibility prediction for solar energy projects. This approach is especially useful during the initial assessment phase, when specific technical details of the photovoltaic panels are not yet available. Fig 10 demonstrates the use of LIME (Local Interpretable Model-agnostic Explanations) to analyze a specific case where the true value is 30.058 W while the model predicts 7.764 W, indicating significant underprediction. The predicted value lies within a range of 1.76 W to 25.56 W, with the orange bar highlighting the prediction. The analysis identifies the key features influencing the prediction, with Ambient Temperature (≤ 21.93) having the largest negative impact (6.79), followed by Humidity (> 52.57) (2.05) and other factors such as Cloud Ceiling (≤ 140.00), Pressure (≤ 845.80), Wind Speed (≤ 6.00), and Visibility (≤ 10.00). The actual feature values (Ambient Temperature: 15.57, Humidity: 97.10, Cloud Ceiling: 42.00, Pressure: 800.30, Wind Speed: 3.00, Visibility: 10.00) help explain the model's low prediction. LIME reveals the specific reasons behind the model's poor performance in this case. Key features such as the low Ambient Temperature (15.57) and high Humidity (97.10) strongly influence the underprediction. These insights can guide further investigation into feature interactions or the model's handling of extreme cases, ultimately improving its performance.
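For completeness, a local explanation like the one in Fig 10 can be produced with the lime package; the row index and variable names below are assumptions carried over from the sketches above.

```python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(X_train.values,
                                 feature_names=list(X_train.columns),
                                 mode="regression")
exp = explainer.explain_instance(X_test.values[0],               # one sample
                                 search.best_estimator_.predict,
                                 num_features=6)
print(exp.as_list())   # signed feature contributions, as visualized in Fig 10
```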
The regression results achieved in this study reveal certain limitations in the predictive capability of the machine learning models, as indicated by the relatively modest R² values throughout the experiment. These findings suggest latent complexities in the relationship between the input variables (such as ambient temperature, humidity, and wind speed) and the target variable (solar energy output), which the models were unable to fully capture. A primary reason for the lower R² values is the inherent variability and non-linear interactions between solar energy output and its influencing factors. While the input features provide a preliminary basis for prediction, they do not comprehensively account for all the variables that govern solar energy production. Notably, the absence of photovoltaic panel-specific technical specifications, such as panel efficiency, orientation, and degradation rates, likely introduced significant noise into the predictions. These variables are critical for accurately modeling solar energy output but were unavailable in the dataset. Another contributing factor is the dataset's inherent characteristics. The data used in this study comprises 21,045 samples derived from a limited geographical and technological context, which may have introduced biases. For instance, if the dataset predominantly represents regions with specific weather patterns or solar radiation profiles, the trained models may struggle to generalize to broader or more diverse conditions. Furthermore, discrepancies in data quality or measurement precision could also have influenced the results, particularly for features like cloud ceiling and visibility, which exhibit lower importance in the SHAP analysis. Despite these challenges, the study offers valuable insights into the potential of machine learning models for solar energy forecasting. While the achieved R² values suggest limited accuracy for precise predictions, the models remain useful for initial feasibility assessments of solar farm locations. These assessments rely on readily available weather data and provide a foundation for further analysis. To enhance model performance and reliability, future research should address these limitations; doing so will ultimately improve the accuracy and applicability of predictive models, enabling more effective integration of solar energy into the grid and supporting the global transition to sustainable energy sources. This study highlights the potential and challenges of using five machine learning models for solar energy prediction, with the CatBoost model achieving the highest performance: training values of R² = 0.608, RMSE = 4.478 W and MAE = 3.367 W, and validation values of R² = 0.46, RMSE = 4.748 W and MAE = 3.583 W. The integration of weather-related input variables, such as temperature, humidity, wind speed, and visibility, provides a foundation for preliminary feasibility assessments of solar farm locations. However, the limited performance of the models, as demonstrated by the R², RMSE and MAE values, suggests that the relationship between the input variables and solar energy output is more complex than captured by the current dataset and models. The lack of photovoltaic panel-specific technical data likely contributed to prediction errors, emphasizing the need for more comprehensive datasets that include both weather conditions and system specifications.
The SHAP analysis offered valuable insights into the contribution of each feature to the predictions, with ambient temperature and humidity emerging as the most influential factors. The Partial SHAP dependence plots revealed non-linear interactions, particularly for variables like wind speed and temperature, further demonstrating the intricacies involved in accurately predicting solar energy output. While the study underscores the importance of leveraging modern machine learning techniques, it also highlights key limitations, including dataset biases related to geographical focus and technological constraints. Future research should aim to collect larger, more diverse datasets, incorporating a wider range of environmental conditions and solar technologies to enhance model performance and generalizability. Ultimately, these advancements will improve the accuracy of solar energy forecasting, supporting the effective integration of renewable energy into the power grid.
Lower extremity amputation can have different causes, such as trauma, tumor, and vascular diseases, including diabetes [ 1 – 3 ]. The specific prosthetic knee device employed in rehabilitation plays an important role in the quality of the gait of transfemoral amputees and knee disarticulation subjects. For example, microprocessor controlled knee (MPK) devices have been shown to provide stability in the absence of knee extensor muscles . The development of innovative technologies has led to the design of various MPKs . In comparison to mechanically passive knee prostheses, MPKs have demonstrated improved gait symmetry, a decreased metabolic rate, and enhanced smoothness of gait . The Rheo Knee (Össur, Reykjavik, Iceland) is a variable-damping MPK that uses magnetorheological fluid to create knee flexion resistance during the stance phase of gait . Given the multitude of treatment options and prosthetic designs that are available, applying computational models to investigate how the prosthetic gait of an individual is affected by treatment options, such as prosthesis setup, could be of value in the rehabilitation process of lower limb amputees. Recent advances in predictive simulation methods, in particular using optimal control, have shown promise in predicting new gait patterns independently of experimental data from motion capture devices or force plates. Thus, this type of musculoskeletal modeling may offer the opportunity to systematically investigate the cause-effect relationship of isolated interventions or pathologies that significantly affect gait. Studies using optimal control with direct collocation have demonstrated the computationally efficient prediction of unimpaired gait . Further studies have used optimal control to study the gait of transtibial amputees . However, a limitation of these studies was that the representation of prosthetic gait was two-dimensional and restricted to the sagittal plane. Recent studies used optimal control and complex musculoskeletal models of patients with transfemoral amputation, demonstrating the feasibility of the method . Both studies performed tracking simulations, in which the difference between the predicted gait pattern and an experimental result is minimized. Despite its potential, the use of predictive simulation in clinically relevant studies has thus far been limited . The validation and analysis of the feasibility of this method in different gait scenarios are still important steps toward clinical relevance and application, since there is a dearth of literature on the use of fully predictive simulation. In a previous study, we were able to predict drop-foot gait in a post-stroke patient using a 3D musculoskeletal model . The patient in that study also exhibited stiff-knee gait (SKG), a gait abnormality characterized by a lack of knee flexion during the swing phase of gait. SKG is a common result of upper motor neuron injuries, observed for example after stroke, cerebral palsy, or spinal cord injury . SKG is also often associated with over-activity of the rectus femoris muscle, among other mechanisms . Healthy individuals using a knee orthosis that limits knee flexion may present alterations of intra-limb coordination similar to those of stroke patients with SKG . In the current study, we chose to investigate the isolated effects of SKG, which was not possible in our previous study because the post-stroke patient investigated presented further complex gait abnormalities .
Instead, we chose to evaluate experimental data from two further individuals: one healthy subject and one knee disarticulation subject fitted with a variable-damping microprocessor knee prosthesis. The experimental data were used to calibrate the respective predictive models and served as ground-truth data to validate the predictions. Since the gait of the participants was captured under various conditions, such as unperturbed gait and gait with restricted knee flexion, the experimental data and predictive results could be compared, permitting an analysis of the predictive capabilities of the method. The general aim of this study was thus to investigate the feasibility of the method in a well-defined context, in order to assess its potential for preliminary clinical use. The specific aims were to predict prosthetic and healthy gait and the effects of induced SKG in two cases: a knee disarticulation subject fitted with an MPK, and a healthy subject walking with a knee orthosis. We hypothesized that an unimpaired predictive model, possessing the same anthropometric characteristics as the amputee, would accurately represent the pre-amputation healthy state of the patient. Thus, we created this model with the purpose of investigating the effect of different amputation parameters on the subject's predicted amputated gait, such as the use of a prosthetic foot and MPK, and altered muscle-tendon parameters.

Experimental motion capture data of a subject with unilateral knee disarticulation and of a healthy subject were collected (200 Hz) using two optical infrared systems: Vicon (MX, Vicon Motion System, Oxford, UK) and Qualisys (Göteborg, Sweden), respectively. Retroreflective markers were attached to the subjects in accordance with the Helen-Hayes marker set. Ground reaction force (GRF) was measured using two force plates. The anthropometric data collected for each subject are reported in Table 1. Three conditions for the knee disarticulation subject (E-KD) and two conditions for the healthy subject (E-HS) were obtained. The knee disarticulation subject walked at a self-selected speed using the MPK Rheo Knee and a carbon-fiber foot Vari-Flex (Össur, Reykjavik, Iceland). Three conditions were measured: using the optimal setting of the prosthesis (REF); with the prosthesis deactivated, which is a mode of operation used when the battery is discharged (DACT); and in an induced SKG condition, in which the prosthetic knee was locked to prevent flexion (SKGI). Since the patient wore a socket on the thigh, surface electromyography (EMG) data (Delsys, Natick, MA, USA) were collected only from the contralateral (CL) lower limb. Thus, 10 EMG electrodes were placed on CL muscles: gluteus maximus, gluteus medius, rectus femoris, vastus lateralis, vastus medialis, biceps femoris, semitendinosus, lateral gastrocnemius, medial gastrocnemius, and tibialis anterior. The data for E-KD were collected as part of a 5-year follow-up case study and were accessed for our study in October 2021. We had access to information that could identify the participant during and after data collection. The healthy subject walked at a self-selected gait speed under two conditions: unperturbed gait (REF); and fitted with a knee orthosis (medi GmbH, Bayreuth, Germany), which induced SKG by constraining ipsilateral (IL) knee flexion to 20° (SKGI). In both conditions, EMG data were collected bilaterally: vastus lateralis, rectus femoris, biceps femoris, medial gastrocnemius, and tibialis anterior.
Written informed consent to participate in the study was obtained, and the study was approved by the local Ethics Committee at Hannover Medical School. The recruitment period for this study was between March 2023 and May 2023. The simulation software OpenSim (Version 3.3) was used to obtain the scaled anthropometric model, joint angles, and internal joint moments of the two subjects. The 3D lower body generic model gait2392, which consists of 23 degrees of freedom (DoF) and 92 muscles, was scaled to represent the anthropometric characteristics of the subjects. The metatarsophalangeal (MTP) joint was locked. For the E-KD model, the IL side was adapted to represent the amputation. The model of the MPK was designed, and the other components were adapted from the literature, using an open-source computer-aided design (CAD) software (FreeCAD). The MPK and prosthetic foot consisted of one DoF each. For the E-HS model under the SKGI condition, the mass contribution of the knee orthosis was added to the IL femur and tibia segments.

The predictive simulation of the gait and the parameter estimation of personalized muscle-tendon properties were formulated as optimal control problems by adapting the frameworks developed by Falisse et al. The modeling procedure and choices were similar to those in previous work, and the flowchart of the predictive simulation is presented in Fig 1B. In order to solve the optimization problem, we used the direct collocation method with a third-order Radau collocation scheme to transcribe the problem into a nonlinear programming problem (NLP). CasADi, a framework that facilitates the implementation of numerical optimization, was used for the transcription of the optimal control problem into an NLP. Since CasADi allows the application of algorithmic differentiation, the computational efficiency could be increased. The resulting optimization problem was solved using the Interior Point Optimizer (IPOPT), a package that implements an interior point line search filter method to find solutions of large-scale nonlinear optimization problems. The implementation of the frameworks was performed in MATLAB.

The estimation of personalized muscle-tendon parameters was essential for the prediction of the prosthetic gait, in order to represent changes in the properties of muscles on the amputated side. The generic muscle-tendon parameters were obtained from the E-KD model after scaling in OpenSim. Since parameter estimation was performed only on the IL side, only the 27 muscles that span the IL hip joint were included in the formulation of the optimal control problem. Optimal fiber length, tendon slack length, and maximal isometric force were personalized. The objective function that was minimized was defined according to Eq 1:

$$J_{Estim} = \int_{t_i}^{t_f} \sum \left( W_{E1}\, a^2 + W_{E2}\, L_{opt} + W_{E3}\, Re^2 + W_{E4} \left( \dot{a}^2 + \dot{F}_m^2 \right) \right) dt, \quad (1)$$

where $t_i$ and $t_f$ are the initial and final times, $W_{E1-4}$ are the weight factors, $a$ is the muscle activation, $L_{opt}$ is the optimal fiber length, $Re$ is the reserve actuator, and $F_m$ is the tendon force. The number of mesh intervals was 100. The generic muscle-tendon parameters were used as the initial guesses, and the personalized parameters were limited to lie between 50% and 200% of the generic values. The predicted values are presented in S1 Fig in S1 File. The muscle redundancy problem was solved while the joint moments were reproduced.
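The study implemented these optimal control frameworks in MATLAB; as a hedged illustration only, the sketch below uses CasADi's Python interface to show in miniature how such a problem can be transcribed into an NLP and handed to IPOPT. The dynamics, weights, bounds, and boundary conditions are toy assumptions, not the study's formulation (which used a third-order Radau collocation scheme and full musculoskeletal dynamics).

```python
# Minimal sketch (not the study's code): a toy optimal control / parameter
# estimation problem posed with CasADi's Opti stack and solved with IPOPT.
# All dynamics, weights, and boundary values are illustrative assumptions.
import casadi as ca

N = 100                       # mesh intervals (the study also used 100 here)
T = 1.0                       # assumed time horizon [s]
opti = ca.Opti()

a = opti.variable(1, N + 1)   # state: a stand-in "muscle activation" trajectory
u = opti.variable(1, N)       # control: rate of change of activation
L = opti.variable()           # scalar parameter to estimate (e.g., fiber length)

dt = T / N
J = 0
for k in range(N):
    # Explicit-Euler transcription for brevity; the study used a
    # third-order Radau collocation scheme instead.
    opti.subject_to(a[k + 1] == a[k] + dt * u[k])
    J += dt * (a[k] ** 2 + 0.1 * L + 0.01 * u[k] ** 2)  # toy weighted cost

opti.subject_to(opti.bounded(0, a, 1))       # activations in [0, 1]
opti.subject_to(opti.bounded(0.5, L, 2.0))   # 50%-200% of the generic value
opti.subject_to(a[0] == a[N])                # periodicity
opti.subject_to(a[0] == 0.2)                 # assumed boundary condition
opti.set_initial(a, 0.2)                     # the initial guess matters, as in the study
opti.minimize(J)

opti.solver('ipopt')                         # interior-point NLP solver
sol = opti.solve()
print(sol.value(L))
```

The same pattern (declare states, controls, and parameters; impose dynamics, bounds, and periodicity; minimize an integral cost) scales up to formulations such as Eq 1.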
We created a predictive knee disarticulation model (P-KD) to represent E-KD by adapting the musculoskeletal model used to obtain the experimental results. Six contact spheres were included to model the foot-ground interaction, represented as a Hunt-Crossley contact. The values of the GRF sphere parameters were calibrated for the patient. The MTP joint of the CL foot was unlocked, and a passive moment, a linear rotational spring, and a damper were included. The passive moment, which represents the passive structures of the joint, was also included in all joints using an exponential function. Thus, when the joint angle exceeded a certain limit, a passive resistive moment in the opposite direction was created. We chose the passive moment parameters of the prosthetic foot to reproduce the characteristics of energy storage and return (ESR) during gait. The model of the MPK included a passive moment to avoid hyperextension, as well as activation dynamics that consisted of MPK excitation and activation. A damper was also included in the formulation of the MPK to replicate the damping effect of the device. The function in Eq 2 was used to describe the damper:

$$T_{MPK} = a_{MPK}\, D_{MPK}\, \dot{q}, \quad (2)$$

where $T_{MPK}$ is the moment created on the prosthesis, $a_{MPK}$ is the activation term, $D_{MPK}$ is the damping coefficient, and $\dot{q}$ is the angular velocity of the prosthetic knee. The only difference between the REF and DACT conditions for P-KD was the value of the damping coefficient $D_{MPK}$: in REF we used a value of 1 N·m·s·rad⁻¹, while in DACT 0.75 N·m·s·rad⁻¹ was used. These values were chosen empirically (a small numeric sketch of these moment terms is given at the end of this passage). In the SKG15 condition, the flexion angle limit of the passive moment in the MPK used in the REF condition was reduced from 137.5° to 14.9°, thus constraining MPK flexion.

The states of the P-KD were the positions and velocities of the DoF, muscle activations, muscle-tendon forces, and the activation of the prosthetic knee. The controls were the time derivatives of the states and the MPK excitation. The objective function used for P-KD is described by Eq 3:

$$J_{P\text{-}KD} = \int_{t_i}^{t_f} \sum \left( W_{P1}\, a^2 + W_{P2}\, \dot{E}^2 + W_{P3}\, \ddot{q}^2 + W_{P4} \left( \dot{a}^2 + \dot{F}_m^2 \right) + W_{P5}\, e_{MPK}^2 \right) \frac{1}{Dist}\, dt, \quad (3)$$

where $W_{P1-5}$ are the weight factors (S1 Table in S1 File), $\dot{E}$ is the metabolic energy rate, $\ddot{q}$ is the joint acceleration, $e_{MPK}$ is the prosthetic knee excitation, and $Dist$ is the distance traveled by the pelvis in the forward direction. Periodicity and gait speed were imposed, but the duration of the complete gait cycle was a variable of the system. The number of mesh intervals used in the gait predictions was 400. One trial of the E-KD in the DACT condition was used to obtain the initial guess, bounds, and scaling of the joint kinematics for all conditions of P-KD. To analyze the effect of altering the initial guess, further simulations were performed using a different trial to create a second initial guess (IG2); these are presented only in S3 Fig in S1 File. The muscle-tendon model used was the Hill-type model, and the muscle activation dynamics were described using Raasch's model. Polynomial functions of joint positions and velocities were used to define muscle-tendon lengths, velocities, and moment arms. The personalized muscle-tendon parameters obtained from the parameter estimation were used on the IL side, while the generic parameters were used on the CL side. We mirrored the CL side of P-KD in order to create a hypothetical healthy predictive model (P-HH) to represent the healthy state of the knee disarticulation subject.
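Returning to the prosthesis model: the sketch below illustrates, with toy gains and sign conventions, the two MPK moment contributions described above, namely the activation-scaled damper of Eq 2 and an exponential passive moment enforcing a flexion limit. Only the damping coefficients (1 and 0.75 N·m·s·rad⁻¹) and the angle limits (137.5° vs. 14.9°) come from the text; everything else is an assumption.

```python
# Illustrative sketch (assumed gains and sign convention) of the prosthetic
# knee moment terms: the damper of Eq (2) and an exponential passive moment.
import numpy as np

def mpk_damper_moment(a_mpk, q_dot, d_mpk=1.0):
    """Eq (2): T_MPK = a_MPK * D_MPK * q_dot.
    d_mpk = 1.0 N*m*s/rad for REF, 0.75 for DACT (values from the text)."""
    return a_mpk * d_mpk * q_dot

def passive_moment(q, q_limit_deg, k1=2.0, k2=5.0):
    """Exponential passive moment: zero below the angle limit (gated here for
    simplicity), growing rapidly once the joint angle q exceeds the limit.
    The gains k1, k2 are assumed, not from the paper."""
    q_limit = np.deg2rad(q_limit_deg)
    return -k1 * (np.exp(k2 * (q - q_limit)) - 1.0) * (q > q_limit)

# Example: lowering the flexion limit from 137.5 deg (REF) to 14.9 deg (SKG15)
# turns a freely flexing knee into one that resists flexion beyond ~15 deg.
q = np.deg2rad(30.0)                 # current knee flexion angle
print(passive_moment(q, 137.5))      # 0: within the REF limit
print(passive_moment(q, 14.9))       # large resistive moment under SKG15
```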
Since intact limbs weigh more than the prosthesis, the total mass of the patient was increased for this model (Table 1). Four conditions were simulated: REF, in which the P-HH model was symmetric; SKG15, in which the IL knee flexion angle limit of the passive moment was reduced from 137.5° to 14.9°; SKG30, in which the angle limit was 30°; and SKG35, in which the angle limit was 35°. The initial guess and bounds of the joint kinematics were based on the gait data of a healthy subject from Falisse et al. The formulations of P-HH and P-KD were essentially the same, except for the initial guess and the removal of the prosthesis and the related MPK excitation term from the objective function in Eq 3. We also created a predictive model based on the E-HS model (P-HS), which represented the anthropometric characteristics of the healthy subject. Three conditions of the P-HS were predicted: REF, SKG30, and SKG35. The differences between P-HH and P-HS were the weight factors of the objective function, the values of the GRF sphere parameters, and the scaled body model. The weight factors used in the objective functions are presented in S1 Table in S1 File. The same gait speed, which was the fastest speed observed in the experimental data, was imposed in all the predictive models and conditions (Table 1); thus, the influence of gait speed was removed from the predictions. The computational time of the predictive gait simulations, which were run on a standard laptop, and other information about the convergence of the optimization problem are presented in S2 Table in S1 File.

The differences between the conditions in the experimental results were analyzed using statistical parametric mapping (SPM). We used an open-source one-dimensional SPM package (SPM1D, www.spm1d.org) in MATLAB to compare joint angles, moments, and EMG results between the different gait conditions for E-KD and E-HS. The paired t-test was used with a critical threshold of 5% (α = 0.05). A bar below the graphs indicates a statistically significant difference based on SPM. Due to the relatively small number of trials in our data (5 and 7 trials), we were not able to perform a test for normality using SPM, which requires at least 8 trials. Pataky et al. suggested that a comparison between the parametric and non-parametric results should be performed in this case. We compared both procedures and some of the results diverged. Thus, we considered our data non-Gaussian and chose the non-parametric procedure.

Since we obtained only one result from each prediction, and not repeated trials as in the experimental data, SPM could not be used for comparisons of P-KD, P-HS, and P-HH. We therefore used dynamic time warping (DTW) to analyze the similarities between the predicted muscle activation patterns and between mean experimental and predicted gait parameters, such as sagittal joint angles and moments. DTW allows shifting during the measurement of the distance between two time series. This feature allows a better analysis of the shape of curves that are not aligned, which can occur in time series data. A DTW score for each comparison was obtained in MATLAB, whereby a value of zero would indicate that the curves are identical. The shifting was restricted to a maximum of 5%. The gait parameters were normalized by dividing the curves by the absolute maximum value in the data set; thus, DTW was performed on the normalized series (values within the range of -1 to 1).
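The DTW scores were computed in MATLAB; the sketch below is a NumPy re-implementation of the same idea under the stated settings. Normalization by the absolute maximum of the data set follows the text, while realizing the 5% shift restriction as a Sakoe-Chiba band is an assumption about how that restriction was implemented.

```python
# NumPy re-implementation (not the study's MATLAB code) of a banded DTW
# comparison between two normalized curves; a score of zero means identical.
import numpy as np

def dtw_score(x, y, band_frac=0.05):
    """Dynamic time warping distance with a Sakoe-Chiba band constraint."""
    n = len(x)
    assert len(y) == n, "series assumed time-normalized to equal length"
    band = max(1, int(round(band_frac * n)))
    D = np.full((n + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - band), min(n, i + band) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, n]

# Example: a predicted vs. a mean experimental joint-angle curve (toy data),
# normalized to [-1, 1] by the absolute maximum of the data set.
t = np.linspace(0, 1, 101)                     # % gait cycle
experimental = np.sin(2 * np.pi * t)
predicted = np.sin(2 * np.pi * (t - 0.02))     # slightly shifted curve
scale = max(np.abs(experimental).max(), np.abs(predicted).max())
print(dtw_score(experimental / scale, predicted / scale))
```

The band keeps the warping local, so shape similarity is rewarded only for curves that are misaligned by at most 5% of the gait cycle, matching the stated restriction.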
Peak experimental (E-KD) MPK flexion angle in the swing phase increased from 68.4° in the REF condition to 73.4° in the DACT condition. This increase was significant only at about 70% of the gait cycle. The same effect was observed for peak predicted (P-KD) MPK flexion in the swing phase, which similarly increased from 69.2° to 73.5°. The simulations using a different initial guess also showed increased MPK flexion during DACT in comparison to REF. No significant differences in the IL hip angle and moment were observed in E-KD between REF and DACT. In the prediction (P-KD), the change in damping likewise did not considerably affect hip angle or moment. Peak experimental (E-KD) MPK flexion was reduced to 8.1°, and peak predicted (P-KD) knee flexion to 16.3°, in the SKG conditions. This decrease in SKGI in comparison to REF observed in E-KD was significant according to SPM. A detailed SPM analysis is presented in S4 Fig in S1 File. We observed a significant increase in IL hip flexion angle and differences in hip moment in E-KD SKGI at the end of the gait cycle. In P-KD REF, we predicted patterns of IL hip angle and moment that were similar to those in E-KD REF, but the deviations caused by SKG15 did not correspond to the experimental results. In general, CL kinematics and kinetics showed good agreement between experimental (E-KD) and predicted (P-KD) values, with the exception of the hip and ankle angles at the beginning and end of the gait cycle, i.e., during the swing-to-stance transition. CL sagittal knee and ankle angles were significantly affected in the stance phase by the lack of IL MPK flexion under SKG conditions. While the experimental knee angle pattern was similar to the prediction, we observed increased hip flexion and ankle dorsiflexion angles in P-KD relative to those observed in E-KD. Experimental CL hip flexion, knee extension, and ankle plantarflexion angles, and knee flexion and ankle plantarflexion moments, were all significantly increased during SKGI. The predicted CL joint moment patterns were similar to E-KD, and the effects of SKGI were predicted in P-KD SKG15. The increase in CL knee flexion moment was also predicted using a different initial guess. Only an increase of gluteus medius EMG at the end of the gait cycle, a decrease of lateral gastrocnemius EMG at 85% of the gait cycle, and a delayed peak of medial gastrocnemius EMG at 43% of the gait cycle were significantly different between SKGI and REF of E-KD. At about 20% of the gait cycle, increased gluteus medius EMG was observed and also predicted. While an earlier offset of EMG in SKGI was observed in the rectus femoris and vastus medialis, the P-KD model predicted the same changes in activation of the vastus lateralis and medialis muscles. The model did not predict the first peak of rectus femoris activity. An increase of rectus femoris activation was predicted during SKG15 in the middle of the gait cycle, but this was not observed in the EMG. From about 10% to 30% of the gait cycle, SKGI caused an increase in EMG of the knee flexor muscles (biceps femoris, semitendinosus, and lateral and medial gastrocnemius). While not statistically significant, this increase was predicted (P-KD) in the same muscles from about 15% to 40% of the gait cycle. We observed a slightly delayed peak of lateral and medial gastrocnemius EMG activity during SKGI, which was also predicted for muscle activation. The use of the knee orthosis caused significant differences on the IL side in E-HS during the entire gait cycle.
In the stance phase, we observed an increased knee flexion angle and extension moment in the experimental results of SKGI. During the swing phase, the peak of knee flexion was decreased, as expected. P-HS REF predicted less IL knee range of motion than E-HS REF. However, hip and ankle moments exhibited higher similarity than the other gait parameters on both sides when comparing E-HS and P-HS in REF. Both SKG conditions predicted an increased knee flexion angle in the stance phase. Even though knee flexion was more constrained in SKG30 than in SKG35, SKG30 predicted more knee flexion in the stance phase than SKG35 in P-HS. In the swing phase, however, the decrease of peak knee flexion was consistent with the limitations imposed on the prediction. Both SKG conditions predicted an increased knee extension moment, which was more evident in SKG30 than in SKG35 because of the larger passive moment. While we observed a statistically significant delay of the peak of the IL hip flexion moment in E-HS SKGI, P-HS predicted an earlier peak in both SKG30 and SKG35. No significant difference was observed in the experimental ankle moment in the stance phase, while an earlier peak of ankle plantarflexion moment was predicted in SKG30 and SKG35. On the CL side, significant differences in E-HS were observed mostly at about 30% of the gait cycle, and these differences were predicted. Statistically significant differences between REF and SKGI were observed in the IL EMG of all muscles. The EMG patterns of the vastus lateralis and rectus femoris, which are knee extensors, were similar in E-HS. A significant increase in peak activity at the beginning of the gait cycle, increases between 25% and 80% of the gait cycle, and a decrease at the end were observed in the EMG of these muscles during SKGI. Once again, the peak activity of the knee extensors of E-HS at the beginning of the gait cycle was predicted only in the vastus lateralis of P-HS. The increase in the peak of vastus lateralis muscle activity was predicted only in SKG30, while the other aforementioned effects were predicted in both SKG30 and SKG35. A statistically significant increase in biceps femoris activity at 75% of the gait cycle was observed in the EMG of E-HS SKGI and was predicted in P-HS SKG30 and SKG35. The increase in medial gastrocnemius activity at 63% of the gait cycle was not predicted. However, the earlier onset of medial gastrocnemius activity, which was not statistically significant, and the increase in tibialis anterior EMG of E-HS during SKGI were predicted under the SKG conditions. On the CL side, significant differences between REF and SKGI were observed in the vastus lateralis, rectus femoris, biceps femoris, and tibialis anterior EMG of E-HS, and P-HS predicted more deviations in rectus femoris activation in SKG30 and SKG35 compared with REF. In general, the gait pattern of the hypothetical healthy predictive model (P-HH) was similar to the unimpaired gait in the REF condition. The effects of SKG30 and SKG35 on the IL knee angle of P-HH were similar to those of P-HS. In SKG15, where knee flexion was constrained the most, we observed an intermediate peak of knee flexion angle and extension moment during the stance phase relative to SKG30 and SKG35. This is related to the implementation of the passive moment. We observed the lowest peak of knee flexion angle during the swing phase in SKG15. The SKG conditions affected the IL hip and ankle moments in comparison to REF to a lesser extent than observed in the knee joint, and they caused different patterns of IL muscle activation.
The peaks and earlier onsets of medial gastrocnemius and tibialis anterior activation observed in SKG15, SKG30, and SKG35 were greater than those observed for REF. The CL gait pattern of P-HH shows that we could predict a symmetric gait in REF and that the SKG conditions affected the CL side differently, resulting in asymmetric gait. Since P-KD (amputee) and P-HH (hypothetical healthy) represent the same patient, comparing the similarities of predicted muscle activation between these models facilitated the investigation of which muscles were more affected by amputation. Comparing the muscle activation of the IL side in the REF condition of P-KD with P-HH, we observed larger differences in the gluteus medius, rectus femoris, and iliopsoas muscles than in the remaining muscles. Here, all the muscles with increased differences in activation act as hip flexors. Furthermore, comparing the CL with the IL side, we observed a clear increase in the DTW score on the CL side in several muscles: semimembranosus, biceps femoris long head, tensor fasciae latae, and piriformis, all of which act as hip abductors or adductors. In order to analyze the symmetry of predicted muscle activation, we compared the IL side to the CL side of the same model and condition. P-KD predicted asymmetrical muscle patterns in both the REF and SKG15 conditions. SKG15 increased asymmetry compared with REF, except in the iliopsoas and rectus femoris muscles. Both P-HS and P-HH predicted nearly symmetrical muscle patterns in REF. However, the introduction of various limitations on knee flexion (SKG conditions) increased asymmetry in different muscles between models. We observed that the biceps femoris short head, quadratus femoris, and tibialis anterior muscles were affected the most by SKG in both the P-HS and P-HH models. The knee extensor muscles were clearly asymmetrical in P-HS SKG30 and SKG35 and in P-HH SKG30, while the gluteus medius, iliopsoas, vasti, and triceps surae muscles exhibited higher asymmetry in the most constrained condition of each model, i.e., P-HS SKG30 and P-HH SKG15. The objectives of this study were to predict prosthetic and healthy gait under different conditions and to validate the predicted results against the experimental data. We have shown that, in the context of knee disarticulation, predictive simulation allowed the generation of new motions independent of experimental data. The predictions presented characteristics similar to the experimental data. For instance, we observed in the experimental results a subtle increase of MPK flexion when the patient walked with the prosthetic knee deactivated. We could predict a similar result when only the damping was altered in the predictive model, which indicates that we could model the dynamic response of the device. This complex prosthetic modeling, along with the muscle-tendon parameter estimation and the application of a fully predictive simulation, can be considered an improvement over the literature, since we did not track the experimental results. Moreover, we also observed an increased peak of the IL hip flexion moment in the prosthetic gait in comparison to the CL side and to the unimpaired gait. We observed this effect in our predictive models and also in the hypothetical unimpaired model. We could also predict, to a certain degree, the effects of SKGI on CL joint moments, which were similar between E-KD and E-HS.
This suggests that the cause-effect relationships of certain changes to the musculoskeletal system, such as amputation and stiff-knee gait, can be predicted using this method, even though the prediction of the ankle angle differed from the experimental data. Knee disarticulation has been performed since 1581, but it is less common than transfemoral amputation, even though the two might have similar functional outcomes. Due to the limited literature on knee disarticulation, we decided to also compare our results to studies performed in transfemoral amputees. We do not intend to draw general conclusions from this, but it is important to analyze whether our results are in accordance with the literature. The gait of transfemoral amputees may be affected by muscle strength deficits. An MRI study has shown that the muscles of the stump of transfemoral amputees atrophied differently, with the gluteus maximus and quadriceps femoris exhibiting greater changes in comparison to the intact side, the adductor muscles suffering less atrophy, and the adductor longus not being atrophied. Since these effects are not homogeneous, it is important that the predictive model can account for differences in muscle properties in order to better represent the patient. To this end, we personalized the muscle-tendon properties of the IL side of the knee disarticulation subject by performing a parameter estimation procedure. In our personalized model, the maximal isometric force of the adductor brevis, adductor magnus, and gluteus maximus muscles was reduced. However, these results should be interpreted carefully, as optimal fiber length and tendon slack length, two other parameters describing the muscles, were also altered. In general, these parameters are difficult to measure, and they might affect the estimation of muscle forces. SKGI was modeled using large knee passive moments. In the SKG15 and SKG30 conditions of P-HH and P-HS, respectively, we observed increased IL knee flexion angle and knee passive extension moment in the stance phase, but decreased knee extensor muscle activation. However, in P-KD, which has no muscles spanning the knee joint on the IL side, this effect was not observed. Previous work has shown that during predictive simulation the passive moment may be exploited to reduce muscle effort, which is also the case in the current study. Further investigation is needed to analyze whether this approach accurately represents the effects of using a knee orthosis. A peak in the measured rectus femoris EMG was observed for both E-KD and E-HS during the swing-to-stance transition. However, this peak was not predicted. A study that measured surface and fine-wire EMG of the rectus femoris reported that surface EMG is subject to cross-talk from the vastus intermedius, which may partly explain the differences in muscle activity we observed between the experimental EMG and predicted muscle activation. With this exception, we found overall good agreement between predicted activation and measured EMG. A study that used SPM vector-field analysis could identify differences that are not observed by means of independent scalar analyses of EMG data. This could explain why some effects of SKGI on EMG were not statistically significant according to SPM, but were nevertheless predicted. Since our work focused on the validation of the predicted results, we decided to analyze and compare individual muscles rather than groups of muscles.
Our predictive results showed that amputation most clearly affected the hip flexor muscles on the IL side, which is consistent with a study that observed high activation of hip flexor muscles in persons with bilateral transfemoral or through-knee amputations. Differences in activation of the gluteus medius and iliopsoas were observed on both sides, but more so for the gluteus medius on the IL side. This is in agreement with the literature, where this muscle also exhibited differences in EMG between controls and transfemoral amputees. We could predict nearly symmetric muscle patterns in P-HH and P-HS REF, as demonstrated by the very similar muscle activation of the IL and CL sides in the respective models. The asymmetry observed in the predictions was not intuitive, since SKG15, SKG30, and SKG35 affected the muscles differently in P-KD, P-HH, and P-HS. For example, in all results we observed a subtle increase in biceps femoris activation and EMG at 20% of the gait cycle on the CL side. However, for the medial gastrocnemius, a similar increase was observed only in E-KD, P-KD, and P-HH, even though both muscles are knee flexors. This suggests that the models could predict some responses observed in the experimental data, even though P-KD SKG15 differed more from the experimental results than the other conditions, especially for the IL hip angle. A possible reason for this deviation is that the implementation of SKG15 in P-KD did not accurately represent the locked MPK in E-KD SKGI. Furthermore, the simulations did not converge when the MPK flexion angle was limited to less than 14.9°, which was the case during E-KD SKGI. A limitation of our work is that, for technical reasons, we were not able to measure EMG and muscle-tendon properties of the stump, and so we could not validate the muscle activation of the amputated side in our predictions. Accessing the muscles on the side of the prosthesis is difficult without altering the socket to accommodate EMG electrodes, which was not an option in our case. Another limitation is that we present the results of only two subjects. Therefore, our study is merely a proof of concept, and our findings cannot be generalized to different types of amputation or to other individuals. The gait pattern predicted using optimal control represents one possible solution of the optimization problem, so we cannot ensure that the global minimum was found. To address this limitation of optimal control, particularly for the knee disarticulation patient, we tested different initial guesses. Similarly to Miller et al., some of these simulations did not converge. This made the process challenging, because the gait simulations took between 1 and 13 hours to converge (S2 Table in S1 File), and if the prediction of one gait condition did not converge, the other results were not used. Changing the initial guess yielded different costs of the objective function (S2 Table in S1 File) and slight differences in the gait patterns, which did not affect our analysis, since the main characteristics of the prosthetic gait and the effects of the gait conditions were the same as presented. However, in the current study we cannot draw conclusions about the correlation between these differences caused by altering the initial guess and the variations observed when a participant walks repeatedly under the same condition.
Even though one trial of the experimental results was used as the initial guess, the prediction may still be considered independent of the experimental data, since we did not minimize the differences between experimental and predicted results. Furthermore, the same initial guess was used for all gait conditions, and the alterations of the system parameters yielded different predicted results. In conclusion, we predicted the prosthetic gait of a knee disarticulation subject, including the effects of different prosthetic knee settings. Moreover, we created a hypothetical model of the patient that predicted an optimal gait pattern of the amputee without impairment, which was similar to that of a healthy subject. Thus, we were able to investigate which muscles in the predictions were affected by amputation and also by the imposed restrictions of knee flexion in the respective models. In this work, we showed that predictive simulation using optimal control can be a feasible approach to predict changes in gait due to musculoskeletal deficits such as lower limb amputation or the imposition of SKG. Future work could focus on applying these methods to a greater number of subjects, improving prosthesis modeling, and investigating the effects of SKG in other clinical contexts.
|
Study
|
biomedical
|
en
| 0.999996 |
PMC11695020
|
Due to its good performance characteristics, high modulus asphalt can be used in heavy-load traffic sections and on roads with severe rut deformation. Moreover, using high modulus asphalt concrete as a pavement base is an important technical requirement for the structural design of long-life pavement. Road experts have carried out extensive research on and application of high modulus asphalt, and exploration from a micro perspective has also been carried out [1–6]. Current research on the micro scale of high modulus asphalt is mostly limited to qualitative analysis of its modification mechanism through image-based experiments such as infrared spectroscopy and scanning electron microscopy, and it is difficult to conduct in-depth analysis of the intrinsic mechanism and performance of high modulus asphalt materials. In recent years, molecular dynamics simulation has gradually been adopted in the field of material performance prediction. Molecular dynamics simulation can be used to analyze the macro performance of polymer modified asphalt under limited experimental conditions through iterative calculations of the constituent elements, molecular structure, and intermolecular interaction forces of materials.

In the study of the molecular dynamics of high modulus asphalt, Ding et al. analyzed the effect of SBS on the aggregation behavior of asphalt molecules using radial distribution functions. The results show that the influence of SBS on the performance of asphalt depends on the length of the alkane side chain of asphaltene. Sun et al. calculated the density and glass transition temperature of SBS modified asphalt through molecular dynamics simulation, and the self-healing properties and healing mechanism of SBS modified asphalt were studied using the simulated diffusion coefficient and activation energy of asphalt. Feng et al. studied the molecular aggregation state of SBS and asphalt at the molecular level by molecular dynamics simulation.

At present, quantitative analysis of high modulus asphalt performance using molecular dynamics simulation methods is limited, and the relationship between the micro technical indexes and the macro road performance of high modulus asphalt has seldom been investigated. Therefore, it is difficult to predict the performance of high modulus modified asphalt under some extreme test conditions. If the relationship between the micro technical indexes and the macro road performance of high modulus asphalt can be established, and the performance of high modulus asphalt can be measured from the perspective of molecular dynamics, then the performance of high modulus asphalt can be evaluated under limited experimental conditions. For this purpose, the study was conducted as follows. First, two kinds of high modulus modified asphalt, LLDPE/SBS composite modified asphalt and rubber/PPA composite modified asphalt, were prepared according to the technical requirements. The modulus simulation results and the high temperature rheological property test results of LLDPE/SBS composite modified asphalt were obtained by the micro molecular dynamics simulation method and by actual measurement, respectively. Then, the correlation between the simulation results and the test results was established and analyzed, appropriate parameters for the molecular dynamics simulation were obtained, and an estimation formula between the results of molecular dynamics simulation and the high temperature rheological properties was established.
Finally, the simulation results and the rheological performance test results of rubber/PPA composite modified asphalt were selected to evaluate and verify the rationality of the estimation formula. Establishing the relationship between microscopic indicators and macroscopic road performance is of great significance for predicting the macroscopic performance of high modulus modified asphalt in the future.

The SK70 base asphalt used in this study was obtained from Xiamen City, Fujian Province, China. The technical indexes of the base asphalt are listed in Table 1. The PE modifier used in this study was LLDPE-DFD7042, produced by United Petroleum and Chemical Co., Ltd (Fujian Province, China), which has a good effect on improving the high temperature performance of asphalt. The technical indexes are shown in Table 2. The SBS modifier used in this study was Li Changrong 3411 (Star SBS), obtained from Guangzhou City, Guangdong Province, China. The technical indexes are shown in Table 3. The rubber used in this study came from high-grade tire rubber particles with more than 90% natural rubber content, produced by Qiangcan Rubber and Plastic Products Co., Ltd (Shanghai City, China). The technical indexes are shown in Table 4. The PPA modifier used in this study was No. 80104518, produced by Sinopharm Chemical Reagent Co., Ltd (Shanghai City, China). The technical indexes are shown in Table 5.

According to the previous research results of the research group, the preparation processes of LLDPE/SBS composite modified asphalt and rubber/PPA composite modified asphalt were determined as follows. 22.5 g LLDPE and 22.5 g SBS were added to 500 g of heated SK70 base asphalt (175 ± 5°C) and swelled for 30 min. Then the mix was sheared at 180°C for 50 min in a high-shear mixer at 4000 rpm and developed for 1 hour in an oven at 160°C to ensure the performance of the final product. A different method was used to prepare the rubber/PPA modified asphalt specimen. Firstly, 100 g rubber powder was added to 500 g of heated SK70 base asphalt (175 ± 5°C) and sheared for 40 min at 170°C in a high-shear mixer at 5000 rpm. Next, 10 g PPA was added to the blend and sheared for 20 min under the same shear conditions. Finally, the blend was developed for 30 min in an oven at 160°C to ensure the performance of the final product.

To explore the relationship between the micro technical indexes and the macro road performance of high modulus asphalt, rheological property tests and molecular dynamics simulations of the high modulus asphalts were performed. Rheological property testing was performed using a dynamic shear rheometer (Physica MCR101, manufactured by Anton Paar, Austria). Molecular dynamics simulations of asphalt were conducted using Materials Studio (MS) on the high-performance computing platform of Chang'an University. According to the research group's earlier test results on the rheological properties of high modulus asphalt, the shear modulus G*, the rutting factor G*/sinδ, and the unrecoverable creep compliance (Jnr), which showed good regularity, were selected as the rheological property indexes for the correlation analysis in this study. The performance indexes are shown in Table 6. The complex shear modulus G* and phase angle δ are the indexes used to characterize high temperature performance in the SHRP system. Although G*/sinδ appears in the AASHTO MP1a-04 specification, many studies have not considered it a good parameter for reflecting rutting resistance, because it does not take into account the recovery capacity of modified asphalt.
As G*/sinδ cannot effectively evaluate the high temperature performance of modified asphalt, the multiple stress creep recovery (MSCR) test was adopted in NCHRP 9-10, proposed by the American National Cooperative Highway Research Program. The MSCR test can judge the resistance of asphalt to permanent deformation and is usually used to determine the elastic response of asphalt binder under shear creep and recovery at two stress levels. The unrecoverable creep compliance (Jnr) is used as the main performance indicator, and the creep behavior can be used to investigate the rutting susceptibility of asphalt concrete [13–15]. The high modulus modified asphalt was subjected to Dynamic Shear Rheometer (DSR) tests at 60°C and 76°C.

The component content and element composition of the asphalt were tested according to the relevant specifications. The separation of the four components was carried out using the solvent filtration method and the chromatographic column separation method according to the specifications, as shown in Fig 1. The four separated components are shown in Fig 2. Then, representative molecules of each component were selected to establish the asphalt molecular model based on existing research results [16–20], as shown in Figs 3–6. Finally, the relative mass fractions of the main elements in the model were compared with the measured values of the elemental analysis test to verify the rationality of the four-component asphalt model, and, based on the preliminary calculation results of the research group, the asphalt molecular model was assembled using the Amorphous Cell Calculate interface in MS.

Firstly, the selected molecular structures of the four components of asphalt were randomly placed into a unit cell with a size of 39.7 Å × 39.7 Å × 39.7 Å. Then, the asphalt molecules were assembled in the "Amorphous Cell Calculate" interface, with the task option set to "Construction" in the interface parameter settings. Considering the computing power and accuracy of the server, "Quality" was set to "Fine". In order to avoid large deviations in the simulation results caused by the entanglement of molecular chains in the unit cell during the simulation process, the initial density of the simulation system is generally set to a smaller value; in this paper, the density was set to 0.8 g/cm³. The electrostatic force and the van der Waals interaction force were set to Ewald and Atom Based, respectively. The COMPASS force field was used for the calculation, and the model was established after the parameters were set.

Firstly, combined with the results of the asphalt molecular model, the molecular models of high modulus asphalt with different amounts of modifiers were assembled via the Amorphous Cell Calculate interface in MS. Considering the computing power and precision of the server, the simulation quality was set to Fine. In order to simulate the real state of molecular movement in the material as realistically as possible, the Andersen and Berendsen methods were adopted for temperature and pressure control under periodic boundary conditions and the COMPASS force field, and the electrostatic and van der Waals forces were set to Ewald and Atom Based, respectively. After the parameters were set, the molecular model of the composite modified asphalt could be established.
Then, the simulation system was subjected to 2000 steps of Geometry optimization using the comprehensive method, and a 100 ps molecular dynamics simulation of the molecular system after the Geometry optimization was carried out under the NVT ensemble. In this way, the model system in a state of energy stability was obtained. Finally, the physical moduli of the high modulus asphalt molecular model in its stable configuration were simulated at 333.15 K (60°C) and 349.15 K (76°C), respectively. The Constant Strain method in the Forcite module was used to solve the physical moduli of the modified asphalt blend. After the dynamic superposition operation, the stiffness matrix and the flexibility matrix of the modified asphalt at 60°C and 76°C were obtained, as shown in Eq 1:

$$[c] = \begin{bmatrix} \lambda+2u & \lambda & \lambda & 0 & 0 & 0 \\ \lambda & \lambda+2u & \lambda & 0 & 0 & 0 \\ \lambda & \lambda & \lambda+2u & 0 & 0 & 0 \\ 0 & 0 & 0 & u & 0 & 0 \\ 0 & 0 & 0 & 0 & u & 0 \\ 0 & 0 & 0 & 0 & 0 & u \end{bmatrix} \quad (1)$$

where λ and u are the Lamé constants and $c_{ij}$ are the stiffness constants. Then the Young's modulus (E), bulk modulus (K), and shear modulus (G) of the high modulus asphalt were calculated by Formulas (2)–(4) from the stiffness matrix and the flexibility matrix. The calculation formulas are as follows:

$$E = \frac{u(3\lambda + 2u)}{\lambda + u} \quad (2)$$

$$K = \frac{f_1(c_{ij}) + f_2(s_{ij})}{2} \quad (3)$$

$$G = \frac{f_3(c_{ij}) + f_4(s_{ij})}{2} \quad (4)$$

where $s_{ij}$ are the flexibility (compliance) constants and $f_1$–$f_4$ denote the corresponding estimates computed from the stiffness and flexibility matrices. (A numeric illustration of these formulas is sketched below, at the end of this passage.)

According to the test results of the rheological properties and the results of the molecular dynamics simulation of LLDPE/SBS composite modified asphalt, the correlation between the macro performance indexes and the micro indexes was analyzed. Then, through correlation analysis and regression calculation, appropriate parameters for the molecular dynamics simulation were obtained, and the estimation formula between the results of the molecular dynamics simulation and the high temperature rheological properties was established. Finally, the estimation formula was verified by combining the high temperature rheological test results and the molecular dynamics simulation results of rubber/PPA composite modified asphalt.

Six kinds of LLDPE/SBS composite modified asphalt with total modifier contents of 6%, 7%, 8%, 9%, 10%, and 11% (LLDPE:SBS ratio of 1:1) were selected for rheological performance testing. The test results are shown in Table 7. It can be seen from Table 7 that the rutting factor and shear modulus of the composite modified asphalt increase with the total content of LLDPE and star SBS, and the unrecoverable creep compliance decreases, which indicates that the high temperature performance of the composite modified asphalt improves as the modifier content increases. When the total content of the modifier reaches 9%, the high temperature performance of LLDPE/SBS composite modified asphalt meets the requirements for high modulus asphalt. The results of the four-component separation test of SK70 asphalt are shown in Table 8. According to the test results for the asphalt components in Table 8 and the test results for the asphalt chemical components obtained at an earlier stage of this work, the molecular structure of each component of the asphalt was selected, combining the existing research results and trial calculations of the number of atoms allowed in the simulation system, as shown in Table 9. The content of each component of the asphalt was calculated according to the molecular composition of the asphalt molecular model in Table 9 and compared with the measured values; the comparison results are shown in Table 10.
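As promised above, the following sketch evaluates Formulas (2)–(4) for an isotropic stiffness matrix of the form in Eq (1). Reading f₁–f₄ as the Voigt (stiffness-based) and Reuss (compliance-based) estimates whose mean gives the Hill average is an assumption consistent with the halving in Formulas (3)–(4); the Lamé constants are arbitrary example values, not results from the paper.

```python
# Illustrative calculation (assumed inputs) of E, K, and G from an isotropic
# stiffness matrix of the form in Eq (1), using Voigt/Reuss estimates whose
# mean gives the Hill average (an assumed reading of f1-f4 in Eqs (3)-(4)).
import numpy as np

lam, mu = 2000.0, 1500.0              # assumed Lame constants (MPa)

# Stiffness matrix [c] with the structure of Eq (1)
C = np.full((3, 3), lam)
C[np.diag_indices(3)] = lam + 2 * mu
C = np.block([[C, np.zeros((3, 3))],
              [np.zeros((3, 3)), mu * np.eye(3)]])
S = np.linalg.inv(C)                  # compliance (flexibility) matrix [s]

E = mu * (3 * lam + 2 * mu) / (lam + mu)                        # Eq (2)

# Voigt (from c_ij) and Reuss (from s_ij) estimates, averaged per Eqs (3)-(4)
K_voigt = C[:3, :3].sum() / 9
G_voigt = (C[0, 0] + C[1, 1] + C[2, 2]
           - C[0, 1] - C[1, 2] - C[0, 2]
           + 3 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15
K_reuss = 1 / S[:3, :3].sum()
G_reuss = 15 / (4 * (S[0, 0] + S[1, 1] + S[2, 2])
                - 4 * (S[0, 1] + S[1, 2] + S[0, 2])
                + 3 * (S[3, 3] + S[4, 4] + S[5, 5]))
K = (K_voigt + K_reuss) / 2                                     # Eq (3)
G = (G_voigt + G_reuss) / 2                                     # Eq (4)
print(f"E = {E:.1f} MPa, K = {K:.1f} MPa, G = {G:.1f} MPa")
```

For a perfectly isotropic matrix the Voigt and Reuss estimates coincide (here K = λ + 2u/3 and G = u), which provides a quick consistency check on the implementation.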
Table 10 shows that, compared with the measured values, the four-component content of the asphalt model system obtained through trial calculation is slightly higher, but the proportion of wax content in the asphalt meets the specification requirements, so the impact of this error on the asphalt performance can be ignored. In order to verify the rationality of the asphalt model structure, the relative mass fraction of each element in the asphalt model system was calculated and compared with the measured relative mass fractions from the element analysis test. The comparison results are shown in Table 11. It can be seen from Table 11 that the relative error between the relative mass fraction of each element in the four-component model established by the assembly method and the measured results of the element analysis test is less than 10%, and the effect of this error range on the molecular dynamics simulation of asphalt can be ignored. According to the composition of the asphalt in Table 9, the asphalt molecular model was assembled through the Amorphous Cell Calculate interface in MS, as shown in Fig 7. Combined with the four-component asphalt molecular model, and taking the modified asphalt with the minimum content in Table 7 (a total content of 6%) as an example, the blending system model of LLDPE/SBS composite modified asphalt is shown in Fig 8.

The initially constructed blend system model is in a high-energy state, which can easily cause discreteness and distortion of the molecular dynamics simulation results. The Geometry optimization of the high-energy, unsteady simulation system was carried out by the comprehensive method, and the result is shown in Fig 9. Fig 9 shows that, with increasing optimization steps, the total potential energy of the simulation system gradually decreases and levels off during the Geometry optimization process. In order to further improve the simulation accuracy and make the simulation system approach the actual structure of the material as closely as possible, a molecular dynamics simulation of the molecular model after the Geometry optimization was carried out. The molecular dynamics simulation result is shown in Fig 10.

According to the judgment standard for the stable state of a molecular dynamics simulation system, when the energy-time curve tends to be stable and the fluctuations present regular changes, it can be determined that the system has reached an energy-stable state, that is, it has reached the termination condition of the dynamics calculation required before the simulation analysis of molecular dynamics performance. The total energy of the system is the sum of its total potential energy and total kinetic energy. From Fig 10, it can be seen that the total potential energy and the total kinetic energy of the system follow essentially the same trend as the total energy over the course of the molecular dynamics simulation. After 10 ps of molecular dynamics calculation, the total kinetic energy, total potential energy, total energy, and non-bonding energy of the system tend to be stable. That is to say, the simulation system has reached an energy-stable state, and the structural system is in a stable state after 10 ps of molecular dynamics simulation, so it can be further used for molecular dynamics simulation calculation and analysis.
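The stability judgment described above can be cast as a simple numerical check. The sketch below is one possible heuristic, with assumed window length and tolerances (not from the paper), that flags an energy trace as equilibrated once its drift and fluctuation over a trailing window are small relative to the mean energy.

```python
# Sketch of an equilibration check (assumed thresholds): the trace is taken
# as stable once the trailing-window drift and fluctuation are small
# relative to the mean energy, mirroring the "flat energy-time curve" rule.
import numpy as np

def is_equilibrated(energy, window=200, drift_tol=0.001, fluct_tol=0.005):
    """energy: 1D array of total energy per output frame."""
    tail = np.asarray(energy)[-window:]
    mean = np.abs(tail.mean())
    slope = np.polyfit(np.arange(len(tail)), tail, 1)[0]   # per-frame drift
    drift = abs(slope) * len(tail) / mean                  # relative drift over window
    fluct = tail.std() / mean                              # relative fluctuation
    return drift < drift_tol and fluct < fluct_tol

# Example with a synthetic decaying-then-stable energy trace
rng = np.random.default_rng(0)
t = np.arange(1000)
trace = -5000 + 300 * np.exp(-t / 50) + rng.normal(0, 2, t.size)
print(is_equilibrated(trace))
```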
The physical moduli of the stabilized LLDPE/SBS composite modified asphalt system after molecular dynamics simulation were computed with the mechanical properties command of the Forcite module in MS, with simulations carried out at 60°C and 76°C, respectively. The results of the simulation are shown in Tables 12–15. In the same way, the model establishment and molecular dynamics calculations of LLDPE/SBS composite modified asphalt with different modifier contents were carried out, and the stiffness matrix and flexibility matrix of the high modulus composite modified asphalt with different modifier contents at 60°C and 76°C were obtained. Then, the physical modulus molecular dynamics simulation results of LLDPE/SBS composite modified asphalt at the two temperatures were calculated by Formulas (2)–(4), as shown in Table 16.

From the molecular dynamics simulation results in Table 16, it can be seen that the simulated moduli differ considerably from the data measured in the laboratory; however, with the increase of the total modifier content, the molecular dynamics simulation results for the shear modulus, bulk modulus, and Young's modulus of LLDPE/SBS composite modified asphalt increase, which is consistent with the trend of the test results in Table 7. This shows that, for the physical moduli, there is a certain correlation between the simulation results and the laboratory-measured data for LLDPE/SBS composite modified asphalt.

According to the rheological performance test results and the physical modulus molecular dynamics simulation results of the LLDPE/SBS composite modified asphalt in Tables 7 and 16, a correlation analysis between the macro and micro indexes was performed. The linear fitting results between the modulus simulation results and the rheological performance test results of the LLDPE/SBS composite modified asphalt at different modifier contents and temperatures are shown in Figs 11 and 12. Figs 11 and 12 show that the correlation between the micro and macro indexes of LLDPE/SBS composite modified asphalt is relatively high. The statistical results for the correlation coefficients between micro and macro indexes are shown in Table 17; a linear fitting correlation coefficient above 0.9 indicates a good correlation between the two indexes. Table 17 shows that, among the three moduli, only the shear modulus has a correlation coefficient above 0.9 between the molecular dynamics simulation results and the rheological property test results, which not only shows that the modified asphalt can be effectively simulated and evaluated by molecular dynamics simulation, but also shows that the shear modulus is a high temperature index with a good correlation between molecular dynamics simulation results and rheological test results. Thus, the estimation formulas between the molecular dynamics simulation indexes and the high temperature rheological performance indexes were established according to the linear fitting results of the micro and macro indexes of high modulus asphalt; the estimation formulas are shown in Formulas (5)–(8):

$$y_{G^*,0.1\%}(60\,^{\circ}\mathrm{C}) = 920.12\, x_G - 619648 \quad (5)$$

$$y_{G^*,12\%}(60\,^{\circ}\mathrm{C}) = 810.75\, x_G - 541895 \quad (6)$$

$$y_{G^*/\sin\delta}(76\,^{\circ}\mathrm{C}) = 97.77\, x_G - 56563 \quad (7)$$

$$y_{J_{nr},3.2}(76\,^{\circ}\mathrm{C}) = -33.36\, x_G + 25560 \quad (8)$$

where $x_G$ is the shear modulus obtained from the molecular dynamics simulation, in MPa, and $y$ is the corresponding rheological property test result, in Pa.
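The linear fits underlying Formulas (5)–(8) amount to ordinary least-squares regressions of a measured rheological index on the simulated shear modulus. The sketch below reproduces this procedure with illustrative numbers (not the study's data), reporting the slope, intercept, and coefficient of determination.

```python
# Sketch (toy numbers, not the study's data) of the micro-macro linear
# fitting behind Eqs (5)-(8): regress a measured rheological index y on the
# simulated shear modulus x_G and report the fit quality.
import numpy as np

x_G = np.array([95.0, 102.0, 110.0, 118.0, 125.0, 133.0])   # simulated G (MPa), assumed
y = np.array([680.0, 740.0, 815.0, 890.0, 950.0, 1020.0])   # measured index (Pa), assumed

slope, intercept = np.polyfit(x_G, y, 1)
y_hat = slope * x_G + intercept
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                                    # coefficient of determination
print(f"y = {slope:.2f} * x_G + {intercept:.2f}, R^2 = {r2:.3f}")
```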
To further verify the accuracy of the estimation formulas, another modified asphalt, the rubber/PPA high modulus asphalt, was subjected to molecular dynamics simulation of the shear modulus and to rheological property testing. Here, the rubber asphalt (20% rubber) was modified with PPA at contents of 1%, 1.5%, 2%, 2.5%, and 3%. Molecular models of rubber/PPA high modulus asphalt with the different modifier contents were established, and the shear modulus of each molecular model was simulated by the Constant Strain method in the Forcite module. The simulation results are shown in Table 18. It can be seen from Table 18 that, with increasing PPA content, the molecular dynamics simulation results for the shear modulus of rubber/PPA high modulus asphalt gradually increase, while with increasing simulation temperature the shear modulus shows the opposite trend; both trends agree with the laboratory test results. Using the estimation formulas (5)–(8) together with the molecular dynamics simulation results for the shear modulus in Table 18, the estimated values of the rheological properties of rubber/PPA composite modified asphalt with different PPA contents are shown in Table 19. In order to verify the accuracy of the estimation formulas relating the micro and macro indexes, the rheological properties were tested, and the test results are shown in Table 20. The estimated values of the rheological properties in Table 19 were compared with the measured values in Table 20, and the relative errors between the estimated and measured values are shown in Table 21. Table 21 shows that the relative error between the measured values and the values of the high temperature parameters estimated from the micro-macro estimation formulas for the high modulus asphalt is less than 7%, and for most of the indexes it is less than 3% (a worked numeric illustration of this step is given below). It can be seen that the high temperature parameters obtained by molecular dynamics simulation can predict the high temperature performance of high modulus modified asphalt to a certain extent.

The molecular dynamics simulation results for the moduli differ from the data measured in the laboratory, but the simulation results for the shear modulus, bulk modulus, and Young's modulus of high modulus asphalt are consistent with the trend of the test results. This shows that, for LLDPE/SBS composite modified asphalt, there is a certain correlation between the simulation results for the physical moduli and the laboratory-measured data. The estimation formulas between the macro and micro indexes of the high modulus modified asphalt were established according to the correlation analysis between the rheological performance test results and the physical modulus molecular dynamics simulation results. Among the three moduli, only the correlation coefficient of the shear modulus is above 0.9, which shows that the shear modulus is the high temperature index with a good correlation between molecular dynamics simulation results and rheological test results.
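As the worked illustration referenced above, the sketch below applies Formulas (5)–(8) to an assumed simulated shear modulus and computes the percentage relative error against an assumed measured value; all numeric inputs besides the formula coefficients are hypothetical.

```python
# Sketch of the verification step: apply the estimation formulas (5)-(8) to
# a simulated shear modulus and compute the relative error against a
# measured value. Inputs are illustrative; only the coefficients come from
# the formulas above (x_G in MPa, outputs in Pa per the stated units).
def estimate_rheology(x_G):
    return {
        "G*_0.1% (60C)":       920.12 * x_G - 619648,
        "G*_12% (60C)":        810.75 * x_G - 541895,
        "G*/sin(delta) (76C)":  97.77 * x_G - 56563,
        "Jnr_3.2 (76C)":       -33.36 * x_G + 25560,
    }

def relative_error(estimated, measured):
    return abs(estimated - measured) / abs(measured) * 100.0  # percent

est = estimate_rheology(700.0)                               # assumed simulated x_G
print(est)
print(relative_error(est["G*/sin(delta) (76C)"], 12000.0))   # assumed measured value
```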
For the high modulus asphalt, the relative error between the measured values and the values of the high temperature parameters estimated from the micro-macro estimation formulas is less than 7%, and for most of the indexes it is less than 3%, which shows that the high temperature parameters obtained by molecular dynamics simulation can predict the high temperature performance of high modulus modified asphalt under limited experimental conditions. Molecular dynamics is an important simulation and characterization method and is very helpful for simulating and predicting various properties of asphalt. The research results of this paper can effectively predict the high-temperature performance of high modulus asphalt, which has important reference value for improving the design of high modulus modified asphalt and its mixtures, and provides important theoretical guidance for the promotion and application of polymer modified asphalt. However, there are still some differences between the model established in this paper and real high modulus asphalt. In the future, the model will be further optimized to study the modification mechanism and mechanical properties of high modulus asphalt.
The WHO declared the end of the COVID-19 pandemic as early as May 2023. However, in December 2023, an increase in COVID-19 incidence was observed globally. Vaccination for immunocompromised patients has been advocated, especially for probable endemic cases, and calls for expanding nonclinical health protection have been made despite the substantially lower numbers of documented COVID-19 cases and deaths than during the pandemic era. Seroprevalence can indicate the presence of an immune response against a given pathogen and may inform public health policy. In general, the seroprevalence of SARS-CoV-2 in the general population was underreported compared with that among individuals who presented with COVID-19 symptoms. Prior to the introduction of the vaccine, a number of serological surveys carried out early in the pandemic indicated that characteristics such as age, sex, immunological state, and the test used may have been associated with the severity of the disease [3–6]. Consistent with these data, two cross-sectional studies in Indonesia conducted before the introduction of vaccination showed that seroprevalence was greater in dense populations than in less dense populations, in slums than in non-slums, and among those of reproductive age (30–50 years old). Interestingly, a smaller study (n = 425) in the Bantul District, Yogyakarta, Indonesia, showed that seroprevalence did not differ significantly between individuals in terms of their preventive measures and mobility. A prospective study in Manaus, Brazil, involving 1,638 seronegative participants with no COVID-19 diagnosis at recruitment showed seroconversion after a median of 57 days (interquartile range (IQR): 54–61 days), with 48.1% asymptomatic. The risk factors for seroconversion were the incidence of COVID-19 in the family, not using a mask when in contact with someone with COVID-19, easing of physical distancing, and flu-like or COVID-19-like symptoms. There is a consensus that primary SARS-CoV-2 infection provides some form of protective immunity; several studies of individuals before vaccination reported that the protective effect of natural infection can last for five to eight months. A meta-analysis reported that subjects with severe COVID-19 had greater protection against reinfection after 12 months than those with mild symptoms (74.6% vs. 24.7%). Following vaccination, antibody levels dropped after several months, resulting in decreased protection from infection [14–17], while protection against COVID-19 hospitalization was retained. The protection conferred by the combination of natural infection-induced immunity and vaccine-induced immunity is called hybrid immunity. The effectiveness of hybrid immunity against hospital admission or severe disease was 97.4% (95% CI: 91.4–99.2) at 12 months after primary series vaccination and 95.3% (81.9–98.9) at 6 months after the first booster vaccination, timed from the most recent infection or vaccination. The effectiveness of hybrid immunity against reinfection decreased to 41.8% (95% CI: 31.5–52.8) at 12 months after primary series vaccination and to 46.5% (36.0–57.3%) at 6 months after the first booster vaccination. This study aims to describe the seroconversion and serodynamics of IgG antibodies against the receptor-binding domain (RBD) of SARS-CoV-2 in the general population of Sleman, Yogyakarta Special Province, over a period of two months.
This study is part of the SYNTHESIS (Surveillance System to Observe Seroconversion to SARS-CoV-2 in Human) study, described elsewhere. It was conducted during the lifting of lockdown restrictions and the initiation of vaccination in 2021, a time when many factors could have influenced the seroconversion and serodynamics of IgG RBD-SARS-CoV-2 antibody levels. Within this study, we observed indications of hybrid immunity despite the short observation period. This was a prospective cohort study on seroconversion and serodynamics among healthy individuals in the general population. It was nested within the Sleman Health and Demographic Surveillance System (HDSS), which covers residents of Sleman District within the Yogyakarta Special Province. A convenience sampling method was necessitated by the lockdown policy enforced during the study period. In total, 11 of 17 subdistricts were selected. Sleman HDSS respondents residing in these subdistricts who met the eligibility criteria (adults aged ≥18 years) were invited to participate in the baseline data collection. Data and sample collection were carried out at community centers within each subdistrict, strictly adhering to all COVID-19 restrictions. All eligible individuals who accepted the invitation were screened against the inclusion and exclusion criteria of this study. The inclusion criteria were healthy adults (≥18 years old) who agreed to participate and who tested negative for SARS-CoV-2 infection prior to their interview using the GeNose C19® (Swayasa Prakarsa, Indonesia), a noninvasive device that analyzes an individual's breath to detect volatile organic compounds indicative of SARS-CoV-2 infection. The GeNose C19® is a breathalyzer with high potential for rapid COVID-19 screening, with a sensitivity of 86–94% and a specificity of 88–95%, and it requires no invasive procedure. The Indonesian Ministry of Health formally acknowledged and distributed it for COVID-19 screening at the end of December 2020. In accordance with World Health Organization (WHO) guidance, any positive GeNose C19® result must be verified by a diagnostic test. GeNose C19® results were provided to participants within 20 minutes. A positive result led to referral to a public health center for further evaluation, while a negative result was followed by an explanation of the research, consent collection, blood sample collection, measurement of weight and height, and a short interview (10–15 minutes). The exclusion criteria were the presence of flu-like symptoms (such as fever, malaise, exhaustion, nausea, cough, headache, muscle aches, convulsion, skin rash, vomiting, chills, diarrhea, anosmia, and ageusia) at the time of screening, as well as immunosuppressive conditions, immune deficiency diseases, or receipt of any immunosuppressive therapy. All study participants were subsequently invited to monitoring session 1 (4 weeks after baseline) and monitoring session 2 (8 to 9 weeks after baseline). Both monitoring sessions followed the same procedure as the baseline data collection. Participants were categorized as drop-out cases if they did not attend, or were deemed ineligible at, both monitoring sessions.
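The screening and retention rules above amount to a small decision procedure. The Python sketch below illustrates that logic under stated assumptions; the field names and simplified status labels are ours, not the study protocol's.

```python
# Illustrative eligibility / retention logic for the cohort described above.
# Field names and status labels are hypothetical simplifications.

def screen_participant(age: int, genose_positive: bool,
                       flu_like_symptoms: bool, immunocompromised: bool) -> str:
    """Classify a volunteer at baseline screening."""
    if age < 18:
        return "ineligible: under 18"
    if genose_positive:
        return "referred to public health center"  # then diagnostic verification
    if flu_like_symptoms or immunocompromised:
        return "excluded"
    return "enrolled"  # consent, blood draw, anthropometry, short interview

def retention_status(attended_mon1: bool, attended_mon2: bool) -> str:
    """Drop-out only if both monitoring sessions were missed or ineligible."""
    return "drop-out" if not (attended_mon1 or attended_mon2) else "retained"

print(screen_participant(34, False, False, False))                  # enrolled
print(retention_status(attended_mon1=False, attended_mon2=True))    # retained
```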
We began recruiting participants on April 29, 2021, finished recruitment on December 16, 2021, and completed monitoring session 2 on February 21, 2022. The survey, physical measurements (height and weight), and blood sample collection were carried out at three time points: baseline, monitoring session 1, and monitoring session 2. The interviews collected data on essential demographic information (e.g., age, sex, education, employment status, marital status), smoking habits, COVID-19 vaccination status, comorbidities (diabetes, cancer, stroke, hypertension, obesity, heart disease, chronic obstructive pulmonary disease, chronic liver disease, chronic kidney disease, hemorrhagic fever, tuberculosis, human immunodeficiency virus, and autoimmune disease), history of COVID-19 symptoms, contact with COVID-19 patients, mobility (travel history), and preventive measures (compliance with health protocols). Weight and height were measured for body mass index (BMI) calculation. A 3-mL venous blood sample was collected from each participant in an EDTA-containing vacutainer. Blood samples were transported to the Biobank Unit at the Faculty of Medicine, Public Health, and Nursing, UGM, in Yogyakarta, Indonesia. Plasma was obtained by refrigerated centrifugation at 350 × g for 10 minutes. All specimens were processed and stored on the day of receipt, and the isolated plasma was stored at -80°C until testing. The SARS-CoV-2 IgG II Quant assay (Abbott, Ireland) was used for the quantitative determination of IgG antibodies against the receptor-binding domain (RBD) of SARS-CoV-2. The laboratory analysis was performed according to the manufacturer's instructions. The resulting chemiluminescent reaction was measured in relative light units (RLU). IgG seropositivity to the RBD of SARS-CoV-2 was defined as ≥ 50 AU/mL, per the product manual. This study was reviewed and approved by the Medical and Health Research Ethics Committee (MHREC) of the Faculty of Medicine, Public Health, and Nursing, UGM, with reference no. KE/FK/0882/EC/2020 on August 6, 2020; the ethical approval was extended with reference nos. KE/FK/1063/EC/2021 and KE/FK/1443/EC/2022 on September 24, 2021 and November 16, 2022, respectively. Written informed consent was obtained from all respondents prior to enrollment in this study. The interviews were conducted by a research nurse using a digital questionnaire app installed on the study tablet PC. The e-Synthesis app (version 1.9.8, a survey tool; Sleman, Yogyakarta, Indonesia) was developed by the e-HDSS team exclusively for this study. Responses were entered directly into the e-Synthesis application and uploaded to the Sleman HDSS server, from which the data management team downloaded them for data cleaning and analysis. All research data were securely stored on both the server and a computer with limited access. Prior to the analysis, all variables were dichotomized into "yes/no" by collapsing the lowest category (e.g., "no/never/hardly" = "no") versus all other categories ("yes") (see S1 Table for the list of variables and S2 Table for descriptive statistics of all study variables). BMI was categorized by obesity status (< 27 kg/m² = "no"; ≥ 27 kg/m² = "yes"). Age, a continuous variable, was summarized by its mean and standard deviation (SD). A two-tailed P-value ≤ 0.05 was considered statistically significant.
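The preprocessing described here (dichotomization, BMI-based obesity status, and the 50 AU/mL seropositivity cutoff) can be expressed compactly. A minimal pandas sketch follows; the column names and toy records are our own illustrative choices rather than the study's actual variable names.

```python
import pandas as pd

# Illustrative records; column names and values are hypothetical.
df = pd.DataFrame({
    "smoking": ["never", "daily", "hardly"],
    "weight_kg": [58.0, 81.0, 70.0],
    "height_m": [1.60, 1.65, 1.72],
    "igg_au_ml": [3.2, 512.0, 49.9],
})

# Dichotomize: lowest category ("no/never/hardly") vs. everything else.
df["smoker"] = ~df["smoking"].isin(["no", "never", "hardly"])

# Obesity status from BMI with the 27 kg/m^2 cutoff used in the study.
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2
df["obese"] = df["bmi"] >= 27

# Seropositivity: anti-RBD IgG >= 50 AU/mL.
df["seropositive"] = df["igg_au_ml"] >= 50

print(df[["smoker", "bmi", "obese", "seropositive"]].round(1))
```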
Descriptive statistics were used to characterize the study population by anti-RBD-SARS-CoV-2 IgG status at baseline across demographics, smoking status, obesity status, COVID-19 vaccination status, comorbidities, history of symptoms, contact with COVID-19 patients, mobility, and preventive measures. Logistic regression analyses were then performed to examine associations of demographics, clinical status, social behaviors, and vaccination status with anti-RBD-SARS-CoV-2 IgG seropositivity, adjusting for confounding variables (variables with p-values ≤ 0.05 in the descriptive analysis) using a stepwise method. Further descriptive analyses compared subject characteristics across four groups defined by baseline vaccination status and IgG anti-RBD-SARS-CoV-2 seropositivity: Group 1 comprised IgG seronegative, unvaccinated subjects; Group 2, seronegative, vaccinated subjects; Group 3, IgG seropositive, unvaccinated subjects; and Group 4, IgG seropositive, vaccinated subjects. Anti-RBD IgG antibody levels against SARS-CoV-2 at baseline, monitoring 1 (Mon-1; 4–5 weeks post-baseline), and monitoring 2 (Mon-2; 8–9 weeks post-baseline) were compared using the Wilcoxon signed-rank test, which was also used to compare groups with and without post-inclusion (PI) vaccination during the monitoring period. These comparisons were conducted separately for the four groups defined by baseline vaccination status and anti-RBD-SARS-CoV-2 IgG seropositivity. Statistical analyses were performed using Stata V.17.0 (Stata Corp LLC, College Station, TX, USA), with GraphPad Prism 9.0.0.121 for data visualization. This study was conducted from April 2021 to March 2022. It is important to note that at the time of study initiation, the local government was distributing the first doses of an inactivated whole-virus vaccine (Sinovac) to the general population, and a national policy for delivering the second dose to the expanded population was implemented as of August 2021. The first-dose vaccination rate in Sleman District in December 2021 was 91.5%. From July to October 2021, most COVID-19 cases in Sleman District and throughout Indonesia were caused by the Delta strain [22–24]; the Omicron strain became dominant beginning in late November/early December 2021. All vaccinated subjects in this study received their vaccinations independently and could therefore have been vaccinated at any time before or during study participation. Additionally, the study took place while the Indonesian Government implemented Community Activities Restrictions Enforcement (CARE) to limit the transmission of SARS-CoV-2 and reduce COVID-19 morbidity. The restrictions encompassed school closures, restricted commercial service hours, work-from-home policies, and tightened personal prevention measures, and were adjusted over time in light of SARS-CoV-2 transmission and COVID-19 morbidity/mortality. Of the 441 healthy pre-selected participants who were available at the time of the survey and came to the agreed-upon public health center, 32 (7.26%) had a positive GeNose C19® test and were thus excluded. Twenty-four (24) subjects were lost to follow-up; hence, 385 subjects were ultimately analyzed. Further analyses are described below. The characteristics of the subjects (n = 385) by baseline anti-RBD-SARS-CoV-2 IgG level and seropositivity are shown in Table 1.
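The study's two core analyses, a logistic regression of seropositivity on covariates and Wilcoxon signed-rank comparisons of paired titers, were run in Stata; a rough Python equivalent is sketched below. The variable names and simulated data are hypothetical, and the stepwise confounder-selection step is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 200

# Hypothetical analysis dataset: binary covariates and seropositivity.
df = pd.DataFrame({
    "vaccinated": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
})
logit_p = -1.0 + 3.0 * df["vaccinated"] + 0.2 * df["female"]
df["seropositive"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression of IgG seropositivity on covariates
# (the paper additionally applied stepwise confounder selection).
model = smf.logit("seropositive ~ vaccinated + female + smoker", data=df).fit()
print(np.exp(model.params))  # odds ratios

# Paired Wilcoxon signed-rank test: baseline vs. monitoring-2 IgG titers.
baseline_titer = rng.lognormal(mean=5, sigma=1, size=50)
mon2_titer = baseline_titer * rng.lognormal(mean=-0.2, sigma=0.3, size=50)
stat, p = wilcoxon(baseline_titer, mon2_titer)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```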
At baseline, 307 (79.7%) individuals tested positive for IgG anti-RBD SARS-CoV-2 antibodies. Sex, marital status, smoking habits, obesity, vaccination status, and preventive measures differed significantly between IgG anti-RBD SARS-CoV-2 seropositive and seronegative individuals (p ≤ 0.05). Further analysis of the correlations between these variables and IgG seropositivity is shown in Table 2. Overall, the best model for the correlation between variables and anti-RBD-SARS-CoV-2 IgG positivity at baseline included sex, marital status, smoking status, and preventive behaviours. In this model, vaccination was the only variable that correlated with anti-RBD-SARS-CoV-2 IgG seropositivity [OR = 20.58; 95% CI 10.82, 39.15]. Given the strong correlation between vaccination and baseline anti-RBD-SARS-CoV-2 IgG seropositivity, we further compared all variables by seropositivity within the unvaccinated and vaccinated groups at baseline (Group 1 vs. Group 3; and Group 2 vs. Group 4). Table 3 presents this descriptive analysis, showing the differences in IgG seropositivity among those who were unvaccinated (Group 1 vs. Group 3) and vaccinated (Group 2 vs. Group 4) at inclusion. In the unvaccinated group, the proportions of individuals who were obese (P = 0.04) and who had a history of contact with people with COVID-19 (P = 0.034) were greater among those who were IgG seropositive. Among the vaccinated, the proportions of individuals who were younger, female, unemployed, married, and never smokers were greater among those with IgG seropositivity than among those with IgG seronegativity (P < 0.05). Overall, the seroprevalence in our cohort was 307/385 (79.74%), 268/325 (82.46%), and 303/353 (85.83%) at inclusion, monitoring 1, and monitoring 2, respectively. Furthermore, the distributions of anti-RBD-SARS-CoV-2 IgG titers in the four groups at baseline and at the subsequent monitoring sessions were analyzed. Comparing baseline with monitoring 2, seroconversion from seronegativity to seropositivity was observed in Group 1 (23/51; 45.09%) and Group 2 (10/25; 40%), while seroconversion from seropositivity to seronegativity was observed in Group 4 (9/257; 3.50%). All subjects in Group 3 remained seropositive throughout the study. Group 1 exhibited a significant increase in the median IgG titer at monitoring 1 and 2 (P ≤ 0.05). Group 3 showed a significant increase in IgG levels at monitoring session 1, in contrast to Group 4, in which the median IgG level decreased at monitoring 1 and 2 compared with baseline. Regarding vaccination status, 1 or 2 vaccine doses were received in each group during the study. By monitoring 2, among those unvaccinated at inclusion, 16 subjects in Group 1 had received one dose and 11 had received two doses; in Group 3, 3 subjects had received one dose and 7 had received two doses. Among those vaccinated at inclusion, most had already received 2 doses (21/27 subjects; 77.8% in Group 2 and 251/282 subjects; 89% in Group 4). By the end of monitoring, 5 and 27 subjects had received the second dose in Groups 2 and 4, respectively, leaving 1 and 4 subjects in Groups 2 and 4, respectively, with no additional vaccine (see S4 Table: numbers of vaccinations at inclusion, monitoring 1, and monitoring 2 among the four groups).
Furthermore, an analysis was performed to distinguish the IgG SARS-CoV-2 profiles of those who did and did not receive post-inclusion (PI) vaccination. PI vaccination differed between those unvaccinated (Groups 1 and 3) and vaccinated (Groups 2 and 4) at inclusion: in Groups 1 and 3, 1–2 doses of the vaccine were administered during the monitoring period, while in Groups 2 and 4, some subjects received their second dose during the monitoring period. Hence, by the end of this study, 10.65% (41/385), 4.95% (19/385), and 85.68% (324/385) of the participants had received 0, 1, and 2 doses of vaccine, respectively. Fig 4 shows that, in general, those who were seronegative (Groups 1 and 2) and did not receive PI vaccination mostly remained seronegative (median values of 1.5 to 0.95 and 25.7 to 29.55 in Groups 1 and 2, respectively), while among those who were seropositive (Groups 3 and 4), a decrease in the median IgG titer was observed (median values of 1928 to 1284 and 1003 to 523.7 in Groups 3 and 4, respectively). By contrast, subjects who received PI vaccination showed a trend toward an increasing median IgG titer (median values of 0.3 to 579, 0.00 to 401.4, and 971.3 to 2163 in Groups 1, 2, and 3, respectively), with the exception of a slight decrease in the IgG value for Group 4. Serosurveillance may be useful for determining the burden of COVID-19 in the general population. It has been reported that within two weeks after infection, IgM, IgG, and IgA antibodies against SARS-CoV-2 are detectable in the bloodstream, followed by the decay of IgM and IgA, while IgG remains detectable for weeks to months. Vaccination against SARS-CoV-2 has been accepted as a key factor in ending the COVID-19 pandemic. In Indonesia, a national vaccination program against SARS-CoV-2 was launched on January 13, 2021. By December 2021, 91.5% of the population in Sleman District had received at least the first dose of the vaccine, as indicated in Fig 1. Notably, the inactivated COVID-19 vaccine (Sinovac) was the only vaccine provided to the mass population in Indonesia at the time of this study, and it induces the production of anti-RBD antibodies [28–30]; anti-RBD IgG could therefore be utilized to detect antibodies generated by natural infection or by vaccination within the scope of our study. We also note that from July to October 2021, most COVID-19 cases in Sleman District, as well as throughout Indonesia, were caused by the Delta strain [23–25], which is associated with a higher transmission rate than the original strain and high clinical mortality. The high transmission and mortality of the Delta strain were the main reasons for the long subject-inclusion period, during which the Omicron peak occurred. Compared with the Delta strain, the Omicron strain is characterized by a greater transmission rate but much milder clinical symptoms and lower mortality. This may have led to the relaxation of health-related prevention measures at the community level, and thus to potential changes in mobility or personal protection behavior. Our study showed that by the end of February/March 2022, the seroprevalence of SARS-CoV-2 antibodies in the Sleman population was 79.7%, much greater than reported in previous studies in Indonesia.
Studies performed in Jakarta; Denpasar in Bali; Surabaya and Jombang in East Java; and Bantul District in Yogyakarta showed IgG prevalence rates of 28.52%, 44.5%, 11.4%, and 31.1%, respectively [7–9, 31], all much lower than the 79.7% in our study. All of these studies except that by Ahmad et al. were conducted prior to the introduction of the vaccine, and only 3.05% of the subjects recruited by Ahmad et al. had received their first dose of vaccine, whereas 86.5% of our participants had received at least the first dose. This supports our finding that vaccination was the strongest driver of seroprevalence [OR = 20.58; 95% CI 10.82, 39.10]. Our study and other Indonesian studies agree that seroprevalence was greater among adults, as they have a higher potential of exposure to SARS-CoV-2, whether naturally or through vaccination. Factors that may contribute to the differences in seroprevalence and titers include differences in the methods and biomarkers used to determine seroprevalence status, as well as differences in the dominant SARS-CoV-2 strain during each study. Zaballa et al. reported a serosurvey during the Omicron peak in Geneva, Switzerland, which showed a lower neutralizing capacity against the Omicron strain than against the Alpha strain (46.7% vs. 79.5%). Severe COVID-19 is thought to trigger an earlier and more intense immune response in hospitalized patients. The IgG titer is believed to decrease substantially more than 7 months after hospital discharge as natural infection-induced seroconversion wanes. On the other hand, individuals with subclinical or asymptomatic infection may exhibit a low response or no seroconversion, as demonstrated in Group 1. This group may also include subjects who were unexposed and thus did not seroconvert. The main source of natural SARS-CoV-2 antigen was exposure to, or contact with, people who had COVID-19, as observed in Group 3 of our study. Individuals who were seropositive due to natural infection had higher antibody titers after vaccination than those who were only vaccinated. We observed a decrease in the IgG titer in those who had been vaccinated twice, and 3.2% of individuals were seronegative at the end of this study. Antibody decay times differ between vaccines, as do population characteristics. In our study, the longest possible interval since the second vaccine dose for participants in Group 4 was estimated to be 9 months. Research in Aceh, Indonesia, showed that the total anti-SARS-CoV-2 RBD antibody titer decreased five months after the second dose of the Sinovac vaccine. Furthermore, Sughayer et al. reported that IgG RBD neutralizing antibody levels declined faster in patients vaccinated with an inactivated vaccine than in those who received an mRNA-based SARS-CoV-2 vaccine. We acknowledge that our monitoring period was shorter than those of the two cited studies, but we agree that anti-SARS-CoV-2 antibody levels tended to decrease even over this short time, which argues for further booster doses. Nonresponders, that is, subjects who did not exhibit seroconversion, had low antibody titers, or experienced fast-decaying antibodies, showed no effective humoral immune response in our study (Group 2). This group was characterized by older age (57 ± 16.9 vs. 50 ± 12.0 years) compared with those with seropositive IgG (Group 4), despite receiving two doses of the vaccine.
Studies have reported that nonresponders generally include infants, young children, elderly people, and immunocompromised patients [32, 39–41]. However, the humoral immune response is not the only immune protection mechanism against viral infection. For nonresponders, different strategies should be developed to prevent morbidity in the event of an endemic outbreak. For immunocompromised individuals, vaccination and measures to prevent viral transmission are essential to maintain antibody protection. This study has several limitations. The number of subjects recruited was small, particularly for the correlation analysis, as reflected in the wide 95% confidence interval for the association between COVID-19 vaccination and IgG seropositivity [OR = 20.58; 95% CI 10.82, 39.15]. We asked about major signs and symptoms rather than COVID-19 history to avoid the negative stigma associated with COVID-19 status. Despite the short duration of monitoring, the study's recruitment window was broad, spanning changes in the dominant SARS-CoV-2 variant, transmission dynamics, local governmental policies, and the target population's behavior, which made the study challenging. To our knowledge, this is the first prospective longitudinal study to describe the role of hybrid immunity in shaping the dynamics of SARS-CoV-2 antibodies in a general population in Indonesia; given the seroprevalence profile and the findings on seroconversion and hybrid immunity, these conclusions are relevant to other settings. Nonresponders identified after COVID-19 vaccination should receive health recommendations, especially during endemic outbreaks. In this study, we found that COVID-19 vaccination was the main driver of seroconversion in Sleman. We found a small proportion of individuals who did not seroconvert following two doses of COVID-19 vaccination, particularly older adults, and also a small proportion whose antibodies had waned to undetectable levels after eight months. These findings suggest the need for a booster COVID-19 vaccination. Our data have implications for the Indonesian government regarding COVID-19 vaccination and highlight the need for continued surveillance of COVID-19 population immunity.
Improving product lifetime is a meaningful approach to reducing environmental problems. However, some products are cast aside because they no longer meet customers' needs, which results in consuming more natural resources and generating more waste. Upgrading these unsatisfactory products is usually a simple, efficient, and economical way to meet customer needs again. Innovation ideas and thoughts are diverse, and generating them is typically a brainstorming process, which is simple but often inefficient and resource-consuming owing to the uncertainty in the quantity and quality of ideas. It is therefore of great significance to formulate an effective and convenient innovation design process for product innovation and upgrades, and this is also one of the issues in accelerating the upgrading of the manufacturing industry. Meanwhile, modern knowledge-driven and data-driven techniques are transforming many disciplines through knowledge acquisition, uncovering patterns in data, and supporting human decision-making. To make design intelligent, design researchers and experts have established knowledge representation schemas to describe product design development. Suh proposed a conceptual model for the four design steps of a "thinking design machine" using the Axiomatic Design (AD) approach. The four steps correspond to Customer Attributes (CAs), Functional Requirements (FRs), Design Parameters (DPs), and Process Variables (PVs), which are defined as the Customer Domain, Function Domain, Physical Domain, and Process Domain, so that the design roadmap is the mutual mapping process between two adjacent design domains. Quality Function Deployment (QFD) is a customer-driven product development roadmap. QFD is a helpful tool for translating the voice of customers (VoC) into technical language, which gradually maps to engineering characteristics (EC), parts characteristics (PC), process operations, and production requirements with the House of Quality (HoQ), forming a complete product development roadmap. AD and QFD proposed the concept of step-by-step implementation and the process object sets, but did not form a knowledge representation scheme. Therefore, many other scholars have carried out further design analysis on the basis of knowledge representation. Gero proposed a knowledge-oriented Function-Behavior-Structure (FBS) representation model. Behavior (B) is a bridge between function (F) and structure (S): it is the principle by which a function is realized, and it is also reflected by the structure of the product. Behaviors are therefore divided into expected behavior (Be) and behavior derived from structures (Bs). Based on FBS, some scholars have conducted further research. Helfman provided a structure-function pattern for biomimetic applications, and the TRIZ method was used for modeling biological systems in the biomimetic design process. Umeda proposed a function-behavior-state model, in which the "state" includes structures and other physical knowledge about structures. Other representation models, such as FPS, SBF, and RFBSE, have been put forward with similar intent. In recent years, requirements from users, designers, or policy have been given greater emphasis in product design, and some research has incorporated these requirements into FBS.
For example, Christophe proposed an extended RFBS, where the letter R refers to requirements, meaning that customer needs or requirements are taken into account. Fu et al. proposed a constraint-driven function-behavior-structure design process, which converts customer requirements into design constraints. Luo et al. proposed an interval-valued Pythagorean fuzzy set-based FBS model integrating AHP and HOQ methods to reduce the influence of user requirement ambiguity. Li, Lou, et al. proposed a cerebellar operant conditioning-inspired constraint satisfaction approach to solve the mapping from behaviors to structures and facilitate the cognitive activities of designers. Li, Tang, et al. presented a framework of the dynamic function-behaviour-structure cell model, which generates the optimal scheme for open design based on co-designer involvement. Zheng et al. presented a function-structure synthesis approach based on case-based reasoning to meet the requirements of low carbon policy. Although Zhu et al. proposed the RFPS model, it did not perform innovative design based on classification according to requirements. These studies consider requirements from various parties, but they do not classify requirements according to innovation directions to improve efficiency. At the same time, the above points show that: (1) requirements, functions, principles, behaviors, and structures are considered key elements of design, and the CAs, FRs, and DPs in AD and the VoC and EC in HoQ can also be transformed into one of them; and (2) there is a certain mapping relationship between these key elements. This paper aims to propose a design methodology for product upgrade and innovation. In QFD, requirements are mapped into VoC, EC, PC, etc., which represent the key objects of product design at different stages. This reminds us that the object of innovation or upgrade is an important consideration for product upgrade or innovation. It can also be seen that functions and structures are key elements in product design, as many researchers believe, and the mapping from functions to structures can be direct or indirect. Motivated by these facts, product upgrade and innovation processes are proposed with an RFPS design knowledge representation scheme, which divides the innovative design requirements R into problems at different levels according to the upgrading object, referring to function F, principle P, or structure S. For these problems, TRIZ is an effective theory of inventive problem solving with many toolkits that summarize the successful experience of past solutions, such as the inventive principles, the Su-Field model, and Effects. In general, TRIZ can independently solve general conflicting issues or be combined with other methods to improve the effectiveness of solutions [20–22]. In the remainder of this paper, Section 2 classifies the requirements into requirements for function, principle, and structure innovation, and uses Extension analysis methods to map top-level requirements to function, principle, and structure requirements. Section 3 sets forth the RFPS model and shows the different design procedures for the different design objects with TRIZ; Section 4 presents a case study of a cutting table to illustrate the methodology. In the final Sections 5 and 6, conclusions about this methodology are drawn, along with some discussion of future work.
Requirements, such as low cost or low carbon emissions, are the motive force behind product innovation. There are currently two main requirements analysis methods for product design: the QFD-based method and the Kano-based method. The Kano method focuses on the user's perception, while the QFD method pays close attention to product performance; both aim to increase user satisfaction, so they address a kind of "adaptive requirement". On the other hand, from the perspective of designers, a product should also be actively improved with new technology; that is, the product needs to be updated by "creating requirements". For "adaptive requirements", there are two innovative paths: one is to continuously optimize the structures under certain constraints, such as further improving surface accuracy; the other is to change the principle while keeping the same function, such as moving screen displays from CRT to LCD technology. "Creating requirements" are mainly addressed by changing old functions or adding new functions to the product, chiefly through the transfer of technology between fields, such as applying military GPS technology to civil navigation. The function of a product is an abstract delineation of its operational inputs and outputs, encompassing both the overarching function and the essential sub-functions that support the top function. The structure, conversely, is the tangible manifestation of the design's foundational object: a structure may be a part, a component, or a module, serving as the carrier of function and the embodiment of the underlying principles. The principle thus serves as a pivotal bridge linking the domain of function to that of structure, spanning physical phenomena and scientific methodologies. In functional design, a single function may be associated with one or multiple principles. For instance, the function "reduce noise" can be addressed through noise reduction at the transmission stage, at the source, or at the receiving end. Conversely, a single principle may correspond to one or several structures. This mapping relationship is depicted in Fig 1. Given this interplay, the function, principle, and structure are identified as the three fundamental entities for product enhancement and innovation, tailored to the specific design requirements. This tripartite approach ensures a comprehensive understanding of the product and provides a structured pathway for innovation and improvement. Design tasks are divided into different types according to the degree of innovation, and the majority of tasks are adaptations of and variations on existing designs. On this basis, combined with the hierarchical FPS mapping relation above, all design requirement goals are first classified into three categories to foster and guide the abilities of designers: (1) Requirements for Function Innovation (FR), (2) Requirements for Principle Innovation (PR), and (3) Requirements for Structure Innovation (SR), as shown in Fig 2. Before extension analysis, the basic element should be introduced. The basic element is used for the formalized description of matters, affairs, and relations, and is expressed as

$$B = (O, c, v) = \begin{bmatrix} O & c_1 & v_1 \\ & \vdots & \vdots \\ & c_m & v_m \end{bmatrix} \tag{1}$$

where O is the object, c is a characteristic of O, and v is the measure of O with respect to c. The simplified basic element B = (O, ⋯) can be used to represent unknown or indescribable characteristics.
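To make the basic-element notation concrete, the following Python sketch represents B = (O, c, v) as a small data structure and performs one divergence step of the kind formalized in the next subsection; the class design and the sample values are our own illustrative choices, not part of the Extenics standard.

```python
from dataclasses import dataclass, field

@dataclass
class BasicElement:
    """Extenics basic element B = (O, c, v): an object with
    characteristic/value pairs; an empty dict models B = (O, ...)."""
    obj: str
    cv: dict = field(default_factory=dict)

    def diverge_value(self, c: str, new_v) -> "BasicElement":
        """Divergent analysis on a value: (O, c, v) -> (O, c, v_i)."""
        return BasicElement(self.obj, {**self.cv, c: new_v})

# A multi-characteristic basic element, in the spirit of Eq. (1).
rail = BasicElement("rail", {"material": "rigid resin", "length_m": 2.0})

# One divergence step: same object and characteristic, different value.
flexible_rail = rail.diverge_value("material", "flexible polymer")
print(rail)
print(flexible_rail)
```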
Data about requirements, functions, principles, and structures are expressed as basic elements and stored in a database. For a complex design requirement, analysis methods are needed to obtain simple design objects. Extension analysis methods, including divergent analysis, correlative analysis, implicative analysis, and opening-up analysis, are the specific tools used to analyze these objects. Divergent analysis is the generalization and expansion of objects. It aims to obtain other similar objects by diverging on "one object with multiple characteristics" and "one characteristic shared by multiple objects". Divergent analysis can be based on objects, characteristics, or values, and these divergences can be formulated as

$$B = (O, c, v) \dashv \begin{cases} \{(O_i, c, v_i),\ i = 1, 2, \cdots, n\} = \{(O_1, c, v_1), (O_2, c, v_2), \cdots, (O_n, c, v_n)\} \\ \{(O, c_i, v_i),\ i = 1, 2, \cdots, n\} = \{(O, c_1, v_1), (O, c_2, v_2), \cdots, (O, c_n, v_n)\} \\ \{(O, c, v_i),\ i = 1, 2, \cdots, n\} = \{(O, c, v_1), (O, c, v_2), \cdots, (O, c, v_n)\} \end{cases} \tag{2}$$

where the symbol "⊣" represents divergence. Correlative analysis is a method for analyzing the relations among basic elements for a better and clearer comprehension, reflecting the mechanisms of correlation and interaction. If there is a certain relation between two or more basic elements, they are said to be correlated, formulated in general as

$$B_1 \sim B_2 \tag{3}$$

where the symbol "~" represents correlation. The correlations among basic elements can be represented as a network, as in Fig 3. Implication analysis is based on the connection of basic elements and refers to a hierarchical relation, so there is a superior-inferior pair. If the inferior is the premise of the superior, that is, the implementation of the superior basic element requires the implementation of the inferior basic element, then the inferior implies the superior, expressed as

$$B_1 \Rightarrow B_2\ (\text{or } B_1@ \Rightarrow B_2) \quad \text{or} \quad B_2 \Leftarrow B_1\ (\text{or } B_2@ \Leftarrow B_1) \tag{4}$$

where B₁ refers to the inferior and B₂ to the superior; the symbols "⇒" and "⇐" represent implication, and the symbol "@" represents the implementation of a basic element. Specifically, there are two types of implication: AND implication and OR implication. Opening-up analysis is a method that considers composability, decomposability, and scalability, giving three corresponding types of analysis. a) Composable analysis. For any basic element B_i, there is at least one other basic element B_j that can be composed with B_i into B according to their objects or characteristics, that is

$$B = B_i \oplus B_j = \begin{cases} (O_i,\ c_i \oplus c_j,\ v_i \oplus v_j), & \text{if } O_i = O_j,\ c_i \neq c_j \\ (O_i \oplus O_j,\ c_i,\ v_i \oplus v_j), & \text{if } O_i \neq O_j,\ c_i = c_j \\ \begin{bmatrix} O_i \oplus O_j, & c_i, & v_i \oplus v_j \\ & c_j, & v_i \oplus v_j \end{bmatrix}, & \text{if } O_i \neq O_j,\ c_i \neq c_j \end{cases} \tag{5}$$

where the symbol "⊕" represents composition. b) Decomposable analysis. Any basic element B can, under certain conditions, be decomposed into a set of basic elements B = {B₁, B₂, …, Bₙ} (n ≥ 2), that is

$$B \mathbin{//} \{B_1, B_2, \ldots, B_n\} \tag{6}$$

where the symbol "//" represents decomposition. c) Scalable analysis. Any basic element B = (O, c, v) can be scaled into another basic element αB by a multiple α (α > 0) that changes the value v.
That is,

$$\alpha B = (\alpha O, c, \alpha v) \tag{7}$$

where αB and αO are the basic element and object after scaling, respectively. Following the knowledge representation of requirements, functions, principles, and structures described above, and using the extension analysis methods, a hierarchical analysis can be performed for specific objects. A mapping process consists of two objects and a mapping method; mapping from A to B, for example, can be formulated as A → B. The mapping objects and methods for R, FR, PR, and SR are shown in Table 1, where the first column is the former item A, the first row is the latter item B, and the remaining entries are the available mapping methods. As Table 1 shows, the requirements R can be mapped not only to functions FR directly, but also to principles PR and structures SR with different extension methods. Requirements for function or principle innovation are also ultimately realized by changing and optimizing structures. Based on the content of the section above, the design methodology, referred to as RFPS, is proposed here, with the detailed processes explained in Table 2. It can be seen that: a) the design model starts with the requirement R, which represents the innovative intention; b) process 1 in Fig 5 classifies innovative design intentions into innovative requirements for structures, principles, and functions according to the extension analysis result; c) process 2 in Fig 5 maps the requirements for function innovation (FR) into functions (F). The functional design problem is modeled, an innovative product design scheme is obtained with TRIZ, an innovative principle solution is chosen, and a novel structure solution is designed. This process can be briefly described as "function-principle-structure (FPS)" and is shown in Fig 5 as steps 1→2→5→6→7 in turn; the requirements for function innovation are detailed in a following part. d) Process 3 in Fig 5 maps the requirements for principle innovation (PR) into principles (P). The principle design problem is modeled, an innovative product design scheme is obtained by searching the effects database, and a novel structure solution is then designed. This process can be briefly described as the "principle-structure (PS)" design process and is shown in Fig 5 as steps 1′→3→6→7 in turn; the requirements for principle innovation are detailed in a following part. e) Process 4 in Fig 5 maps the requirements for structure innovation (SR) into structures (S). The structural design problem is modeled, and an innovative product design scheme is obtained from the optimized or varied solution; this is shown in Fig 5 as steps 1″→4→7 in turn, and the requirements for structure innovation are detailed in a following part. A minimal sketch of this routing of requirement categories to design processes is given below.
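The sketch, in Python, reads process 1 as a dispatcher from a classified requirement to its innovation route; the category labels and route descriptions are our own illustrative choices, not artifacts of the RFPS platform.

```python
# Hypothetical dispatcher for RFPS process 1: route a classified
# requirement to its innovation process (labels are illustrative).

ROUTES = {
    "FR": "function innovation: TRIZ contradiction -> principle -> structure",
    "PR": "principle innovation: effects-database search -> structure",
    "SR": "structure innovation: parameter optimization under F, P constraints",
}

def route_requirement(category: str) -> str:
    """Map a requirement category (FR/PR/SR) to its design process."""
    try:
        return ROUTES[category]
    except KeyError:
        raise ValueError(f"unknown requirement category: {category!r}")

for cat in ("FR", "PR", "SR"):
    print(cat, "->", route_requirement(cat))
```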
The design process for function innovation is shown in Fig 6, and it has two aspects: the analysis process based on RFPS design knowledge and the innovation process based on the function innovation method. The RFPS design knowledge of the initial product is generally stored in the existing database and can be reused directly, and the new RFPS design knowledge of the innovative product can be stored in the database after the decision stage of the innovation process. The design process takes the model of the existing product and the innovation requirements R_F as inputs. The existing product model expresses the initial platform of the product. Generally, the RFPS design knowledge of the existing product is reused from databases by referring to the product model and the analysis process of mapping functions, originating from the requirements FR, to principles and then to structures. Function innovation ultimately focuses on the product structure. The specific steps of the innovation process are as follows: Step 1: model the function innovation design as a technical or physical contradiction based on TRIZ; Step 2: obtain inventive principle solution(s) by referring to the contradiction matrix with the inventive principles; and Step 3: determine the innovative structure entity and the optimized parameters. This innovation process can be formalized as

$$FR \to \{F \underset{R}{\leftrightarrow} F'\} \to \{P(F) \underset{F}{\leftrightarrow} P'(F')\} \to \{S(P) \underset{P}{\leftrightarrow} S'(P')\} \to D \tag{8}$$

The principle innovation of a product means that the function goal is not changed; rather, the functional principle is changed. Therefore, it is necessary to determine the unchanged function goal before this kind of innovation process. The design process for PR is shown in Fig 7. For the analysis process of principle innovation, on the one hand, the initial principle solution stems from the requirement for principle innovation; it reflects the implementation of a certain function and is expressed by a specific structure, which means that these mapping relations constitute the RFPS design knowledge. The RFPS design knowledge is stored in the database and can be reused directly. On the other hand, new RFPS design knowledge can be produced by changing the principle solution and varying the structure to achieve the function goal. The specific steps of the innovation process are as follows: Step 1′: establish a set of principles according to the initial design principle solution under the function F; Step 2′: obtain a principle-changed solution according to the principle requirements and the function constraints; and Step 3′: vary and optimize the structure mapped to the principle P based on the principle-changed solution. This innovation process can be formalized as

$$PR \to \{P(F) \underset{R+F}{\leftrightarrow} P'(F')\} \to \{S(P) \underset{P}{\leftrightarrow} S'(P')\} \to D \tag{9}$$

Structure innovation is the easiest way to achieve product innovation and upgrade, and it is also direct and fast. Product structure innovation mainly focuses on optimizing the material, size, shape, attributes, and characteristic parameters of the structure under certain constraints, with the principles and functions already determined. The design process for SR is shown in Fig 8. The optimization of the structure should not change the functions or the principle; instead they serve as constraints, and this constitutes new RFPS design knowledge, which is stored in the database and can be reused directly. The specific steps of the innovation process are as follows: Step 1: identify the design variables X = {x₁, x₂, …, xₙ} of the structure entity; Step 2: determine the objective function f(X) and the constraints g(X) with the function and principle held constant; and Step 3: choose an appropriate optimization method, such as a genetic algorithm, to obtain the proper design parameters (a minimal sketch of this step follows). This innovation process can be formalized as

$$SR \to \{S(F, P) \underset{R+F+P}{\leftrightarrow} S'(F, P)\} \to D \tag{10}$$
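Step 3 of the structure innovation process is a constrained parameter search. The sketch below illustrates it with scipy's SLSQP minimizer on a made-up mass-minimization problem; the design variables, objective, and constraint are hypothetical stand-ins for f(X) and g(X), and a genetic algorithm could be substituted for the solver.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical SR optimization: minimize plate mass f(X) subject to a
# stiffness constraint g(X) >= 0, with function and principle unchanged.

RHO = 7850.0  # steel density, kg/m^3

def mass(x: np.ndarray) -> float:
    """Objective f(X): mass of a plate with width, length, thickness x."""
    w, l, t = x
    return RHO * w * l * t

def stiffness_margin(x: np.ndarray) -> float:
    """Constraint g(X) >= 0: a toy stiffness proxy must exceed a floor."""
    w, l, t = x
    return w * t**3 / l - 2.0e-7  # illustrative requirement

result = minimize(
    mass,
    x0=np.array([0.5, 2.0, 0.01]),          # initial design, metres
    method="SLSQP",
    bounds=[(0.3, 0.8), (1.5, 2.5), (0.002, 0.02)],
    constraints=[{"type": "ineq", "fun": stiffness_margin}],
)
print("optimal design X =", np.round(result.x, 4),
      "| mass =", round(result.fun, 2), "kg")
```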
Take a CNC cutting table as an example to illustrate the innovative design process. The CNC cutting table is a complex product that integrates functions such as cutting, feeding, and pressing, as shown in Fig 9. The main working steps of the equipment are listed in Table 3. Steps 1 through 5 are repeated for the subsequent materials; once all materials have been cut, the system is powered down, concluding the operation. Evidently, the presence of idle periods during the material removal step adversely affects work efficiency. To bolster productivity, a strategic top-level design requirement has been proposed: the simultaneous execution of material feeding through cutting. This approach, called feeding cut, is intended to optimize the transition from material intake to the cutting process. Empirical evidence suggests that implementing this requirement could enhance production efficiency by a notable margin of 5% to 10%. The main structure to innovate for this design requirement is the cutting table, whose structural decomposition is shown in Fig 10. As described above, the requirement for innovation can be represented as R, a top-level requirement, which is analyzed with extension analysis as in Eq (11). In Eq (11), the letters are requirement representations as basic elements, as follows (the characteristics and values of the objects are omitted here): a) the basic elements of function requirements: RF1 = (Transport, …), RF2 = (Pressing, …); b) the basic elements of principle requirements: RP1 = (Drive mode, ⋯), RP2 = (Friction source, …), RP3 = (Support mode, …), RP4 = (Seal mode, …), RP5 = (Pump mode, …), RP11 = (Chain drive mode, …), RP12 = (Belt drive mode, …), RP21 = (Pressure condition, …), RP22 = (Surface condition, …); and c) the basic elements of structure requirements: RS1 = (Chain components, …), RS2 = (Mane holder, …), RS3 = (Mane brick components, …), RS4 = (Chamber, …), RS5 = (Pump components, …), RS11 = (Chain, …), RS12 = (Sprocket, …), RS13 = (Connecting rod, …), RS21 = (Rail, …), RS22 = (Support, …), RS31 = (Mane brick, …), RS32 = (Brick plate, …), RS41 = (Chamber floor, …), RS42 = (Inlet and outlet, …), RS43 = (Chamber frame, …), RS51 = (Pipeline, …), RS52 = (Air pump, …). In the case of non-feeding cut, the adsorption system is turned off. The chain (S1) drives the mane brick components (S3) to run on the mane holder (S2). At this time, the resistance is related only to the gravity of the mane bricks (S31) and the friction coefficient between the rail support (S22) and the guide rail (S21); there is only static friction:

$$\begin{cases} f_1 = \mu (G_F + G_M + G_S) \\ G_F = m_F\, g\, l\, h\, n_F = 0.25 \times 10 \times 2 \times 2 \times 80 = 800\ \mathrm{N} \\ G_M = m_M\, g\, n_r\, n_c = 0.6 \times 10 \times 22 \times 22 = 2900\ \mathrm{N} \\ G_S = m_S\, g\, n_S = 5.2 \times 10 \times 22 = 1144\ \mathrm{N} \end{cases} \tag{12}$$

Here, μ is the friction coefficient between S21 and S22; G_F is the gravity of the material, m_F is the mass of material per unit area (m_F = 250 g/m² = 0.25 kg/m²), g = 10 m/s² is the gravitational acceleration, l and h are the length and height of the material, and n_F is the number of material layers; G_M is the gravity of the mane bricks, m_M is the mass of a mane brick, and n_c and n_r are the numbers of columns and rows of S3; G_S is the gravity of the rail supports, m_S is the mass of a rail support, and n_S is the number of rail supports.
The friction coefficient differs with and without lubrication, giving

$$\begin{cases} f_1 = \mu_1 (G_F + G_M + G_S) = 87.192\ \mathrm{N}, & \mu_1 = 0.018\ \text{(with lubrication)} \\ f_1 = \mu_2 (G_F + G_M + G_S) = 1937.6\ \mathrm{N}, & \mu_2 = 0.400\ \text{(without lubrication)} \end{cases} \tag{13}$$

S1, S2, and S3 are integrated and connected; when transporting materials, they are hindered by the guide rails. Based on the analysis above, the design requirement can be met by improving the function F1. Therefore, this design problem constitutes a requirement for function innovation, RF1 = (Transport, …), and its corresponding function is F = (moving, object, material), which can be solved with TRIZ, as described in the function innovation process. Testing showed that as the driving force was increased for better transportation, the system reliability decreased greatly, especially the life of the chain; this is a technical contradiction. Retrieving the contradiction matrix yields the inventive principles shown in Table 4, from which several structure schemes can be derived. According to a detail item of inventive principle 17 (arrange objects in multiple rows rather than a single one), a multi-row chain can be used to disperse the tensile force of the chain and strengthen the overall performance; the designed structure is shown in Fig 11. Another scheme follows from a detail item of inventive principle 15 (make stationary objects movable): the rail structure, currently made of rigid resin material, can be converted to flexible materials and moved synchronously with the bristle bricks. In addition, the design requirement can be met in another way by improving the principle RP2. This design problem constitutes a requirement for principle innovation, which can be solved with the steps described in Section 3.2. During transportation, two main friction forces on the cutting table hinder sprocket movement: the sliding friction force f1 between the mane bricks and the support, and the contact friction f2 between the brick surfaces, which drop under gravity, and the chamber floor, which deforms under the air pressure, as shown in Fig 12. In the case of feeding cut, the adsorption system is turned on. From the perspective of the product structure, the bricks are fixed to the chain and are obstructed by the rail and the chamber floor simultaneously while the table runs. Therefore, the structure pairs involved in the friction surfaces are (S21, S32) and (S31, S41), which generate the friction forces f1 and f2, respectively. Considering air infiltration into the chamber after cutting, the air adsorption force required to ensure that the material does not slip is F = 10867.106 N when adsorbing. The maximum friction force on the rail is therefore

$$f_{1\max} = 0.4 \times (F + G_F + G_M + G_S) = 6284.443\ \mathrm{N} \tag{14}$$

and, with a friction coefficient of 0.6 between the chamber floor and the mane bricks, the resistance on the bottom is

$$f_{2\max} = 0.6 \times (F + G_M + G_S) = 8946.664\ \mathrm{N} \tag{15}$$

so the total resistance is

$$f = f_{1\max} + f_{2\max} = 15231.107\ \mathrm{N} \tag{16}$$

The friction during transportation with adsorption is thus 7.8 times greater than without adsorption. Searching the database yields four basic principles for reducing friction: 1) reducing the normal pressure; 2) reducing the roughness of the contact surfaces; 3) adding lubrication, an air cushion, or separation between the contact surfaces; and 4) changing sliding to rolling.
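The friction budget in Eqs. (12)–(16) is easy to recompute. The Python sketch below reproduces the numbers, taking the operand grouping in Eqs. (14)–(15) as our reading of the partially garbled source equations; small discrepancies from the paper's figures come from its rounding of G_M.

```python
# Recompute the cutting-table friction budget from Eqs. (12)-(16).
# The grouping of terms in f1_max and f2_max is our reconstruction.

g = 10.0                                  # m/s^2, as used in the paper
G_F = 0.25 * g * 2 * 2 * 80               # material weight: 800 N
G_M = 0.6 * g * 22 * 22                   # mane bricks: ~2904 N (paper: 2900)
G_S = 5.2 * g * 22                        # rail supports: 1144 N

# Without adsorption (Eq. 13): rail friction only.
f1_lub = 0.018 * (G_F + G_M + G_S)        # with lubrication
f1_dry = 0.400 * (G_F + G_M + G_S)        # without lubrication

# With adsorption (Eqs. 14-16): F is the air adsorption force.
F = 10867.106
f1_max = 0.4 * (F + G_F + G_M + G_S)      # rail friction
f2_max = 0.6 * (F + G_M + G_S)            # chamber-floor friction
f_total = f1_max + f2_max

print(f"f1 (lubricated) = {f1_lub:.1f} N, f1 (dry) = {f1_dry:.1f} N")
print(f"f1_max = {f1_max:.1f} N, f2_max = {f2_max:.1f} N, "
      f"total = {f_total:.1f} N")
print(f"adsorption/dry ratio = {f_total / f1_dry:.2f}x")  # paper reports ~7.8x
```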
According to these principles, the friction force f1 can be reduced by lowering the surface roughness, adding lubricant, or creating a gap between the brick plates (steel) and the rail (PC material). For the friction force f2, the third principle can be used to separate the bricks from the chamber floor, or the contact area can be reduced by adding reinforcing ribs to the chamber floor to limit its deformation; simulation shows that this reduces the range of deformation by an order of magnitude, as shown in Fig 13. These are all conceptual schemes for the product structures, so they should be further improved from a structure optimization perspective; this paper does not go into that detail, nor into the requirements for structure innovation. This paper provides an RFPS framework to support innovative design from the perspective of the different objects of design requirements. It is not a framework for designing a product from scratch based on fixed requirements, but for innovating or upgrading existing products according to the different requirements of all parties. This design methodology has similarities to and differences from existing design methodologies. For example, the RFPS model and the function-behavior-structure (FBS) model both use key elements to represent design knowledge, but the RFPS model places more emphasis on the hierarchy and flexibility of requirements, as well as on the extension analysis methods. Likewise, the RFPS model and the axiomatic design (AD) method both implement the design process by mapping between different design domains, but the RFPS model focuses more on the mapping from high-level requirements to low-level requirements, while the AD method focuses more on the mapping from functional requirements to design parameters. Comparing and analyzing the existing design methodologies in this way reveals the innovations and advantages of the RFPS model, as well as its potential shortcomings and room for improvement. The main feature of the proposed methodology is that different design processes are adopted according to the different requirement objects, and the different requirement objects correspond to function, principle, and structure innovations, respectively. Starting from the requirements for innovation or upgrade, the extension analysis methods are used to map the requirements to functions, principles, and structures. This amounts to decomposing and classifying the requirements: complex top-level requirements are decomposed into simple bottom-level ones, so that the bottom-level requirements can be classified according to the object of the innovation intention. The other feature of the methodology is shown in Fig 5. The RFPS knowledge management platform is a necessary computer-aided technology for storing the design knowledge from the innovation process in a database for further use, although there are some default FPS elements for principle and structure innovation. Different innovation processes are used to upgrade the product, with the detailed processes shown in Figs 6–8, which means that the function, principle, and structure are regarded as equal for product innovation.
On the one hand, different steps are applied to obtain the design scheme for each kind of innovation process, which may add complexity to the design process; on the other hand, these steps are not unique, meaning that other innovative methods can be used to obtain the schemes, which significantly increases the probability of product innovation. Given the non-uniqueness of the steps involved in the innovation processes and the equal emphasis on function, principle, and structure, the RFPS platform stands to benefit significantly from the implementation of more efficient and innovative design methodologies, which are crucial for navigating the intricacies of product innovation across different levels. Design knowledge representation is an important part of knowledge-based design. As described in section 2, the basic element is used to represent the knowledge of the design object, and some symbols are introduced to represent the analysis processes. Ways of representing the innovation process itself are still needed; the conduction transformation provides a viable way to describe the dynamic knowledge of the design process. Another important missing piece is an evaluation system: although some innovative design schemes based on the proposed methodology succeeded in the design of a cutting table, as described in section 4, an evaluation system is still needed to assess these design results in terms of time consumption, cost, and other considerations. The redesign and remanufacturing of outdated products are important ways of reducing the waste of resources and avoiding environmental problems. In addition, the upgradable existing product is the most common object of innovation design. In this paper, a systematic design methodology for product innovation and upgrade is proposed. The extension analysis methods are introduced to map the requirements to function, principle and structure requirements, and the RFPS methodology supports designers in choosing a suitable process to deal with uncertain, complex requirements. This paper mainly focused on the analysis and innovation operations, and the case study verified the feasibility of the methodology. Given the important role of knowledge representation in conceptual design, the proposed method, an Extenics-TRIZ integrated RFPS knowledge representation model that combines different requirement objects with top-down requirement analysis, can not only illustrate the mapping relations between requirements and function, principle and structure, but also enhance the internal knowledge representation of each layer and the reuse of knowledge. In light of the limitations mentioned in the discussion, we have identified several avenues for future research to address these shortcomings and further advance this work: a) Development of a comprehensive RFPS knowledge management platform. A systematic platform for RFPS knowledge management needs to be constructed; it will consolidate design knowledge across various product types and will feature a graphical interface that improves the visualization of, and interaction with, both design knowledge and product entities. b) Integration of advanced design methodologies. Future work will focus on integrating additional sophisticated design methodologies to enrich the innovation design process, such as the Taguchi methods, which are recognized for their ability to bolster quality management within design processes.
The incorporation of these methods is expected to elevate the caliber of our design outcomes. c) Establishment of an evaluation system. The evaluation system will encompass all proposed schemes and will be anchored in the principles of innovation, economic viability, and environmental sustainability, providing a robust framework for assessing the multifaceted success of our design strategies.
PMC11695043
As energy shortage and environmental protection issues become increasingly pressing, the world's major automobile-producing countries are actively carrying out research and development on new energy vehicles. Hybrid electric vehicles (HEVs) can optimize energy management and thermal management systems to enhance heat transfer efficiency within the vehicle's energy transmission system. According to Animasaun et al., the efficiency of heat transfer processes plays a crucial role in addressing energy shortages: efficient heat transfer determines the effectiveness of energy utilization and conservation within a system, directly influencing overall energy demand and supply dynamics. Commercial vehicles lag behind in their new energy transformation due to factors such as large load capacity, complex working environments, high cost, and regulations and policies. Compared with pure electric and fuel cell commercial vehicles, methanol hybrid commercial vehicles are less restricted by battery technology. Methanol, as an alternative vehicle fuel, holds promise for easing petroleum shortages and further improving economic and environmental sustainability. With relatively low transition costs and easy industrialization, methanol hybrid commercial vehicles are the trend for the future development of hybrid commercial vehicles. The EMS is the core technology of hybrid vehicles: it allocates the torque distribution between the engine and the electric motor in real time while the vehicle is driving, and the quality of the EMS largely determines the economy of the whole vehicle. The EMSs of hybrid vehicles are mainly divided into four types: energy management strategies based on deterministic rules, energy management strategies based on fuzzy rules, energy management strategies based on global optimization, and energy management strategies based on instantaneous optimization [4–6]. The rule-based deterministic EMS assigns fixed control parameters and switches the vehicle to different modes based on its current state, adhering to predetermined limits. This approach primarily relies on subjective experience to optimize economic efficiency. Davis et al. proposes a rule-based adaptive strategy that adjusts the battery charging power according to changes in the vehicle's demanded power. Jeoung et al. obtains the optimal logic threshold through comparative optimization to achieve the best economic efficiency. The use of swarm intelligence optimization algorithms to find optimal logic thresholds, with the objective of minimizing fuel consumption and emissions [9–11], significantly improves fuel economy and is gaining increasing attention in energy management strategies. Although this strategy is simple and efficient, it has poor adaptability and robustness and cannot be adjusted and optimized with the real-time state of the vehicle. The fuzzy rule-based EMS fuzzifies the precise input values and uses fuzzy control rules to make decisions on the operating state of the vehicle, regulating and controlling the system and thereby improving the adaptability and robustness of energy management. Li et al. proposes a fuzzy EMS in which the real-time state of the vehicle is used as an input, so that the vehicle can adjust the torque distribution as its real-time state changes, enhancing the flexibility and adaptability of control.
Lei et al. proposes the use of a dual-mode controller to control driving and braking energy recovery, improving fuel economy by 6.7% compared with a conventional single fuzzy controller. However, the above studies did not consider that the battery SOC working areas differ between the driving charging mode and the hybrid driving mode, and the settings of the fuzzy membership functions and rules are mostly affected by subjective factors and cannot achieve global optimization. The EMS based on global optimization defines a cost function over the whole driving cycle and minimizes it, yielding the best fuel economy for the vehicle. Control parameters can be selected through dynamic programming and applied to a rule-based EMS to significantly improve the overall vehicle economy. Mashadi et al. utilized dynamic programming combined with a particle swarm algorithm to determine the optimal operating conditions of the powertrain and then develop control rules. Anselma et al. added a slope-weighting method to dynamic programming, which not only greatly improves the computational speed but also limits the fuel increment relative to traditional dynamic programming to less than 3.3%. However, the dynamic programming algorithm is computationally complicated and requires data for the whole driving cycle, which makes practical application relatively difficult. Energy management strategies based on instantaneous optimization are divided into equivalent-fuel-minimization strategies and model-predictive strategies. Instantaneous optimization selects the equivalent fuel consumption or total power consumption as the objective function at each moment of vehicle operation, thus determining the instantaneous optimal operating point for distributing torque between the engine and the electric motor. Deng et al. used a dynamic programming algorithm to obtain the globally optimal equivalence factor and applied it to the constructed equivalent-fuel-minimization control model. Wang et al. used a fuzzy control algorithm and an optimization algorithm, respectively, to find the optimal equivalence factor. However, the process is complicated and depends on accurate models and data, which is difficult to realize. Xue et al. combined model prediction with an adaptive equivalent fuel consumption minimization strategy to allocate power at planned speeds, ensuring fuel economy, adaptability, and global optimality. Zhou et al. designed a mixed-model predictive controller for optimizing a multi-objective EMS problem in a vehicle-following state to improve fuel efficiency. But model prediction requires solving the optimization problem continuously, which makes the computation very complicated, and the equivalent-fuel-minimization strategy is highly dependent on the equivalence factor and the dynamic driving situation, which makes it relatively difficult to apply in practice. In view of the above problems, this paper proposes a multi-fuzzy control EMS for hybrid commercial vehicles optimized by an improved DBO. Unlike conventional fuel-hybrid commercial vehicles, this paper utilizes a more environmentally friendly and economical methanol engine.
Additionally, compared to the traditional fuzzy control EMS, we incorporate fuzzy control in both the driving charging mode and the hybrid drive mode. This approach avoids the problem of excess engine output capacity caused by the different battery SOC working areas in the two modes. In order to achieve the most economical control strategy for the entire vehicle, this paper employs the DBO to optimize the fuzzy controller. However, to address the DBO's shortcomings, such as susceptibility to local optima and slow convergence, we integrate several enhancements: Tent chaotic mapping with sine-cosine random assignment for population initialization, a dung beetle forager fused with the Lévy flight strategy, and a Cauchy-Gaussian mutation strategy. These improvements enhance the global optimization speed, convergence speed, and search accuracy of the DBO. The improved DBO is then used to optimize the fuzzy control with the objectives of maximizing vehicle economy and minimizing battery SOC fluctuations. The results indicate that the improved DBO exhibits faster convergence, higher convergence accuracy, and greater stability than the traditional DBO, leading to improved vehicle economy and reduced battery SOC fluctuations when optimizing the multi-fuzzy control EMS. The structure of this paper is as follows. First, a vehicle simulation model is established in AVL Cruise. Second, a rule-based EMS is developed in Simulink. Subsequently, an improved DBO is introduced and compared with the original DBO, the genetic algorithm (GA), and the particle swarm optimization algorithm (PSO). By incorporating fuzzy control in the driving charging mode and the hybrid drive mode, a multi-fuzzy control EMS is established. Then, the improved DBO is used to optimize the fuzzy control, resulting in an EMS based on the optimized fuzzy control. Finally, the feasibility of the proposed method is verified and analyzed through simulation results. A flowchart of the article is shown in Fig 1. This paper studies a heavy-duty hybrid commercial vehicle with a methanol engine; the structure of the system is shown in Fig 2. The vehicle has torque inputs from two power sources, the engine and the electric motor. The engine is connected to the electric motor through a clutch, and driving modes are switched by opening and closing the clutch. Hybrid electric vehicles can recharge the power battery when the battery SOC falls below a set value, maintaining the balance of the battery SOC. This reduces the dependence of hybrid vehicles on charging stations during operation and minimizes the negative impact on the microgrid. The main parameters of the whole vehicle are shown in Table 1. From the analysis of the longitudinal forces during travel, and based on the resistances that the vehicle must overcome on the road, the vehicle's equation of motion is obtained as

$$F_t = F_f + F_w + F_i + F_j \quad (1)$$

where $F_t$ is the driving force, $F_f$ the rolling resistance, $F_w$ the air resistance, $F_i$ the gradient resistance, and $F_j$ the acceleration resistance.

$$F_f = f m g \quad (2)$$

where $f$ is the rolling resistance coefficient and $m$ is the vehicle mass. The rolling resistance coefficient is affected by the road surface condition and vehicle speed.

$$F_w = \frac{C_D A v_a^2}{21.15} \quad (3)$$

where $C_D$ is the air resistance coefficient, $A$ is the frontal area of the vehicle, and $v_a$ is the vehicle speed.
$$F_i = G \sin\alpha \quad (4)$$

where $\alpha$ is the inclination angle of the ramp and $G$ is the vehicle gravity.

$$F_j = \delta m \frac{dv_a}{dt} \quad (5)$$

where $\delta$ is the rotating-mass conversion coefficient. The driving force required to move the vehicle, $F_t$, is given by Eq (6):

$$F_t = m g f + \frac{C_D A v_a^2}{21.15} + G \sin\alpha + \delta m \frac{dv_a}{dt} \quad (6)$$

The corresponding demand torque $T_{req}$ is given by Eq (7):

$$T_{req} = \frac{F_t \, r}{i_g \, i_0 \, \eta} \quad (7)$$

where $i_g$ is the transmission ratio, $i_0$ the final drive ratio, $\eta$ the mechanical efficiency of the entire driveline, and $r$ the wheel rolling radius. Methanol, which has a higher oxygen content, combusts more easily and completely, resulting in lower emissions. The development of alternative automotive fuels is one of the most effective ways to address the shortage of oil resources, and methanol is an alternative fuel with great application potential and market prospects: compared with other alternative fuels, the industrial chain for methanol production is mature, with a good industrial base and a large industrial scale, and methanol is liquid at room temperature and pressure, which makes it easy to transport and use. The engine used in this paper is a 15 L methanol engine; its parameters are shown in Table 2. Since the focus of this paper is not the dynamic characteristics of the engine, the engine is simplified to a static model for convenience of study, and its engine map is shown in Fig 3. The transient engine methanol consumption rate can be expressed by Eq (8) as a function of engine speed and torque:

$$\dot{m}_f = f\big(T_e(t), n_e(t)\big) \quad (8)$$

where $\dot{m}_f$ is the engine methanol consumption rate, $T_e(t)$ the engine torque, and $n_e(t)$ the engine speed. The selection of the electric motor is critical to the overall EMS: during driving, the motor should be able to replace the engine in its inefficient operating zone, ensuring that the engine operates in its efficient zone. Table 3 shows the parameters of the motor. The motor map, obtained from the motor speed, torque, and efficiency, is shown in Fig 4. The power of the motor can be expressed as a function of motor torque, speed and efficiency; Eqs (9) and (10) apply when the motor operates as a drive motor and as a generator, respectively:

$$p_{em}(t) = \frac{T_{em}(t) \cdot n_{em}(t)}{9550\, \eta_m(t)} \quad (9)$$

$$p_{em}(t) = \frac{T_{em}(t) \cdot n_{em}(t) \cdot \eta_g(t)}{9550} \quad (10)$$

where $p_{em}(t)$ is the motor power, $T_{em}(t)$ the motor torque, $n_{em}(t)$ the motor speed, $\eta_m(t)$ the motor efficiency when driving, and $\eta_g(t)$ the motor efficiency when generating. In order to obtain simulation results close to real driving conditions, this paper uses AVL Cruise to establish the physical model and Matlab/Simulink to establish the control strategy, with a co-simulation interface making the control strategy act as the vehicle control unit (VCU), as shown in Fig 5. The key to the design of the rule-based torque coordination control strategy is to use the logic threshold parameters and the optimal operating regions of the engine and the power battery to control the switching of operating modes and the torque allocation. The logic threshold parameters are set as in Table 4. The EMS for hybrid vehicles aims to keep the engine operating in its high-efficiency zone while limiting the SOC variation of the power battery, avoiding overcharging and over-discharging to extend battery life.
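As a quick illustration of Eqs (1)-(7), the following Python sketch computes the demand torque from the longitudinal force balance. All parameter values here are placeholders for illustration, not the vehicle data from Table 1.

```python
import math

def demand_torque(m, v_a, dv_dt, grade_rad, f=0.01, C_D=0.6, A=8.0,
                  delta=1.05, i_g=4.0, i_0=3.7, eta=0.92, r=0.5, g=9.81):
    """Demand torque per Eqs (1)-(7). Placeholder defaults, not Table 1 data.

    m         vehicle mass [kg]
    v_a       vehicle speed [km/h] (the 21.15 factor absorbs the unit conversion)
    dv_dt     acceleration [m/s^2]
    grade_rad road inclination angle alpha [rad]
    """
    G = m * g                              # vehicle gravity
    F_f = f * m * g                        # Eq (2): rolling resistance
    F_w = C_D * A * v_a**2 / 21.15         # Eq (3): air resistance
    F_i = G * math.sin(grade_rad)          # Eq (4): gradient resistance
    F_j = delta * m * dv_dt                # Eq (5): acceleration resistance
    F_t = F_f + F_w + F_i + F_j            # Eqs (1)/(6): total driving force
    return F_t * r / (i_g * i_0 * eta)     # Eq (7): demand torque

# Example: a 25 t truck at 60 km/h on a flat road with mild acceleration
print(f"T_req = {demand_torque(25000, 60.0, 0.2, 0.0):.1f} N*m")
```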
To realize these aims, the engine's efficient operating zone must first be determined. Its high-efficiency speed zone is set to 1000–1400 rpm, together with a maximum output curve $T_{max}$, an optimal economic curve $T_{Opt}$, and a minimum torque output $T_{min}$; the high-efficiency working zone of the engine used in this paper is shown in Fig 6. According to the change curves of the power battery's charging and discharging internal resistance, the minimum value $SOC_{min}$ and maximum value $SOC_{max}$ of the battery SOC working range are delineated so that the battery always works in its high-efficiency zone. In practice, a target SOC value is also needed, as shown in Fig 7: when the battery falls below this target value, it is replenished. This paper sets the battery working range to [0.3, 0.8] and the target SOC to 0.45, ensuring that charge is maintained during driving and avoiding excessive charging and discharging of the battery. Since the power source of the whole vehicle consists of the engine and the motor together, optimal torque distribution is needed to improve the economy of the whole vehicle; the operating modes and switching conditions are shown in Table 5. In the rule-based control strategy, torque can only be allocated according to human-specified rules, so its adaptability is poor; most scholars therefore integrate fuzzy controllers into the control strategy to improve its robustness and adaptability. In this paper, the improved DBO is used to optimize the multi-fuzzy controller to maximize the economic efficiency of the whole vehicle while ensuring its dynamic performance. Despite its outstanding performance, the DBO algorithm still has problems to be solved, such as a relative lack of global exploration capability, which makes it easy to fall into local optima. For applications that require accurate optimization in the problem's solution space, the algorithm must have both good global and good local search capabilities. To address this, the Tent chaotic mapping with sine-cosine population initialization strategy, the dung beetle forager fused with the Lévy flight strategy, and the Cauchy-Gaussian mutation strategy are incorporated into the DBO, and the Chaotic Cauchy Variation DBO (TLK-DBO) is developed. (1) Tent chaotic mapping combined with sine-cosine population initialization. Population initialization significantly influences the convergence speed and the quality of the solutions found in global optimization. The random initialization used in the traditional DBO may fail to ensure diversity and effectiveness, whereas Tent chaotic mapping is known for its extensive randomness, continuity, smoothness, and enhanced diversity. These properties enable the initial population to cover the entire search space uniformly, avoiding premature convergence to local optima and balancing exploration and exploitation, thereby enhancing the algorithm's global search capability and robustness. Additionally, this paper integrates Tent chaotic mapping with a sine-cosine random assignment strategy, leveraging the advantages of chaotic and trigonometric mappings to further optimize population initialization.
The sine and cosine strategies contribute to the breadth and smoothness of the search process, ensuring not only diversity within the population but also a uniform distribution across the entire search space. The mathematical model is as follows:

$$x_{i+1} = \begin{cases} 2 x_i, & x_i < 0.5 + \epsilon \\ 2(1 - x_i), & x_i \ge 0.5 + \epsilon \end{cases} \quad (11)$$

$$x_{i+1} = x_{i+1} + \alpha \sin(2\pi x_{i+1}) + \alpha \cos(2\pi x_{i+1}) \quad (12)$$

When a value equals exactly 0.5, the Tent chaotic map falls into a short cycle; to avoid this, a very small value $\epsilon$ is added to ensure the value never equals 0.5. At the same time, to reduce the occurrence of duplicate individuals, the chaotic mapping is coupled with a sine-cosine random assignment strategy: in Eq (12), $\alpha$ is a random assignment value in the range [0, 0.2]. Fig 8 shows the distribution and histogram of the population initialization of the DBO algorithm; the comparison shows that the repetition in the initial population of the TLK-DBO algorithm is significantly reduced, the population distribution is more uniform, and the randomness is higher. (2) Dung beetle forager incorporating the Lévy flight strategy. Lévy flights use their long-tailed step distribution to prevent algorithms from falling into local optima, thereby enhancing their ability to discover globally optimal solutions. They can generate large step sizes, accelerating exploration of the entire search space and improving the efficiency of finding optimal solutions. This strategy suits various complex and diverse optimization scenarios and shows strong adaptability without being limited to specific problem types. Additionally, Lévy flights maintain stable search performance and excel at handling complex, high-dimensional, and nonlinear problems, enhancing the algorithm's robustness and practicality. The forager in the DBO forages in the generated optimal foraging area; with the addition of Lévy flight, the forager can perform an all-round search of the optimal foraging area, improving the global search. The forager position update formula becomes:

$$x_i(t+1) = x_i(t) + \mathrm{Levy} \cdot \big( c_1 \cdot (x_i(t) - Lb^b) + c_2 \cdot (x_i(t) - Ub^b) \big) \quad (13)$$

(3) Cauchy-Gaussian mutation strategy. The DBO, like other swarm intelligence optimization algorithms, is prone to falling into local optima. When the population's best solution remains unchanged for three consecutive iterations, the algorithm is judged to be trapped in a local optimum; at this point an auxiliary strategy intervenes and uses the Cauchy-Gaussian mutation to make the algorithm jump out of the local optimum. The Cauchy-Gaussian mutation strategy utilizes the randomness of the Cauchy distribution to effectively enhance the algorithm's global search capability and robustness. It allows the algorithm to jump to distant positions, avoiding premature convergence to local optima and enabling greater exploration and variability. This strategy enhances the algorithm's exploratory and diverse capabilities in complex, high-dimensional, and nonlinear optimization problems, significantly increasing the probability of finding globally optimal solutions. Moreover, it is versatile and practical, as it is not limited to specific problem types or parameter settings.
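A compact sketch of the three improvement strategies follows; it assumes minimization over box constraints, and the Lévy step is generated with the common Mantegna scheme, which the paper does not specify. Variable names such as `lb_b` and `ub_b` mirror Eqs (11)-(13) and the mutation of Eq (14) shown below, but the coefficients `c1` and `c2` are treated as uniform random scalars by assumption.

```python
import numpy as np

def tent_sincos_init(pop, dim, eps=1e-6, alpha_max=0.2, rng=np.random.default_rng()):
    """Population initialization per Eqs (11)-(12), kept in [0, 1]."""
    x = rng.random((pop, dim))
    for _ in range(10):  # iterate the map a few times for mixing (our choice)
        x = np.where(x < 0.5 + eps, 2 * x, 2 * (1 - x))       # Eq (11)
        alpha = rng.uniform(0, alpha_max, x.shape)            # alpha in [0, 0.2]
        x = x + alpha * np.sin(2 * np.pi * x) + alpha * np.cos(2 * np.pi * x)  # Eq (12)
        x = np.mod(x, 1.0)  # keep iterates inside [0, 1] (implementation choice)
    return x

def levy_step(size, beta=1.5, rng=np.random.default_rng()):
    """Mantegna's algorithm for Lévy-stable steps (an assumed choice)."""
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def forager_update(x, lb_b, ub_b, rng=np.random.default_rng()):
    """Forager position update per Eq (13); c1, c2 assumed uniform in (0, 1)."""
    c1, c2 = rng.random(), rng.random()
    return x + levy_step(x.shape, rng=rng) * (c1 * (x - lb_b) + c2 * (x - ub_b))

def cauchy_gaussian_mutation(x_best, t, t_max, rng=np.random.default_rng()):
    """Escape mutation per Eq (14): Cauchy-heavy early, Gaussian-heavy late."""
    sigma_cauchy = 1 - (t / t_max) ** 2
    sigma_gauss = (t / t_max) ** 2
    cauchy = np.tan(np.pi * (rng.random(x_best.shape) - 0.5))  # Cauchy sample
    return x_best + sigma_cauchy * cauchy + sigma_gauss * rng.normal(size=x_best.shape)
```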
The Cauchy-Gaussian mutation used to assist the jump out of a local optimum is given in Eq (14):

$$x_{new,i}(t) = x_i(t) + \sigma_{cauchy} \cdot \tan\big(\pi (rand_i - 0.5)\big) + \sigma_{gaussian} \cdot randn_i \quad (14)$$

where $\sigma_{cauchy} = 1 - (t/m)^2$ is the scale parameter of the Cauchy distribution and $\sigma_{gaussian} = (t/m)^2$ is the standard deviation of the Gaussian distribution. The Cauchy mutation takes wider values to enhance the global search ability of the algorithm, while the Gaussian mutation is more concentrated and strengthens local search: the Cauchy mutation enables fast global search in the early stage, and the Gaussian mutation enhances local search in the later stage, accelerating the algorithm's iteration. With these improvements, the flow of the TLK-DBO algorithm is as shown in Fig 9. In order to verify the optimization ability of the TLK-DBO algorithm, 8 test functions (4 unimodal and 4 multimodal benchmark functions) are selected, as shown in Table 6, to evaluate the global solving ability of the TLK-DBO algorithm and its ability to escape local optima in comparison with the GA, PSO, and DBO algorithms. To comprehensively verify the convergence accuracy and stability of the algorithm while minimizing randomness, we conducted 50 independent runs of the GA, PSO, DBO, and TLK-DBO algorithms, following the studies of Ye et al. and Yang et al. Taking their analysis of population size and iteration count into account, we set the population size to 50 and the number of iterations to 200 for each run. The average fitness over the 50 runs and its change curve are used to reflect the accuracy and convergence speed of the algorithms; the results and data comparison are shown in Table 7 and Fig 10. Over the 50 runs on $f_1$–$f_8$, as shown in Table 7, the mean and standard deviation of the optimal fitness of TLK-DBO are greatly improved relative to GA, PSO, and DBO. As shown in Fig 10, TLK-DBO has higher solving accuracy and faster convergence than GA, PSO, and DBO, reaching optimal solutions in fewer iterations. These results demonstrate that the TLK-DBO algorithm outperforms GA, PSO, and DBO in convergence speed, convergence accuracy, and stability. The input signals must be fuzzified before the fuzzy controller is built, after which the fuzzy output is derived from the established fuzzy rule base. First, a reasonable scale transformation is carried out to ensure that the driving demand of the vehicle is satisfied and that the driving charging mode can operate properly, with the universe of discourse kept in an appropriate range; the first input Q is defined in Eq (15). In the driving charging mode, the demand torque is less than the economic torque $T_{e\_Opt}$; the effective universe of discourse of Q is [0, 1], that of the battery SOC is [0, 0.5], and the range of the output is [0.3, 1]. Q and M are each divided into five fuzzy subsets {SL (very small), L (small), Z (medium), H (high), SH (very high)}, while the battery SOC is divided into {L (small), Z (medium), H (high)}; the fuzzy rule table is shown in Table 8, and the membership functions are designed as shown in Fig 11. The hybrid driving mode defines its first input as L, as given in Eq (16).
In the hybrid driving mode, where the demand torque $T_{req}$ is greater than the economic torque $T_{e\_Opt}$, the effective universe of discourse of L is [1.0, 1.8], that of the battery SOC is [0.3, 1], and the range of the output is [0.5, 1]. The inputs and outputs take the same form as in the driving charging mode. The fuzzy rule table is shown in Table 9, and the membership functions are designed as shown in Fig 12. Fuzzy control relies largely on engineering experience, which cannot guarantee control accuracy or globally optimal results. In this paper, an optimization algorithm is therefore used to find the optimal membership functions for the designed fuzzy controllers, in order to obtain the best control effect. Taking the input Q in the driving charging mode as an example, as shown in Fig 13, $x_1$–$x_{11}$ are the membership function parameters to be optimized; as these parameters change, the coordinates of the triangular and trapezoidal membership functions are determined, which in turn affect the engine torque output. The total number of membership function parameters to be optimized across the two fuzzy controllers is 59. On this basis, the fuzzy controllers are optimized by the TLK-DBO algorithm to reduce the methanol consumption and the variation of the battery SOC as much as possible, obtaining the maximum economic benefit and extending the life of the power battery. The objective function is therefore established as:

$$F(x) = \frac{\omega_1}{L_{Fuel}} \int Fuel(t)\,dt + \frac{\omega_2}{L_{SOC}} \int SOC(t)\,dt \quad (17)$$

where $\omega_1$ and $\omega_2$ are the weight factors of the optimization objectives of engine methanol consumption and battery power, taking the values 0.7 and 0.3, respectively, and $L_{Fuel}$ and $L_{SOC}$ are the methanol consumption and battery SOC variation before optimization, respectively. Setting the number of iterations to 50 and the population size to 30, the membership functions in the fuzzy controllers are optimized by the GA, PSO, DBO, and TLK-DBO algorithms, respectively; the optimization process of the TLK-DBO algorithm is shown in Fig 14, with the same process applied to the other algorithms. In actual driving, when the battery SOC is lower than the target value, the vehicle must be charged while keeping whole-vehicle economy as the objective and battery SOC fluctuations small, to ensure optimal overall economy. The changes in the battery SOC values of the six strategies under the China Semi-trailer Tractor Cycle (CHTC-TT) are shown in Fig 15. The initial values are all 40%; the battery SOC at the end of Rule-EMS is 45.11%, and the final battery SOC of Fuzzy-EMS, GA-Fuzzy-EMS, PSO-Fuzzy-EMS, DBO-Fuzzy-EMS, and TLK-DBO-Fuzzy-EMS is 45.21%, 45.15%, 44.76%, 44.22%, and 43.74%, respectively. The variations were 12.78%, 13.02%, 12.88%, 11.90%, 10.55%, and 9.35% for the six strategies, respectively. Table 10 shows the methanol consumption under the six strategies. The methanol consumption of Rule-EMS is 121.75 L/100km; under the same conditions, Fuzzy-EMS, GA-Fuzzy-EMS, PSO-Fuzzy-EMS, DBO-Fuzzy-EMS, and TLK-DBO-Fuzzy-EMS reduce the methanol consumption by 3.89%, 4.45%, 6.28%, 7.73%, and 9.07%, respectively. It can be seen that TLK-DBO-Fuzzy-EMS achieves better economy than the other controllers. Fig 16 shows the cumulative methanol consumption curves of the engine under CHTC-TT conditions for the six strategies.
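Stepping back to the optimization objective in Eq (17) above: as we read it, each term is normalized by its pre-optimization baseline. The sketch below illustrates that reading; the trapezoidal integration, the time grid, and the toy signals are our assumptions, not the paper's data.

```python
import numpy as np

def ems_objective(fuel_rate, soc_signal, t, L_fuel, L_soc, w1=0.7, w2=0.3):
    """Weighted EMS fitness per Eq (17), normalized by pre-optimization baselines.

    fuel_rate   methanol consumption rate over time
    soc_signal  battery SOC trajectory over time (the paper writes the integral
                of SOC(t); using the deviation from target is one plausible reading)
    t           time grid [s]
    L_fuel      total methanol consumption before optimization (baseline)
    L_soc       SOC variation before optimization (baseline)
    """
    fuel_term = np.trapz(fuel_rate, t) / L_fuel   # integral of Fuel(t) dt
    soc_term = np.trapz(soc_signal, t) / L_soc    # integral of SOC(t) dt
    return w1 * fuel_term + w2 * soc_term

# Toy example on a 100 s horizon
t = np.linspace(0, 100, 101)
fuel_rate = np.full_like(t, 0.01)                         # constant 0.01 L/s
soc_dev = np.abs(0.45 - np.linspace(0.40, 0.44, 101))     # deviation from target SOC
print(f"F(x) = {ems_objective(fuel_rate, soc_dev, t, L_fuel=1.2, L_soc=0.1):.3f}")
```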
Comparative analysis indicates that the TLK-DBO-Fuzzy-EMS strategy maximizes the overall vehicle economy relative to the other five strategies. Based on the methanol consumption and battery SOC changes, it can be inferred that, while ensuring optimal economy, reducing SOC fluctuations throughout the entire driving cycle helps minimize instability in the internal chemical reactions and occurrences of electrochemical corrosion, contributing to a longer battery lifespan. Fig 17 compares the engine operating points under the six strategies. As shown in the figure, the engine operates within the delineated range. In Rule-EMS (Fig 17(A)), the engine operating point is relatively fixed. In Fuzzy-EMS (Fig 17(B)), it is evident that after fuzzy control adjustment the engine operating points move closer to the high-efficiency zone. In GA-Fuzzy-EMS (Fig 17(C)), PSO-Fuzzy-EMS (Fig 17(D)), DBO-Fuzzy-EMS (Fig 17(E)), and TLK-DBO-Fuzzy-EMS (Fig 17(F)), the optimized energy management strategies cluster more engine operating points in the high-efficiency zone. With the minimum torque set at 800 N·m, the proportions of engine operating points in the inefficient zone for the six strategies are 34.42%, 25.10%, 16.46%, 17.27%, 19.55%, and 16.52%, respectively, as shown in the histogram in Fig 18. Although the proportions of engine operating points in the high-efficiency zone for TLK-DBO-Fuzzy-EMS and GA-Fuzzy-EMS are not significantly different, TLK-DBO-Fuzzy-EMS achieves the best methanol economy and reduces battery SOC fluctuations under the same conditions. It can be seen that TLK-DBO-Fuzzy-EMS can adjust the engine's output curve based on the real-time state of the vehicle during driving, showing greater robustness and adaptability than Rule-EMS; compared to Fuzzy-EMS, it finds better fuzzy control strategies that further improve vehicle economy and reduce battery SOC fluctuations. To further validate the universality and effectiveness of the proposed control strategy, we conducted tests under the China Heavy Truck Cycle (CHTC-HT) with the various algorithm-optimized control strategies, as shown in Fig 19(A). The methanol consumption of the corresponding strategies was 144.68 L/100km, 124.57 L/100km, 118.98 L/100km, 116.40 L/100km, 115.76 L/100km, and 115.07 L/100km; the comparison results are depicted in Fig 19(B). The results verify that the improved dung beetle algorithm EMS has better economy and adaptability. In this paper, in order to improve the poor adaptability and robustness of the rule-based EMS, a multi-fuzzy control EMS optimized by an improved DBO is proposed. To improve the optimization of the fuzzy control, this paper incorporates the population initialization strategy of Tent chaotic mapping fused with sine-cosine random assignment, the dung beetle forager fused with the Lévy flight strategy, and the Cauchy-Gaussian mutation strategy into the traditional DBO; simulations on eight test functions show that these greatly improve the global search ability, convergence speed, and search accuracy of the traditional DBO. Unlike most studies, which only add a single fuzzy controller in drive mode, this paper incorporates fuzzy controllers in both the driving charging and hybrid drive modes, allowing a better division of the battery SOC working area in the different modes and effectively addressing the issue of excess engine output power.
Compared to the currently most widely used rule-based EMS, the improved DBO-optimized multi-fuzzy control in this paper reduces the overall methanol consumption of the vehicle by 9.07% and the fluctuation of the battery SOC by 3.43%, effectively enhancing the vehicle's economy, decreasing the fluctuation of the battery SOC, and greatly improving the adaptability and robustness of the rule-based EMS. Comparison of the optimization results shows that TLK-DBO has a better optimization effect than the traditional DBO, with more engine operating points falling in the delineated high-efficiency working area, which effectively improves the economy of the whole vehicle and reduces the fluctuation of the battery SOC. Meanwhile, this paper further validates the feasibility and effectiveness of the optimization method under different operating conditions.
PMC11695121
Wheat is among the most extensively cultivated food crops worldwide, supplying vital sustenance and nutrition to billions of people globally. As reported by the Food and Agriculture Organization of the United Nations, wheat constitutes over one-third of the world's cereal production and is a crucial component of the human food supply. However, wheat is subject to a variety of diseases during its growth, including wheat leaf rust, powdery mildew, red mold and stripe rust. These diseases substantially diminish both the yield and quality of wheat, leading to significant losses in the agricultural economy. For instance, wheat blast disease not only impacts yield but also results in elevated levels of toxins in the grains, posing a serious threat to both human and animal health. Hence, timely and accurate diagnosis and management of wheat diseases are crucial for ensuring food security and sustainable agricultural development. Traditional methods of wheat disease diagnosis rely on field observation and the empirical judgment of agronomists, which is not only time-consuming and laborious but also involves a degree of subjectivity and risk of misjudgment. In modern large-scale agricultural production, there is an urgent need for efficient and reliable automated diagnostic tools to improve diagnostic efficiency and accuracy and to realize early warning and precise prevention and control. Traditional methods of wheat disease diagnosis mainly include visual inspection and laboratory testing. Visual inspection relies on the experience and knowledge of agricultural experts; although it can be carried out in the field in real time, it is inefficient and highly expert-dependent. In addition, visual inspection struggles to provide rapid, comprehensive coverage of large planting areas and is prone to missing early disease symptoms. Traditional laboratory testing methods, while accurate, are hindered by their cumbersome, time-consuming, and costly nature, making them impractical for large-scale monitoring and real-time diagnosis. In response to these challenges, automated and intelligent methods for diagnosing wheat diseases have become a key research focus. Recent advancements in information technology and agricultural techniques have fostered the adoption of image processing technology for identifying and classifying crop diseases, marking it as a burgeoning diagnostic tool. However, traditional image processing methods rely on hand-crafted features, which struggle to adapt to complex, changing field environments and diverse disease symptoms, limiting classification performance. Deep learning, a leading technology in artificial intelligence, excels particularly in tasks involving image recognition and classification. The convolutional neural network, an important deep learning model, mimics the processing of the human visual system through a hierarchical architecture and can automatically extract multi-level features from image data, enabling efficient image classification and recognition. Deep learning methods have already achieved remarkable results in fields such as medical image analysis, autonomous driving, and security monitoring, and have brought new possibilities to agricultural image processing. In the field of agriculture, deep learning-based image analysis techniques are gradually being applied to crop pest and disease detection and classification.
By constructing large-scale image datasets and training deep learning models, automated and intelligent diagnosis of crop diseases can be realized. Deep learning methods exhibit superior accuracy and robustness compared to traditional methods, adeptly adapting to complex and dynamic field environments. There have been research attempts to apply deep learning to wheat disease image classification, and studies have shown that deep learning-based methods achieve good results to some extent. For example, some studies have used convolutional neural networks to classify wheat leaf diseases and achieved high classification accuracy. However, these methods based on convolutional neural networks often neglect the extraction of global features from wheat leaf disease images. Global features play an important role in wheat leaf disease images because they not only contain information about the overall distribution and morphology of the disease, but also capture important information about the environmental background, lighting conditions, and overall leaf morphology. To address this issue, we propose the Global Local Feature Network (GLNet) for classifying wheat leaf disease images. GLNet initially employs a bottleneck block, composed of small convolutional kernels, to extract features from wheat leaf disease images. Subsequently, an inverted bottleneck block utilizes a large convolutional kernel to capture global features from the images, while another inverted bottleneck block, employing a similar architecture but with small convolutional kernels, extracts local features. Furthermore, a feature fusion block effectively enhances the interaction between these global and local features. The main contributions of this paper include the proposed GLNet architecture, its parallel global and local feature blocks, and the feature fusion block that integrates them. AlexNet, with its unique network architecture and technological innovations such as the ReLU activation function, Dropout regularization, data augmentation, and local response normalization, opened up new paths in the field of deep learning. At the same time, by introducing deeper network layers, it can extract more levels of abstract features than previous shallow networks, and thus performs well in complex image recognition tasks. The core design idea of VGGNet is relatively simple and intuitive: by connecting multiple smaller convolutional kernels (usually 3x3) in series instead of using larger ones, it increases the depth of the network while reducing the number of parameters. This design strategy not only improves the accuracy of the model but also enhances its ability to extract image features. ResNet addresses the issues of gradient vanishing and model degradation in deep neural networks during training by incorporating residual connections. This innovation enables the successful training of much deeper networks than previous models, resulting in significant performance improvements across various tasks. InceptionNet enhances model accuracy and performance through the Inception block, which uses convolutional kernels of different sizes and pooling operations at the same network level, capturing image features across multiple scales. By reducing computational demands and parameter count, this architecture extracts features efficiently while managing resource consumption.
It finds extensive application in computer vision tasks such as image classification and object detection. ACNet improves model accuracy and efficiency by introducing asymmetric convolution strategies: these strengthen feature extraction during training while keeping inference computation unchanged through convolution kernel fusion. The fundamental idea is to let convolution kernels dynamically adjust their size or shape based on the input data, or to combine kernels of different sizes to capture multi-scale features, enhancing the model's understanding and generalization of complex scenes. EfficientNet achieves unified scaling of network depth, width and resolution through a compound scaling technique, significantly reducing the number of parameters and the computation of the model while maintaining high performance. MobileNet is a compact convolutional neural network architecture developed by the Google team; its primary objective is to substantially reduce model size and computational complexity while preserving accuracy, making it particularly well suited for deployment on mobile devices and embedded systems. DenseNet greatly facilitates feature reuse and gradient propagation through a dense connectivity mechanism, in which the output of each layer is used directly as the input of all subsequent layers, effectively mitigating gradient vanishing and improving the training efficiency and performance of the model. Meanwhile, DenseNet further reduces the number of parameters and improves computational efficiency by introducing Bottleneck and Transition layers to control the width and depth of the network. ShuffleNet effectively reduces the number of parameters and the computational complexity of the model by adopting innovative techniques such as group convolution and channel shuffle. M-bCNN uses a unique convolutional kernel matrix arrangement with parallel convolutional layers; techniques like DropConnect, exponential linear units, and local response normalization are integrated to combat overfitting and gradient vanishing. Compared to traditional networks, M-bCNN effectively boosts data streams, neurons, and connectivity channels with a modest parameter increase, enhancing its nonlinear mapping and data characterization capabilities. Feng et al. constructed a wheat leaf disease image recognition model based on MobileNetV2 and used parameters trained on the ImageNet dataset as the model's initial parameters. Jiang et al. enhanced the VGG16 model through multi-task learning, leveraging pre-trained ImageNet weights for transfer learning and fine-tuning to improve wheat leaf disease understanding. RFE-CNN combines RCAB, FB, EML, and CNN to enhance convolutional neural networks' accuracy in classifying wheat leaf disease images. WR-EL integrates multiple CNN models using bagging, snapshot ensembling, and SGDR algorithms to boost accuracy in wheat leaf disease image classification. Khan et al. developed an efficient machine learning framework for identifying and categorizing various wheat diseases, focusing on brown rust and yellow rust. The method involves several stages: initially, gathering data from diverse fields in Pakistan while accounting for illumination and orientation parameters; next, preprocessing the data using segmentation and scaling techniques to distinguish healthy from affected areas.
Lastly, the machine learning model is trained on the prepared dataset. Abdulaziz Alharbi et al. proposed a wheat disease classification network using few-shot learning with EfficientNet as the backbone, capable of classifying 18 wheat diseases, and introduced an attention mechanism to enhance feature selection. Bansal et al. proposed a hybrid model for detecting and classifying wheat leaf spot diseases, combining Faster R-CNN for region-based convolutional detection with an SVM for classification. Shafi et al. utilized a pre-trained U2-Net model for background removal and extraction of rust-affected wheat leaves, and applied deep learning classifiers, specifically Xception and ResNet-50, to assess the severity of stripe rust disease. Kukreja et al. proposed a deep learning method, a deep convolutional neural network (DCNN), to automatically classify wheat rust infestation without human intervention; this DCNN training and testing process produced definitive, high classification results for wheat rust disease. As shown in Figure 1, GLNet mainly consists of the following parts: the feature extraction block, the local feature block, the global feature block, and the multi-scale feature fusion block. As shown in Figure 2, the feature extraction block consists of two branches. The first branch consists of two 1x1 convolutional layers and a 3x3 convolutional layer: the first 1x1 convolutional layer reduces the dimensionality of the input features, lowering the computational complexity and the number of parameters; the 3x3 convolutional layer increases the receptive field, capturing more complex spatial features; and the second 1x1 convolutional layer further extracts and combines the features. The second branch is simpler and consists of a single 1x1 convolutional layer, used to directly transform and combine the input features and provide an additional nonlinear transformation. Through the combination of these two branches, the feature extraction block efficiently extracts multi-scale, multi-level features, improving the expressiveness and performance of the model. As shown in Figure 3, the local feature block consists of two branches. The first branch consists of a 3x3 convolutional layer and two 1x1 convolutional layers. First, the 3x3 convolutional layer enhances the feature representation by extending the receptive field to capture more complex and diverse spatial features. A 1x1 convolutional layer then reduces the number of feature channels, decreasing computational complexity and parameters, and another 1x1 convolutional layer further extracts and recombines features on the basis of this dimensionality reduction. The second branch is a residual connection that passes the input features directly to the output, skipping the intermediate convolution operations. This connection helps to alleviate the gradient vanishing problem and promotes stable, efficient training of deep neural networks. Through the combination of these two branches, the local feature block effectively extracts multi-scale, multi-level features while the residual connection maintains training stability and efficiency, ensuring that the input features are combined with the convolutionally processed features for better feature learning.
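To make the two-branch design concrete, here is a minimal Keras sketch of the feature extraction and local feature blocks in the spirit of Figures 2 and 3. The channel counts and the placement of ReLU activations are our assumptions, since the text specifies only the kernel sizes and branch structure.

```python
import tensorflow as tf
from tensorflow.keras import layers

def feature_extraction_block(x, channels):
    """Two-branch block per Figure 2: 1x1 -> 3x3 -> 1x1 plus a 1x1 side branch.

    Channel counts and activation placement are assumptions.
    """
    # Branch 1: reduce, extract spatial features, then recombine
    b1 = layers.Conv2D(channels // 4, 1, padding="same", activation="relu")(x)
    b1 = layers.Conv2D(channels // 4, 3, padding="same", activation="relu")(b1)
    b1 = layers.Conv2D(channels, 1, padding="same")(b1)
    # Branch 2: direct 1x1 transform of the input
    b2 = layers.Conv2D(channels, 1, padding="same")(x)
    return layers.ReLU()(layers.Add()([b1, b2]))

def local_feature_block(x):
    """Residual block per Figure 3: 3x3 -> 1x1 -> 1x1 with identity shortcut."""
    c = x.shape[-1]
    b1 = layers.Conv2D(c, 3, padding="same", activation="relu")(x)
    b1 = layers.Conv2D(c // 4, 1, padding="same", activation="relu")(b1)
    b1 = layers.Conv2D(c, 1, padding="same")(b1)
    return layers.ReLU()(layers.Add()([b1, x]))  # residual connection

inp = layers.Input((224, 224, 3))
x = feature_extraction_block(inp, 64)
x = local_feature_block(x)
print(tf.keras.Model(inp, x).output_shape)  # (None, 224, 224, 64)
```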
As shown in Figure 4, the global feature block consists of two branches. The first branch consists of a convolutional layer large enough to cover the feature map and two 1x1 convolutional layers. The large convolutional layer is used for global information extraction: it captures the global features of the entire feature map and provides richer contextual information. If the feature map size is 32, the kernel size of Conv-BxB is 31; if the feature map size is 16, the kernel size of Conv-BxB is 15. The first 1x1 convolutional layer then reduces the number of feature channels, lowering the computational complexity and the number of parameters, and the second 1x1 convolutional layer further extracts and recombines features on this reduced basis to enhance the feature representation. The second branch is a residual connection, which passes the input features directly to the output, skipping the intermediate convolution operations; this alleviates gradient vanishing and promotes stable, efficient training of deep neural networks. Through the combination of these two branches, the global feature block effectively extracts global features while the residual connection maintains training stability and efficiency, ensuring that the input features are combined with the convolutionally processed features for better feature learning. This design not only captures global information but also reduces computational complexity through the 1x1 convolutional layers, making the model more efficient while retaining an effective feature representation. As shown in Figure 5, the feature fusion block mainly consists of two 1x1 convolutional layers and a Softmax function. The process is as follows: first, the global and local features are stacked together to form a comprehensive feature map; then, the Softmax function is used to calculate the weights of the global and local features, assigning an appropriate weight to each scale. The computed weights are applied to the original global and local features, adjusting the importance of each, and the weighted global and local features are summed element-wise to form the fused features. Finally, the fused features are processed by two 1x1 convolutional layers to further extract and combine features and enhance the feature expression. Through this fusion process, the global and local features are effectively combined, making full use of multi-scale information and enhancing the feature learning and expressive ability of the model. After the weight adjustment and element-wise summation, the global and local features cooperate better during fusion, and the 1x1 convolutional layers further optimize the feature representation, improving the overall performance of the model. The classification layer consists of two fully connected layers; the first has an output dimension of 256 and maps the input features to a more compact feature space, capturing more discriminative features.
The output dimension of the second fully connected layer is the number of categories; it maps the features extracted by the previous layer to specific classification results. Each output node corresponds to a category, and the output values of these nodes are transformed into per-category probabilities through the Softmax function. In addition, a Dropout layer with a dropout rate of 0.5 is included between the two fully connected layers to prevent overfitting. The Dropout layer randomly discards half of the neurons, making the model more robust during training and avoiding over-reliance on the training data. Through this design, the classification layer can effectively extract and utilize the input features, improve the generalization ability of the model through the Dropout layer, and finally achieve accurate classification results. GLNet is implemented with TensorFlow and Keras, with a batch size of 40, 100 epochs, the Adamax optimizer, a learning rate of 1e-4, and the cross-entropy loss function. This paper reproduces all comparison networks with the same hyperparameters, and all experiments are performed on a Tesla P100. Training is stopped when the accuracy does not increase for more than three epochs. To evaluate the performance of GLNet and the comparison networks, we use accuracy (ACC), precision (Prec), recall, and F1 score (F1), with Prec, recall, and F1 macro-averaged across categories. This paper validates the performance of GLNet using the Philippines Rice Diseases dataset, which has a total of 14 categories, including Rice Blast (140 photos), Sheath Blight (98 photos), Brown Spot (150 photos), Narrow Brown Spot (98 photos), Sheath Rot (98 photos), Stem Rot (100 photos), Bakanae (100 photos), Rice False Smut (99 photos), Bacterial Leaf Blight (140 photos), Bacterial Leaf Streak (99 photos), Tungro Virus (100 photos), Ragged Stunt Virus (100 photos), and Grassy Stunt Virus (100 photos). Figure 6 shows examples of the different categories. To validate the performance of GLNet, we compare it with typical image classification networks including VGGNet, ResNet, InceptionNet, DenseNet, and EfficientNetB0. We can draw the following conclusions from the data in Table 1. First, upon examining these results in detail, it becomes evident that traditional convolutional neural networks (CNNs) such as VGGNet16, ResNet152, ResNet50, and ResNet101 tend to excel at capturing local details but often overlook global contextual information; in wheat leaf disease image classification this issue is particularly pronounced, and their performance on metrics like ACC, Prec, recall, and F1 is generally lower than that of more advanced networks. GLNet, by contrast, addresses this limitation by introducing a global feature block that effectively captures the overall image architecture and contextual information, compensating for the traditional networks' shortcomings in global feature perception. This enhancement allows GLNet to excel in understanding and classifying wheat leaf disease images, as evidenced by its top-tier performance across all metrics, with an accuracy of 0.9638, a precision of 0.9665, a recall of 0.9635, and an F1 score of 0.9637. Second, GLNet leverages a combination of local and global feature blocks, seamlessly integrating the information from both through a feature fusion block.
We can draw the following conclusions from the data in Table 1. First, examining the results in detail shows that traditional convolutional neural networks (CNNs) such as VGGNet16, ResNet152, ResNet50, and ResNet101 tend to excel at capturing local details but often overlook global contextual information; in wheat leaf disease image classification this issue is particularly pronounced. Their performance, as measured by ACC, Prec, Recall, and F1, is generally lower than that of more advanced networks. GLNet addresses this limitation by introducing a global feature block that effectively captures the overall image structure and contextual information, compensating for the traditional networks' shortcomings in global feature perception. This enhancement allows GLNet to excel at understanding and classifying wheat leaf disease images, as evidenced by its top-tier performance across all metrics, with an accuracy of 0.9638, a precision of 0.9665, a recall of 0.9635, and an F1 score of 0.9637.

Second, GLNet leverages a combination of local and global feature blocks, seamlessly integrating the information from both through a feature fusion block. The local feature blocks focus on capturing local details and texture features within the image, while the global feature blocks provide broader context and overall structural information. By using soft weight assignment and element-wise summation, the feature fusion block ensures that the advantages of both local and global features are comprehensively utilized. This dual focus enables GLNet to analyze wheat leaf disease images from a more holistic perspective, significantly improving recognition accuracy and robustness across different disease types. Compared with other advanced networks such as DenseNet121, DenseNet169, EfficientNetB0, InceptionNet, RFE-CNN, DCNN, and M-bCNN, GLNet demonstrates superior performance, with higher accuracy, precision, recall, and F1 scores. This highlights the effectiveness of GLNet's architecture in capturing both local and global features, which is crucial for accurately classifying wheat leaf disease images. In summary, the superiority of GLNet in the wheat leaf disease image classification task stems from its ability to integrate local and global features effectively and to achieve a more comprehensive feature understanding and expression through the feature fusion block, improving both the classification performance and the practicality of the model.

As shown in detail in Figure 7, the GLNet model exhibits excellent classification ability for every category of wheat leaf disease image, a result that strongly supports the strategy of introducing global features. This strategy enables the model to capture and learn the complex features of wheat leaf disease images from a more comprehensive perspective, greatly improving classification accuracy and generalization ability.

To verify the contribution of the different blocks in GLNet, we designed the following ablation experiments (a sketch of how such variants can be assembled is given below): GLNet (w/o global) denotes GLNet without the global feature blocks, GLNet (w/o local) denotes GLNet without the local feature blocks, and GLNet (w/o fusion) denotes GLNet without the feature fusion blocks.
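The sketch below shows one way the ablation variants could be assembled, reusing the block sketches given earlier; the stand-in local branch and every layer choice here are assumptions, since the paper's local feature block is specified elsewhere.

```python
from tensorflow.keras import Input, Model, layers

def build_glnet_variant(input_shape=(32, 32, 3), num_classes=14,
                        use_global=True, use_local=True, use_fusion=True):
    inp = Input(shape=input_shape)
    stem = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)

    # A plain conv stack stands in for the paper's local feature block.
    loc = (layers.Conv2D(64, 3, padding="same", activation="relu")(stem)
           if use_local else None)
    glob = global_feature_block(stem) if use_global else None

    if use_fusion and glob is not None and loc is not None:
        feat = feature_fusion_block(glob, loc)
    else:
        # Without fusion (or with one branch ablated), fall back to
        # element-wise addition of whatever branches remain.
        branches = [b for b in (glob, loc) if b is not None]
        feat = branches[0] if len(branches) == 1 else layers.Add()(branches)

    return Model(inp, build_classification_head(feat, num_classes))

# e.g. the "GLNet (w/o fusion)" ablation:
# model = build_glnet_variant(use_fusion=False)
```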
Comparing the results in Table 2 leads to the following conclusions. First, local features play an important role in wheat leaf disease image recognition. In the GLNet model, the local feature block is responsible for capturing these subtleties, as the experimental results in Table 2 clearly demonstrate: the ACC of GLNet (w/o local) is 0.9493, Prec is 0.9549, Recall is 0.9492, and F1 is 0.9491, a marked decline compared with the full GLNet. This indicates that, without the local feature block, GLNet struggles to focus on detailed features such as the distinctive texture of localized lesions on leaves and cannot accurately distinguish this key local lesion information, which in turn substantially reduces classification performance. It highlights the irreplaceable role of local features in providing the precise detail the model needs to identify different disease types.

Second, global features are also indispensable in the wheat leaf disease image recognition task: they capture the overall structure of the entire leaf image as well as the background information. Considering the leaf as a whole, lesions are reflected not only in local spots but also in the overall color change, the distribution of lesions across the blade, and the contrast with the surrounding healthy tissue; these are important clues to the overall pathological state of the leaf. In the experimental data, the indexes of GLNet (w/o global) were relatively poor, with an ACC of 0.9130, Prec of 0.9149, Recall of 0.9135, and F1 of 0.9115, far below those of the complete GLNet. Without the guidance of global features, GLNet cannot fully understand the overall pathology of the leaf, leading to inaccurate judgments of overall lesion distribution and severity and thus limiting classification performance.

The feature fusion block plays a key role in GLNet by allowing local and global features to work together. Wheat leaf disease images contain multiple levels of information, from microscopic localized spots to the macroscopic state of the whole leaf, and this information can be exploited fully only when it is effectively integrated. For example, when local texture features are combined with global features such as overall color and spot distribution, the model can judge and classify the disease more comprehensively and accurately. The data in Table 2 show that the GLNet (w/o fusion) variant performs noticeably worse, with ACC, Prec, Recall, and F1 of 0.9493, 0.9541, 0.9492, and 0.9497, respectively, falling short of the best performance of the full GLNet. As Figure 8 shows, each building block in GLNet plays an active role in processing all types of wheat leaf disease images. Notably, only when these blocks work in concert, that is, when they are used simultaneously, does GLNet perform at its best and achieve optimal classification results. This illustrates the close cooperation and complementarity among the components of the GLNet architecture, which together strengthen the model's overall ability to recognize wheat leaf disease images.

We visualized the outputs of the local feature block and the global feature block using Grad-CAM. As Figure 9 shows, the local feature block concentrates on local regions of the wheat leaf disease image, while the global feature block, thanks to its ability to learn global features, attends to a wider area than the local feature block.
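Grad-CAM visualizations such as those in Figure 9 can be produced along the following lines; this is a generic sketch of the standard Grad-CAM procedure, and the layer name passed in (for example, the output layer of the local or global feature block) is an assumption.

```python
import tensorflow as tf

def grad_cam(model, image, layer_name):
    # Expose both the chosen feature map and the final prediction.
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        fmap, preds = grad_model(image[None, ...])
        idx = int(tf.argmax(preds[0]))       # predicted class
        top_score = preds[:, idx]
    grads = tape.gradient(top_score, fmap)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # pooled gradients
    cam = tf.reduce_sum(fmap * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                                # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()      # normalized heatmap
```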
When dealing with the wheat leaf disease image classification task, traditional convolutional neural networks often suffer from insufficient local feature perception and an incomplete understanding of global information. To overcome these shortcomings, this paper proposes GLNet as a new solution. GLNet adopts a global-local network architecture that effectively integrates local and global features by processing global and local feature blocks in parallel and combining them with feature fusion blocks. This design not only enables the model to capture the multi-scale features of an image but also significantly improves performance and accuracy in the wheat leaf disease classification task. The innovation of GLNet lies in its ability to simultaneously process and fuse local details and global background information at different scales. Experimental results show that the performance of GLNet decreases significantly when the local feature, global feature, or feature fusion blocks are removed, further validating the effectiveness and necessity of its design. These properties make GLNet a powerful tool for classifying wheat leaf disease images and provide new technical support and methodology for disease identification and prediction in agriculture.
PMC11695122
The intersection of digital technology and democratic processes presents a transformative avenue for enhancing public health responses during health emergencies. This special issue, titled "Digital Democracy and Emergency Preparedness: Engaging the Public in Public Health," explores how digital platforms and democratic engagement can work together to strengthen EP (emergency preparedness) and response mechanisms. The advent of digital technology has revolutionized the way information is disseminated and the way communities engage with health authorities. From social media campaigns to mobile health apps, digital tools offer unprecedented opportunities for public participation in health-related decision-making processes [1–3]. This paradigm shift towards DD (digital democracy) in public health not only facilitates real-time communication and feedback but also empowers communities, ensuring their voices are heard and their needs addressed in times of crisis. However, leveraging DD for public health is not without its challenges. Issues such as the digital divide, privacy concerns, fragmented governance structures, and the spread of misinformation pose significant hurdles to effective engagement [5–9]. Despite these challenges, the potential benefits of integrating digital tools with democratic practices in emergency preparedness are very promising.

This special issue explores critical aspects of public health during the COVID-19 pandemic, including communication strategies in nursing homes in Southern Switzerland (Bernardi et al.), the impact of social media overload on depressive symptoms among Chinese students (Xie et al.), the role of communicative behaviors and organizational reputation in shaping public health intentions (Akbulut), public sentiment toward easing COVID-19 measures in China (Xin et al.), and the use of digital diary methodologies to capture real-time insights and amplify diverse voices during crises (Kaiser-Grolimund et al.).

DD offers a novel approach to confronting public health challenges, as digital platforms can be used to foster a more engaged and informed public. This digital engagement is crucial for disseminating health-related information, for empowering communities to participate actively in health decision-making processes, and for building resilient health systems. Community empowerment in public health is one of the aims of the UN Sendai Framework for Disaster Risk Reduction. DD facilitates this empowerment: incorporating it into public health initiatives aligns with the Sendai Framework's call for a more inclusive and participatory approach to disaster risk management. This two-way exchange enhances the transparency and accountability of public health initiatives and ensures that EP and response strategies are grounded in community knowledge and experience. By adopting DD tools, public health authorities can move beyond top-down communication, leading to a more dynamic, inclusive, and empowering approach to health governance.

Public engagement in the context of EP involves active participation, collaboration, and empowerment of communities to take charge of their health and safety. A truly resilient public health system is one that incorporates the public as a key stakeholder in this preparation [16–18]. Thus, engaging the public in EP involves educating communities and promoting a culture of preparedness.
This ensures that emergency plans reflect and respond to the needs and vulnerabilities of different communities, so that strategies are both effective and equitable. Engaging the public also helps to build trust between health authorities and communities. Trust facilitates the acceptance of accurate information and the rejection of mis- and disinformation, which undermines emergency response efforts. Moreover, by involving local communities in EP planning, authorities can harness local knowledge and insights, which are valuable for creating locally relevant EP measures [21–23]. DD plays a fundamental role in facilitating this engagement. Effective EP thus requires a paradigm shift from a top-down approach to a collaborative model that values and incorporates public input. This shift enhances the effectiveness of preparedness measures by promoting resilient communities. Arguably, engaging the public in EP is not just a strategic necessity but also a moral obligation to ensure that communities are not merely passive recipients of monitoring or interventions, but active participants in safeguarding their health and wellbeing.

The integration of DD into public health comes with its own challenges and ethical considerations. These issues must be weighed carefully to ensure that the benefits of digital engagement are realized without compromising individual rights or exacerbating existing inequalities. One example is the digital divide: access to digital technologies is markedly unequal, which can limit the effectiveness of DD initiatives. Privacy concerns represent another significant challenge. DD approaches in EP involve the collection, storage, and analysis of personal data. Without stringent data protection measures, there is a risk of privacy breaches, which can undermine public trust and deter participation. The rapid spread of infodemics can have profound consequences, undermining public health efforts and leading to confusion and panic. Combating infodemics while respecting freedom of expression requires a delicate balance [27–29]. Ethical considerations also extend to the design and implementation of DD initiatives, which should engage communities without reinforcing existing power imbalances. Participatory design processes can help ensure that digital tools are accessible, user-friendly, and culturally sensitive.

The articles presented in this special issue highlight the importance of integrating DD into EP strategies. The convergence of digital tools and democratic engagement presents a powerful avenue to enhance public health responses to emergencies and to build more resilient communities, leveraging technology that facilitates communication and participation through bottom-up and bi-directional approaches. This special issue also underlines substantial gaps in our understanding and application of DD approaches in public health, making evident a strong need for continued research, innovation, and interdisciplinary collaboration. The advancements highlighted within these pages thus serve as a foundation for future work. The journey towards integrating DD into public health EP is full of challenges. However, the potential rewards, including more resilient communities, enhanced public engagement, and more effective emergency responses, underline the relevance of continuing these efforts.
PMC11695125
From the perspective of understanding how to improve child healthcare service systems (CHCSS), Europe's pediatric community is aware of the diversity of pediatric healthcare provision across 53 different countries (1–3). However, Europe has lacked a comprehensive understanding of how this diversity affects health outcomes. Neither the pediatric workforce resources nor the training capacities and needs in pediatrics were fully understood. Differences in the delivery of pediatric nephrology care have been reported for European countries since the 1990s (4–6). However, the underlying "root-cause-effect-outcome relationships", which are the basis of today's needs and wishes of pediatric nephrologists and their patients, are still not transparent for many countries. After the fall of the Berlin Wall in 1990, general health care services in several Eastern European countries changed from the former Soviet system to a Western-oriented structure to fill their obvious gaps. Following the 2008 financial crisis, many Eastern European countries started discussing changes to existing health care systems, essentially as part of cost containment (7, 8). There is no information available on whether this has led to an improvement in healthcare in these countries. Indeed, concerns have been raised about persistent inequalities in the health status of children and adolescents with acute and chronic kidney diseases (CKD) in Europe (1, 9). This is further complicated by the gap between public health research and clinical research, and by the poor quality of statistical data on the subject (10).

Compared to adults, children make up only 3% of the total CKD population and are therefore not considered a priority for a country's healthcare system (11). However, many kidney diseases and conditions in adults are inherited and manifest in early life. Using the mother-and-child health life course model, one would assume that investing in services for children would pay off in adulthood (12). The European Society for Paediatric Nephrology (ESPN) is a nearly 60-year-old association aiming to strengthen the individual efforts of all European pediatric nephrologists (13). Three surveys conducted by ESPN aimed to identify the existing pediatric nephrology healthcare systems in 48 European countries covering a population of more than 200 million children (4–6). Based on the analyses of these surveys, ESPN aims to improve future services by understanding disparities and translating research into practice, with a focus on "learning across borders and making a difference". The first part of this article highlights the range of country profiles on national healthcare systems and policies, not only in terms of successes and failures in pediatric nephrology but also in terms of care priorities and the provision of a highly specialized workforce in Europe in 2020. As complete and accurate official data on the logistical structures and organizational networks of pediatric nephrology centers were not available in many countries, the answers to our questions had to be based on the long-term experience of national ESPN members who are health system leaders in their countries and have consulted with their staff. The second part of this paper identifies challenges in the prenatal, preventive, rehabilitative and palliative care of children with kidney disease in order to improve the conceptualization, recommendations and standardization of multidisciplinary renal care for European children.
The aim of this work is to explore the different national approaches to the organization and delivery of pediatric nephrology services and to provide a basis for comparative analysis. This is a cross-sectional survey designed to assess the organization of European pediatric nephrology, the achievements and failures of healthcare services, the needs and desires of pediatricians, workforce planning in highly specialized centers, and multidisciplinary care in pediatric nephrology. A survey with twelve questions assessed the organization of renal care in children. All participants were asked to answer multiple-choice and open-ended questions. The questions about ESPN policy addressed workforce planning, health care delivery systems, the organization of inpatient care for children with kidney disease, and multidisciplinary care including prenatal diagnosis, preventive treatment and rehabilitative and palliative therapy.

The authors selected a leading pediatric nephrologist from each of 48 of the 53 European countries and asked them to represent their country and complete the questionnaire after consulting with colleagues where appropriate. All 48 participants were members of ESPN, either presidents of national pediatric nephrology societies or senior pediatric nephrologists in highly specialized pediatric renal centers. Representatives from Iceland in the west to Kazakhstan in the east and from Norway in the north to Malta in the south participated in the survey. Five of the 53 European countries, each with a total population of fewer than 200,000 inhabitants, were excluded from the study. In selecting the European countries for our study, we followed the definition of Europe in the World Health Organization (WHO) list. The WHO Regional Office for Europe (WHO/Europe) is one of the six WHO regional offices in the world and is responsible for the WHO European Region, which comprises 53 countries. The survey was administered by e-mail, and all 48 invited experts agreed to participate in the study. All respondents were fluent in English. Data were entered into the study database in Excel. Data completeness and accuracy were assessed by JE at the coordinating site in Hanover. In the case of incomplete data, the respective survey participants were contacted and the missing information collected.

Part A of the survey asked about the achievements and failures of national health care services for children with kidney diseases, workforce planning and ESPN policy (Table 1). Part B identifies challenges in the prenatal, preventive, rehabilitative and palliative care of children with kidney disease in order to improve the conceptualization, recommendations and standardization of multidisciplinary renal care for European children (Table 2). Data collected by the questionnaire were analyzed using descriptive statistics. The reported data were viewed not as statistical facts but as experts' assessments of and opinions on the actual situation, so formal statistical analyses were not considered appropriate. Therefore, similar to political opinion polls, percentages or ratios are given as approximations of the true situation. For the purpose of analysis, countries were divided into groups based on (a) population size, (b) gross domestic product (GDP)/gross national product (GNP) per capita (low, lower-middle, upper-middle, and high income), (c) political systems and (d) geographic region.
Physicians from 45 countries responded to the questionnaire's open-ended question about the top three priorities in the care of long-term kidney patients that require urgent changes to current treatment strategies (Table 3). Forty-one countries each reported 1–3 priorities relating to different needs to improve the management of services. Four countries (Croatia, Germany, Iceland and Norway) reported no need for change, and 3 countries did not respond to the question. The most frequently reported priorities were better training of staff (n = 7), more incentives for physicians to reduce staff shortages (n = 3), more hospital beds (n = 1), a coordinated national nephrology program for CKD patients (n = 1) with a focus on establishing an adequate number of high-level pediatric nephrology centers (n = 1), better collaboration between pediatric and adult nephrology/urology (n = 2), earlier referral of patients by primary care pediatricians to pediatric nephrologists (n = 3), and improved long-term follow-up of children with CKD (n = 3). Furthermore, a change in legislation with approval of drugs used in adult nephrology (n = 1), improvement of the transplant program (n = 1), the need for national guidelines (n = 1), a national registry for children with CKD (n = 1), and telemedicine and incentives for research at university hospitals (n = 1) were among the issues reported.

Reports from 6 countries called for improvement of national diagnostic capabilities, e.g., access to genetic testing for rare diseases (n = 5), improved kidney pathology services (n = 2), screening tests for kidney diseases (n = 1), biomarkers for the prognosis of CKD (n = 1) and improved criteria for the diagnosis of AKI (n = 1). New additions to the therapeutic arsenal for the treatment of childhood kidney and urinary tract diseases were reported by 12 countries, such as the use of novel biologics and immunosuppressants for nephrotic and nephritic syndromes (n = 9), intensive care (n = 1), multidisciplinary care (n = 4), dietary (n = 1) and rehabilitative care (n = 1), treatment of CKD stages 2–4 (n = 2) and long-term follow-up for congenital kidney disease (n = 2). Thirteen countries specified 9 reasons explaining the need for improvement in pediatric dialysis care. Four countries called for home hemodialysis; other needs were overnight hemodialysis (n = 1), hemodialysis for small patients, including vascular fistulas for very young children (n = 1), and, for peritoneal dialysis, modern technologies (n = 4), catheters (n = 1) and biocompatible solutions (n = 1). Ten countries reported a need for further improvement of their pediatric kidney transplant (KTx) programs, including all types of KTx (n = 8), living donation (n = 1) and infant KTx (n = 1).

Eighteen positive achievements in the field of pediatric nephrology in recent years were reported from 46 European countries (Table 3). Eighteen countries had established new specialized pediatric nephrology centers. Sixteen countries had built facilities for peritoneal and hemodialysis, and 12 countries had opened pediatric transplant units in the past 15 years. Accreditation of pediatric nephrology as a pediatric medical subspecialty was newly established in three countries. Multidisciplinary care became routine in 5 countries, including a new transition program to adult nephrology in one country. A standardized training program for pediatricians was created in one country.
The range of diagnostic methods and capabilities had expanded in 3 countries. Five countries reported improved medical and dietary care for children (2). The treatment of HUS, urinary tract infections and stones was standardized in one country. Two countries established a functioning cross-border care program to compensate for their own deficits. The diagnosis of kidney disease was improved by new techniques in six countries, and one country reported an improvement in national kidney research programs. Cost-free treatment was introduced in seven countries. Treatment guidelines for doctors were published in two countries, and information brochures for patients and families were published in one country.

Forty-two countries reported up to three unresolved problems in childhood kidney care in their national health systems (Table 3). The most common problems were no access to any type of dialysis (n = 12), inadequate transplant programs for all ages of children (n = 12), lack of well-trained physicians and dialysis nurses (n = 12), inadequate reimbursement of hospitals for expensive therapies (n = 10), and lack of multidisciplinary care by psychologists, dieticians, physiotherapists, social workers and vocational counsellors (n = 6). The lack of (a) genetic testing (n = 5), (b) electronic health record systems (n = 2), (c) histopathology services (n = 2), (d) research resources (n = 2), (e) national registries (n = 1) and (f) highly specialized reference centres (n = 2), as well as (g) problems of local, national and international collaboration (n = 1), were also reported. Six countries identified communication gaps in pediatric nephrology between primary, secondary, tertiary and quaternary renal care (6), which were responsible for various problems such as overburdened outpatient clinics in tertiary and quaternary care centres, delayed or late referral of critically ill children to dialysis facilities, and bureaucratic overload of staff members. Less frequently mentioned challenges included the drain of workforce from Eastern to Western European countries (n = 1), national healthcare crises (n = 1), high numbers of immigrants in EU countries (n = 1) and the lack of nationally adapted guidelines (n = 1). Seven countries had limited access to novel and expensive drugs, and in four countries patients had difficulty accessing highly specialized pediatric nephrology centres.

Twenty-nine countries reported unsuccessful attempts in the last 15 years to fill different gaps in childhood kidney care services (Table 3). The most frequent failure was inadequate access to kidney transplantation, reported by 16 countries (n = 13 of them from Eastern Europe). All these countries reported that they had tried unsuccessfully over the last 15 years to adapt transplant care to the needs of children with CKD. Insufficient improvement in peritoneal or hemodialysis was reported from 7 Eastern countries. The persistent lack of pediatric nephrology centres (n = 2) and workforce (n = 7), due to insufficient training of doctors and nurses (n = 6), high workload (n = 1), or loss of specialists to other countries (n = 1), was reported mostly from Eastern Europe. Managerial failures were claimed to have blocked mergers between tertiary or quaternary hospitals (n = 3), closer cooperation between primary, secondary and tertiary care (n = 2) or between different pediatric nephrology centres (n = 3), and the establishment of multidisciplinary teams (n = 1).
Regarding workforce planning, 25 of 48 countries expected a shortage of pediatric nephrologists in the year 2025, 30 countries a shortage of clinical nurses, and 27 a shortage of dialysis nurses (Table 3). All three groups of health care professionals were expected to be lacking in 38% of countries. A lack of pediatric nephrologists was anticipated in 14 of 28 European Union (EU) countries and in 6 of 20 non-EU countries. The corresponding numbers were 9 of the 12 countries with high GDP/GNP per capita and 13 of the 32 countries with low or middle income. Likewise, 9 of the 10 countries with more than 21 million inhabitants reported an expected shortage, compared with 9 of the 25 countries with a population of 4–21 million inhabitants.

The main incentives for young pediatricians to choose training in pediatric nephrology were career opportunities in 34 of 48 countries, research in 30, reputation in 25, and salaries in only 3 countries. Altogether, 98% of countries reported that academia and research in nephrology were key motivators for choosing pediatric nephrology; however, one third of countries reported that too few pediatricians were involved in research in their country. This proportion was the same for EU and non-EU countries. The question of whether there were enough qualified candidates for leading positions in highly specialized pediatric nephrology centers was answered with "no" in 19 countries. The national and regional planning and allocation of pediatric nephrology services in tertiary and quaternary care children's hospitals was determined by the ministries of health alone in 14 countries, together with the universities in 8 countries, by the universities alone in 6 countries and, last but not least, by the initiative of individual pioneers of pediatric nephrology in 12 countries. In the UK, the national health system was responsible for the coordination of care; in the Netherlands, the health insurance companies played an additional role alongside all of the influencers listed.

Forty-three pediatric nephrologists from 48 European countries reported that pediatric nephrology centers should be closely linked to cardiology, neonatology, intensive care and pediatric surgery/urology in highly specialized pediatric centers. Only Denmark reported a desire for close contact between pediatric and adult nephrology. Pediatric nephrology was not an accredited subspecialty in one third of countries. Unfortunately, not enough data were reported on the guidelines for accreditation of pediatric nephrology centers and on training curricula for pediatric candidates. For 27 of 48 countries, the first of the chosen top three ESPN priorities was the development of European guidelines for workforce planning in national pediatric nephrology services, the second was the development of operational manuals for nephrology service systems (n = 22), and the third was written recommendations for patient pathways in outpatient renal care (n = 23) and multidisciplinary children's hospital care (n = 27).

When congenital anomalies of the kidneys and urinary tract (CAKUT) were suspected during prenatal assessment, one third of the countries reported that obstetricians, geneticists, pediatric nephrologists and pediatric surgeons formed a joint consultation team planning postnatal care. In only five countries did the consultation team consist of obstetricians alone, and in 6 countries it consisted exclusively of pediatric nephrologists. Teams of two or three specialists were reported less frequently (Table 4).
Seventeen percent of countries reported the need to improve preventive care through screening and genetic testing. The need to establish a national registry of patients with severe kidney disease was reported in the open questions on the most important needs of national pediatric nephrology services. In a third of countries, families were given special analogue medical passports for individual children with chronic kidney disease (CKD). Vaccinations for children with kidney disease were provided by general practitioners and various specialists. Twenty-eight countries offered a mixture of 11 different combinations of caregivers. In one country the vaccines were given exclusively by pediatric nephrologists, in five countries only by general practitioners, in seven countries only by primary care pediatricians, and in seven countries only by public health facilities.

Rehabilitation, including psychosocial care, schooling, health education, physiotherapy and nutritional counselling for children with CKD, was organized and coordinated within tertiary and quaternary care children's hospitals in 15 countries and by external providers in 27 countries. Only four countries reported having special rehabilitation centers for children with kidney disease that also offer vacation dialysis. Twenty-nine percent of countries reported the need to improve rehabilitative care by supporting education and vocational training for adolescents and guiding the transition from pediatric to adult care. A quarter of countries reported the need to increase the availability of multidisciplinary teams for both inpatients and outpatients, particularly by recruiting more dieticians, psychologists and teachers. Finally, palliative care for children with severe adverse outcomes of AKI and CKD was organized and coordinated within tertiary care children's hospitals in 21 countries and through a combination of hospital and home care in 18 countries.

Our study shows that, despite all the achievements of recent decades, there are still very significant differences in pediatric health care systems across Europe, and it highlights the need for appropriate services for children with kidney disease in all European countries. The most common challenges included no access to any type of dialysis and the lack of kidney transplant programs for young children, of well-trained physicians and dialysis nurses, of adequate reimbursement of hospitals for expensive therapies, and of multidisciplinary care by psychologists, dieticians, physiotherapists, social workers and vocational counsellors. Putting the achievements and failures of the management of pediatric nephrology, and their impact on health outcomes for European children with kidney diseases, at the center of our survey was justified by the great diversity of healthcare and of the needs and desires of pediatric nephrologists. What are the needs of young people with kidney diseases? What material and non-material resources do pediatric nephrologists need in a given country? What are the outcomes of different national strategies in pediatric nephrology? What is important, what has priority, and what should politicians pay attention to? Unfortunately, the scientific literature answering these questions is scarce. The term "special healthcare" is often understood as a subjective national attitude. The late philosopher Harry Gordon Frankfurt took a different perspective on this question (14).
He argued that caring for people, whether they belong to majority or minority groups, makes their needs equally important. In the current paper we focused on the various elements of competence required of pediatric nephrologists. One of the most worrying results of our survey was the prospect of even fewer well-trained doctors and nurses working in the field of pediatric nephrology in the year 2025. It was therefore not surprising that half of all reporting countries appealed to ESPN to establish a collective action to develop European guidelines for workforce planning in national pediatric nephrology services, and to design operational manuals for service systems and planning pathways for renal outpatient care and multidisciplinary hospital care for kidney patients. A look at the structure of European governments showed us that interest in pediatric nephrology appears to be low in some countries. Weak points can be the fragmentation of responsibilities, which leads to a lack of uniformity, and the fact that ministries do not have a budget.

The different results concerning priorities, successes, challenges, failures and workforce planning in European pediatric nephrology cannot be discussed in detail here because of the lack of published comprehensive national reports. Our article may therefore become the basis for discussions on this issue. For instance, with respect to unsuccessful attempts, it would be interesting to know whether the reported "managerial failure" was due to regulations, leadership bias or cultural differences. Another important aspect is the role of cost-free care in 7 countries, which must be explained by local experts. Moreover, several other aspects concerning the roots of success, the causes of failure and, last but not least, the outcomes need to be clarified for each country. There is great diversity in the pediatric workforce and education offered in European countries, which appears to be based not so much on science as on historical factors (3). The range and quality of care offered by pediatric nephrologists is endangered in those European countries reporting major deficits. In spite of an overall decrease in mortality in children under 14 years of age in Europe, there is considerable concern about the fact that some countries had poorer outcomes irrespective of their Gross National Product (1). Future research should focus on the question of whether this unacceptable variation could be reduced by better organization of services.

Regular prenatal care matters for pregnant women. Women of childbearing age living with CKD or any type of organ transplantation should be informed about the potential risks and reported outcomes. Maternal and fetal outcomes have improved since the introduction of regular prenatal monitoring by obstetricians and nephrologists (15). Healthy pregnant women may benefit from ultrasound at certain time points to detect CAKUT (16). Pediatric nephrologists can make an important contribution to ethical decision-making when they make recommendations to families about the possible termination of a fetus with severe CAKUT (17). In less severe cases, they coordinate multidisciplinary postnatal management with pediatric surgeons, neonatologists, radiologists, and others (18). In our survey, one-third of European countries reported that prenatal consultation teams consist of obstetricians, geneticists, pediatric nephrologists, and pediatric surgeons. Ehrich et al.
(3) reported that 42 out of 46 European countries had a medical passport for all children in which routine outpatient clinical examinations in childhood are documented. Theoretically, early documentation of kidney disease in these passports, or in separate passports for children with CKD, could contribute to a better long-term outcome for affected patients. However, the benefit of early detection tools such as urine dipsticks was less clear. Urine screening was performed in one-third of countries, and the age at screening ranged from 4 months to 6 years (19). The current ESPN survey shows that vaccinations for children with kidney disease were provided either by family physicians, pediatricians, pediatric nephrologists, public health centers, or all of these. Half of the countries offered different combinations of vaccination providers. Immunization of children with kidney disease is a mainstay of infection prevention. However, the individual vaccination calendar must be adapted to the specific needs and risks of kidney patients, which requires the involvement of pediatric nephrologists. Modern vaccines are generally well tolerated, and permanent side effects are rare. Achieving immunity against vaccine-preventable viral and bacterial infections through early immunization prior to kidney transplantation is essential (20). Vaccination data collection and linkage to immunization information systems are integral components of this management. To this end, paper and electronic medical records should allow interoperability with these systems, including the ability to download, upload, and synchronize a child's immunization data (18).

The tradition and scope of paediatric rehabilitation in Europe vary widely, covering physical, sensory, intellectual, psychological and social functioning in children with CKD and disabilities (19). While some countries, such as the German-speaking countries, largely adopted the 1980s trend of establishing pediatric rehabilitation as a separate discipline, other countries consider rehabilitation to be the responsibility of hospitals or other existing health care providers. There is still some uncertainty as to which children and adolescents with kidney disease are eligible for rehabilitation. Some legislators regarded rehabilitation as a measure to "restore the ability to work" and thus excluded children by definition. Others differentiated between congenital and acquired diseases and only provided rehabilitation for the latter (21). Whether children and adolescents received appropriate rehabilitation services depended largely on national regulations and, to some extent, on the individual commitment of pediatricians and other health professionals. However, rehabilitation of children with CKD and of children receiving kidney replacement therapy plays a crucial role in empowering children affected by the combination of CKD and disability and in preparing young patients for adult life and social integration (22, 23). Our survey found that rehabilitative care, including psychosocial care, schooling, health education, physiotherapy and nutritional counselling, for children with CKD was mainly organized and coordinated within hospitals or in combination with multidisciplinary caregivers from outside the hospital. Very few countries reported having special rehabilitation centers for children that also offer vacation dialysis.
Our previous study (2) documented "the shortage of non-physician health workers in many countries, leading to suboptimal psychosocial and nutritional support and poorly planned transition programs from pediatric to adult renal care". Therefore, we propose the development of harmonized recommendations for the age-related rehabilitation of children with CKD according to the needs and wishes of European countries and, in particular, of young patients. The ideal clinical model for palliative care of young patients with advanced kidney disease is currently unknown. Internationally, outpatient renal palliative care clinics have been described with positive results (24). In our exploratory survey, we report data from the perspective of European pediatric nephrologists. We identified gaps in palliative care for children with adverse outcomes of acute and long-term kidney disease. In half of the countries, palliative care was organized and coordinated within the children's hospital or through a combination of hospital and home care. There were no reports on the role of hospices. Further studies are needed to determine the appropriate model of palliative care in pediatric nephrology (24).

A major limitation of our study is its qualitative, rather than quantitative, nature, owing to the variable availability of hard data in the study centres. When planning the survey, the organisers were aware that, even where available, institutes of medical statistics did not hold enough data on pediatric nephrology, or that, for political reasons, official statistics might not always reflect the true medical situation in some European countries. This mostly Eastern European problem had been discussed by one of us (JE) with Professor Martin McKee when he was research director of the European Observatory on Health Systems and Policies. Finally, our ESPN teams had concluded in the late 1990s that all national pediatric nephrologists responding to ESPN surveys should be very well known to ESPN. In the present survey, the respondents together represented more than a cumulative 1,000 years of experience in European pediatric nephrology. Moreover, all respondents knew that their wishes would be respected if confidential information was not to be published or if a country was not to be identifiable. Each question included the option to answer "I don't know" as well as "yes", "no" or "other". The percentage of "I don't know" responses across all questions and all countries was less than 5%, indicating that the questions were well understood. When this percentage was analyzed for the 13 countries that were formerly republics of the USSR, there were slightly more "I don't know" responses than in the EU countries. Respondents also had the option of declining to answer a particular question without giving a reason, but this option was used very rarely.

ESPN has taken action to close these gaps by joining forces with, and becoming a member of, the European Kidney Health Alliance (EKHA). The EKHA is a joint effort by stakeholders to address the challenges of managing people with CKD in Europe through effective prevention and a more efficient care pathway. EKHA works on the principle that the issue of kidney health and disease must be considered at the European level and that both the European Commission and the European Parliament have vital roles to play in assisting national governments with these challenges.
This cross-sectional survey of the existing pediatric nephrology healthcare systems in 48 European countries revealed many unmet needs. The most common problems included no access to any type of dialysis, inadequate transplant programs for all ages of children, a lack of well-trained physicians and dialysis nurses, inadequate reimbursement of hospitals for expensive therapies, and a lack of multidisciplinary care by psychologists, dieticians, physiotherapists, social workers and vocational counsellors. If pediatric nephrologists pursue too many priorities at once, they risk doing a little bit of everything, with less success. Our study shows that there are still very marked differences in child health care systems across the European countries and that there is an urgent need to set up appropriate services for children with kidney disease in all European countries.
PMC11695126
It is well documented that children with autism spectrum disorder (ASD) demonstrate language delays. Studies indicate that within the first 2 years of life, children with ASD often display delays in comprehending phrases, in comprehending and using single words, and in using gestures, compared with non-ASD siblings and peers. Bilingual parents may fear that exposure to two different languages will cause further delays in the language and social–emotional development of their children with ASD. Recent research documents the considerable apprehension parents feel about teaching their child with ASD more than one language, and notes that professionals often suggest focusing on only one language. Many mothers of children with ASD reported that, regardless of their own level of comprehension in English, they were advised by professionals such as teachers, psychologists, and healthcare providers to speak only English with their children. However, parents are more effective in communicating with their children when using their native language than when using English, the majority language of their current community. Children whose parents mostly spoke to them in English often had difficulties participating in family conversations conducted in the parents' native language. Parents' limited proficiency in English may disrupt the exchange of ideas and shorten interactions with children.

The sociocultural perspective on language development holds that language is essential to social development and is acquired in social contexts. Children learn how to socialize through language, making it important that they speak the same language as their parents. The sociocultural perspective may have particular importance for children exposed to more than one language. Relationships with family members are promoted through communication, and native languages serve as an important means by which cultural traditions and values are transmitted to children in immigrant families. Newborns have been shown to discriminate between the maternal native language and another language, even showing a preference for the language they were exposed to prenatally by their mothers. Communication encourages intimacy and may even facilitate the development of children's attachment to their parents.

It is conceivable that professionals believe children with ASD will have difficulty learning more than one language, considering that ASD leads to general communication delays and specific deficits in joint attention and attention to voices. Joint attention uses cues such as referential pointing and eye gaze, which allow children to "map" word labels onto specific objects and concepts. Bilingual children face the challenge of mapping words from different languages onto one concept, while children raised in monolingual homes need to map only one word onto that concept. Since a child with ASD would already have trouble mapping words to concepts, it may seem reasonable to think that bilingual children with ASD would have amplified delays in language acquisition and performance. Additionally, young children with ASD have a strong preference for non-speech analog signals, as opposed to responding to infant-directed speech or "motherese". Some children with ASD and more significant support needs also do not elicit the mismatch negativity (MMN) response that occurs in the brain when syllables change, whereas typically developing (TD) children and higher functioning children with ASD do.
This response typically occurs when an auditory stimulus changes and demonstrates word discrimination. It can be inferred that lower functioning children with ASD cannot discriminate between words, or have trouble doing so. ASD presents unique challenges to language and communication development, causing understandable hesitancy about exposing such children to more than one language. However, it is also possible that children exposed to more than one language receive redundant information that could enhance their language performance. The semantic network model of memory was proposed by Collins and Quillian. According to this theory, memories are made possible by networks of nodes (concepts) that are connected by links or associations. Applying this theory to bilingual language learning and usage, it is reasonable to see how multiple words in different languages signifying specific referents would strengthen the understanding of meaning and enhance language and knowledge. Thus, children exposed to more than one language during the course of language acquisition might have stronger semantic networks.

Although there may be a fear that a multilingual home environment will further delay language acquisition, much of the current literature on the subject does not support this concern. Kimbrough Oller et al. determined that monolingual and bilingual children reach the same language milestones at similar ages, suggesting that bilingualism does not have a negative effect on language acquisition. Bilingualism has even been shown to moderate some of the delays in language and executive functioning commonly exhibited by children with ASD. Gonzalez-Barrero and Nadig found that bilingualism mitigated the effects of ASD on set-shifting, demonstrating that bilingual children with ASD aged 6–9 outperformed monolingual peers with ASD on a dimensional change card sort (DCCS) task. Peristeri et al. analyzed a matched sample of monolingual and bilingual children with ASD, finding that bilingual children with ASD scored higher on measures of sustained attention and comparably to monolingual children with ASD on all other measures of executive functions. Dai et al. compared toddlers with ASD exposed to one language since birth with children with ASD exposed to more than one language. When compared with children with a developmental disorder other than ASD, the children with ASD performed lower on verbal skill measures, but no main effect of bilingual language exposure was found. A recent pilot study using a small sample of elementary-aged children (ages 6–9) found no significant difference in language performance between monolingual and bilingual children with ASD, nor a language difference between bilingual children with ASD and typically developing peers. A recent study of bilingual Spanish-English speaking children showed no difference in receptive and expressive language or social communication between the bilingual and monolingual children in a large sample of children between 14 and 36 months of age who were participating in Early Intervention programs. In this report, the authors include an extensive review of other US-based and non-US-based studies examining the effects of bilingual language exposure on a variety of language outcomes in children. None of these studies report composite language performance, such as the Language composite of the Bayley Scales of Infant Development.
Most studies also do not compare children at different ages to determine whether bilingual language exposure affects younger children differently than older children. The present study was conducted to explore the effects of bilingual language exposure on language development, cognitive development, and social–emotional development in toddlers being evaluated for Early Intervention services, with a specific interest in the influence of age. The cross-sectional approach allowed us to illustrate differences in developmental outcomes between younger toddlers (under 24 months of age) and older toddlers (older than 24 months of age). It was hypothesized that (1) older toddlers would perform better on the language, cognitive, and social–emotional portions of the assessment than younger toddlers; (2) toddlers exposed to more than one language before the age of two would have lower language performance than those exposed to only one language before the age of two; (3) children with ASD would perform worse than typically developing children on the cognitive and language portions of the assessment; and (4) bilingualism would affect language acquisition in toddlers with ASD.

Participants constituted a convenience sample of 412 toddlers (male = 56.1%) between the ages of 15 months and 35 months, recruited from several agencies in New York City that evaluate children under 36 months for possible developmental delays. The vast majority of children included in the study were New York State Medicaid-eligible. The toddlers came from diverse backgrounds and were exposed to several different languages. A total of 129 children came from bilingual homes where two languages were spoken, and 293 children came from monolingual homes; altogether, 25 different languages were represented (Table 1). Children were categorized as monolingual if only one language was spoken at home, even if the primary language was not English. The sample included 143 children who had been diagnosed with ASD using the Childhood Autism Rating Scale-2. Participants were tested prior to entry into Early Intervention. All children received the Bayley Scales of Infant Development - Third Edition (Bayley-III) as part of the EI evaluation. The study was approved by the overseeing IRB, and informed consent was obtained from all parents or guardians prior to enrollment in the study.

The Bayley-III is used to evaluate infant and toddler cognitive, linguistic, motor, and social–emotional development by direct observation and probing with graded tasks. These scales show notable predictive validity with the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III). The Bayley-III includes parent rating scales with which the parent can rate the toddler's social–emotional behavior and adaptive behavior. The social–emotional scales are based on research by Greenspan and Shanker, Greenspan et al., and Greenspan et al. The adaptive behavior scales are derived from the Adaptive Behavior Assessment System-Second Edition and show excellent test validity with the Vineland Adaptive Behavior Scales. The CARS-2 is used to rate the severity of ASD in children and yields scores ranging from 15 to 60. A score of 30 serves as the cutoff for a diagnosis of autism spectrum disorder (ASD). Criterion-related validity is reported at r = 0.80, indicating that the CARS diagnosis was in agreement with clinical judgments.
The CARS-2 has also been shown to have 100% predictive accuracy when distinguishing between groups of children with ASD and children with intellectual disability, which is superior to the commonly used ABC and Diagnostic Checklist. One parent or parent substitute was interviewed to obtain relevant background information about the child, including bilingual/monolingual household status; circumstances of pregnancy, labor, and delivery; relevant health status information; family background; developmental milestones; and challenging behaviors. Each child was evaluated with the Bayley-III by a licensed clinical psychologist in the child's primary language, using an interpreter when necessary. All five domains of the scale were assessed either by direct observation and test probes (cognition, communication, motor skills) or by parent report (social–emotional, adaptive behavior). The diagnosis of ASD was confirmed by the psychologist, who considered the score on the CARS-2, observation of the child during the evaluation, record review, and information provided by the parent. To evaluate the effects of bilingual exposure and age, the sample was divided into children younger than 24 months (about 2 years) and children 24 months or older, while also comparing bilingual with monolingual children and children with ASD with typically developing (non-ASD) children.

A three-way ANOVA was conducted to analyze the main effects of age, bilingual status, and ASD, and the interactions between these variables, on composite language scores. The analysis did not reveal a significant three-way interaction between age, bilingual status, and ASD (p = 0.183). However, the two-way interaction between age and ASD had a significant effect on composite language scores [F(1, 334) = 8.1333, p = 0.005, η² = 0.024], illustrated in Figure 1. Bonferroni post hoc tests showed a significant (p = 0.011) difference between young children with ASD (m = 53.2) and without ASD (m = 62.5) and a significant (p < 0.001) difference between older children with ASD (m = 55.5) and without ASD (m = 76.4). There was also a significant interaction between age and bilingualism [F(1, 334) = 3.868, p = 0.050, η² = 0.011]. As shown in Figure 2, Bonferroni post hoc tests showed a significant difference (p < 0.001) between bilingual children younger than 24 months of age (m = 54.8) and older than 24 months of age (m = 66.9). The difference between age groups for monolingual children was not significant (p = 0.055).

The language composite scores were then broken down into expressive and receptive language scores. The factorial ANOVA analyzing the effects of age, bilingual status, and ASD on expressive language did not yield a significant three-way interaction (p = 0.061). Main effects for ASD [F(1, 321) = 38.780, p < 0.001, η² = 0.108] and for age [F(1, 321) = 8.525, p = 0.004, η² = 0.026] were found, with small effect sizes. ASD and age had a significant interaction [F(1, 321) = 6.742, p = 0.01, η² = 0.021], demonstrating a non-significant difference between the older children with ASD (m = 2.582) and the younger children with ASD (m = 2.463), yet a significantly higher score for the older typically developing children (m = 5.837) than for their younger counterparts (m = 3.802). There was also a significant interaction between age and bilingualism [F(1, 321) = 4.238, p = 0.040, η² = 0.013], with younger bilingual children scoring lower than monolingual peers and older bilingual children scoring slightly higher than monolingual peers.
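For readers who want to reproduce this style of analysis, below is a minimal sketch of the three-way factorial ANOVA described above, written in Python with statsmodels; the data file and column names are assumptions, not the study's actual materials.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical dataset: one row per child with a Bayley-III composite
# score and three categorical factors.
df = pd.read_csv("bayley_scores.csv")  # assumed columns used below

model = ols(
    "language_composite ~ C(age_group) * C(bilingual) * C(asd)",
    data=df,
).fit()

# Type III sums of squares match the factorial design with interactions;
# strict Type III tests would additionally use sum-to-zero coding,
# e.g. C(age_group, Sum).
anova_table = sm.stats.anova_lm(model, typ=3)
print(anova_table)
```

Bonferroni-adjusted post hoc comparisons of the kind reported here could then be run within the significant interactions, for example with statsmodels' MultiComparison.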
ASD and bilingualism did not have a significant interaction ( p = 0.237). For the expressive language analysis, Bonferroni post hoc tests showed a significant ( p = 0.039) difference between young children with ASD ( m = 2.5) and without ASD ( m = 3.8). Post hoc tests for the bilingual status by age interaction revealed a significant difference ( p = 0.044) between young bilingual children ( m = 2.5) and monolingual children ( m = 3.8), while older bilingual and monolingual children performed similarly ( p = 0.550). A similar pattern emerged for receptive language scores. The three-way ANOVA analyzing the effects of age, bilingual status, and ASD on receptive language did not yield a significant three-way interaction ( p = 0.169), but the interaction between ASD and age [ F (1, 321) = 7.498, p = 0.007, η² = 0.023] and the interaction between bilingual status and age [ F (1, 321) = 4.189, p = 0.042, η² = 0.013] were significant. Post hoc tests demonstrated that the difference in scores between young children with ASD ( m = 1.6) and without ASD ( m = 3.4) was significant ( p = 0.020), and the difference between older children with ASD ( m = 2.1) and without ASD ( m = 6.2) was significant ( p < 0.001). Additionally, there was a significant difference in receptive language scores between the two age groups for bilingual children only ( p < 0.001): older bilingual children scored significantly higher ( m = 4.4) than younger bilingual children ( m = 1.9). Similar statistical analyses were carried out for the cognitive composite scores. The three-way interaction among the independent variables was not statistically significant ( p = 0.154). Only ASD showed a significant main effect [ F (1, 339) = 2.456, p < 0.001, η² = 0.133]. As expected, children with ASD performed lower than those without ASD ( m = 73.312 and m = 86.815, respectively) on the cognitive measure. Bilingualism, again, did not yield a significant main effect ( p = 0.703) or interaction with ASD ( p = 0.435) or age ( p = 0.205). The interaction between ASD and age was significant [ F (1, 339) = 4.603, p = 0.033, η² = 0.033]. Children <24 months of age showed a significant difference in scores ( p = 0.004), with children with ASD ( m = 73.8) scoring lower than children without ASD ( m = 83.3). Children older than 24 months also showed a significant difference in cognitive scores ( p < 0.001), with children with ASD scoring lower ( m = 72.8) than children without ASD ( m = 90.3). The three-way ANOVA analyzing the effects of age, bilingual status, and ASD on social–emotional scores did not yield a significant three-way interaction ( p = 0.398). Only the main effect for ASD was significant [ F (1, 334) = 58.729, p < 0.001, η² = 0.150]. As would be expected, children without ASD scored significantly higher on the social–emotional subscale ( m = 82.2) than children with ASD ( m = 66.3). The present study sought to explore the influence of language exposure among typically developing toddlers and toddlers with ASD and to extend the current literature by also examining the implications of age for language development.
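To make the analytic approach concrete, the sketch below illustrates a three-way factorial ANOVA of the kind reported above. It is a minimal R illustration only: the article does not name its statistical software, and the data frame and column names (age_group, bilingual, asd, language_composite) are hypothetical placeholders.

df <- read.csv("bayley_scores.csv")              # hypothetical input file
df$age_group <- factor(df$age_group)             # "<24 mo" vs ">=24 mo"
df$bilingual <- factor(df$bilingual)             # bilingual vs monolingual home
df$asd       <- factor(df$asd)                   # ASD vs typically developing

# Three-way factorial ANOVA: main effects plus all two- and three-way interactions
fit <- aov(language_composite ~ age_group * bilingual * asd, data = df)
summary(fit)

# Partial eta squared for each term, computed from the ANOVA table
tab <- summary(fit)[[1]]
ss  <- tab[["Sum Sq"]]
eta_partial <- ss[-length(ss)] / (ss[-length(ss)] + ss[length(ss)])

# Bonferroni-adjusted pairwise contrasts for the age x ASD interaction
with(df, pairwise.t.test(language_composite, interaction(age_group, asd),
                         p.adjust.method = "bonferroni"))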
Age and bilingual status showed significant interactions on measures of composite language, expressive language, and receptive language. There were no significant interactions involving bilingual status on cognitive scores or social–emotional scores. At younger ages (<24 months), bilingualism did affect composite language scores as well as expressive and receptive language scores among all toddlers. These deficits resolved among older toddlers (>24 months), with bilingual toddlers scoring slightly higher than their monolingual peers. While ASD had a significant effect on language scores (expressive and receptive), cognitive scores, and social–emotional scores, bilingual status and ASD did not interact on any of the measures in the present study. Before 24 months, children in a bilingual environment may show language delays, but older toddlers did not have the same delays resulting from bilingual language exposure. Clinicians and educators should therefore be cautious about suggesting that bilingual language exposure will have lasting effects on language development. The hypothesis that bilingualism would affect language development overall, even for typically developing children, was not supported by the results. Results demonstrated that younger bilingual children scored significantly lower on the language subscale than their monolingual counterparts, performing similarly to the younger autistic children, but performed similarly to their peers when older than 24 months (about 2 years). An unintended and important finding from this analysis was that older bilingual children scored higher on all three language measures than older monolingual children. Including the effect of age in this analysis allowed for a more nuanced look into the language development of children raised in bilingual environments. Bilingualism did not impact cognitive performance in young or older children. Some research has shown that bilingual language exposure may provide cognitive development benefits to children as they grow older that exceed those of their monolingual peers. It has been demonstrated that bilingualism can positively affect executive-function skills such as shifting between tasks, controlling attention, and expanding working memory . Other research suggests that speaking more than one language on a daily basis may also augment executive functioning throughout a person’s lifetime . Furthermore, bilinguals have demonstrated advantages in certain areas of metacognitive and metalinguistic functioning . For example, children with language delays may be able to use skills developed in one language to aid in learning another language . Parents and professionals must take into consideration the ramifications of teaching the child only one language if another is primarily spoken at home. Only ASD affected social–emotional performance. Children without ASD scored higher on the social–emotional subscale than children with ASD. Bilingual status did not influence performance. Recent research has found that bilingual proficiency can benefit executive functioning and social–emotional outcomes . Bilingualism may also be a protective factor against the negative effects of low-income neighborhoods on executive functioning and social–emotional development . As expected, ASD does affect language, cognitive, and social–emotional development.
No interaction between bilingualism and ASD was found, suggesting that bilingual exposure will not further delay language acquisition in children with ASD or disrupt cognitive or social–emotional development. These findings contradict common suggestions from professionals to teach children with ASD only one language to avoid further language delays. According to the current study, bilingualism will not delay language development in autistic children, so parents and caregivers should be encouraged to communicate with their children in their native language. Communication and the development of social skills through social interactions and verbal communication are vital for children with ASD, who are already at substantial risk for social and communication deficits. Children with ASD and bilingual parents should be encouraged to communicate with their families in their parents’ native language(s). An ASD diagnosis may be a profound stressor for a family. Families have reported concerns about bonding with their child because of lack of social reciprocity and communication difficulties . Concerns that bilingualism increases language delays are not supported by the current study and may only increase parental stress. The results of this study are strengthened by the use of a diverse set of bilingual language pairs (e.g., English/Spanish, Cantonese/Taishanese, English/Hebrew). The present sample consisted of children who were learning a large variety of languages and included monolingual children whose home language was not English. These methods support the assumption that specific language pairs did not affect the results or the observed effects of bilingualism on development. However, the present study did not consider the socioeconomic status (SES) of the participants and their families, although many of the participants qualified for Medicaid. Many multilingual learners in this country are immigrants of lower SES, which may limit their access to intervention services . Furthermore, lower-SES families may not have the means of sending children to daycare or preschool, where they would have increased exposure to the majority language. SES may therefore have had an indirect impact on the bilingual children’s outcome scores. Age of language exposure may also have a meaningful impact on language skills. Infants 4–6 months old have demonstrated the ability to discriminate between a rhythmically different language and their native language using only visual cues but lose this ability by 8 months . Younger infants tend to have more extensive perceptual sensitivity to stimuli such as faces and speech sounds than older infants . The age of secondary language exposure should be explored to understand its effect on cognitive, language, and social–emotional skills over time. It might also be fruitful to study language acquisition longitudinally in toddlers with bilingual parents. Implications from this research can have an immense impact on young children, especially when, historically, parents were often instructed to teach their child with ASD only one language. While both bilingual children with ASD and typically developing bilingual children may demonstrate language delays under 24 months of age, these deficits seem to resolve with age. Bilingual toddlers even obtained higher language scores than their monolingual peers when assessed after 24 months of age. Bilingualism and ASD showed no statistically significant interaction on language, cognitive, or social–emotional development.
These findings should reduce the hesitancy of therapists and parents to raise children with ASD in a bilingual environment and promote parent–child communication in the family’s native language.
PMC11695127
Legume cultivation has a highly positive impact on agri-food systems by increasing the availability of biologically fixed nitrogen, enhancing soil quality, promoting biodiversity, and mitigating the impact of weeds and pests . For European agriculture, greater legume cultivation would help reduce its significant deficit and reliance on imported high-protein feedstuff and meet the increasing industry demand for novel protein-rich foods . Pea ( Pisum sativum L.) and white lupin ( Lupinus albus L.) are promising cool-season grain legume crops for southern Europe. Compared to other pulses, pea has a higher yield and energy production, while lupin maximizes protein yield per unit area due to its outstanding seed protein content . However, greater plant breeding effort is indispensable to reduce the yield gap with cereals and increase the economic sustainability of these crops . Cool-season grain legumes are typically sown in autumn in mild-winter regions and in late winter or early spring in cold-prone regions of Europe. The changing climate is expected to expand autumn sowing northwards, allowing crops to benefit from milder winters and escape the increasing risk of terminal drought through earlier crop maturity . Crop frost tolerance is a key breeding target in this context, not only to withstand low-temperature stress in cold regions but also because sudden frost events following mild-temperature periods may produce high winter plant mortality due to insufficient cold acclimation . Various stresses may concurrently affect winter survival, including frost, waterlogging, and fungal pathogens . However, frost has prominent importance and can be faced by plants through frost avoidance and frost tolerance mechanisms . Frost avoidance is based on delayed flowering (aimed at protecting the highly sensitive reproductive organs) , which is primarily achieved through greater vernalization requirements in white lupin and by photoperiodic control and/or high growing degree day requirements in pea . In target regions possibly subjected to both low winter temperatures and terminal drought, late flowering and crop maturity may ensure frost avoidance and a higher yield in cold, relatively moisture-favorable years while being associated with greater drought susceptibility and lower yield in relatively mild, drought-prone years . Plant breeders can address this dilemma by selecting materials with intermediate flowering times but intrinsic frost and drought tolerance. Intrinsic drought tolerance could be expressed by a positive deviation from the genotype yield expected according to its onset of flowering . Intrinsic frost tolerance of cold-acclimated plants could likewise be expressed by a positive deviation from the genotype winter plant survival expected according to its onset of flowering. The frost tolerance mechanisms of cool-season grain legumes are based on physiological modifications to prevent or resist intracellular ice formation , such as decreased shoot water content during cold acclimation , increased cell membrane stability through changes in the lipid-to-protein ratio and the membrane lipid unsaturation level , and accumulation of osmoprotectant compounds such as proline, glycine betaine, mannitol, sucrose, raffinose, stachyose, and specific proteins that protect against dehydration .
Breeding for improved frost tolerance under field conditions is complicated by the wide and increasing climate variation across years, which reduces the applicability, efficiency, and replicability of the selection . A reliable assessment of frost tolerance in controlled environments could overcome these limitations and allow, in addition, for off-season selections . Its assessment on seedlings rules out any effect of flowering time and therefore focuses on intrinsic frost tolerance. The assessment requires a period of cold acclimation (hardening) above 0°C, a longer duration of which increases the frost tolerance of relatively winter-hardy material . Hardening was performed at 4°C over 2 to 4 weeks in most freezing tolerance studies on pea . Subsequently, slow cooling toward the stress temperature is essential to ensure sufficient time for water redistribution, with a cooling rate not exceeding 2°C/h . An accurate assessment of mortality can only be made after a minimum recovery period of 3 weeks under favorable temperatures . Besides plant mortality, genotype frost tolerance could also be expressed by a visual score based on the amount of necrotic areas and other traits . The assessment of the genotype lethal temperature 50 (LT 50 ), i.e., the freezing temperature corresponding to 50% mortality, requires the evaluation of plant mortality across a set of freezing temperatures and may, therefore, be operationally less adequate than the evaluation of plant mortality at just one optimal freezing temperature when assessing frost tolerance in large numbers of genotypes, as in selection trials . Such an optimal temperature should maximize the genotype variation for plant mortality, and may approach the genotype mean value of LT 50 in studies including a sample of genotypes representative of the crop's frost tolerance variation. Various studies suggested that this temperature may fall in the range of −7°C to −9°C for pea, based on small sets of genotypes mostly selected several decades ago, whereas no information is available for white lupin. For pea, an official frost tolerance evaluation test prescribes the assessment of candidate varieties at −8°C freezing temperature . We recently established an easy-to-build, high-throughput phenotyping platform for frost tolerance assessment, represented by a 13.6 m² growth freezing chamber with programmable temperature, to be used for the selection and genomic prediction of frost tolerance in cool-season grain legumes. This study assessed plant mortality, LT 50 values, and the biomass injury visual score of 11 genotypes of pea and 11 of white lupin encompassing a wide range of winter mortality in earlier field trials in northern Italy, with the objectives of (i) optimizing the frost tolerance platform with respect to optimal freezing temperatures for each species and (ii) verifying the consistency of genotype plant mortality responses across platform and field conditions. The experiment included 11 genotypes of pea and 11 of white lupin comprising commercial cultivars, landraces, and breeding lines, which were selected within each species to represent a wide variation in winter survival based on the results of previous field trials in northern Italy ( Table 1 ). Based on winter plant mortality observed under field conditions in separate earlier experiments, we classified the genotypes into three broad classes of winter hardiness: high, intermediate, and low.
One pea genotype, namely, the French landrace Champagne, was selected as a standard of extreme field-based winter hardiness according to Prieur and Cousin and Dumont et al. . The phenotyping platform consisted of a freezing chamber 4.80 m long × 2.84 m wide × 2.46 m high, with programmable temperature in the range of −15°C to 25°C. The chamber was equipped with eight Combo 300-W (C-LED, Bologna) lamps arranged in two rows, placed at a height of 1.6 m from the floor and about 0.9 m above the plant material. Individual test plants were sown at a depth of 2.5 cm into polystyrene plug trays composed of cells measuring 5 cm × 5 cm and 15 cm in depth filled with a commercial growing substrate that included peat corrected for acidity (pH = 6.0) and mineral compound fertilizer NPK (substrate SER CA-V7, Vigorplant, Piacenza, Italy). The plants were placed side by side on four large trolleys fitting into the chamber. Each experimental unit included a set of 10 adjacent plants. The frost tolerance of the 22 genotypes was tested under four freezing treatments: −7°C, −9°C, −11°C, and −13°C. Plant acclimation took place at 4°C over 15 days, a shorter duration than in most of the earlier pea freezing tolerance studies but consistent with the trend toward milder winters and reduced hardening periods in agricultural environments caused by the changing climate. The experiment included four experimental units (organized in blocks) per genotype and treatment. Within each treatment, the genotypes were arranged according to a group balanced block layout holding species on main plots and the different genotypes of the two species on subplots. Operationally, the four blocks were subdivided into two growth cycles of two blocks each, which were performed sequentially using exactly the same protocol. The seeds were pre-germinated on filter paper in Petri dishes for approximately 48 h at 19°C before being transplanted into the plug trays. The evaluation protocol included (i) 10 days of growth at 22.5°C with 12 h of daylight, (ii) 15 days of cold acclimation (hardening) at 4°C with 10 h of daylight, (iii) 12 h of cooling at −3°C in the dark, (iv) 4 h of freezing treatment, (v) 6 days of recovery at 4°C with 10 h of daylight, and (vi) 15 days of regrowth at 15/20°C (night/day) with 12 h of daylight ( Supplementary Table S1 ). The plants were irrigated every 2 days during growth, recovery, and regrowth, while irrigation was suspended from the beginning of hardening to the end of the freezing treatment. The decrease in temperature toward the freezing point and the subsequent increase in temperature occurred at a rate of 1°C/h, according to a pattern described in Supplementary Figure S1 for one test temperature. Air and soil temperatures were monitored with two Tinytag Plus 2 TGP-4510 (Gemini, Chichester) dataloggers to ensure compliance with the protocol. Frost tolerance was assessed using two criteria: plant mortality (i.e., the number of dead plants/total number of plants after hardening), and the level of injury to the aerial biomass measured through a visual score on individual plants and then averaged over plants of the experimental unit. 
The biomass injury visual score comprised the following 10 levels of increasing damage, which were based on observations at the end of the recovery period to evaluate mild injuries and at the end of regrowth to assess severe damage, such as mortality : (1) no visible damage, (2) loss of leaf turgidity for lupin and presence of dried tendrils for pea, (3) presence of dotted necrosis for lupin and leaf yellowing for pea, (4) presence of few necrotic spots, (5) up to 50% of leaf biomass necrotized, (6) between 50% and 90% of leaf biomass necrotized, (7) almost 100% of leaf biomass necrotized, (8) all of the biomass necrotized but a new shoot has started to grow, (9) the plant is severely damaged, with a very high expected probability of death, and (10) the plant is dead. For mortality assessment, plants that scored 9 and 10 were considered dead. To compute the LT 50 values, we fitted the following generalized linear model with the probit link function $\phi^{-1}$: $\phi^{-1}(E(m_{g,b})) = \nu_g + \alpha_b + \beta_g T$. In the equation, the expectations of plant mortality ratios $E(m_{g,b})$ are binomially distributed and depend on the fixed effects of the g-th genotype ($\nu_g$) and the b-th block ($\alpha_b$), as well as on the frost treatment temperature $T$, expressed as a covariate, with the slope $\beta_g$ depending on the genotype. The significance of each factor was assessed via a likelihood ratio test. Two standard model control techniques were applied to test the reliability of the model: a graphical assessment of raw residuals and Pearson's residuals against fitted values, and the test of homogeneity of the means. The LT 50 values were computed for each genotype within block according to the procedure described in Lei and Sun , namely, as the opposite of the ratio between the intercept ($\nu_g + \alpha_b$) and the angular coefficient ($\beta_g$) of the model, i.e., $LT_{50} = -(\nu_g + \alpha_b)/\beta_g$. An analysis of variance, including the factors genotype and block, was performed separately for each species to detect significant differences among genotypes for (i) LT 50 , (ii) proportion of plant mortality following each freezing treatment, and (iii) biomass injury visual score following each freezing treatment. Mortality data were first transformed by using the arcsine square root transformation. We reported original data along with least significant difference (LSD) values back-transformed from LSD values obtained from the analysis of transformed data, and assessed the genotype differences by using Duncan's test. The mean values of species for plant mortality and LT 50 were compared according to the group balanced block layout, i.e., by testing the species factor on an error term represented by the species × block interaction. The consistency of genotype frost tolerance assessments based on LT 50 , plant mortality, and biomass injury score values was determined by using Pearson's correlation analysis. Statistical models were fitted by the glm() and lm() functions from the R-package "stats". Duncan's test was performed by using the duncan.test() function, and LSD values were computed by using the LSD.test() function from the R-package "agricolae" . On average, pea exhibited greater frost tolerance than white lupin, as indicated by the lower LT 50 (−12.8 versus −11.0°C; P < 0.01) and lower plant mortality at the lowest freezing temperature (0.50 versus 0.91; P < 0.01) in the analysis of variance-based species comparison. Within pea, the genotype values of LT 50 ranged from −14.5°C for the breeding line KI_L38 to −11.6°C for the cultivar Kaspa ( Table 2 ).
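As a minimal illustration of this model, the following R sketch fits the probit regression and derives per-genotype LT 50 values. The data frame and column names (frost, genotype, block, temp, dead, alive) are hypothetical placeholders, although glm() and the agricolae functions are those cited above.

library(agricolae)   # duncan.test() and LSD.test(), as cited in the text

# Binomial probit model: genotype and block intercepts plus genotype-specific
# slopes on the freezing temperature covariate (no common temperature term)
fit <- glm(cbind(dead, alive) ~ genotype + block + genotype:temp,
           family = binomial(link = "probit"), data = frost)
anova(fit, test = "LRT")   # likelihood ratio test of each factor

# LT50 for genotype g in block b: -(nu_g + alpha_b) / beta_g, i.e., the
# temperature at which the predicted mortality equals 0.50
cf <- coef(fit)
lt50 <- function(g, b) {
  intercept <- cf["(Intercept)"] +
    ifelse(g == levels(frost$genotype)[1], 0, cf[paste0("genotype", g)]) +
    ifelse(b == levels(frost$block)[1], 0, cf[paste0("block", b)])
  unname(-intercept / cf[paste0("genotype", g, ":temp")])
}

# Genotype mean separation would then be run on the resulting values, e.g.:
# duncan.test(aov(lt50_value ~ genotype + block, data = lt50_df), "genotype")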
The pea genotypes showed no mortality at the −7°C freezing temperature, and their mortality values did not differ significantly ( P > 0.05) at −9°C. They displayed significant differences ( P < 0.01) at lower temperatures and achieved the largest variation, ranging from 0.11 to 0.83, at −13°C. The biomass injury visual score of the pea genotypes decreased with increasing freezing temperature but exhibited significant ( P < 0.01) and similar extents of overall genotype variation across all freezing temperatures. The injury score values showed a high correlation ( r ≥ 0.77, P < 0.01) across the four freezing temperatures ( Supplementary Table S2 ). The high correlation ( P < 0.001) of genotype mortality at −13°C with the biomass injury score for the same temperature ( r = 0.98) and LT 50 ( r = 0.91), and that between genotype values of the last two traits ( r = 0.92), indicated a strong consistency between the main indicators of pea genotype frost tolerance. Table 2 reports the results of pea genotype comparisons relative to LT 50 , plant mortality at the two lowest freezing temperatures (those displaying significant genotype variation), and the biomass injury score for the highest and lowest temperatures (i.e., the extreme temperature range). In general, the pea genotype mean separation was more sensitive for LT 50 (where KI_L38 outperformed any other genotype at P < 0.05) than for the other traits ( Table 2 ). All genotypes assigned to the high winter hardiness class on the basis of field observation exhibited low to fairly low values of LT 50 , plant mortality, and injury score, while all genotypes assigned to the low winter hardiness class showed fairly high to high values of these traits ( Table 2 ). However, two genotypes in the intermediate winter hardiness class, namely, Dove and Kaspa, exhibited high and low frost tolerance, respectively, according to all traits. The highly winter-hardy genotype Champagne displayed high, but not outstanding, frost tolerance. The white lupin genotypes displayed LT 50 values ranging from −12.0°C for the Greek landrace GR56 to −10.0°C for the Egyptian landrace Egypte11 and the Portuguese landrace E80. We observed no lupin genotype plant mortality at −7°C, and significant ( P < 0.01) plant mortality variation only at the −11°C and −13°C freezing temperatures. However, the widest variation for genotype mortality, in the range of 0.26 to 0.88, occurred at −11°C in this species. The variation for the biomass injury score achieved significance ( P < 0.01) only for the two intermediate temperatures, while being flattened toward low values at −7°C and high values at −13°C. A high consistency among major indicators of genotype frost tolerance was observed in white lupin as well, according to the correlations ( P < 0.001) of genotype mortality at −11°C with the biomass injury score for the same temperature ( r = 0.97) and LT 50 ( r = 0.94), and that between values of the last two traits ( r = 0.91). High correlations were also observed for other indicators of frost tolerance ( Supplementary Table S3 ). The results for major indicators of frost tolerance reported for each white lupin genotype in Table 3 indicated a good consistency for this species, too, between frost tolerance in the phenotyping platform and winter hardiness based on field observations.
The five genotypes belonging to the high winter hardiness class were the top-ranking ones for frost tolerance according to LT 50 values, plant mortality, or the biomass injury score at −11°C, whereas three of the four genotypes in the low winter hardiness class were bottom-ranking for all of these frost tolerance indicators. The only inconsistencies were represented by the genotype E80, which was susceptible to frost while belonging to the intermediate winter hardiness class, and the breeding line PLI-P3, which displayed intermediate frost tolerance while being assigned to the low winter hardiness class. In white lupin, too, LT 50 exhibited more sensitive genotype mean separation than plant mortality or the injury score ( Table 3 ). On average, pea exhibited greater frost tolerance than white lupin in this study. This result agrees with the greater average winter plant survival of pea relative to white lupin in a field-based assessment of a large number of varieties across climatically contrasting Italian environments . In general, however, pea is credited with intermediate winter hardiness among the cool-season grain legumes, being considered less winter-hardy than faba bean or lentil and more winter-hardy than chickpea . Our results indicated a good consistency between major indicators of genotype frost tolerance observed in the phenotyping platform, namely, LT 50 , plant mortality at the freezing temperature that maximized the genotype variation, and the biomass injury score for the same temperature or a slightly higher one. LT 50 exhibited a more sensitive genotype mean separation than the other indicators. This characteristic, however, requires multiple freezing temperatures (four in our study), making it less suitable for evaluating large genotype numbers than the frost tolerance assessment based on one optimal freezing temperature (i.e., the one that maximizes the genotype variation for plant mortality). The optimal freezing temperature differed between the two species according to our results, being about −13°C for pea and −11°C for white lupin. When used for evaluation at an optimal freezing temperature, our platform could accommodate up to 216 genotypes in each of several evaluation cycles (each cycle acting as a replicate), using experimental units of 10 plants each as in this study (or 144 genotypes, using experimental units of 15 plants). Our results suggest that the biomass injury score may contribute to the frost tolerance evaluation along with plant mortality, or act as the only frost tolerance indicator in case the platform hosts more genotypes per evaluation cycle with fewer plants per experimental unit (e.g., 432 genotypes with five plants per replicate), a situation that makes the estimation of plant mortality less reliable. A similar score was adopted by Beji et al. in a pea experiment including four plants per replicate, and is frequently adopted in other grain legumes under similar circumstances . Work by Humplík et al. suggests that the biomass injury assessment of individual plants could be automated by image analysis, albeit hardly with large time savings and with a need for placing plants into individual pots. The optimal freezing temperature for pea plant mortality at −13°C contrasts with earlier results by Auld et al. , Swensen and Murray , Murray et al. , Cousin et al. , and Homer et al. , which suggested an optimal temperature in the range of −7°C to −9°C.
The contrast is even greater when considering that most of these studies adopted a longer hardening period than ours. The improved frost tolerance of the current, recently bred germplasm sample (breeding lines or commercial cultivars), along with possible differences in the evaluation protocols, may partly account for the lower optimal freezing temperature found here. For example, differences in substrate type and drainage may affect plant mortality . Irrigation during the hardening period (as contemplated in some earlier studies) could increase ice formation and cause mechanical damage to the roots. Our slower thawing (1°C/h) relative to some early studies could be less damaging to plants . Other possibly different factors may include the plant growth and development stage before hardening, the duration of the frost treatment, and the length of the regrowth period. Although the frost tolerance evaluation of pea germplasm collections at −8°C is quite frequent , Prieur and Cousin suggested the selection of frost-tolerant pea germplasm by a set of freezing cycles ultimately reaching −12°C. No prior assessment of genotype frost tolerance variation and optimal temperature for frost tolerance evaluation based on LT 50 for plant mortality was available for white lupin. A study based on frost-induced leaf damage of cultivars and accessions estimated by chlorophyll fluorescence indicated an average LT 50 value of −9.5°C, defined as 50% of damaged leaves after a long hardening period (42 days at 8°C/2°C day/night temperature) . In contrast, Papineau and Huyghe proposed to assess white lupin frost tolerance at a −16°C freezing temperature after a 3-week hardening period at −4°C. The observed good consistency between platform-based frost tolerance and field-based winter hardiness of pea and white lupin genotypes has practical importance for the exploitation of artificial screening results for these species. Correlations for pea plant mortality across field and growth chamber assessments were close to 0.7 in Homer et al. and in the range of 0.5–0.6 in Auld et al. . Correlations close to 0.5 have been reported for other legume species such as faba bean and red clover . As anticipated, one cannot expect a very high consistency between platform-based and field-based plant mortality in grain legumes because the latter depends not only on intrinsic frost tolerance but also on frost avoidance through a delayed onset of flowering. Other factors may influence the genotype variation for field plant survival, such as greater tolerance to diseases whose attack is favored by frost damage (e.g., Ascochyta spp. for pea ) and a different susceptibility to imbibitional chilling of the germinating seed due to seed coat variation in rapidity of imbibition . Onset of flowering may actually explain the response of the lupin breeding line PLI-P3, which featured moderate frost tolerance according to the freezing test while belonging to the low winter hardiness class according to field observations. The high winter mortality under field conditions of this line was associated with extreme earliness of flowering in Annicchiarico et al. (where this line is coded as P3), a feature that would definitely increase its sensitivity to frost because of the early differentiation of the floral apex .
The currently good but not outstanding intrinsic frost tolerance of the pea landrace Champagne ( Table 2 ), in spite of its reportedly extreme field-based winter survival, may be accounted for by frost escape under field conditions via delayed flowering caused by possession of the Hr (high response to photoperiod) gene . Indeed, the Hr gene reportedly co-segregated with the most important quantitative trait loci (QTL) for frost tolerance . In any case, the reliability of our genotype classifications for winter hardiness suffered from the limited field-based evaluation they were based upon. For example, the pea variety Dove, here classified as intermediate for winter hardiness while showing high frost tolerance according to freezing test results, exhibited moderately high frost tolerance across various cold-prone agricultural environments of France . In conclusion, our results encourage the use of high-throughput phenotyping platforms such as the current one for the assessment of pea or white lupin frost tolerance aimed at plant breeding, molecular studies for the detection of QTL and/or the definition of genome-enabled prediction models, or the investigation of physiological mechanisms regulating frost tolerance . The assessment under artificial conditions could overcome the increasing unpredictability of field-based evaluations. In addition, its focus on intrinsic frost tolerance (as implied by the evaluation of young plants that lack any differentiation of reproductive organs) facilitates the combination of cold tolerance and drought tolerance characteristics unrelated to flowering time in novel varieties featuring greater yield stability and adaptation to increasingly variable climate conditions. Indeed, our results for PLI-P3 and Champagne confirm that intrinsic frost tolerance is not necessarily related to plant mortality under field conditions for pea or white lupin genotypes, and a similar response was observed for a few faba bean genotypes . For pea, a genomic selection model for intrinsic drought tolerance proved capable of producing material with a similar flowering time but increased yielding ability under severe drought relative to its genetic base , while a similar model awaits exploitation for white lupin .
PMC11695130
Cancer is a complex and multifaceted disease that poses significant challenges to individuals worldwide. Among those affected, women face distinct obstacles due to specific types of cancer that primarily afflict this population, including breast, cervical, ovarian, colorectal, and uterine malignancies . Global cancer incidence had reached around 18.1 million new cases by 2020, with 8.8 million cases confirmed in women . In Southeast Asia alone, the International Agency for Research on Cancer reported that more than 1 million women are diagnosed with cancer per year, with breast, cervical, uterine, ovarian, colorectal, and lung cancers being the most frequent . In Vietnam, more than 80,000 women were diagnosed with new cases of cancer, comprising 50% of all newly reported cancer cases each year; in particular, breast, lung, and colorectal cancers were the top three most frequent cancers in Vietnamese women, accounting for 25.8, 9.1, and 9.0% of the 83,647 cases newly diagnosed in 2020, respectively . Cancer prevalence in Vietnam reflects the global trend, underscoring the critical need to address the issues encountered by cancer patients in the country. During cancer, physical symptoms resulting from the patient’s condition and therapies include pain, tiredness, nausea, hair loss, and menopausal-like symptoms . In particular, physical symptoms, psychological anguish, surgical treatments, and social support are important determinants of the overall well-being and quality of life of cancer survivors . Furthermore, the psychological distress associated with cancer diagnosis and treatment, such as worry, despair, and fear of recurrence, has a profound impact on women’s mental health . Anywhere from 8–24% of cancer patients are living with depression . A cancer diagnosis is related to an increased incidence of common mental illnesses in persons with no prior psychiatric history, which may have a detrimental effect on cancer therapy and recovery, as well as on quality of life and survival . Depression affects up to 20% of cancer patients, compared to 5% of the general population globally . Furthermore, a study conducted at the Vietnam National Cancer Hospital found that the prevalence of psychological discomfort in cancer patients is approximately 60%, depression 46%, and anxiety 27% . Additionally, women with cancer may suffer from comorbid conditions, such as osteoporosis, cardiovascular diseases, or secondary cancers . Cancer can also impair women’s social lives in addition to their physical and mental health. They may endure social isolation, strained relationships, and disease stigma . As quality of life (QoL) measurement can provide a comprehensive picture of an individual’s overall well-being, more research has been conducted on patients’ QoL, and its assessment has been widely used as an adjunct measure in oncology . This is particularly important in women with cancer, whose QoL is profoundly affected by cancer progression and its treatments . As such, numerous studies have been conducted around the world and in Vietnam to identify the risk factors and potential remedies to enhance the QoL of cancer patients. Key predictors of cancer-related quality of life include age, sex, race or ethnicity, marital status, socioeconomic level, treatment techniques, and access to healthcare services, as identified by a study published in JNCI Cancer Spectrum .
Similarly, in Vietnam, a 2015 study on cancer patients in a national hospital stated that the quality of life of cancer patients is closely related to their educational levels, cancer stages, diagnosis duration, and treatment methods . Notably, patients with limited financial means are more vulnerable to decreased quality of life, even after cancer treatment, as some risk factors may persist . Consequently, cancer patients, particularly those with low income, generally experience lower health-related quality of life compared to non-cancer women, and even if they survive and recover from cancer, their quality of life remains lower than that of age-matched women in the general population . Given that the numbers of new cancer cases and deaths in Vietnam have tripled over the past 30 years, it is imperative to understand the quality of life of female cancer patients in this context . However, there remains a paucity of studies examining cancer-related quality of life among women in Vietnam. Vietnam’s socio-cultural context, healthcare infrastructure, and economic conditions differ from those in other Western and Asian countries where many QoL studies have been done . Vietnamese women face distinctive challenges, including conventional gender roles, stigma related to cancer, and a lack of available supportive care services, that could have a major influence on their quality of life . Therefore, the purpose of this study is to assess the QoL of Vietnamese women living with cancer and to explore the factors influencing it. The findings from this study will contribute to the existing body of knowledge and provide valuable insights into the specific needs and concerns of Vietnamese women with cancer. By understanding these factors, healthcare practitioners and policymakers can design and implement targeted interventions to improve the overall quality of life and treatment outcomes for women with cancer in Vietnam. This study adopted a cross-sectional design to assess the QoL of 214 women living with cancer in Vietnam. A convenience sampling method was applied. Participants were eligible according to the following inclusion criteria: (1) Vietnamese women aged 18 years or older, (2) who had been diagnosed with cancer, including breast cancer, gynecological cancer, and hematological cancer, (3) who had finished at least one phase of intensive cancer treatment in the last 6 months, and (4) who were willing to participate in the study. The study excluded those who had been diagnosed with mental illness. Data collection was conducted from September to December 2022 in person at several hospitals in Hanoi, Vietnam, which have an oncology department. The hospitals included were the National Oncology Hospital, Hanoi Medical University Hospital, Vinmec Times City International Hospital, and Hanoi Obstetrics and Gynecology Hospital. General information collected included sociodemographic data, medical and family history related to cancer, the number of symptoms after cancer, and the frequency of these symptoms. Participants’ pain levels were evaluated using the Visualized Pain Scale, a 10-point scale ranging from 0 to 10, with higher scores indicating higher pain levels. This is a widely used tool for assessing pain levels and has been used in various studies and clinical settings . The Karnofsky Performance Status Scale was used to measure the level of functional capacity in women living with cancer . This scale rates functional capacity from 20 to 100%, with higher percentages indicating better functional performance status.
The Short Form 12 (SF-12), a 12-item short form derived from the original Short Form-36 survey, was used to measure participants’ individual perceptions of their quality of life . The SF-12 tool assesses aspects of both physical and mental health. This instrument demonstrated high internal consistency (Cronbach’s alpha = 0.81) in a previous study . The researchers obtained a list of women with cancer who had received treatment at the selected hospitals, along with their contact information (usually phone numbers). Eligible participants were invited to participate in the study through phone calls or in-person invitations at the hospital. Participants who agreed to participate and met the inclusion criteria were provided with an information sheet about the study and a consent form. After obtaining written consent, participants completed the survey questionnaire in the presence of the researchers. The data collection process took approximately 15 min per participant. Data analyses were conducted using IBM SPSS version 26.0. A two-sided statistical significance cutoff of 0.05 was employed. Characteristics and health-related variables of participants were described by presenting either the number (percentage) or the mean ± standard deviation. The assumption of normality was verified using the Shapiro–Wilk test before conducting parametric tests (all p -values >0.05). To examine differences in physical health (PCS) and mental health (MCS) scores among subgroups with varying participant characteristics and health-related variables, independent t-tests and one-way ANOVA were utilized. Additionally, the Tukey HSD test was performed as a post hoc analysis. Pearson’s correlation coefficients were computed to determine the strength of the linear association between participants’ continuous variables (such as age, length of cancer diagnosis, and pain score) and QoL (PCS and MCS scores). Furthermore, hierarchical linear regression models were computed to assess the relationships between QoL (PCS and MCS scores) and potential predictors. The residuals of the regression models were examined to ensure they met the Gauss-Markov conditions. The analysis was conducted in two blocks: the first block consisted of participant characteristic variables, while the second block included additional participant health-related variables. The Institutional Ethical Review Board for BioMedical Research of Vinmec International General Hospital-VinUniversity approved this study. Of the 284 questionnaires completed, 214 (75%) were analyzed owing to minimal or no missing data. Table 1 presents the characteristics of these 214 participants. The average age of the participants was approximately 49.63 ± 10.84 years, with a relatively even distribution between urban (56.5%) and rural (43.5%) regions. The majority of the participants identified as non-religious (88%), and over half of them held a bachelor’s degree or higher. Approximately 87% were currently employed, with 45% engaged in skilled labor and 42% involved in unskilled labor. Monthly income ranged from 0 to 100 million VND, with an average of 8.3 ± 10.77 million VND. Almost all participants had insurance coverage (98.6%) and had someone at home to provide support (79%). Additionally, the medical history of the participants is presented. On average, the duration of cancer diagnosis was 3.02 ± 3.22 years. The majority of participants (84%) had been diagnosed with breast cancer.
Among the four cancer stages, the second stage was the most frequently reported (56%). As for current cancer treatment methods, chemotherapy and surgery were the most frequently employed options, with utilization rates of 68 and 42%, respectively. In terms of lifestyle, a significant proportion of participants reported not smoking or drinking (99%), and 80% engaged in regular exercise. The average duration of nightly sleep was 6.8 ± 1.25 h. Alongside their cancer diagnosis, 23% of the participants also experienced other chronic illnesses. Table 2 shows the results of the survey on eight commonly experienced symptoms in cancer patients. The study found that hair loss, fatigue, and sleep disturbances were prevalent issues among the participants. Counting symptoms that appeared at least once a week (rated from 2 points, at least once a week, to 6 points, always), the average number of symptoms reported was 4.97 ± 2.55, out of a maximum possible score of 8. The average pain score was 1.23 ± 1.74, indicating that 44% of participants experienced pain to some degree, categorized as mild, moderate, or severe in 31, 12, and 1% of cases, respectively. The average score for functional status was 88%, suggesting a relatively high level of functional capacity. However, 24% of participants rated their functional capacity at 70% or lower (from 70%, unable to carry on normal activity or do active work, down to 20%, very sick, hospital admission necessary, active supportive treatment necessary). In terms of quality of life, the average score for physical health (PCS) was 46.31 ± 9.70, while the score for mental health (MCS) was 46.96 ± 9.06. For physical health, the PCS score exhibited significant differences among subgroups of six variables: religion, marital status, BMI, chronic disease, pain, and functional status. For mental health (MCS) scores, significant differences were revealed in subgroups categorized by the length of cancer diagnosis and functional status, as shown in Table 3 . Table 4 shows the significant associations between the PCS score and several variables. Specifically, PCS demonstrated a negative correlation with age ( r = −0.165, p = 0.016), number of symptoms ( r = −0.220, p = 0.001), and pain ( r = −0.444, p < 0.001), and a positive correlation with functional status ( r = 0.222, p = 0.001). Furthermore, the MCS score displayed significant correlations with three variables: a negative correlation with the length of cancer diagnosis ( r = −0.156, p = 0.036) and number of symptoms ( r = −0.362, p < 0.001), and a positive correlation with functional status ( r = 0.281, p < 0.001). Table 5 presents the predictors of the physical health score (PCS) and mental health score (MCS). In block 1, four characteristic variables were included as independent variables. The model accounted for 4.2% of the variance in PCS scores, and only religion was found to be a significant predictor. Moving to block 2, seven variables pertaining to the participants’ health conditions were added. This expanded model explained 29.1% of the variance in PCS scores. Two significant predictors were identified: the pain score ( β = −0.304, p < 0.001) and the number of symptoms ( β = −0.311, p < 0.001). Regarding mental health, the model accounted for 21.8% of the variance in the MCS score, and two significant predictors were identified: functional status ( β = 0.259, p < 0.001) and the number of symptoms ( β = −0.311, p < 0.001).
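For readers who wish to reproduce this kind of two-block hierarchical regression, the sketch below shows an equivalent analysis in R; the study itself used SPSS, so this is an illustration only. The outcome and predictor names are hypothetical stand-ins for the block 1 characteristic variables and the block 2 health-related variables, and standardized betas would additionally require scaling the inputs.

# Block 1: participant characteristic variables only (hypothetical names)
block1 <- lm(PCS ~ age + religion + marital_status + education, data = qol)

# Block 2: add the health-related variables
block2 <- update(block1, . ~ . + pain + n_symptoms + functional_status +
                   chronic_disease + length_diagnosis + sleep_hours + bmi)

summary(block1)$r.squared   # variance explained by characteristics alone
summary(block2)$r.squared   # variance explained after adding health variables
anova(block1, block2)       # F test of the R-squared change between blocks
coef(summary(block2))       # coefficients, SEs, t values, and p values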
The main goal of this study was to investigate the QoL of Vietnamese women living with cancer and explore the factors influencing it. The research aims to fill a gap in the understanding of QoL in this specific population, providing valuable insights for healthcare practitioners and policymakers to improve the well-being of these women. The study found relatively high mean scores for both physical health (mean PCS of 46.31) and mental health (mean MCS of 46.96), indicating that these women reported a generally favorable QoL. The findings align with some existing studies in Vietnam and worldwide, which also reported relatively high QoL scores in certain cancer populations. However, differences from other findings highlight the importance of considering individual, cultural, and regional factors when assessing QoL. For instance, a study conducted in Vietnam utilizing the same SF-12 instrument in a predominantly female population of type 2 diabetes mellitus patients found that the presence of at least one diabetic complication correlated with diminished scores across various domains of the SF-12, particularly the MCS . Another study from India revealed that cancer patients, particularly those from disadvantaged populations, experience poor health-related QoL outcomes . It is indeed possible that financial distress and belonging to minority populations in India could add to the burdens of cancer patients. The demographic characteristics of the participants in our study were diverse, but they might not have experienced the same level of financial distress or minority representation as observed in the Indian study. PCS demonstrated a negative correlation with age, number of symptoms, and pain, and a positive correlation with functional status. This indicates that older age, a higher number of symptoms, and greater pain were associated with poorer physical QoL. Similarly, the MCS score displayed a negative correlation with the length of cancer diagnosis and number of symptoms, and a positive correlation with functional status, suggesting that women who have been diagnosed with cancer for a longer period and experienced more symptoms tend to have lower MCS scores. This could indicate that over time, cancer survivors may find it more challenging to maintain the mental strategies they initially employed to cope with their diagnosis, higher symptom burden, and related challenges, while better functional status was associated with better physical QoL and mental health. The significant predictors identified in the regression analyses for PCS and MCS provide valuable insights into the determinants of QoL in this population. Notably, religion, pain score, number of symptoms, and functional status emerged as significant factors impacting QoL outcomes. The influence of religion on QoL suggests a potential role of spiritual and cultural beliefs in coping strategies . As cancer and its treatment can lead to changes in women’s lifestyles and mental health, healthcare practitioners should be sensitive to the spiritual and cultural backgrounds of patients to provide comprehensive and patient-centered care . Also, the impact of pain score, number of symptoms, and functional status on QoL indicates that interventions targeting pain management, symptom control, and rehabilitation programs may significantly enhance the QoL of women with cancer.
These findings align with previous research on factors influencing QoL in cancer patients, emphasizing the universal importance of symptom management and functional well-being in shaping QoL outcomes in this population . Interestingly, our findings revealed no association between exercise and quality of life, despite substantial evidence suggesting that regular exercise significantly enhances quality of life among women with cancer . This discrepancy may be due to the exercise variable in our study not being assessed with a validated instrument, which could introduce bias into the results. Future studies should consider using validated, culturally appropriate tools to measure exercise in this population for more accurate insights. The demographic characteristics of the participants provide insightful context for understanding the study findings. A noteworthy aspect is the substantial proportion of participants who identified as non-religious (88%). This suggests that the majority of the study population might employ coping strategies other than religion, such as seeking secular forms of emotional support or drawing strength from family, social networks, personal beliefs, and their innate resilience. The relatively diverse urban–rural distribution, educational backgrounds, employment status, and income levels reflect the heterogeneity of the sample, which contributes to the generalizability of the results. The broad representation of participants allows for a comprehensive understanding of QoL experiences in different socioeconomic and cultural contexts. Tailored strategies can be developed to address the specific needs of different subgroups within the population, considering their unique demographic characteristics and life circumstances. Religion, pain, number of symptoms, and functional status were revealed as significant factors in the QoL of women with cancer in Vietnam; therefore, healthcare practitioners should consider these elements when providing treatment to deliver holistic and patient-centered care. Moreover, programs and interventions should be sensitive to the spiritual and cultural backgrounds of patients and focus on pain management, symptom control, and rehabilitation. The diverse demographic characteristics of the participants imply that specialized strategies should be tailored to address the individual needs of different subgroups within the population, taking into account their distinct demographic traits and living situations. For example, culturally tailored care plans might incorporate spiritual care into treatment, give access to spiritual advisors, or provide spaces for meditation for Buddhist patients. It is also crucial that healthcare providers be trained in cultural competency so as to understand and respect Vietnamese women’s cultural norms and values. Connecting women with community resources, such as support groups and cancer communities, can assist them in navigating available services. Specifically, urban cancer patients may benefit from support groups, survivorship programs, and wellness centers within city limits, while rural patients may need telehealth services, transportation assistance, and community-based outreach. The study’s sample size of 214 participants is relatively small, and a larger sample could provide more robust results. Additionally, there may be other factors not explored in the analysis, such as social support and coping mechanisms, which could also impact QoL outcomes and require further investigation.
Nevertheless, the study’s findings offer valuable insights into the QoL of Vietnamese women with cancer and suggest areas for potential improvement in healthcare and support services to enhance their overall well-being. In summary, this study sheds light on the QoL of Vietnamese women living with cancer and its implications. Culturally sensitive care, effective pain management, and comprehensive support programs can enhance well-being, and tailored interventions for diverse subgroups should be considered. Holistic approaches addressing physical, psychological, spiritual, and social aspects are vital. Collaboration between practitioners and policymakers can lead to patient-centered strategies, ultimately improving QoL for Vietnamese women with cancer.
|
Review
|
biomedical
|
en
| 0.999997 |
PMC11695133
|
Coastal marshes, especially of the subtropics, face spatiotemporal fluctuations in sediment salinity, freshwater availability, and seawater inundation , and are thereby stressful habitats. Marshes are characterized by unique halophyte vegetation with specialized morphological, anatomical, and physiochemical adaptations . One of the common adaptations of coastal plants is the development of succulent tissues in leaves and/or stems, which not only store water but also minimize the toxic effects of accumulated salts for long-term survival . Succulence is common in a large number of Amaranthaceae, especially species of the subfamily Salicornioideae , which are obligate halophytes and require a certain level of salinity for their optimal growth . For instance, Arthrocnemum indicum , A. macrostachyum , Halopeplis perfoliata , Salicornia europaea and S. dolichostachya showed optimal growth under moderately saline conditions. Mechanistically, succulence depends on the efficient compartmentalization of accumulated salts (mainly Na + and Cl - ) in vacuoles and apoplasts, with concomitant accumulation of compatible solutes in the cytoplasm . As a result, succulent tissues possess larger mitochondria to fulfill the excess energy requirements of salt compartmentalization . However, high salinity is inhibitory to the growth of halophytes, including highly tolerant Salicornioideae species . Exposure of plants to high salinity also leads to excessive production of reactive oxygen species (ROS) , which, if accumulated to high levels, may cause oxidative damage to proteins, membrane lipids, and nucleic acids . Halophytes possess a well-coordinated system of enzymatic and non-enzymatic antioxidants to prevent oxidative damage . Common antioxidant enzymes include superoxide dismutase, catalase, and enzymes of the Foyer-Halliwell-Asada pathway . Ascorbate and glutathione are key non-enzymatic antioxidants, which directly, and also in coordination with antioxidant enzymes, help plant cells to quench ROS . Under low to moderate salinity, antioxidants keep the cellular levels of ROS within a narrow tolerable range . However, under high salinity, the production of ROS often exceeds the capacity of the antioxidant system to detoxify them and thus inflicts oxidative damage on cell components . The salinity threshold inflicting oxidative damage is often >300 mM NaCl in most Salicornioideae halophytes such as Arthrocnemum indicum , Salicornia brachiata , S. persica and S. europaea . However, no signs of oxidative membrane damage, measured as malondialdehyde (MDA) accumulation, were evident in Sarcocornia quinqueflora even at salinity as high as 1000 mM NaCl . Hence, Salicornioideae halophytes appear to possess an efficient antioxidant defense to deal with salinity-induced ROS production. Besides physiochemical adaptations, a number of coastal marsh halophytes have evolved the phenomenon of seed heteromorphism as a bet-hedging strategy to survive the heterogeneity of the marsh environment . Heteromorphic seeds may vary in size and/or color . These differences in seed morphology accompany differential germinability/dormancy and stress tolerance responses of the heteromorphic seeds . Seed heteromorphism is common in Amaranthaceae, including Salicornioideae halophytes. For instance, Arthrocnemum indicum , Salicornia europaea , and S. ramosissima produce heteromorphic seeds of two different sizes, while heteromorphic seeds of A. macrostachyum differ in color . 
A large number of studies exist that report differences in germination, dormancy, stress tolerance, ecological significance, and physio-chemical attributes of heteromorphic seeds . However, knowledge about the carryover effects of heteromorphism to later growth stages is scant. For instance, plants derived from heteromorphic seeds of Suaeda aralocaspica showed similar growth and physiochemical patterns under both non-saline and saline conditions . In contrast, plants of A. indicum and Atriplex centralasiatica that emerged from heteromorphic seeds showed differences in growth and physio-chemical attributes. Hence, information about the carryover effects of seed heteromorphism on the subsequent growth phase of halophytes appears inconclusive and warrants more studies. Arthrocnemum macrostachyum (Moric) C. Koch (synonym Arthrocaulon macrostachyum (Moric.) Piirainen & G. Kadereit) is a stem-succulent C 3 perennial euhalophyte of Amaranthaceae (subfamily Salicornioideae), which is commonly found in coastal areas of southern Europe, north Africa, Egypt, Saudi Arabia, the Middle East, Iran and Pakistan . It is a densely-branching, erect, glabrous, glaucous-green, succulent, monoecious, perennial halophyte shrub/sub-shrub with leaves fused to cover the nodes, making it apparently leafless . It is a good source of vitamin E and has high potential to become a gourmet food . Its seeds contain about 25% oil of edible quality . It has been used as an antibiotic and alexipharmic remedy by locals in Tunisia . Extracts of A. macrostachyum also have hypoglycemic properties . Arthrocnemum macrostachyum has a high tolerance to salinity during both the germination and growth stages . It produces heteromorphic seeds, which vary in color . Germination requirements, stress tolerance, and biochemical responses of heteromorphic seeds of A. macrostachyum have been examined . However, information about the growth and physiochemical attributes of plants derived from heteromorphic seeds of A. macrostachyum is absent. This study aimed to answer the following questions: 1) Do plants obtained from heteromorphic seeds vary in growth response and salinity tolerance? 2) Are there any differences in the osmotic adjustment pattern of plants developed from heteromorphic seeds in response to salinity increments? 3) Do plants derived from heteromorphic seeds vary in their photosynthetic potential under increasing salinity? 4) What are the similarities and/or differences in the redox homeostasis response of plants that emerged from A. macrostachyum heteromorphic seeds? Seeds of Arthrocnemum macrostachyum (Moric) C. Koch were collected from a large population found in a dry coastal-marsh pan adjacent to the Gaddani ship-breaking yard (latitude: 25°4’36.62”N; longitude: 66°42’35.91”E; distance from seafront: ~300 m) of the Lasbela District, Balochistan, Pakistan. The seed collection site has a hot, dry sub-tropical climate and is dominated by halophyte vegetation. Seeds were scrubbed manually to separate them from the inflorescence husk, surface sterilized with 1% (v/v) sodium hypochlorite for 1 minute, rinsed with distilled water, and air-dried. Dimorphic (i.e. black and brown) seeds were then manually separated and stored in clear plastic petri-plates at room temperature (~25-30°C) until use (~6 weeks). Heteromorphic seeds were sown separately in shallow plastic trays (7.5 cm depth) containing garden soil and irrigated with water until seedlings reached the two-node stage. 
When plants were 3 months old, the seedlings were transplanted into plastic pots (size = 11.5 cm in diameter and 25.5 cm in length) filled with sand and sub-irrigated with half-strength Hoagland solution . After 20 days of acclimation, salinity (300 and 900 mM NaCl) was introduced gradually, at the rate of 50 mM NaCl every 12 h, to avoid osmotic shock. Plants irrigated with Hoagland solution served as the control (0 mM NaCl). The growth experiment was conducted in a net-house under ambient conditions (average day/night temperature was 37.6/25°C; photosynthetic photon flux density (PPFD) at midday was ~909.8 μmol m -2 s -1 ). There was one plant per pot and at least four replicates (n = 4) per treatment. Growth and different physio-chemical parameters were examined after 28 days of NaCl treatment. Shoot and root length and fresh weight (FW) were measured immediately after harvest. The dry weight (DW) of plant parts was determined after drying in an oven at 60°C for 48 h. The moisture content of the shoot and root was calculated from the fresh and dry weights as (FW − DW)/FW × 100. The succulence of shoots and the water content of roots were likewise derived from the fresh and dry weights. Shoot and root sap osmolality was measured in expressed sap according to Koyro and Huchzermeyer using a dew-point microvoltmeter (Wescor HR-33T, USA). The osmotic potential (Ψ s ) was calculated using the van’t Hoff equation described by Guerrier , Ψ s = −nRT, where n is the number of moles of solute, R = 0.008314 J mol -1 K -1 (gas constant) and T = 298.8 K (absolute temperature). Photosynthetic pigments (chlorophyll a , b , and carotenoid) of freshly collected shoots were extracted with 100% ethanol in tightly capped glass test tubes stored in the dark at 4°C . Pigment estimation was carried out according to the method of Lichtenthaler and Buschmann with the help of a UV-Vis spectrophotometer (Beckman-Coulter DU-730). Shoot samples were ground fine with mortar and pestle under liquid nitrogen, homogenized with ice-cold trichloroacetic acid (TCA, 3% w/v), and the homogenate was centrifuged at 12000×g for 20 minutes at 4°C. The supernatant was used to quantify levels of hydrogen peroxide (H 2 O 2 ) according to the method of Loreto and Velikova and lipid peroxidation according to the method of Heath and Packer . Shoot tissues were finely ground under liquid nitrogen and homogenized with potassium phosphate buffer (pH 7.0) containing 2% (w/v) polyvinyl polypyrrolidone, 1 mM ascorbic acid, and 5 mM disodium EDTA. The homogenate was centrifuged at 12000×g for 20 minutes at 4°C and the supernatant was used to estimate the activities of superoxide dismutase (SOD; EC 1.15.1.1), guaiacol peroxidase (GPX; EC 1.11.1.7) and glutathione reductase (GR; EC 1.6.4.2) using methods described in Hameed et al. . For the extraction of catalase (CAT; EC 1.11.1.6) and ascorbate peroxidase (APX; EC 1.11.1.11), finely ground shoot tissues were homogenized in potassium phosphate buffer (pH 7.0) containing 4% (w/v) polyvinyl polypyrrolidone, 1 mM ascorbic acid and 5 mM disodium EDTA and centrifuged at 12000×g for 20 minutes at 4°C. The supernatant was mixed with the same volume of acetone (99.8%) containing 10% TCA and 50 mM dithiothreitol, followed by overnight incubation at -20°C. The mixture was then centrifuged at 12000×g at 4°C for 20 minutes. The collected supernatant was used to estimate the activities of CAT and APX by methods described in Hameed et al. . 
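To make the unit handling in the van’t Hoff step described above concrete, a minimal R sketch follows. It assumes sap osmolality is reported in mosmol kg⁻¹ and treated as mol of solute per litre of sap after conversion, with the gas constant expressed in L MPa mol⁻¹ K⁻¹; these unit conventions are our assumptions, as the original formula figure is not reproduced here.

```r
# Osmotic potential (MPa) from sap osmolality via the van't Hoff relation
# Psi_s = -n * R * T, using the constants stated in the text
# (R = 0.008314, here taken as L MPa mol^-1 K^-1, and T = 298.8 K).
vant_hoff_psi <- function(osmolality_mosmol_kg, R = 0.008314, T = 298.8) {
  n <- osmolality_mosmol_kg / 1000  # convert to osmol kg^-1, taken as mol per litre of sap
  -n * R * T                        # osmotic potential in MPa (negative by convention)
}

vant_hoff_psi(1600)  # a 1600 mosmol kg^-1 sap reading gives ~ -3.97 MPa
```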
Shoot TCA extracts were used to quantify ascorbate (AsA) and dehydroascorbate (DHA) using the method of Law et al. . Reduced (GSH) and oxidized (GSSG) glutathione were quantified according to the method of Anderson . Two-way analysis of variance (ANOVA) was performed to determine whether seed morphology (M), salinity (S), and their interaction (M×S) affected the different parameters significantly. A post hoc Bonferroni test was used to indicate significant (P<0.05) differences among individual treatment means. For all variables for which the assumption of homogeneity of variances (Levene’s test) was not met, Welch’s ANOVA was performed. Student’s t -test (P<0.05) was used to compare the responses of plants derived from black and brown seed morphs within each salinity treatment. All statistical analyses were carried out in SPSS version 20.0 for Windows . Arthrocnemum macrostachyum seeds are of two morphologies: black and brown. We investigated the growth responses of the plants developed from heteromorphic seeds under increasing salinity . Two-way ANOVA indicated significant (P<0.05) effects of seed morphology (M), salinity (S), and their interaction (M×S) on the fresh biomass (FW) of both shoots and roots of A. macrostachyum . Shoot and root FW of the plants germinated from heteromorphic seeds did not differ from each other under control and high (900 mM NaCl) salinity, but significant (P<0.05) differences were observed under moderate (300 mM NaCl) salinity between the FW of plants grown from black and brown seeds . Likewise, the FW of the plants derived from heteromorphic seeds under moderate salinity was comparable to the control, but a significant (P<0.05) decrease in FW, irrespective of seed origin, was observed under high salinity. Root but not shoot dry biomass (DW) varied significantly (P<0.001) between plants grown from heteromorphic seeds. Salinity had a significant (P<0.001) effect on the DW of both shoots and roots of plants germinated from either seed type. Similar to FW, the DW of plants obtained from heteromorphic seeds was inhibited only at high but not moderate salinity . Shoot osmotic potentials (Ψ s ) of plants grown from heteromorphic seeds decreased (i.e. became more negative) with increasing salinity, whereas root Ψ s decreased substantially under saline conditions, with comparable values at 300 and 900 mM NaCl . The Ψ s values of the two types of plants were generally similar, and shoots had lower Ψ s than roots. Root water contents of the two types of plants were comparable and did not vary with salinity increments . Similarly, shoot succulence of plants from black seeds remained unaffected under increasing salinity. Plants from brown seeds had higher shoot succulence than those from black seeds in the 0 (2-fold) and 300 mM (1.5-fold) NaCl treatments, but a 25% decrease in their shoot succulence occurred at 900 mM NaCl compared to the non-saline control . Two-way ANOVA indicated a significant (P<0.05) effect of seed morphology (M), salt treatment (S), and their interaction (M×S) on photosynthetic pigments. Chlorophyll a (Chl a ) content of the plants germinated from black seeds was comparable under moderate salinity and significantly (P<0.05) higher under high salinity, in comparison to the control . Chl a content of plants grown from brown seeds did not vary across salinity treatments. 
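As an aside for readers who want to reproduce the statistical workflow summarized above (two-way ANOVA with a Bonferroni post hoc, Welch’s ANOVA when Levene’s test fails, and Student’s t-tests within salinity levels) outside SPSS, a minimal R sketch is given below. The data layout and toy values are ours, not the study’s; it requires the `car` package.

```r
library(car)  # for leveneTest()

# Toy stand-in for the real measurements: 2 seed morphs x 3 NaCl levels, n = 4
set.seed(1)
df <- expand.grid(rep = 1:4, morph = c("black", "brown"),
                  salinity = factor(c(0, 300, 900)))
df$trait <- rnorm(nrow(df), mean = 10, sd = 2)

# Levene's test for homogeneity of variances across the M x S groups
leveneTest(trait ~ morph * salinity, data = df)

# Two-way ANOVA: seed morphology (M), salinity (S), and their interaction (M x S)
summary(aov(trait ~ morph * salinity, data = df))

# Bonferroni-adjusted pairwise comparisons among the individual group means
pairwise.t.test(df$trait, interaction(df$morph, df$salinity),
                p.adjust.method = "bonferroni")

# Welch's ANOVA for a variable that fails Levene's test
oneway.test(trait ~ salinity, data = df, var.equal = FALSE)

# Student's t-test comparing black vs brown within one salinity treatment
t.test(trait ~ morph, data = subset(df, salinity == "300"), var.equal = TRUE)
```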
Chl b and carotenoid contents of plants derived from black seeds increased under saline conditions, while those of plants germinated from brown seeds did not vary with salinity . Plants obtained from black seeds had higher (~1.5-fold) Chl b and carotenoid contents than those from brown seeds, particularly under high salinity. Two-way ANOVA indicated a significant (P<0.001) effect of seed morphology (M), salinity (S), and their interaction (M×S) on hydrogen peroxide content. Hydrogen peroxide (H 2 O 2 ) content of plants from black seeds decreased (44%) transiently under moderate salinity compared to the control and high-salinity-treated plants . H 2 O 2 content of plants germinated from brown seeds was unaffected by moderate salinity and showed a 3.5-fold increase under high salinity. Plants derived from brown seeds showed 2-fold higher H 2 O 2 content under high salinity compared to those from black seeds . There was a significant (P<0.05) effect of salinity (S) but not of seed morphology (M) or the M×S interaction on the lipid peroxidation (measured as malondialdehyde) level. In general, lipid peroxidation increased with increasing salinity in plants germinated from either seed type . Seed morphology (M), salinity (S), and their interaction (M×S) had significant effects on the activity of superoxide dismutase (SOD). SOD activity decreased under saline conditions in plants obtained from either seed type . Plants produced from black seeds had 3.8-fold higher SOD activity under moderate salinity compared to those derived from brown seeds. Salinity (S) but not seed morphology (M) or the M×S interaction had a significant (P<0.001) effect on the catalase (CAT) and guaiacol peroxidase (GPX) activities. Activities of CAT and GPX in plants from either seed type increased with increasing salinity . Activity of ascorbate peroxidase (APX) was affected significantly by seed morphology (P<0.001), salinity (P<0.001), and their interaction (P<0.01). APX activity increased with increasing salinity in plants derived from brown seeds and was unaffected by salinity in plants obtained from black seeds . Plants germinated from brown seeds generally had 2.5-fold higher APX activity under saline conditions in comparison to those from black seeds. Activity of glutathione reductase (GR) was affected significantly by seed morphology (P<0.05) and salinity (P<0.001) but not by their interaction. GR activity of plants from black seeds showed a slight (12%) decline under moderate but not high salinity compared to the non-saline control . GR activity of plants derived from brown seeds increased under high but not moderate salinity . There was a significant effect of salinity (P<0.05) and the salinity × seed morphology interaction on the reduced forms of ascorbate (AsA) and glutathione (GSH). AsA content of plants germinated from black seeds increased (1.8-fold) transiently under moderate salinity compared to the control and high salinity treatments . AsA content of plants from brown seeds decreased under saline conditions. GSH content of plants obtained from black seeds did not vary with salinity, and that of plants from brown seeds increased (4.2-fold) only under high salinity . Plants of A. macrostachyum grown from the two types of heteromorphic seeds generally showed comparable growth (except DW and shoot succulence), sap osmotic potential, activities of most antioxidant enzymes (except CAT and GR), and GSH and malondialdehyde contents in the absence of salinity. 
Plants that emerged from heteromorphic seeds of Atriplex centralasiatica , Chenopodium album , and Suaeda aralocaspica showed similar growth in the absence of salinity. In contrast, Suaeda splendens plants grown from heteromorphic seeds showed differences in growth and most physio-chemical parameters under non-saline conditions . Growth of Arthrocnemum indicum plants derived from heteromorphic seeds differed, although sap osmotic potential, MDA content, and activities of most antioxidant enzymes were comparable . In this study, A. macrostachyum plants from heteromorphic seeds also showed some differences in physio-chemical parameters. For instance, plants derived from black seeds showed higher DW, H 2 O 2 , CAT, and GR levels and lower shoot succulence, Chl a , CAR, and AsA levels compared to plants germinated from brown seeds under non-saline conditions. Similarly, A. indicum plants grown from heteromorphic seeds differed in FW, shoot succulence, H 2 O 2 content, AsA, and GSH levels in the absence of salinity . Hence, plants from heteromorphic seeds may exhibit commonalities as well as differences even in the absence of salinity. The biomass of A. macrostachyum plants grown from heteromorphic seeds under moderate (300 mM NaCl) salinity was comparable to the non-saline controls. Similarly, Redondo-Gómez et al. also reported an unaltered shoot biomass fraction and shoot area under moderate (340 mM NaCl) salinity for A. macrostachyum from the Odiel Marshes, Spain. Biomass of another C 3 Salicornioideae succulent, Halopeplis perfoliata , was also similar to the control when grown in 300 mM NaCl . Hence, moderate salinity does not appear deleterious for the growth of Salicornioideae halophytes. In this study, A. macrostachyum plants from brown seeds had higher shoot FW, and plants derived from black seeds had higher root biomass, compared to their counterparts under moderate salinity. Similarly, plants of A. indicum , Chenopodium album , and Suaeda splendens from heteromorphic seeds showed differences in growth under moderate salinity. In contrast, plants from heteromorphic seeds of the annual halophyte Suaeda aralocaspica did not show differential biomass accumulation . Hence, growth responses of plants developed from heteromorphic seeds under moderately saline conditions may vary among species. A decrease in sap Ψ s is a commonly used indicator of osmotic adjustment, which is important for salinity tolerance in plants . In this study, A. macrostachyum plants that emerged from heteromorphic seeds displayed a similar decrease in sap Ψ s under moderate salinity compared to the non-saline control, with lower values in shoots compared to roots. Plants of A. indicum grown from heteromorphic seeds also showed a similar decline in sap Ψ s in 300 mM NaCl compared to the control . Likewise, many other halophytes such as Suaeda maritima , Salicornia dolichostachya , and S. europaea also showed a decrease in sap Ψ s under moderate salinity. Root water content and shoot succulence of A. macrostachyum plants from heteromorphic seeds remained unchanged under moderate salinity compared to the control, indicating effective osmotic adjustment. Moderate salinity did not affect the Chl a levels of plants derived from heteromorphic seeds. Redondo-Gómez et al. , however, reported a decline in Chl a , improved midday Fv/Fm, and unchanged ɸPSII under moderate salinity compared to controls in A. macrostachyum plants, which could be ascribed to differences in genetic background and maternal environment . 
However, many other halophytes such as Salvadora persica and Atriplex portulacoides also showed generally unaltered Chl a under moderate salinity. Unaffected Chl a in the aforementioned halophytes, including our test species, indicates the resilience of the light-harvesting machinery to moderate salinity, which could support the maintenance of biomass. However, Chl b and CAR increased in plants germinated from black seeds, whereas these parameters remained unaffected by moderate salinity in plants from brown seeds. The Chl b and CAR molecules are mainly found in light-harvesting complexes . In addition, CAR also acts as a protective compound for the LHCs. Hence, a decrease in these compounds, especially Chl b , may result in structural/conformational changes in the PSII antennae . Although mostly related to Chl a , a decrease in Fv/Fm coincided with Chl b in leaves of rice under NaCl treatment . Hence, increased or unaltered levels of Chl b and CAR under moderate salinity in this study might be an adaptation of A. macrostachyum plants to maintain the photochemical efficiency of photosynthesis. However, information about light-harvesting parameters in plants derived from heteromorphic seeds is non-existent and warrants more studies. In this study, levels of H 2 O 2 (i.e. a common ROS) either decreased or remained unchanged in A. macrostachyum plants from heteromorphic seeds under moderate salinity. Consistently, a decline in the activity of SOD, which converts superoxide radicals to H 2 O 2 , was observed in plants of the test species derived from either type of seed. Levels of most H 2 O 2 -detoxifying enzymes (except GR in plants from black seeds) and antioxidants (except ascorbate in plants from brown seeds) in plants obtained from heteromorphic seeds were either higher than or comparable to non-saline controls. Similarly, levels of most antioxidant enzymes and substances either increased or remained unaltered under moderate salinity in many other Salicornioideae halophytes such as Salicornia brachiata , S. persica and S. europaea . However, there was a 1.2-1.4-fold increase in MDA (an indicator of oxidative membrane damage) under moderate salinity in plants from black and brown seeds, respectively. This rise in MDA might be ascribed to photorespiratory H 2 O 2 production, which is a characteristic of C 3 species; the 2 to 5-fold increase in CAT activity (in the presence of unaltered/lower ETR and decreased SOD) in our test species could be an indicator of photorespiration. Activity of CAT also increased under saline conditions in many other C 3 halophytes such as Halopeplis perfoliata and Salvadora persica . However, more detailed studies are required in this regard. Plants of A. macrostachyum survived high salinity (900 mM NaCl), which is equivalent to ~1.5-fold seawater salinity. Redondo-Gómez et al. also reported high salinity tolerance of A. macrostachyum in an earlier study. Similarly, many other Salicornioideae halophytes such as Sarcocornia fruticosa , A. indicum , Halopeplis perfoliata , Salicornia persica and S. europaea could also tolerate seawater or higher salinity. However, high salinity (900 mM NaCl) caused a comparable decrease in most growth parameters of the A. macrostachyum plants developed from heteromorphic seeds. High salinity also caused growth inhibition in many Salicornioideae halophytes, namely Salicornia europaea , Halosarcia pergranulata and three Tecticornia spp. . 
Decreased growth of halophytes under high salinity could be an adaptive strategy to increase the chances of surviving long enough to produce some seeds . High salinity (900 mM NaCl; equivalent to about -4 MPa Ψ s ) resulted in a significant (P<0.05) decline in the sap Ψ s of shoots (about -7 MPa) and roots (about -5 MPa) of A. macrostachyum plants that emerged from heteromorphic seeds, which appears adequate for osmotic adjustment. Likewise, plants of A. indicum produced from heteromorphic seeds also showed an osmoconformer response, as sap Ψ s decreased with increasing salinity . Hence, plants developing from heteromorphic seeds appear to respond similarly to the osmotic constraint of salinity. Furthermore, plants of A. macrostachyum from heteromorphic seeds showed similar shoot succulence and root water content, which were largely insensitive to high salinity, except for a slight decline in the shoot succulence of plants obtained from brown seeds. This finding also hints at effective osmotic adjustment in our test species under high salinity. Plants of A. macrostachyum derived from black seeds had higher Chl a , b , and CAR under 900 mM NaCl salinity compared to the control, whereas those from brown seeds remained unaffected by high salinity. In contrast, Redondo-Gómez et al. in an earlier study reported a decline in Chl a , b , and CAR in A. macrostachyum under high salinity. Chlorophyll a and b contents of the congener A. indicum in 500 mM NaCl were comparable to the non-saline control . Likewise, levels of photosynthetic pigments in Salicornia brachiata under 500 mM NaCl also remained generally similar to the control . However, plants of A. macrostachyum from heteromorphic seeds showed a 72-77% decline in SOD activity under high salinity compared to the control, which could be an indicator of a low incidence of superoxide production through electron leakage to oxygen from ferredoxin at the photosystem-I level . Furthermore, the level of H 2 O 2 in plants of A. macrostachyum derived from black seeds was comparable to the control. However, the H 2 O 2 content of plants germinated from brown seeds showed a 3.5-fold increase under high salinity in comparison to the control, which could possibly result from photorespiration, a characteristic of C 3 plants . The CAT activity in plants from black and brown seeds showed a 3.3- and 8-fold increase under high salinity, respectively. Higher induction of CAT activity in plants obtained from brown compared to black seeds hints at a greater extent of photorespiration-based H 2 O 2 production in plants germinated from brown seeds. The level of MDA (an indicator of oxidative membrane damage) increased under high salinity in plants from either seed type, but plants germinated from brown seeds had 1.1-fold higher MDA compared to those from black seeds. Activities of all H 2 O 2 -detoxifying enzymes and GSH increased in plants derived from brown but not black seeds, indicating a greater need for H 2 O 2 detoxification in the former under high salinity. However, this induction was not adequate, and plants germinated from brown seeds developed comparatively higher MDA than those from black seeds. Differences in antioxidant defense and levels of MDA were also found in plants grown from heteromorphic seeds of A. indicum . However, more studies are needed for a better understanding of the differences in antioxidant systems in plants produced from heteromorphic seeds. 
Our data indicate many similarities and some differences in the growth and physio-chemical responses of A. macrostachyum plants derived from heteromorphic seeds. Moderate salinity (300 mM NaCl) did not inhibit growth or Chl a in either type of plant. However, high salinity (900 mM NaCl) led to a decrease in growth and sap Ψ s in plants derived from heteromorphic seeds. Decreased SOD levels hint at a low incidence of chloroplastic H 2 O 2 formation under salinity. However, the increase in MDA levels and CAT activity in plants from both types of seeds suggests extra-chloroplastic H 2 O 2 generation under increasing salinity. Under high salinity, the activities of all H 2 O 2 -detoxifying enzymes and GSH increased in plants obtained from brown but not black seeds, indicating a greater need for H 2 O 2 detoxification in plants from brown seeds under high salinity. However, this induction was not adequate, and plants germinated from brown seeds developed comparatively higher MDA than those from black seeds. Hence, plants derived from black seeds appear to be more resistant to high-salinity-induced oxidative damage than those developed from brown seeds. These data indicate metabolic flexibility under increasing salinity in plants of A. macrostachyum emerging from heteromorphic seeds. Detailed studies are needed to determine the molecular basis of these similarities and differences.
|
Study
|
biomedical
|
en
| 0.999998 |
PMC11695134
|
Rapeseed ( Brassica napus L.) is extensively cultivated worldwide and serves as a versatile crop with applications ranging from seed oil production to biodiesel . Additionally, rapeseed contributes to agriculture through its use as fodder, ornamental plants, vegetables, and organic fertilizer, and thus has great significance for both the agricultural economy and environmental sustainability. Photosynthesis plays a vital role in crop yield by providing the energy and carbohydrates necessary for both the vegetative and reproductive growth of plants, and the efficiency of light energy conversion determines crop yield. Despite its agricultural importance, rapeseed, as a C 3 plant, exhibits comparatively lower light energy utilization efficiency than C 4 plants like maize and sorghum , and even other C 3 crops such as rice and soybean . Leaves are the primary organs responsible for photosynthesis in rapeseed, and during the reproductive growth phase, their photosynthetic capacity has a profound impact on the yield and quality of seed oil. The production and accumulation of dry matter within the plant during this stage are closely linked to the silique number per plant and the seed number per silique . Therefore, improving the light energy utilization rate of rapeseed is one of the key strategies to increase its yield. Photosynthetic efficiency of plants is governed by complex biological processes and influenced by multiple factors, mainly including light intensity, CO 2 concentration, temperature, water availability, chlorophyll content, stomatal opening, enzyme activity, plant species, nutritional status, and environmental conditions. For example, overexpression of the Rubisco enzyme of the Calvin cycle has been reported to increase the regeneration of ribulose-1,5-bisphosphate ( RuBP ), thus improving photosynthetic efficiency in multiple C 3 plants such as Arabidopsis , tobacco , potato , and soybean . Similarly, overexpression of the chloroplast NAD kinase ( NADK2 ) significantly promotes photosynthetic electron transport, thereby improving photosynthesis . In recent years, studies have been conducted to explore high-photosynthetic-efficiency germplasm in various crops, aiming to enhance photosynthetic efficiency either directly or indirectly. Some studies have focused on identifying genes or loci that can increase the chlorophyll content, thus indirectly enhancing photosynthetic efficiency , while other studies have employed genome-wide association studies (GWAS) to directly identify genes or loci influencing photosynthetic traits in natural populations. GWAS has been widely used for analyzing the genetic basis of various complex traits in rapeseed , and has also been applied to pinpoint genetic regions related to different chlorophyll fluorescence characteristics in many other crops, including rice , barley , soybean , maize , and Arabidopsis . However, GWAS has been less frequently employed to investigate gas-exchange parameters of the dark reactions of photosynthesis (such as CO 2 fixation dynamics) in crops like maize, rice, wheat, and soybean . To date, few studies have identified genes related to photosynthetic efficiency in rapeseed, likely due to challenges in accurately measuring these traits and identifying relevant loci. Identifying genes that enhance photosynthetic efficiency will provide valuable targets for breeding programs aimed at developing improved rapeseed varieties. 
Photosynthesis predominantly occurs within chloroplasts, which are mainly found in leaf cells. Chlorophyll within these chloroplasts plays a crucial role in capturing sunlight and converting it into chemical energy. As the main photosynthetic organs, leaves exhibit morphological traits, such as leaf length (LL), leaf width (LW), leaf area (LA), leaf petiole angle (PA), and petiole length (PL), that can influence overall photosynthetic performance. Therefore, this study first characterized the photosynthesis traits of rapeseed leaves and the leaf morphology traits across different germplasms, followed by an analysis of the genetic relationships between these traits. Finally, we identified loci and genes associated with leaf morphology and photosynthetic rate using GWAS. Our findings provide a theoretical foundation and technical framework for breeding novel rapeseed varieties with enhanced photosynthetic efficiency, yield, and oil content. A highly diverse natural population comprising 104 core inbred lines of rapeseed was sourced from the germplasm resource laboratory at Northwest Agriculture and Forestry University (NWAFU, Shaanxi province, China) . These lines were cultivated and evaluated at the Caoxinzuang experimental station of the NWAFU, Yangling, Shaanxi province (latitude 34°30’ N, longitude 108°9’ E), across 3 continuous cropping seasons . A randomized block experimental design was employed, with each line planted in 3 rows with a row spacing of 0.35 m × 0.15 m and a row length of 2 m. Sowing was carried out in September of the previous year, and routine field management was implemented. At the 8- to 10-leaf stage, we assessed 5 leaf morphology traits, including leaf length (LL), leaf width (LW), leaf area (LA), petiole angle (PA), and petiole length (PL), and 5 photosynthesis-related traits, including the net photosynthetic rate (Pn), stomatal conductance (Gs), intercellular carbon dioxide concentration (Ci), the transpiration rate (Tr), and leaf chlorophyll content (Chl). The leaf chlorophyll content was measured during the seedling stage using a SPAD-502 meter. In the seedling phase, the fifth fully expanded leaf was selected for photosynthetic trait assessment, which was performed using the LI-6400 portable photosynthesis system between 9:00 and 11:00 am on sunny mornings. Light intensity was set at 500 μmol m -2 s -1 , relative humidity of the air was maintained at 40%, and the CO 2 concentration was adjusted to 450 μmol mol -1 . Each experiment included three biological replicates, and five plants per line were randomly selected as sample replicates. Each plant was measured three times for technical replication to ensure accuracy, and the average value for each line was calculated based on these five plants. Phenotypic data were analyzed using Excel and R 4.2.1 software. The best linear unbiased prediction (BLUP) value for each inbred line was obtained by fitting a mixed linear model using the R/lme4 package. Heritability (h 2 ) was calculated as h 2 = σ G 2 / (σ G 2 + σ GE 2 /n + σ e 2 /nr), where the genotypic variance ( σ G 2 ), genotype × environment variance ( σ GE 2 ), and error variance ( σ e 2 ) were estimated based on the number of years (n) and replicates (r) . These estimates were obtained using the lmer function in the R/lme4 package. Correlation and partial correlation analyses among the traits were conducted using the R package PerformanceAnalytics v.2.0.4 . The Brassica 50K Illumina Infinium consortium SNP array was employed for genotyping . 
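As a concrete illustration of the BLUP and heritability computation just described, here is a minimal R/lme4 sketch on simulated data. The column names, the random-effects structure, and the entry-mean form of the heritability formula are standard-practice assumptions on our part; the paper does not spell out the exact model fitted.

```r
library(lme4)

# Simulated stand-in for the phenotype table: 20 lines x 3 years x 3 reps
set.seed(42)
pheno <- expand.grid(line = paste0("L", 1:20), year = factor(2019:2021), rep = 1:3)
pheno$trait <- rep(rnorm(20, sd = 1.5), times = 9) + rnorm(nrow(pheno))

# Mixed model with random genotype, year, and genotype-by-year terms
fit <- lmer(trait ~ (1 | line) + (1 | year) + (1 | line:year), data = pheno)

# BLUP of each inbred line = overall intercept + random line effect
blup <- fixef(fit)["(Intercept)"] + ranef(fit)$line[, "(Intercept)"]

# Variance components -> broad-sense heritability on an entry-mean basis:
# H2 = s2G / (s2G + s2GE / n + s2e / (n * r))
vc   <- as.data.frame(VarCorr(fit))
s2G  <- vc$vcov[vc$grp == "line"]
s2GE <- vc$vcov[vc$grp == "line:year"]
s2e  <- vc$vcov[vc$grp == "Residual"]
n <- 3; r <- 3  # years and replicates per year
H2 <- s2G / (s2G + s2GE / n + s2e / (n * r))
```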
The SNPs were screened using the TASSEL5 software with the following thresholds: a missing rate ≤ 0.2, a heterozygosity rate ≤ 0.2, and a minor allele frequency (MAF) > 0.05 . As a result, a total of 22,628 SNPs were retained for the GWAS analysis. The genetic structure and kinship relationships of the 104 germplasms were examined using the STRUCTURE v. 2.3.4 software and SPAGeDi, respectively. Based on the annual data and the BLUP values, associations between the traits (leaf morphology, chlorophyll content, and photosynthesis-related traits) and SNPs were investigated using a mixed linear model (MLM) in the TASSEL5 software . SNPs with -log 10 p-value > 4 were defined as significantly associated with the phenotype. Haplotypes at the loci associated with traits were identified using the four-gamete rule in the Haploview software . The R 4.2.1 software was used for the generation of box plots and the visualization of the relative phenotypic data. Candidate loci were defined as those detected across at least two traits or consistently identified across multiple years . The B. napus reference genome Darmor-bzh v5 ( https://www.genoscope.cns.fr/brassicanapus ) was used to identify candidate genes . Genes located within linkage disequilibrium (LD) blocks (r 2 > 0.6) of significant SNPs were defined as potential candidate genes . Additionally, genes outside LD blocks but within a 100 Kb flanking region were also designated as potential candidate genes . Subsequently, the potential candidate genes were subjected to functional annotation through protein BLAST searches in NCBI ( https://www.ncbi.nlm.nih.gov/ ). The homologs of these candidate genes were identified from the Arabidopsis and rapeseed cultivar ‘ZS11’ reference genomes in the TAIR ( https://www.arabidopsis.org ) and BnIR ( https://yanglab.hzau.edu.cn ) databases by BLAST analysis, respectively. GO (Gene Ontology) enrichment analysis of these potential candidate genes was conducted using the ClusterProfiler 4.0 package in R . Subsequently, gene expression profiles across five developmental tissues (root, leaf, stem, bud, and silique) were obtained from BnIR . Photosynthesis is a complex process influenced by various environmental factors throughout plant growth, and thus cannot be adequately characterized by a single parameter. In this study, a total of 10 leaf morphology- and photosynthesis-related traits were evaluated, including leaf length (LL), leaf width (LW), leaf area (LA), petiole angle (PA), petiole length (PL), leaf chlorophyll content (Chl), net photosynthetic rate (Pn), intercellular carbon dioxide concentration (Ci), stomatal conductance (Gs), and transpiration rate (Tr), in a natural population of rapeseed. The frequency distribution histograms demonstrated that most of these traits followed a normal distribution or were approximately normal, except for LA and Gs, which exhibited left skewness . Significant phenotypic variation was observed across all traits under different environmental conditions, with coefficients of variation (CV) ranging from 7.92% to 53.22% ( Table 1 ). Among the 10 traits, Chl content exhibited the lowest variation, with CV values of 9.14%, 9.33%, and 9.70% in 2019, 2020, and 2021, respectively. Conversely, Gs displayed the highest variation, with CV values of 52.30%, 53.22%, and 39.36% in 2019, 2020, and 2021, respectively. These findings indicate that the materials used in our study exhibited substantial diversity in photosynthesis-related traits. 
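To illustrate, the three marker-filtering thresholds reported above (missing rate ≤ 0.2, heterozygosity ≤ 0.2, MAF > 0.05) amount to a few vectorized operations on the genotype matrix. The R sketch below assumes a simple 0/1/2 allele-dosage coding with NA for missing calls; the toy matrix is ours and merely stands in for the 50K array data.

```r
# Toy genotype matrix: SNPs in rows, the 104 lines in columns,
# coded 0/2 for the two homozygotes, 1 for heterozygotes, NA for missing calls
set.seed(7)
geno <- matrix(sample(c(0, 1, 2, NA), 500 * 104, replace = TRUE,
                      prob = c(0.45, 0.08, 0.42, 0.05)),
               nrow = 500, ncol = 104)

miss <- rowMeans(is.na(geno))              # per-SNP missing rate
het  <- rowMeans(geno == 1, na.rm = TRUE)  # per-SNP heterozygosity
p    <- rowMeans(geno, na.rm = TRUE) / 2   # alternate-allele frequency
maf  <- pmin(p, 1 - p)                     # minor allele frequency

keep    <- miss <= 0.2 & het <= 0.2 & maf > 0.05  # thresholds used in the study
geno_qc <- geno[keep, ]
nrow(geno_qc)  # number of SNPs retained for GWAS (cf. 22,628 in the real data)
```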
Moreover, the broad-sense heritability (H 2 ) of these 10 traits was computed across all three years , revealing moderate to high heritability values ranging from 62% for Tr to 74% for Ci ( Table 1 ). These results suggest that genetic factors were the primary contributors to the leaf photosynthesis-related phenotypic variation, making these traits suitable for subsequent GWAS analysis. Pearson correlation analysis was performed to uncover the correlations between the leaf morphology traits and the photosynthesis traits. The results showed that leaf area was positively correlated with leaf length, leaf width, and petiole length, while displaying a weaker negative correlation with the photosynthetic rate and stomatal conductance . Additionally, all 5 photosynthesis traits were positively correlated with each other, with the Chl content showing a positive correlation with petiole angle . These findings suggest that while leaf morphology does not directly affect intrinsic photosynthetic capacity, it may influence light penetrance. To identify loci significantly associated with leaf morphology and photosynthesis, GWAS was conducted using the three years of data along with the BLUP values. Figures 3 and 4 illustrate the Manhattan plots derived from the GWAS based on BLUP values for the photosynthesis and leaf traits, respectively. The GWAS results showed that a total of 538 quantitative trait nucleotides (QTNs) were significantly associated with leaf area and photosynthesis traits (−log 10 P > 4) across the three years ( Supplementary Table S1 ), of which 122 QTNs were associated with multiple traits in different years ( Supplementary Table S2 ). The largest number (163) of significant QTNs was associated with PL, followed by 127, 60, 54, 44, 27, 18, 18, 14, and 12 QTNs for Pn, Chl, LW, Ci, Gs, Tr, PA, LA, and LL, respectively. These 538 significant QTNs were distributed across all chromosomes except A08, with the highest number on chromosome C04, followed by chromosome A09 . QTNs located in close proximity (< 1 Mb) and in linkage disequilibrium (LD) (r 2 > 0.2) were grouped into the same cluster, since they correspond to the same quantitative trait locus (QTL) ; a sketch of this grouping rule is given after the cluster summary below. Accordingly, these 538 QTNs were categorized into 84 QTL clusters ( Supplementary Table S1 ). We further analyzed the QTNs co-detected as being associated with different traits and planting years. Among the 84 QTL clusters, 21 were identified as key clusters, as they were associated with two or more traits or were consistently associated with the same traits across multiple planting years ( Supplementary Table S3 ). These 21 key QTL clusters contained 381 QTNs, which were distributed on 12 chromosomes (including A02, A03, A04, A05, A07, A08, A09, C01, C02, C04, C06, and C07) . Among these 21 key QTL clusters, 8, 6, 6, 5, 5, 4, 3, 3, 2, and 2 clusters were associated with LW, Gs, PL, Ci, Tr, Pn, Chl, PA, LA, and LL, respectively ( Table 2 ). Cluster q.A3-5 was co-identified as being associated with the Ci, LW, and PL traits, explaining 15.96%~23.42% of the phenotypic variation. Clusters q.A4-2 and q.C2-2 were co-identified as being associated with the Gs, Tr, and Pn traits, explaining 17.85%~22.35% and 23.48%~30.18% of the phenotypic variation, respectively. Cluster q.A7-4 was co-identified as being associated with the Gs, Pn, LW, and LL traits, explaining 18.29%~22.05% of the phenotypic variation. Cluster q.A9-1 was associated with the Gs, Ci, and Pn traits, explaining 18.11%~27.2% of the phenotypic variation. 
Cluster q.A9-4 was associated with PL, LW, and LA, explaining 16.18%~41.62% of the phenotypic variation. Cluster q.C4-2 was associated with Gs, Tr, Pn, LL, and PL, explaining 14.7%~20.66% of the phenotypic variation. Cluster q.C7-2 was associated with Gs, Ci, Tr, and LW, explaining 15.83%~22.26% of the phenotypic variation . These QTLs, detected across multiple traits, are considered relatively stable and thus present valuable targets for improving photosynthetic efficiency in rapeseed. To identify beneficial alleles enhancing plant productivity in rapeseed, we compared the phenotypic differences between the alleles at these 21 QTL clusters . The germplasms were divided into two subgroups based on their allele profiles, and the phenotypic values of these subgroups were analyzed. The results showed that rapeseed lines with the CC genotype at the locus A04_15190622 demonstrated a significantly higher net photosynthetic rate (Pn) compared with those with the AA genotype . Moreover, the phenotypic values of intercellular carbon dioxide concentration (Ci), stomatal conductance (Gs), transpiration rate (Tr), and leaf width (LW) of rapeseed plants with the CC genotype were also significantly higher than those with the AA genotype , indicating that the locus A04_15190622 is a key QTN regulating photosynthesis in rapeseed. The rapeseed plants with the AA genotype at the locus A09_325988 exhibited significantly higher phenotypic values of Ci, Gs, Tr, and LW compared with those with the GG genotype , indicating that the locus A09_325988 is also a key QTN influencing the photosynthesis process in rapeseed. The plants with the GG genotype at the locus A04_9347569 demonstrated a significantly higher Chl content compared with those with the AA genotype , implying that A04_9347569 is a crucial QTN regulating the Chl content in rapeseed. The plants with the AA genotype at the locus A09_29345053 displayed a significantly larger leaf area (LA) and LW compared with those with the CC genotype . Moreover, the phenotypic values of Tr, Chl, LL, and PL of the plants with the AA genotype were also significantly higher than those with the CC genotype , suggesting that the locus A09_29345053 is a key QTN regulating leaf morphology in rapeseed. The plants carrying the AA genotype at the locus A05_18835949 exhibited a significantly larger petiole angle (PA) compared with those with the GG genotype , indicating that the locus A05_18835949 is a crucial QTN regulating the petiole angle of rapeseed. A total of 3,129 potential candidate genes were identified from the 21 stable QTL clusters described above ( Table 2 ). Among them, 1,315 candidate genes were found in the QTL clusters for PL, followed by 1,293, 849, 749, 738, 705, 642, 600, 427, and 407 candidate genes for the traits LW, Chl, Gs, Ci, Tr, LL, PA, Pn, and LA, respectively . A total of 258 candidate genes were shared by Pn, LL, Tr, Gs, and PL; 426 candidate genes by PA, Chl, LW, and PL; 209 candidate genes by Ci, Tr, Gs, and LW; 308 candidate genes by LA, LW, and PL; 169 candidate genes by Pn, Tr, and Gs; 131 candidate genes by Ci, LW, and PL; 69 candidate genes by Ci, Tr, and Gs; 116 candidate genes by LL and PL; 99 candidate genes by LA and LW; 76 candidate genes by LW and PL; and 44 candidate genes by Gs and LW . These findings indicate that the candidate genes associated with different traits overlapped, highlighting the reliability of these candidate genes. 
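Returning to the rule used to group the 538 QTNs into clusters (same chromosome, < 1 Mb apart, LD r² > 0.2), a small R sketch of that grouping logic is shown below. It assumes the QTNs are pre-sorted by chromosome and position and that pairwise r² with the preceding QTN is already available; the function and argument names are illustrative, not the study's code.

```r
# Group significant QTNs into QTL clusters: consecutive QTNs on the same
# chromosome, < 1 Mb apart and in LD (r^2 > 0.2), share a cluster.
# Assumes input sorted by chromosome and position; r2_with_prev[i] is the
# LD between QTN i and QTN i - 1 (NA for the first QTN).
cluster_qtns <- function(chrom, pos, r2_with_prev) {
  cl <- integer(length(pos))
  cl[1] <- 1
  if (length(pos) > 1) for (i in 2:length(pos)) {
    same <- chrom[i] == chrom[i - 1] &&
            (pos[i] - pos[i - 1]) < 1e6 &&
            isTRUE(r2_with_prev[i] > 0.2)
    cl[i] <- if (same) cl[i - 1] else cl[i - 1] + 1
  }
  cl
}

# Toy example: the first two QTNs are close and linked (one cluster),
# the third is > 1 Mb away (new cluster) -> returns 1 1 2
cluster_qtns(chrom = c("A04", "A04", "A04"),
             pos   = c(15190622, 15400000, 29000000),
             r2_with_prev = c(NA, 0.60, 0.80))
```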
The candidate genes associated with photosynthesis, Chl, LA, and PA were subjected to GO functional enrichment analysis. The results showed that the genes associated with photosynthesis were mainly enriched in GO terms related to threonine-type endopeptidase activity, threonine-type peptidase activity, electron transfer activity, iron ion binding, and other molecular function categories . The candidate genes related to Chl were mainly enriched in GO terms related to xenobiotic transmembrane transporter activity, threonine-type endopeptidase activity, threonine-type peptidase activity, and transferase activity . The candidate genes related to LA were mainly enriched in alpha-(1,2)-fucosyltransferase activity, galactoside 2-alpha-L-fucosyltransferase activity, and fucosyltransferase activity . The candidate genes related to PA were mainly enriched in xenobiotic transmembrane transporter activity, active transmembrane transporter activity, ribonucleotide binding, and carbohydrate derivative binding . Among the candidate genes associated with photosynthesis, two candidate genes, BnaA07g21550D ( NADP-ME4 , NADP-malic enzyme 4 ) and BnaC07g24400D ( CYP709B3 ), were found in the stable QTL clusters q.A7-2 and q.C7-2 , respectively ( Table 3 ). These two candidate genes exhibited high expression levels in leaves and silique walls, and thus might play a critical role in regulating plant growth and photosynthesis . Among the candidate genes associated with Chl content, two genes, BnaA09g15700D ( PAF1 , proteasome alpha subunit F1 ) and BnaC01g32150D ( MKK5 , MAP kinase kinase 5 ), were located in the stable QTL clusters q.A9-2 and q.C1-6 , respectively ( Table 3 ). The expression of BnaA09g15700D was relatively high in all the tested tissues, while the expression of BnaC01g32150D was significantly higher in leaves than in other tissues, indicating that these two genes might be key candidate genes regulating chlorophyll synthesis . Among the candidate genes related to LA, two genes [BnaA09g42000D ( NAPRT2 , nicotinate phosphoribosyltransferase 2 ) and BnaA09g45940D ( WRKY4 , WRKY DNA-binding protein 4 )] were located in the stable QTL clusters q.A9-3 and q.A9-4 , respectively ( Table 3 ); these exhibited higher expression levels in leaves than in other tissues, suggesting that they might be key genes regulating leaf size . Among the candidate genes related to PA, two genes, BnaA03g38230D ( CRCK3 , calmodulin-binding receptor-like cytoplasmic kinase 3 ) and BnaA05g25510D ( EIF4A2 , eukaryotic translation initiation factor 4A2 ), were located in the stable QTL clusters q.A3-8 and q.A5-4 , respectively ( Table 3 ). These genes were highly expressed in leaves and stems, indicating their potential roles in regulating petiole angle . Photosynthesis is fundamental to plant growth and development, playing a pivotal role in determining crop yields. Therefore, enhancing photosynthetic efficiency is recognized as an effective strategy to increase crop productivity . Despite its importance, studies investigating the impact of natural genetic variation on photosynthetic efficiency in rapeseed remain limited. In this study, we investigated the gas-exchange parameters of the dark reactions of photosynthesis, chlorophyll content, and leaf morphology traits in a natural rapeseed population, and identified key loci and genes regulating CO 2 fixation and associated mechanisms in rapeseed by GWAS. 
Our correlation analysis revealed that photosynthesis and chlorophyll content exhibited a significant positive correlation, consistent with previous reports . Interestingly, in barley, mutants with low leaf Chl content have been found to exhibit an increased net photosynthetic rate under the same light, water, and fertilizer conditions . Additionally, we observed a novel significant negative correlation between photosynthesis and leaf area, supporting the hypothesis that lobed leaves may possess a higher photosynthetic capacity. These correlations indicate that optimal light capture and energy conversion might require an intricate balance to maintain efficient photosynthesis within a limited leaf spatial structure. Our results suggest that other factors, such as petiole angle (which determines leaf orientation) and petiole length, may play a critical role in photosynthetic efficiency. Although previous studies have explored the genetic factors affecting the chlorophyll content in rapeseed , limited attention has been given to the genetic mechanisms underlying photosynthesis and leaf morphology traits. In this study, 9 QTL clusters were identified as being associated with photosynthesis, and these were co-detected by multiple traits. Additionally, 11 QTL clusters were identified as being associated with leaf morphology, of which 4 QTL clusters ( q.A3-5 , q.A7-4 , q.C4-2 , and q.C7-2 ) were co-detected by both photosynthesis traits and leaf morphology traits ( Table 2 ). Furthermore, many of the candidate genes identified were associated with both photosynthetic and leaf morphology traits, underscoring the complexity of the genetic architecture governing these traits and the potential pleiotropic effects of these candidate genes. Our findings contribute to the understanding of the genetic control of photosynthesis and leaf morphology in rapeseed. Specifically, 60 QTNs were identified as being associated with chlorophyll content and mapped to 3 QTL clusters located on chromosomes A4, A9, and C1, respectively. These loci differ from those previously reported for chlorophyll content, which were primarily located on chromosomes A01, A02, and A03 . This indicates that the QTL clusters identified in our study represent novel loci, thereby expanding the known genetic landscape related to chlorophyll content in rapeseed. The GO enrichment analysis of the candidate genes related to photosynthesis and leaf morphology provides a deeper understanding of their molecular functions and biological processes. Threonine-type endopeptidases may contribute to maintaining photosynthetic efficiency by participating in the turnover of photosynthetic proteins within chloroplasts, facilitating the degradation and recycling of damaged or unnecessary proteins. Our data showed that the genes related to photosynthesis and chlorophyll content were enriched in the threonine-type endopeptidase activity pathway . Similarly, candidate genes associated with LA were enriched in the fucosyltransferase activity pathway, which plays an important role in the formation of plant cell walls , leaf growth, and plant development. The identification of candidate genes in stable QTL clusters related to photosynthesis traits in rapeseed enhances our understanding of the genetic architecture of these complex phenotypes. The NADP-ME4 gene is involved in the malate-aspartate shuttle, which helps balance cellular redox states and supplies carbon skeletons for the Calvin cycle . 
The NADP-ME subtype constitutes an effective C 4 photosynthesis pathway, as it modulates PEPCK activity to optimize light capture, maintaining a high photosynthetic rate under varying light conditions . Although NADP-ME is primarily involved in malate metabolism, its direct role in the C 3 photosynthetic pathway is less clear. Nevertheless, the metabolic network within plant cells is highly complex and interconnected, and NADP-ME may indirectly influence the C 3 photosynthetic pathway by affecting intracellular malate levels and the availability of NADPH. CYP709B3 , encoding a cytochrome P450 monooxygenase, affects photosynthetic efficiency by modulating electron transport in photosystem II . Our GWAS and gene expression analyses identified two genes, BnaA07g21550D ( NADP-ME4 ) and BnaC07g24400D ( CYP709B3 ), as key candidate genes involved in photosynthesis and plant growth, particularly in leaves and silique walls. PAF1 (proteasome alpha subunit F1) is a component of the 26S proteasome, which plays a crucial role in the regulation of protein turnover within cells. The 26S proteasome is involved in various cellular processes, such as the degradation of damaged and misfolded proteins . MKK5 (MAP kinase kinase 5) is an enzyme in the mitogen-activated protein kinase (MAPK) signaling pathway, playing a crucial role in regulating various cellular processes such as stress responses, cell growth, differentiation, and apoptosis . In addition, our study found that these two genes, BnaA09g15700D ( PAF1 ) and BnaC01g32150D ( MKK5 ), are located in the QTL region significantly associated with Chl content. These two genes exhibit higher expression in leaves compared to other tissues, indicating their roles as key regulators of chlorophyll synthesis. WRKY4 , a transcription factor, is involved in the regulation of plant growth and development and in the response to biotic and abiotic stresses . In this study, BnaA09g45940D ( WRKY4 ) and BnaA09g42000D ( NAPRT2 ) were identified within stable QTL clusters ( q.A9-3 and q.A9-4 ) associated with leaf area, suggesting that these two genes are key regulators of leaf size and development. In summary, this study identified 21 stable QTL clusters and 10 key candidate genes through GWAS and elucidated their roles in regulating photosynthesis and leaf morphology in rapeseed. These findings deepen our understanding of the genetic architecture underlying photosynthesis and leaf development and provide valuable insights that could facilitate the breeding of high-yield, high-quality rapeseed varieties with enhanced photosynthetic efficiency. This research lays a strong foundation for future genetic improvement efforts in rapeseed cultivation.
|
Other
|
other
|
en
| 0.999997 |
PMC11695188
|
This Research Topic, titled “Nutrient density: evidence of multisectoral approaches for improved nutrition,” addresses some of the above-mentioned challenges and introduces potential integrated solutions for enhancing nutrient density while being mindful of sustainability priorities. Among others, the following topics are discussed: (i) the persistence of low dietary diversity in both LMICs and HICs; (ii) the potential of commonly consumed foods to positively impact diet quality across countries and regions, but only if consumed in adequate quantities and as part of broader healthy diets; (iii) new and existing methodologies for analyzing the nutrient content of foods and ranking them based on nutritional value and environmental impacts; and (iv) strategies to combat food shortages, improve access to safe and nutritious foods, and promote nutrient adequacy, through sustainable production and supply of healthy foods that minimize environmental harm.
|
Review
|
biomedical
|
en
| 0.999997 |
PMC11695189
|
Pancreatic cancer, a highly malignant digestive system cancer, poses a serious threat to human health. It ranks twelfth worldwide in newly diagnosed cases among all malignant tumors but sixth in mortality. In China, an estimated 118,700 new cases were confirmed and 106,300 deaths occurred in 2022. Pancreatic cancer typically has an insidious onset and progresses rapidly, so most patients are diagnosed at an advanced stage. Less than 20% of patients are candidates for surgical intervention, and the 5-year survival rate for patients with metastatic pancreatic cancer is a mere 3%. The dense stromal matrix surrounding pancreatic cancer cells hinders the delivery of therapeutic drugs and immune cells to the target site, diminishing drug effectiveness. Targeted therapy and immunotherapy, which have made significant strides in the treatment of other tumor types, have demonstrated poor efficacy in pancreatic cancer. For patients with metastatic pancreatic cancer, gemcitabine monotherapy was shown to be more effective than fluorouracil. Therefore, over the past two decades, clinicians have endeavored to identify drugs or combination regimens capable of surpassing the effectiveness of gemcitabine. In this context, the combination of leucovorin, fluorouracil, irinotecan, and oxaliplatin (FOLFIRINOX) and the combination of gemcitabine and nab-paclitaxel (GEMNABP) have demonstrated favorable outcomes in clinical trials, establishing them as the gold-standard first-line chemotherapy regimens. Recently, the phase III NAPOLI-3 clinical trial upgraded FOLFIRINOX to NALIRIFOX, which comprises liposomal irinotecan, oxaliplatin, leucovorin, and fluorouracil. The findings indicated that both progression-free survival (PFS) and overall survival (OS) with NALIRIFOX were superior to those with GEMNABP. Based on these positive results, NALIRIFOX has been recommended as a new first-line treatment for metastatic pancreatic cancer in the 2023 edition of the National Comprehensive Cancer Network (NCCN) guidelines and the 2024 edition of the Chinese Society of Clinical Oncology (CSCO) guidelines. Only NALIRIFOX and GEMNABP were compared head-to-head in the NAPOLI-3 trial, and to date, neither of these two regimens has been directly compared with FOLFIRINOX. Recently, Nichetti F et al. conducted a network meta-analysis (NMA) comparing these three treatments in terms of PFS, OS, and toxicity. The results revealed that the median PFS for NALIRIFOX and FOLFIRINOX was 7.4 months and 7.3 months, respectively, showing no significant difference; in contrast, GEMNABP demonstrated a significantly poorer median PFS of 5.7 months. Similarly, the median OS for NALIRIFOX and FOLFIRINOX was 11.7 months and 11.1 months, respectively, whereas GEMNABP displayed a poorer OS of 10.4 months. In comparison with the other two treatments, NALIRIFOX was linked to a lower occurrence of grade 3 or higher hematological toxicity; however, it exhibited a higher risk of severe diarrhea than GEMNABP. Clinical data serve as a critical basis for the evidence-based use of drugs in clinical practice. However, to the best of our knowledge, there is a dearth of pharmacoeconomic evaluations of these three treatments, which are also essential for promoting rational drug use.
Although NALIRIFOX has demonstrated favorable clinical outcomes, its use involves a relatively expensive drug, liposomal irinotecan, which encapsulates irinotecan in pegylated liposomal particles to achieve improved pharmacokinetics and has been approved for metastatic pancreatic cancer. In fact, among the drugs used in these three treatments, only liposomal irinotecan has not been included in the Chinese Basic Medical Insurance Drug List. Therefore, drawing on the NMA, this study conducted a cost-effectiveness assessment of these three regimens from the perspective of the Chinese healthcare system, together with a price simulation of liposomal irinotecan to inform pricing strategy. It aimed to provide decision-makers with valuable references for optimizing the allocation of healthcare resources and to offer doctors important economic evidence for selecting appropriate chemotherapy regimens for their patients. This cost-effectiveness analysis was conducted according to the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) and from the perspective of the Chinese healthcare system ( Supplementary Table S1 ). Total costs, life-years (LYs), quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios (ICERs), net monetary benefits (NMBs), and incremental net monetary benefits (INMBs) were the main outputs. According to the 2020 version of the China Guidelines for Pharmacoeconomic Evaluations (CGPE), the cost-effectiveness of a treatment was determined either by comparing the ICER with the willingness-to-pay (WTP) threshold of $38,223.34, three times the gross domestic product (GDP) per capita of China in 2022, or by evaluating the INMB, where INMB = WTP × (E2 − E1) − (C2 − C1) and an INMB > 0 indicates that the treatment is cost-effective. This model was based on the NMA, which involved the reconstruction of individual patient data from 7 phase III clinical trials. Aligned with the NMA, the hypothetical cohort in this study comprised patients with metastatic pancreatic ductal adenocarcinoma who received NALIRIFOX, FOLFIRINOX, or GEMNABP as their first-line treatment planned at standard dose intensity. Prior adjuvant treatment was allowed. Patients were adults aged 18 years or older with an Eastern Cooperative Oncology Group performance status (ECOG PS) score of 0 or 1; 44% of patients were male. Since this study was based on previously published data and did not involve patient recruitment or a retrospective analysis of primary patient data, ethical approval was not required. All medications in this model were administered intravenously. Specific dosing regimens and doses were collected from the corresponding clinical trials and are detailed in Table 1 . When determining dosages, an average body surface area of 1.72 m² was used. We employed TreeAge Pro 2019 software to construct a partitioned survival model comprising three distinct health states: PFS, progressed disease (PD), and death. The model structure is shown in Figure 1 . The partitioned survival model calculates the proportion of patients in each health state from the areas under the OS and PFS curves, an approach that yields results closely approximating the observed data. Given the highly aggressive nature of metastatic pancreatic cancer, the time horizon for the model was set at 5 years, within which 99% of the patients in the model had died. The cycle length was set at 28 days to align with the dosing regimen.
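To illustrate the partitioned survival logic and the INMB decision rule just described, here is a minimal Python sketch. The Weibull survival functions, utilities, costs, and incremental values are all hypothetical placeholders (the study itself used TreeAge Pro); only the 28-day cycle, 5-year horizon, 5% discount rate, and WTP threshold are taken from the text.

```python
# Minimal sketch of a 3-state partitioned survival model (PFS / PD / death).
# All curve parameters, utilities, and costs below are illustrative only.
import numpy as np

def weibull_surv(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t/scale)^shape)."""
    return np.exp(-(t / scale) ** shape)

cycle_days, horizon_days = 28, 5 * 365
t = np.arange(0, horizon_days + 1, cycle_days) / 365.0   # time grid in years

os_curve  = weibull_surv(t, shape=1.3, scale=1.0)   # hypothetical OS
pfs_curve = weibull_surv(t, shape=1.4, scale=0.6)   # hypothetical PFS
pfs_curve = np.minimum(pfs_curve, os_curve)         # PFS cannot exceed OS

in_pfs = pfs_curve              # proportion progression-free
in_pd  = os_curve - pfs_curve   # proportion alive but progressed

# QALYs: utility-weighted state occupancy, discounted at 5%/year,
# accumulated with a simple rectangle-rule approximation per cycle.
disc = 1.05 ** (-t)
u_pfs, u_pd = 0.75, 0.55        # illustrative utilities
dt = cycle_days / 365.0
qalys = np.sum((in_pfs * u_pfs + in_pd * u_pd) * disc) * dt

wtp = 38223.34                  # 3x China 2022 GDP per capita (USD)

def inmb(e2, e1, c2, c1):
    """INMB = WTP*(E2-E1) - (C2-C1); > 0 means cost-effective."""
    return wtp * (e2 - e1) - (c2 - c1)

# Hypothetical comparator QALYs/costs, purely to exercise the rule:
print(f"QALYs = {qalys:.3f}, INMB = {inmb(qalys, 0.90, 60000.0, 40000.0):,.2f}")
```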
Survival data used in this model were derived from the reconstructed and validated survival curves in the NMA. WebPlotDigitizer was used to extract PFS and OS data points from the Kaplan-Meier curves, followed by curve reconstruction using R software (version 4.3.2). As shown in Supplementary Figure S1 and Supplementary Table S2 , the median PFS and median OS values of the reconstructed curves were very close to the original ones. Based on Guyot et al.'s algorithm, we extrapolated the survival curves, fitting the survival data with exponential, Weibull, log-normal, log-logistic, Gompertz, and generalized gamma parametric survival functions. These standard parametric models are the most commonly used approach in existing studies, despite their potential limitations in capturing survival-curve inflection points compared with more flexible parametric models. However, given that the traditional chemotherapy drugs involved in this study were unlikely to produce plateaus or other complex survival-curve shapes, and because the survival curves were relatively mature, the use of standard parametric models was considered appropriate. Goodness of fit was assessed using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). We ultimately fitted the OS curve of FOLFIRINOX with a Weibull distribution and the remaining survival curves with a generalized gamma distribution, based on the lowest AIC and BIC values and consistency with visual inspection. This approach ensured that the chosen distributions provided the optimal fit to the original curves. Key parameters of the optimal distributions of the survival curves are detailed in Table 2 . Following the 2020 edition of the CGPE, both costs and utility values were discounted at an annual rate of 5% in this model. Only direct medical costs were taken into account. The costs of biochemical examinations, routine blood tests, and radiological examinations, as well as expenses related to best supportive care and terminal care, were derived from a cost-effectiveness analysis for metastatic pancreatic cancer conducted by Na L et al. in China. Drug prices were sourced from the database of the Hunan Province Drug and Medical Consumables Procurement Management Subsystem ( https://healthcare.hnybj.com.cn/ ), which reflects prices in public hospitals in China. Irinotecan, oxaliplatin, gemcitabine, and nab-paclitaxel were procured through centralized drug procurement, and their prices reflected the winning bid prices. Patients were assumed to receive subsequent treatment after disease progression. GEMNABP and FOLFOX (oxaliplatin, leucovorin, fluorouracil) were found to be the most commonly used second-line regimens. Specifically, GEMNABP was assumed to be the second-line treatment for patients previously treated with NALIRIFOX or FOLFIRINOX, while FOLFOX was employed for patients who had received GEMNABP as first-line treatment. Detailed dosing regimens are provided in Table 1 . The cost related to adverse events (AEs) was gathered from a cost-effectiveness analysis for metastatic pancreatic cancer based on a retrospective cohort study in China. The AE incidences for each treatment were derived from the NMA. Considering the relatively minimal costs and negative effects associated with grade 1–2 AEs, this model only incorporated severe AEs of grade 3 or greater. All AEs were assumed to occur in the initial cycle of the model.
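As a concrete illustration of the curve-fitting step described above, the following minimal sketch fits a few of the candidate parametric models to hypothetical reconstructed patient-level data and ranks them by AIC/BIC. It uses the Python lifelines package rather than the authors' R pipeline, and the duration and event arrays are invented placeholders.

```python
# Sketch: rank candidate parametric survival models by AIC/BIC on
# reconstructed patient-level data. Durations/events are placeholders.
import numpy as np
from lifelines import WeibullFitter, LogNormalFitter, LogLogisticFitter

durations = np.array([3.1, 5.4, 7.2, 9.8, 12.5, 14.0, 16.3, 20.1])  # months
events    = np.array([1,   1,   1,   1,   1,    0,    1,    0])     # 1 = event

fitters = {"weibull": WeibullFitter(),
           "log-normal": LogNormalFitter(),
           "log-logistic": LogLogisticFitter()}
n = len(durations)
for name, f in fitters.items():
    f.fit(durations, event_observed=events)
    # Each of these models has 2 parameters, so BIC = 2*ln(n) - 2*logL.
    bic = 2 * np.log(n) - 2 * f.log_likelihood_
    print(f"{name}: AIC = {f.AIC_:.2f}, BIC = {bic:.2f}")
```

In practice the lowest AIC/BIC would be checked against a visual overlay of the fitted curve on the reconstructed Kaplan-Meier curve, as the authors describe.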
Costs were adjusted to 2022 levels using the Consumer Price Index and subsequently converted to U.S. dollars at an exchange rate of 1 USD to 6.7261 CNY. The utility values of PFS and PD, as well as the disutility values associated with AEs, were obtained from previously published literature, as indicated in Table 2 . One-way and probabilistic sensitivity analyses (PSA) were conducted to assess the robustness of the model. In the one-way sensitivity analysis, ranges of parameter values were based on published sources or set at ±20% of the base-case value, with the INMB serving as the measure of economic efficiency. Second-order Monte Carlo simulation was used to perform the PSA by assigning appropriate distributions to each parameter and sampling them simultaneously for 1,000 iterations. A gamma distribution was selected for costs and body surface area, and a beta distribution was used for utility parameters and probabilities, as shown in Table 2 . According to the opinions of clinical experts, and consistent with the CSCO guidelines, a considerable number of patients with advanced pancreatic cancer receive only best supportive care after first-line treatment because of poor physical condition. Therefore, we conducted a scenario analysis (scenario 1) assuming that patients would not receive second-line treatment after disease progression. To reduce the toxicity of combination chemotherapy, modified versions of FOLFIRINOX (mFOLFIRINOX), which reduce the irinotecan dosage and omit the fluorouracil bolus, are frequently employed in clinical practice; similarly, the adjusted GEMNABP regimen reduces the frequency of drug administration. To assess the impact of these commonly used modified or adjusted chemotherapy regimens on the model's robustness, a scenario 2 analysis was conducted. Following the recommendations of the CSCO guidelines, the adjusted dosing schemes are presented in Supplementary Table S4 . To assess the potential cost-effectiveness of NALIRIFOX under current market conditions, we set the price of liposomal irinotecan in scenario 3 by referencing the cost of the generic drug, $22.82/mg. Additionally, the patient assistance program (PAP) was taken into consideration, despite its limitations in suitability for all patients and the challenges of ensuring consistent accessibility; discontinuing the PAP would diminish the cost-effectiveness of the regimen. The PAP for generic liposomal irinotecan enables eligible patients to receive one free dose (43 mg) of medication after purchasing one dose at their own expense. Subsequently, upon purchasing 16 doses at their own expense, patients can continue to apply for free assistance until disease progression. To investigate the impact of liposomal irinotecan pricing on the cost-effectiveness of NALIRIFOX, we analyzed the variation in the ICER of NALIRIFOX versus FOLFIRINOX or GEMNABP as the price of liposomal irinotecan gradually decreased. This analysis was carried out in both the base-case scenario and scenario 3. Additionally, a series of cost-effectiveness acceptability curves (CEACs) were developed for each treatment at various simulated prices of liposomal irinotecan. The cost-effectiveness analysis revealed that NALIRIFOX yielded the highest LYs and QALYs, as well as the highest cost; in contrast, GEMNABP was linked to the lowest LYs, QALYs, and cost.
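Before turning to the detailed results, the PSA sampling described above can be made concrete with a short sketch of second-order Monte Carlo parameter draws using the stated distribution choices (gamma for costs and body surface area, beta for utilities). All means and standard errors below are placeholders, not the study's inputs.

```python
# Sketch of probabilistic sensitivity analysis sampling (1,000 iterations).
# Parameter means/SEs are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(seed=42)
n_iter = 1000

def gamma_params(mean, se):
    """Method-of-moments gamma: shape = (mean/se)^2, scale = mean/shape."""
    shape = (mean / se) ** 2
    return shape, mean / shape

def beta_params(mean, se):
    """Method-of-moments beta: a = m*(m(1-m)/v - 1), b = a*(1-m)/m."""
    a = mean * (mean * (1 - mean) / se**2 - 1)
    return a, a * (1 - mean) / mean

shape, scale = gamma_params(mean=5000.0, se=1000.0)   # e.g., a cycle cost
a, b = beta_params(mean=0.75, se=0.05)                # e.g., PFS utility

cost_draws    = rng.gamma(shape, scale, n_iter)
utility_draws = rng.beta(a, b, n_iter)
# Each iteration would re-run the partitioned survival model with these
# draws and record incremental costs and QALYs to build the CEACs.
print(cost_draws.mean(), utility_draws.mean())
```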
While FOLFIRINOX produced QALYs close to those of GEMNABP, its relatively higher cost resulted in an ICER of $193,629.96/QALY, far above the WTP threshold of $38,223.34/QALY; its INMB of −$2,082.02 likewise indicated that it was not cost-effective. Similarly, NALIRIFOX was deemed not cost-effective owing to its high cost relative to GEMNABP, as evidenced by an ICER well above the WTP threshold and an INMB of −$37,086.96. Overall, GEMNABP was considered the optimal choice. Based on the NMB values, the cost-effectiveness ranking was GEMNABP > FOLFIRINOX > NALIRIFOX. Details are shown in Table 3 . The tornado diagrams from the one-way sensitivity analysis indicated that the results were stable: both FOLFIRINOX and NALIRIFOX lacked cost-effectiveness compared with GEMNABP when the parameters varied within the defined ranges, as shown in Figure 2 . When comparing FOLFIRINOX with GEMNABP, the model results were primarily influenced by the PFS utility value and by the risks of AEs including fatigue and decreased platelet count, whereas in the comparison of NALIRIFOX with GEMNABP, body surface area and the price of liposomal irinotecan had the greatest impact. CEACs from the PSA are shown in Figure 3 ; as the WTP threshold increased, the probability that GEMNABP was the more cost-effective option gradually decreased. Incremental cost-effectiveness scatter plots can be found in Supplementary Figure S3 . The results of the scenario analyses, presented in Table 3 , aligned with the findings of the base-case analysis. In scenario 3, despite the lower price and more favorable PAP for generic liposomal irinotecan, NALIRIFOX was still not deemed cost-effective in comparison with GEMNABP or FOLFIRINOX. Figure 4 illustrates the outcomes of the price simulation. As the price of liposomal irinotecan decreased, the ICER of NALIRIFOX versus GEMNABP or FOLFIRINOX declined gradually. In the base-case analysis, when the price of liposomal irinotecan decreased by more than 86.5% (to less than $3.36/mg), NALIRIFOX became more cost-effective than FOLFIRINOX; when the reduction surpassed 91.6% (less than $2.08/mg), NALIRIFOX became more cost-effective than GEMNABP. In scenario 3, taking the PAP into account, NALIRIFOX became cost-effective compared with FOLFIRINOX and GEMNABP when the price reduction of generic liposomal irinotecan exceeded 51.8% (less than $11.00/mg) and 70.2% (less than $6.81/mg), respectively. The CEACs at four specific prices are presented in Figure 5 . For CEACs under a sequence of price reductions (0%, 50%, 75%, 85%, 95%) in the base-case analysis, comparing NALIRIFOX with FOLFIRINOX or GEMNABP separately, see Supplementary Figure S4 . When head-to-head clinical trial data are unavailable, indirect comparison and NMA are commonly used methods in health technology assessment. Based on the previously published NMA, this study performed a cost-effectiveness analysis of three first-line combination chemotherapy regimens for metastatic pancreatic cancer. GEMNABP was associated with the lowest LYs, QALYs, and cost, whereas NALIRIFOX yielded the highest LYs, QALYs, and cost. Both FOLFIRINOX and NALIRIFOX generated ICERs above the WTP threshold when compared with GEMNABP. Therefore, GEMNABP was considered the optimal choice, in line with the INMB results.
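The price simulation summarized above is, in effect, a one-dimensional sweep over the liposomal irinotecan price until the ICER crosses the WTP threshold. A minimal sketch of that logic follows; the linear drug-cost function and incremental QALYs stand in for re-running the full model, and every number is illustrative rather than taken from the study.

```python
# Sketch of the price-threshold sweep for liposomal irinotecan.
# cost_nalirifox(price) is a stand-in for re-running the full model.
wtp = 38223.34                 # WTP threshold (USD/QALY)
delta_qaly = 0.05              # illustrative incremental QALYs

def cost_nalirifox(price_per_mg):
    # Hypothetical: fixed non-drug costs plus a price-driven drug cost.
    return 30000.0 + 1200.0 * price_per_mg

comparator_cost = 25000.0      # illustrative comparator total cost
for price in [24.0, 12.0, 6.0, 3.0, 1.5]:
    icer = (cost_nalirifox(price) - comparator_cost) / delta_qaly
    flag = "cost-effective" if icer <= wtp else "not cost-effective"
    print(f"${price:.2f}/mg -> ICER ${icer:,.0f}/QALY ({flag})")
```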
However, as NALIRIFOX and FOLFIRINOX could gain more QALYs and LYs, a comprehensive evaluation of the patient's circumstances should be conducted when deciding whether to prioritize cost-effectiveness or clinical benefit. It should also be noted that tolerance of AEs is a crucial factor influencing chemotherapy outcomes. For instance, caution should be exercised when selecting NALIRIFOX or FOLFIRINOX for patients at high risk of diarrhea, whereas NALIRIFOX poses a relatively lower risk of hematologic toxicity. Although our economic evaluation model incorporated the costs and disutility values of severe AEs, clinicians are strongly encouraged to weigh the risk of AEs fully when choosing treatment based on individual patient circumstances. Furthermore, upholding patient autonomy in medical ethics and providing comprehensive information about the potential benefits, drawbacks, and costs of treatment is essential. Several investigations have reported cost-effectiveness analyses of FOLFIRINOX versus GEMNABP. Our findings were consistent with those of two studies carried out in China. On the contrary, studies conducted in Canada and the United Kingdom indicated that GEMNABP yielded lower QALYs and higher costs, making it an inferior option compared with FOLFIRINOX. The centralized drug procurement policy implemented in China may account for this difference, as it has led to a notable reduction in the prices of certain drugs and an enhancement in drug affordability. For instance, the price of the previously expensive nab-paclitaxel in GEMNABP dropped from $401.05 in 2019 to $22.24 in 2022 after two rounds of policy-facilitated price reductions, a decrease of 94.45%. Undoubtedly, this has had a substantial impact on the cost-effectiveness of GEMNABP. Attention should therefore be paid to the fact that drugs in developed countries are often more expensive and that the WTP thresholds used in pharmacoeconomic evaluations there are also higher. For example, the price of liposomal irinotecan in the United States is $62.93/mg, significantly higher than the price in China. The sensitivity and scenario analyses confirmed the robustness of these results. The cost-effectiveness of NALIRIFOX was limited by its high cost. According to the tornado diagram, body surface area and the price of liposomal irinotecan had the most substantial impact on the outcomes when comparing NALIRIFOX with GEMNABP or FOLFIRINOX, with body surface area ultimately affecting the drug cost. Therefore, attention must be paid to the dosage and price of liposomal irinotecan in clinical application, as they may have a significant impact on the cost-effectiveness of the treatment. When analyzing the cost-effectiveness of FOLFIRINOX versus GEMNABP, the impact of AEs was more pronounced. The results of the scenario 3 analysis indicated that, at present, even the lower-priced generic drug could not make NALIRIFOX cost-effective, although it reduced the ICER versus GEMNABP from $346,330.80/QALY in the base-case analysis to $104,318.74/QALY. Further price simulation of generic liposomal irinotecan showed that NALIRIFOX would be favored over FOLFIRINOX and GEMNABP at prices below $11.00/mg and $6.81/mg, respectively.
However, when determining an appropriate price for the inclusion of liposomal irinotecan in the Basic Medical Insurance Drug List, it is essential to consider that patients would then no longer be eligible for the PAP. Consequently, the drug cost should be reduced further, in line with the base-case analysis: NALIRIFOX would be cost-effective compared with FOLFIRINOX and GEMNABP at prices below $3.36/mg and $2.08/mg, respectively. Even though the required reduction is substantial, the government will have an opportunity to achieve an appropriate price through negotiation or centralized procurement once more manufacturers of generic liposomal irinotecan enter the market, as was the case with nab-paclitaxel. It is worth noting that although only one generic liposomal irinotecan is currently available on the Chinese market, generic products from three other manufacturers are already under review by the Center for Drug Evaluation of the National Medical Products Administration (NMPA). Revealing the potential prices of liposomal irinotecan that would make NALIRIFOX cost-effective relative to the other two first-line treatments for metastatic pancreatic cancer therefore offers valuable evidence for manufacturers and decision-makers, since price fluctuations for liposomal irinotecan may occur soon. This study has several limitations. First, the cost-effectiveness model was based on the NMA. The NMA incorporated phase III randomized controlled trials with globally comparable inclusion criteria, but heterogeneity among these trials might still affect the pooled results. For example, unlike the GEMNABP trials, clinical trials of FOLFIRINOX typically imposed an age limit for enrollment, and the intervals allowed between prior adjuvant therapy and the start of first-line treatment varied across studies. It is therefore necessary to validate and refine this model with well-designed head-to-head clinical data. In addition, the clinical trials included in the NMA were primarily multicenter trials conducted globally, which may not fully reflect the effectiveness and safety of these treatments in the Chinese population and could introduce bias into the model results. Second, utility values were collected from the literature and may deviate from actual data. Excluding grade 1–2 AEs when calculating costs and utilities might underestimate the real-world burden and disutility. Nevertheless, sensitivity analyses indicated that changes in the related parameters did not alter the model results. Third, the structural uncertainty of the model, introduced by the reconstruction and extrapolation of survival curves, should not be ignored. Parametric fitting and extrapolation can introduce uncertainty in capturing complex survival hazards, and researchers may struggle to determine the best model from statistical indicators alone. Validating the model against long-term real-world data in the future would therefore be highly beneficial.
Fourth, although this study aimed to reflect the clinical characteristics of Chinese patients, for example by setting the range of body surface area at ±20% of the base-case value and by considering mFOLFIRINOX and the adjusted GEMNABP regimen in the scenario analyses to account for drug tolerance, these assumptions about patient demographics might not fully represent the diverse Chinese population and could introduce bias. Fifth, further research is required to confirm the validity of these findings in areas with income inequality, as a WTP threshold based on GDP per capita may not be applicable to every region; this study also overlooked potential variations between urban and rural areas within the Chinese healthcare system, which could affect cost-effectiveness. In conclusion, the findings of this study suggest that GEMNABP is currently favored among these three first-line treatments from an economic standpoint. NALIRIFOX would become cost-effective if the price of liposomal irinotecan fell below $3.36/mg and $2.08/mg compared with FOLFIRINOX and GEMNABP, respectively, without considering the PAP. This evidence is valuable for doctors selecting appropriate treatment protocols and for decision-makers determining the price of liposomal irinotecan. Further investigation is needed to gather additional evidence regarding the budget impact.
|
Review
|
biomedical
|
en
| 0.999996 |
PMC11695191
|
Oocyte cryopreservation, or egg freezing, has emerged as a vital reproductive technology, enabling women to preserve their fertility for personal, socio-economic, or health-related reasons. Initially developed for patients facing fertility-compromising medical interventions, oocyte cryopreservation has lately become popular among healthy women seeking to postpone childbirth because of career plans, personal ambitions, or the lack of a suitable partner. This transition not only reflects broader cultural trends toward delayed parenthood but also carries societal implications, emphasizing the importance of access to fertility preservation for women. This study examines the decision-making process underlying women's choices about oocyte cryopreservation, combining the Theory of Planned Behavior (TPB) with an economic stated-preference framework. This research aims to quantify how women weigh financial constraints, success probabilities, and perceptions of infertility risk. The findings offer essential insights for policy formulation aimed at promoting equal access to fertility preservation technologies, underlining the urgency of addressing the economic and social factors that influence reproductive decision-making. Oocyte cryopreservation, also known as egg freezing, represents a significant advancement in reproductive technology, initially developed to preserve the fertility of women undergoing treatments such as chemotherapy that could jeopardize their reproductive potential ( 1 ). The advent of vitrification (“fast freezing”) and intracytoplasmic sperm injection (ICSI) techniques reignited interest in egg freezing in the early to mid-2000s ( 2 ). By 2012, authoritative bodies such as the American Society for Reproductive Medicine ( 3 , 4 ) and the European Society of Human Reproduction and Embryology ( 5 ) had removed the procedure's experimental status, recognizing its safety and efficacy. This paved the way for its broader acceptance and eventual endorsement for non-medical, or ‘social,’ egg freezing, adopted by healthy women who wish to delay childbearing for socio-economic reasons or because of the lack of a suitable partner ( 1 , 3 – 8 ). Egg freezing serves two primary purposes: Medical Egg Freezing (MEF), which preserves fertility for medical causes such as chemotherapy, and Social Egg Freezing (SEF), typically employed to delay parenthood owing to professional ambitions or the lack of a suitable partner ( 3 , 7 , 9 ). The distinctions between medical and social egg freezing significantly affect accessibility, shaping debates on whether social egg freezing should be regarded as preventative healthcare or an elective choice. These disparities influence access, funding, and broader ethical discussions around reproductive rights, autonomy, and healthcare obligations. The complex terminology associated with egg storage reflects the ongoing debate over whether a traditionally non-medical procedure has a legitimate medical justification. A key question in the field of egg freezing is how to distinguish between “medical egg freezing” (MEF) and “social egg freezing” (SEF) ( 10 ). The notion of medical necessity is a complex issue that requires diverse perspectives for a comprehensive understanding; it is interpreted differently by patients, healthcare providers, politicians, and ethicists. The debate centers on whether age-related fertility decline should be considered a medical condition.
Critics argue that egg freezing is a form of preventative medicine rather than necessary medical therapy ( 11 ). This discourse, which addresses both MEF and SEF within a unified framework, informs policy discussions that aim to foster reproductive autonomy across diverse socio-economic circumstances ( 12 – 14 ). In the past two decades, oocyte cryopreservation, particularly SEF, has become increasingly recognized as an option for women wishing to postpone childbirth for non-medical reasons. Research suggests that SEF is frequently motivated by career aspirations, financial limitations, and the lack of a suitable partner, underscoring SEF as a response to intricate socio-economic and psychological pressures rather than to physical conditions ( 3 – 5 ). This discretionary choice also touches on broader issues such as reproductive autonomy and the right of individuals to make their own decisions about their reproductive health. SEF not only ignites discussions on the medicalization of societal problems and the ethical ramifications of its use, but also empowers women to take control of their reproductive health and to make their own decisions about their genetic heritage, familial relationships, and future parenting ( 9 , 15 – 20 ). SEF enables women to preserve fertility in view of the natural, age-linked decline in reproductive potential. Research indicates that women, especially those who are highly educated, gainfully employed, and generally aged 36 to 40, engage in SEF as a proactive strategy to counteract age-related reproductive decline ( 12 – 14 ). Prevalent reasons for deferring motherhood include pursuing career stability, attaining educational objectives, achieving financial security, and searching for a suitable partner ( 12 , 13 , 21 ). Although SEF offers a proactive strategy for fertility management, its long-term outcomes remain uncertain, and apprehensions about the effectiveness of reproductive therapies as women age persist ( 12 , 22 , 23 ). Biologically, the female reproductive window is considerably more limited than that of males, with a pronounced decrease in fertility potential after age 35. The decline is primarily attributable to the diminished quantity and quality of oocytes, which decreases the probability of successful fertilization and heightens the risks of abnormal embryos and fetal loss ( 12 , 13 ). SEF offers women the opportunity to mitigate these biological limitations, aligning reproductive decisions with their personal, professional, and socio-economic aspirations for future parenting, and provides them with a sense of control over their reproductive health that acknowledges and respects their individual circumstances and decisions. The increasing interest in SEF highlights the intricate interaction of social, economic, and biological variables influencing contemporary reproductive choices. SEF is a substantial and complex alternative in modern family planning, inciting continuous discourse over reproductive rights, social norms, and the ethical and medical ramifications of addressing non-medical concerns via fertility preservation ( 2 – 4 , 12 , 13 , 19 – 22 ). Given these factors, SEF is an appealing option for women facing age-related fertility decline, which is intensified by the trend toward delayed motherhood.
Epidemiological studies indicate that as fertility declines in women's mid-thirties owing to diminished oocyte quality and quantity, the incidence of involuntary childlessness rises, with many women over the age of 45 turning to donor eggs for conception ( 24 , 25 ). The medical ramifications for both the mother and the potential child are crucial considerations in SEF. The egg cryopreservation process has two main phases: ovarian stimulation and oocyte retrieval, followed by cryopreservation. The oocytes are harvested after ovarian stimulation and then cryopreserved for long-term storage. The process itself is not without risks: ovarian stimulation, oocyte retrieval, and pregnancy each carry specific medical concerns. For instance, women undergoing ovarian stimulation for egg retrieval are at risk of Ovarian Hyperstimulation Syndrome (OHSS) ( 26 ). Moreover, if a woman opts to conceive later in life, age-related health issues may pose considerable risks. Women over 45 undergoing In Vitro Fertilization with Intracytoplasmic Sperm Injection (IVF-ICSI) treatment may encounter problems due to pre-existing chronic health conditions, heightening obstetric risks and adversely affecting delivery outcomes ( 27 ). Concerns also arise over the ethics of offering fertility-preserving technologies to healthy, fertile women, which may foster erroneous expectations about success rates and medical feasibility in later life ( 28 ). The future child's health is crucial in assessing the risks linked to SEF. Advanced maternal age is associated with increased rates of complications, including premature delivery and low birth weight, which are more common in the offspring of mothers over 40. Research demonstrates that advanced maternal age correlates with an increased likelihood of adverse newborn outcomes, such as preterm delivery and lower birth weight, which may influence long-term child health ( 29 ). This dual influence on maternal and child health underscores the need for careful evaluation, from both medical and ethical viewpoints, of the role of elective egg freezing in fertility preservation and delayed parenthood. Initially conceived as a medical procedure, egg freezing has become more popular among healthy women desiring reproductive autonomy, raising questions over its categorization, its ethical implications, and matters of public funding ( 15 , 17 , 30 ). SEF is frequently perceived as addressing socio-economic pressures rather than strictly medical issues. Critics argue that offering a medical solution for social challenges—such as career demands and gendered labor market expectations—reflects a “medicalization” of social problems, whereby individual medical procedures are applied to fundamentally non-medical issues ( 3 , 5 , 31 – 34 ). Critics propose alternative strategies, including supportive family policies, cultural transformation, and public health initiatives, that could more effectively tackle the structural factors contributing to delayed childbearing ( 34 ). The binary classification of egg freezing into ‘medical’ and ‘social’ categories oversimplifies the complex motivations behind the procedure, raising questions about its suitability for regulatory and funding purposes.
Van de Wiel ( 35 ) argues that this classification fails to capture fully the diverse reasons behind women's choice of SEF, while Pennings ( 16 ) suggests that the ‘social’ label implies that SEF is seen as a preference rather than a necessity. The merging of medical and elective procedures complicates the ethical landscape, making the distinction between these categories problematic, if not impractical. Nevertheless, the distinction between MEF and SEF continues to have a significant impact on regulatory policies and funding decisions worldwide ( 15 , 17 , 30 , 35 ), and a more nuanced understanding of SEF is needed to address the ethical complexities in reproductive health. Advocates of SEF view it as a tool that can significantly enhance women's autonomy, giving them control over their reproductive timing and helping them overcome the biological constraints that often put them at a disadvantage relative to men in terms of fertility ( 9 , 10 , 20 ). This perspective supports SEF as a legitimate form of reproductive control, arguing that it strengthens women's autonomy against cultural pressures that may restrict their reproductive choices. The discussion around SEF's classification underscores the ethical complexities associated with its use and influences policy debates and the availability of this reproductive technique in various socio-economic contexts ( 3 , 16 , 31 , 32 , 35 ). The impact of SEF on women's autonomy also bears on the broader pursuit of gender equality in reproductive health. The rapid advancement of assisted reproductive technology (ART) has sparked considerable ethical debate, a topic thoroughly examined in the literature ( 36 ). The Ethics Committee of the American Society for Reproductive Medicine has deemed planned oocyte cryopreservation ethically acceptable, highlighting its advantages for social equity and women's reproductive autonomy ( 37 ). The first successful birth in the U.S. using vitrified human oocytes was reported in 2013 ( 38 ). In France, oocyte vitrification was legalized under the French Bioethics Law of 2011, though it continues to be a subject of debate ( 27 , 39 , 40 ). Meanwhile, the National Bioethics Council in Israel has recommended oocyte cryopreservation to counteract age-related fertility decline ( 41 ), whereas in EU countries such as Austria, egg freezing for social reasons is currently prohibited and remains controversial ( 42 ). While freezing oocytes for cancer patients and others with decreased fertility is generally viewed positively from both medical and ethical perspectives, extending this option to healthy women for the reasons stated above introduces new ethical debates ( 36 , 43 ). SEF raises a multitude of ethical considerations, including commercial exploitation, the medicalization of reproduction, women's autonomy, idealized notions of the right time to become pregnant, the repercussions of egg freezing for gender disparities, and adherence to professional standards ( 13 , 44 , 45 ). Ethical assessment requires a comprehensive evaluation of the advantages, disadvantages, costs, and ramifications necessary to guarantee the continued efficacy and safety of the procedure ( 13 , 45 ). Proponents emphasize the benefits that elective egg storage provides to women and its contribution to gender equality.
Many women perceive egg freezing as a way to pause their biological clocks, safeguarding themselves against age-related fertility decline and granting them reproductive autonomy and the potential to conceive biological offspring. Additionally, freezing oocytes at an earlier age may reduce the probability of genetic abnormalities in offspring, a risk that increases with maternal age ( 44 , 46 ). Conversely, ethical objections to fertility preservation for non-medical purposes highlight the potential for cryopreserved oocytes to provide illusory optimism about future conception, thereby prompting women to postpone motherhood. Such a delay may elevate hazards related to late pregnancy for both mother and child, along with potential repercussions for the child's psychosocial development stemming from the parent's older age. The fact that many women who opt for SEF ultimately do not use their stored oocytes serves as a further critique of the practice ( 13 , 45 ). An important counseling point often raised by patients concerns the optimal timing of SEF. Historically, the typical age for egg freezing or vitrification has ranged from 35 to 38 years ( 37 , 47 ). Generally, there are two prevailing philosophies regarding SEF. On one hand, experts recommend not delaying the procedure, as older oocytes are less likely to lead to a successful pregnancy because of age-related decline and an increased likelihood of chromosomal aneuploidy. On the other, there is a caveat against using this method at a young age, when there is still a good chance that the patient may ultimately never need the preserved oocytes. The number of fertility centers worldwide that provide elective oocyte cryopreservation has increased significantly since 2012 ( 14 ). Concurrently, a growing proportion of women are choosing to delay reproduction for societal reasons. The dynamics of family planning have undergone substantial transformation alongside the shifting roles of women in recent decades, and the average age at which women give birth to their first child has risen significantly worldwide ( 46 , 48 ). Higher education, professional aspirations, financial considerations, and changes in social norms and interpersonal relationships have all contributed ( 49 , 50 ). Conversely, postponing parenthood may adversely affect women's reproductive capacity, a consequence that is frequently unavoidable rather than elective, and involuntary childlessness can be psychologically stressful ( 51 ). Female reproductive potential inevitably and irreversibly declines after the age of 37, with oocyte quantity decreasing exponentially ( 52 ). Furthermore, chromosomal integrity and oocyte quality have been observed to decline significantly beyond the age of 35. Advanced maternal age is a major risk factor for early miscarriage, with the risk rising to 51% at ages 40–44 and peaking at 93% after age 45 ( 53 ). Success rates for in vitro fertilization (IVF) are around 30% for women under 35; these rates decline markedly thereafter, with almost no chance of a live birth using a woman's own eggs after age 45. It is important to note that alternatives such as adoption or IVF with donor eggs may not be suitable for many women, especially those seeking a genetic connection to their child.
These alternatives may also encounter challenges of their own, including age-related constraints ( 54 ). The Theory of Planned Behavior (TPB) ( 55 – 57 ) provides a systematic framework for understanding intentions, which are significant predictors of behavior. Intentions concerning oocyte cryopreservation stem from a confluence of factors, including personal characteristics, emotions, intellect, values, and general attitudes; sociodemographic variables such as age, gender, familial status, educational attainment, and income; and the powerful influence of society, including culture, the political environment, and social norms ( 58 – 60 ). Behavioral beliefs represent the consequences individuals link to fertility-related choices, whereas normative beliefs relate to the perceived degree of societal endorsement of reproductive alternatives, particularly cultural expectations around cryopreservation. Control beliefs include perceptions of the circumstances that either promote or obstruct the choice to freeze oocytes. A critical aspect of the TPB is the concept of subjective norms, which refers to an individual's perception of psychological support or social pressure from their close social circle to either pursue or avoid a specific behavior, such as oocyte cryopreservation. While subjective norms may not consistently reflect actual societal perspectives, they considerably influence decision-making, with favorable subjective norms enhancing the probability that a person will opt to cryopreserve oocytes ( 58 , 61 ). Another crucial component is perceived behavioral control, which refers to an individual's evaluation of the difficulty or ease of executing the behavior. How a woman perceives the cryopreservation process—whether as within her control and accessible or as challenging and costly—affects her intention to proceed, as subjective evaluations of feasibility strongly influence intentions ( 58 , 61 ). To gain a precise understanding of attitudes toward cryopreservation, data on the various influential factors must be collected through comprehensive questionnaires that analyze attitudes, beliefs, and preferences. The TPB-aligned questionnaires in this study aim to comprehensively evaluate the many factors affecting oocyte cryopreservation intentions. These include emotional responses to potential infertility, which frequently reflect values, general attitudes, worries, risk aversion, resource constraints, and social norms. Life-stage factors, including age, education, income, religion, and family status, are assessed for their influence on decision-making. Financial costs and assessments of utility, encompassing both private and social benefits, further shape an individual's intentions regarding cryopreservation ( 62 ). Elective oocyte cryopreservation is a decision that should not be taken lightly. It requires careful consideration and is typically guided by a multidisciplinary team, including an embryologist, a fertility expert, and a psychologist or counselor, whose role is to help women make informed decisions by understanding the procedure's risks, benefits, and associated costs. This involves discussions of success rates, potential long-term health implications, statistics on offspring conceived from cryopreserved oocytes, the duration of egg storage, and the importance of signing an informed consent document ( 2 , 13 , 63 , 64 ).
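As an illustration of how TPB constructs such as attitudes, subjective norms, and perceived behavioral control are commonly operationalized quantitatively, the sketch below regresses a simulated intention score on the three constructs. The data are synthetic 1–7 questionnaire ratings and the model is a generic TPB-style regression, not this study's actual survey or analysis.

```python
# Sketch: TPB-style regression of intention on attitude, subjective norm
# (SN), and perceived behavioral control (PBC). All scores are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
attitude = rng.uniform(1, 7, n)
sn  = rng.uniform(1, 7, n)
pbc = rng.uniform(1, 7, n)
# Simulated "true" relationship plus noise, standing in for survey data.
intention = 0.5 * attitude + 0.3 * sn + 0.4 * pbc + rng.normal(0, 1, n)

# Ordinary least squares via the normal equations (design matrix with
# an intercept column).
X = np.column_stack([np.ones(n), attitude, sn, pbc])
coefs, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(dict(zip(["intercept", "attitude", "subjective_norm", "pbc"],
               coefs.round(2))))
```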
It is important to note that the utilization rate of frozen oocytes is relatively low, with studies reporting rates of 6% to 15%, a key parameter from a cost-effectiveness perspective ( 27 , 41 , 65 – 68 ). This underscores the need for careful consideration of the economic aspects of elective oocyte cryopreservation, ensuring that resources are used efficiently. Success rates tend to be higher when women opt for oocyte vitrification at a younger age, because fewer vitrified oocytes are required to achieve a live birth. Paradoxically, however, younger women are less likely to use their vitrified oocytes owing to a greater probability of finding a partner and conceiving naturally later on. Van Loendersloot et al. ( 69 ), Hirshfeld-Cytron et al. ( 70 ), Garcia-Velasco ( 71 ), Mesen et al., and Devine ( 72 , 73 ) suggest that oocyte cryopreservation would be cost-effective if at least 50–60% of women actually used their vitrified oocytes. Other scholars ( 74 – 76 ), by contrast, contend that a 50% utilization rate may be excessively optimistic. They suggest that women at higher risk of becoming prematurely sub-fertile—because of factors such as ovarian endometriosis, previous ovarian surgery, or personal circumstances that prevent them from getting pregnant—are more likely to use their vitrified oocytes; oocyte freezing is probably more economical for these women. The discourse on societal benefits is crucial when evaluating public financing for procedures such as oocyte cryopreservation, raising the question of who ought to bear the costs. ‘Elective freezing,’ ‘non-medical freezing,’ or ‘social freezing’ (as opposed to ‘medical freezing’) is currently a privilege mostly enjoyed by women who can afford the costs of ovarian stimulation medicines, medical procedures, vitrification or slow freezing, and storage fees ( 77 ). While the right to procreate is commonly recognized as a liberty right, it is not typically considered a claim right ( 78 ). This means that although women may elect to cryopreserve their oocytes, they do not have a legitimate claim on societal resources to subsidize it. However, many Western countries offer healthcare coverage for a certain number of ‘standard’ IVF cycles to ensure equal access, and several US states mandate infertility insurance coverage, suggesting that in these jurisdictions the right to reasonable healthcare extends to fertility treatments ( 79 ). The question arises: should countries with publicly funded IVF extend coverage to social freezing? If oocyte cryopreservation is an accepted method of countering infertility and fertility treatment is covered by public healthcare, should social freezing also be included in public healthcare or mandated insurance coverage, or is there a significant distinction between ‘regular’ IVF and IVF with previously stored oocytes? The challenge in this assessment is that elective oocyte freezing involves two distinct phases: first, ovarian stimulation, oocyte retrieval, cryopreservation, and storage, and later (often years afterward), the thawing and fertilization of the cryopreserved oocytes. In the first phase, women who opt for social freezing are healthy and are seeking a procedure that results in stored oocytes that may or may not be used, depending on their life circumstances; the second phase is a medical intervention. Women seeking elective oocyte cryopreservation differ from other IVF patients in a critical way: they are not infertile, which is often a prerequisite for state-funded IVF cycles in many countries.
The term ‘elective freezing’ highlights that oocyte cryopreservation by healthy women resembles other elective medical interventions, such as cosmetic surgery, which typically do not provide direct therapeutic benefits (except perhaps psychological ones). This raises the question of why society should fund what some might consider merely a convenience. However, while the distinction between medical and social interventions often guides reimbursement policies, there are many exceptions, particularly in reproductive health: elective abortion, contraception, and pregnancy care are treated as medical interventions deserving of coverage even though pregnancy is not a disease ( 80 ). Furthermore, social freezing may be conceptualized as a form of anticipatory medicine in which women reserve eggs against potential future reproductive difficulties. While this preventative strategy may not yield immediate therapeutic advantages, it holds the potential for future therapeutic benefit, which may support its inclusion in healthcare coverage. If public healthcare covers IVF cycles for women using fresh but aged oocytes, or donor oocytes, it follows that the use of their own cryopreserved oocytes should also be reimbursed. Consistent treatment would acknowledge the ethical and practical advantages of a woman using her own oocytes, frozen at a younger age, rather than donor oocytes. These considerations support the idea that reimbursement for elective cryopreservation, viewed as a unified whole with the subsequent IVF treatment, should be comparable to that for ‘regular’ IVF. However, a more nuanced policy approach may be needed, given the separate steps of the process and the possible absence of causality between the initial storage phase and the subsequent treatment phase. Options might include full coverage, or a cash or service refund for the first phase in the event that the woman returns for the second. Research conducted by the European IVF-monitoring Consortium (EIM) for the European Society of Human Reproduction and Embryology (ESHRE) and its Working Group on Oocyte Cryopreservation highlights considerable variability in the regulatory and financial frameworks governing egg freezing across forty-three countries. The data indicate that legal frameworks and financial mechanisms vary significantly, illustrating diverse national strategies for managing and promoting this technology ( 81 , 82 ). Over the past thirty years, there has been a noticeable increase in the postponement of motherhood among women of reproductive age in several Western nations. This trend is primarily attributed to a variety of factors, including improved educational and professional opportunities, caregiving responsibilities, financial challenges, the pursuit of economic security, the absence of a suitable partner, and the aspiration to establish a stable home environment; the widespread availability of contraception and the belief that one is not yet ‘ready’ for motherhood further contribute to this shift ( 83 ). Funding policies for medical and social egg freezing vary significantly. For instance, Israel, the United States, and certain European regions provide either partial or full coverage for MEF, whereas SEF is often excluded from public funding because of its elective nature. In Israel, public funding for MEF is provided through the national health insurance system, but SEF is limited to private healthcare plans.
This distinction underscores the emphasis on medically necessary applications over elective procedures ( 11 , 34 , 82 ). The funding and utilization of Assisted Reproductive Technologies (ART) in Israel are regulated under the Israeli National Health Insurance Law, which ensures the accessibility and financial support of numerous technologies, including IVF. Egg donation is an example of an ART regulated by specific legislation ( 10 ), as opposed to directives from the Israeli Ministry of Health, which govern other ARTs. The Israeli Ministry of Health issued two directives specifically dedicated to regulating oocyte vitrification. The first ( 84 ) stated that vitrification should no longer be considered experimental; the second ( 85 ) detailed the indications and conditions justifying the use of egg freezing, allowing both MEF and SEF. The Israeli National Health Insurance Law lists chemotherapy and radiation therapy as justifiable indications for funding fertility preservation methods such as egg freezing, embryo freezing, and ovarian tissue freezing (for children, adolescents, and adult women). The Israeli Ministry of Health ( 86 ) provides similar indications based on recommendations from the Israeli National Council for Gynecology, Neonatology, and Genetics. In 2011, the Ministry published additional medical conditions under which egg freezing may be performed, extending the indications for medical fertility preservation beyond cancer patients to other conditions and procedures that pose a risk to future fertility. While both MEF and SEF are regulated and performed in Israel, the funding guidelines differ. Fertility preservation is fully covered by the Israeli National Health Insurance for medical indications ( 87 ): women undergoing chemotherapy or radiation do not incur costs for fertility preservation for up to two children ( 85 ). For increased risk of early amenorrhea, funding for MEF is limited to women under the age of thirty-nine and to a maximum of four treatment cycles or twenty retrieved eggs, whichever comes first; if the woman is a carrier of Fragile X syndrome, funding extends to six cycles or forty eggs ( 88 ). Funded storage lasts until the birth of two children or until the woman reaches the age of forty-two, whichever comes first. In contrast, SEF is not covered by the Israeli National Health Basket, although one Health Maintenance Organization, “Meuhedet,” offers partial subsidization for women with supplemental medical insurance ( 89 ). The later use of frozen eggs can be funded under the public funding scheme for IVF, with every woman aged eighteen to forty-five entitled to almost unlimited funded treatment up to the birth of two living children, without conditions based on familial status or sexual orientation. In 2014, some moderate restrictions on the provision of IVF were introduced, such as reassessment after eight unsuccessful cycles ( 90 ). SEF regulations allow healthy women aged thirty to forty-one to freeze eggs, limiting the procedure to four treatment cycles or twenty retrieved eggs (whichever comes first), with implantation of fertilized eggs allowed until the age of fifty-four. Eggs can be stored for five years, with an option to extend. These differences between MEF and SEF establish a hierarchy that prioritizes MEF over SEF in funding and regulatory support.
Regarding Jewish religious tradition and practice, egg freezing has been embraced by Israel’s religious establishment, spanning various local religious factions. The PUAH Institute, established in 1980 to align ART implementation with Jewish law (halacha) ( 91 , 92 ), has strongly supported egg freezing, particularly advocating for SEF among single Orthodox women and providing support in IVF clinics. PUAH’s official stance on egg freezing suggests that SEF can aid women who began childbearing later in life but wish to establish a family ( 93 ). The Institute also extends its support to Jewish American communities, offering educational, financial, and emotional assistance for those utilizing ARTs. Consequently, rabbis across various religious communities now encourage single Orthodox women in their late thirties to freeze their eggs ( 94 ). In Judaism, where reproduction is a central tenet, innovative reproductive technologies that facilitate the growth of the Jewish population are widely accepted ( 95 – 97 ). Jewish women who opt for egg freezing are often seen as committing to the Jewish maternal imperative: the religious and social expectation for Jewish women to engage in “reproducing Jews” ( 94 ), “embodying (Jewish) culture” ( 98 ), and “birthing a mother” ( 99 ). This imperative is particularly prominent in Israel, described as the “land of imperative motherhood” ( 98 ), where the state supports Jewish women’s reproduction through numerous subsidized fertility services. Israeli women and couples may even undergo various forms of “bio-scrutiny” to ensure they create the desired type of Jewish family in terms of both physical and genealogical heritage ( 100 ). Childbearing holds a revered place in Judaism, where both ancient and modern texts view it as essential to personal and social identity and vital for the continuity of the Jewish people, giving it a collective moral significance ( 95 , 101 ). While the commandment to “be fruitful and multiply” traditionally applies to men, Jewish identity is passed matrilineally, making childbearing a significant responsibility and life goal for Jewish women. Most rabbis assert that an infant’s Jewishness depends on the mother’s religion, emphasizing the womb over genetic lineage. However, recent anthropological studies highlight the importance of genetics in contemporary Jewish reproduction, underscoring the preference for using one’s own eggs ( 95 , 98 , 99 , 102 – 108 ). In Israel, childbearing is not only critical for nation-building ( 106 , 107 ) but is also considered a primary form of women’s political participation ( 109 – 114 ), and Jewish women in Israel have more children on average than women in any other industrialized nation. Childlessness carries a significant stigma, often overshadowing other life achievements ( 99 ). Viewing “the right to parenthood” as a fundamental human right ( 95 ), childless women in Israel often describe their condition as akin to a “serious illness,” with infertility perceived as a “final extinction” for families of Holocaust survivors ( 95 ). The enthusiastic reception of all forms of ARTs since the early introduction of IVF ( 95 , 97 , 115 , 116 ), supported by the world’s most generous state-backed IVF policy ( 117 ), illustrates the deep commitment to the Jewish maternal imperative. Despite the intense physical and emotional toll of these procedures ( 118 ), many women persist with these invasive treatments, viewing them as pathways to fulfillment and happiness ( 119 ).
Further research is necessary to deepen the understanding of the long-term societal, health, and familial impacts of oocyte cryopreservation. Emerging technologies, demographic shifts, and evolving societal attitudes towards delayed parenthood present significant areas for study. Longitudinal research on the health outcomes of children born from cryopreserved oocytes will provide valuable insights into potential long-term effects. Additionally, studies on the psychological and social impacts of egg freezing, particularly for women who eventually do not use their stored oocytes, will contribute to a holistic understanding of this practice. Investigation into cost-effectiveness, alongside policy and regulatory impacts across various regions, will also aid in formulating equitable, accessible, and sustainable fertility preservation strategies. Oocyte cryopreservation is a transformative option in reproductive healthcare, empowering women with increased control over family planning and allowing them to navigate the intersection of career, personal goals, and biological limitations. As healthcare policies and societal norms continue to evolve, it is crucial to balance access, ethical considerations, and cultural influences to support reproductive autonomy. This study underscores the importance of establishing policies responsive to the multifaceted needs of women, contributing to a framework that respects both individual choices and broader societal impacts. Through the integration of the economic stated preference framework and the TPB, this study seeks to investigate the motivations underlying cryopreservation. The TPB ( 55 – 57 ) is employed to underscore the correlation between micro- and macro-level intentions and behaviors. Fertility behavior is perceived as the result of a decision-making process that weighs the advantages and disadvantages of potential courses of action within the micro context, with consideration for individual characteristics and variables. These factors include subjective norms (an individual’s perception of psychological support or social pressure) and perceived behavioral control (how easy or difficult an individual perceives it is to perform the behavior or reach the intended goal), which both significantly influence intentions and behaviors ( 62 ). The approach uses questionnaires to assess elements such as beliefs, attitudes, and social norms (for example, norms concerning the appropriate age for childbearing), as well as wider national or cultural values and the economic and political environment. Furthermore, the TPB is evaluated at the macroeconomic level ( 59 , 120 ), specifically regarding how governmental entities determine whether to subsidize or finance oocyte cryopreservation. Insights into the preferences and evaluations of patients regarding various facets of healthcare procedures are critical for program development and assessment. By incorporating patient preferences into policy decisions concerning clinical practices, licensing, and reimbursement, substantial improvements can be achieved. Enhancing the alignment of healthcare policies with patient preferences has the capacity to elevate satisfaction levels with clinical interventions and public health undertakings, thereby potentially bolstering the overall efficacy of healthcare processes ( 121 , 122 ). Economists define two main approaches to measuring preferences: revealed and stated ( 123 ).
Revealed preferences are inferred from actual observed behaviors in the market, identified through complex econometric methods. In contrast, stated preferences are gathered through surveys that allow researchers to control how preferences are elicited. Stated-preference methods include methods that utilize rating, ranking, or choice designs (used individually or in combination) to quantify preferences for various attributes of an intervention. These methods, commonly referred to as conjoint analysis (CA), discrete-choice experiments, or stated-choice methods, are designed to explore the trade-offs between different properties of a product and their influence on user preference ( 124 – 126 ). The use of CA in healthcare research has increased substantially ( 127 , 128 ), with Clark et al. ( 129 ) and De Bekker-Grob et al. ( 130 ) providing exhaustive literature reviews. CA derives part-worth values for individual attributes from a total evaluation score for a product or service composed of multiple attributes ( 131 – 134 ). This methodology is especially well-suited for quantifying preferences for non-traditional market products and services, or those in sectors where market options are limited by regulations or legal restrictions, such as healthcare ( 135 ). CA has demonstrated efficacy in preference measurement across a multitude of health applications ( 89 , 128 , 136 – 141 ), and its applicability transcends healthcare interventions. It is increasingly used to understand preferences related to health-related quality of life and to evaluate patient-reported outcomes of different health conditions ( 142 , 143 ). Licensing authorities have also shown interest in CA as a tool for assessing patient willingness to undergo innovative treatments that may offer enhanced efficacy ( 144 ). CA facilitates decision-making processes for patient participation ( 145 , 146 ), supports shared decision-making ( 147 ), aids in clinical decision-making ( 148 ), and helps elucidate how various stakeholders value healthcare outcomes ( 149 ). Furthermore, CA can evaluate the relative importance of one or more attributes of a product or service and assess how individuals make trade-offs between these attributes. This process identifies the exchange rate users require between units of an attribute ( 149 ). CA studies present participants with hypothetical scenarios in which the attributes of a product or service are set at different levels. Respondents are then asked to rank these services, rate them, or choose between paired attribute sets. While people frequently make decisions involving exchange and substitution in their daily lives, they are rarely required to explicitly rank and rate attributes as part of their routine decision-making. This paper contributes to the development and application of the pairwise ‘choice’ approach in the decision-making process, which compares two indirect utility (benefit or satisfaction) functions. Participants in the study are asked to make a series of pairwise choices, selecting the option that offers the higher level of utility in each comparison. The CA techniques used to elicit preferences helped determine the relative importance that individuals attribute to different attributes of a particular health product or service ( 135 ).
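To make the pairwise ‘choice’ approach concrete, here is a minimal sketch of the underlying random-utility logic: each option is scored by a linear utility over its attributes, and the probability of choosing one option follows a logit of the utility difference. The attribute names and coefficient values are illustrative assumptions, not estimates from this study.

```python
import math

# Minimal random-utility sketch of a pairwise conjoint choice. Each option
# gets a linear utility score; P(choose A) is a logit of the difference.
# Coefficients are illustrative placeholders, not study estimates.

COEFFS = {
    "success_chance": 3.0,     # higher chance of success raises utility
    "infertility_risk": -2.0,  # higher perceived risk lowers utility
    "initial_fee": -0.0002,    # fees enter negatively
    "annual_fee": -0.001,
}

def utility(option: dict) -> float:
    return sum(COEFFS[k] * option[k] for k in COEFFS)

def prob_choose_a(option_a: dict, option_b: dict) -> float:
    """Logit choice probability from the indirect utility difference."""
    dv = utility(option_a) - utility(option_b)
    return 1.0 / (1.0 + math.exp(-dv))

a = {"success_chance": 0.6, "infertility_risk": 0.2,
     "initial_fee": 10_000, "annual_fee": 500}
b = {"success_chance": 0.7, "infertility_risk": 0.2,
     "initial_fee": 14_000, "annual_fee": 500}
print(f"P(choose A) = {prob_choose_a(a, b):.2f}")
```

In this toy example, Option B's higher success chance does not fully offset its higher fee, so Option A is chosen more often; the same trade-off logic underlies the choice tasks described next.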
By analyzing how participants express their inclinations towards various attributes of the product or service, CA enables the evaluation of the utility, or implicit worth, of those particular elements of the healthcare intervention. The analysis of CA in this paper is grounded in the methodology described by Ryan ( 150 ). For this research, a structured CA questionnaire was developed: participants were presented with hypothetical scenarios involving attributes crucial to cryopreservation and asked to make pairwise choices between two options. The initial set of attributes and their levels was defined based on a literature review. Appendix 1 , Table 1 , summarizes the attributes and levels included in the CA study. Each respondent was shown a series of 10 scenarios, with Option A having fixed attributes and Option B varying in each scenario, thus forming a total of 10 pairwise choice questions. An example of one such pairwise choice is detailed in Table 2 of Appendix 1 . Data were gathered through surveys of women from the general public. The study respondents were drawn from a pool of participants recruited through a survey company and participated voluntarily, without monetary compensation. The study design was cross-sectional, with a single data collection point. The survey company had sole access to the participants’ data. Each participant was given a personal code so that her personal information was not known to the researcher conducting the study. Participants were given detailed information on oocyte cryopreservation prior to completing the survey. This included explanations about the reasons for considering oocyte cryopreservation, such as medical conditions like cancer, military service risks, and high-risk occupations, as well as the biological background regarding a woman’s egg reserve and the benefits of egg freezing (the questionnaire is presented in Appendix 2 ). Three data-gathering stages were used to construct the survey and carry it out. Preliminary Stage: In the preliminary stage, items to be included in the research questionnaires were identified through in-depth interviews with five fertility experts and three potential candidates for oocyte cryopreservation. The time frame for the preliminary interviews was six months. The questionnaires were initially constructed on the basis of a content analysis of the interview results. Pilot Study: After completing the first version of the questionnaires (based on the preliminary-stage findings), a pilot study was conducted with 15 participants, aimed at assessing the difficulty and clarity of the questionnaire and the respondents’ willingness to respond to its items. This pilot study, which included face-to-face interviews conducted by the researcher, provided the participants with detailed information about cryopreservation, enabling relevant information to be presented in a supervised manner while responses to the different factors were gathered. The time frame for the pilot study was three months. Main Survey: Based on the findings of the pilot study, the research questionnaires were developed for the survey population. The population sample consisted of Israeli Jewish women aged 18-65 from four large cities in four major population regions in Israel: Tel Aviv, Jerusalem, Haifa, and Beer Sheba. The survey company first made contact by telephone; questionnaires were then sent via Google Docs to respondents who agreed to participate in the study.
Every respondent confirmed her participation by digitally signing an informed consent form. The time frame for the main survey was three months. Out of 807 questionnaires distributed, 94 were eliminated because of invalid or missing data, and 148 were eliminated for inconsistency, i.e., failure of the internal (theoretical) consistency test applied through the CA technique (see Section 4.3, Methodological issues addressed). The final sample consisted of 565 participants. The participants, all 18 years of age and over, were given a page describing the goals of the study, guaranteeing anonymity, and explaining the possibility of terminating their participation at any time. Participants were asked to sign an informed consent form before answering the questionnaire. Anonymous, self-administered questionnaires were filled out without intervention by the investigators. In the cover letter attached to the questionnaire, the participants were informed that data collection and analysis would be kept fully anonymous; that their personal information would be fully protected; and that all answers would be kept confidential, processed statistically, and used for scientific research only. The participants were free to decide whether or not to participate, and each provided signed informed consent. Ariel University Ethics Committee approval number: AU-SOC-YB-20141230. SAS version 9.4 was used for the analysis. Continuous variables are presented as mean and standard deviation, or median and interquartile range; categorical variables are presented as N (%). In market research, CA is a statistical method utilized to determine how individuals make purchases and what qualities they genuinely value in services and products. In this type of survey, participants are presented with a series of alternatives or products containing distinct qualities at varying levels. They are then asked to select their preferred option or arrange the options in ranked preference. The basic idea is to dissect and analyze the options in order to ascertain which attribute combination has the greatest impact on consumer choice. The premise of this methodology is that a product is characterized by its qualities (e.g., price, quality, brand, features), and that the consumer’s assessment of the product is a composite of the assessments of each individual attribute. CA can distinguish the relative significance of attributes that influence a consumer’s choice by presenting a variety of product configurations comprising distinct attribute combinations. The steps of a conjoint analysis are outlined in Table 1 . Appendix 1 contains the details of the CA study: Table 1 in Appendix 1 presents a comprehensive overview of the attributes and levels assessed throughout the investigation. The study comprised ten distinct scenarios, each with two alternatives (Option A and Option B). Option A possessed constant attributes, whereas Option B varied across scenarios. This setup resulted in 10 pairwise choices used in the CA questions. Table 2 in Appendix 1 presents an example of one such pairwise choice. CA is important for several reasons, as described in Table 2 : using CA to examine the distinctions between scenarios yields comprehensive insights into the decision-making processes of consumers, helping businesses align their products and services with consumer preferences and improving market fit.
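The fixed-versus-varying design described above lends itself to a simple representation: each choice task becomes a row of attribute differences (Option B minus Option A), which is also the form in which the choices are modeled later. A sketch under assumed, illustrative attribute levels (the study's actual levels are in Appendix 1 and are not reproduced here):

```python
# Sketch of the pairwise design: a fixed Option A against varying Option B
# scenarios, represented as rows of attribute differences (B minus A).
# All levels below are illustrative placeholders, not the Appendix 1 values.

ATTRIBUTES = ["infertility_risk", "success_chance", "pregnancy_chance",
              "storage_years", "initial_fee", "annual_fee"]

option_a = dict(zip(ATTRIBUTES, [0.20, 0.60, 0.40, 5, 10_000, 500]))

option_bs = [
    dict(zip(ATTRIBUTES, [0.30, 0.70, 0.40, 5, 12_000, 500])),
    dict(zip(ATTRIBUTES, [0.20, 0.60, 0.50, 10, 10_000, 700])),
    # ... the study presented ten such scenarios to each respondent
]

def difference_row(b: dict, a: dict) -> list[float]:
    """Attribute differences (Option B minus Option A) for one choice task."""
    return [round(b[k] - a[k], 4) for k in ATTRIBUTES]

design_matrix = [difference_row(b, option_a) for b in option_bs]
for row in design_matrix:
    print(row)
```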
Scenarios are presented in Table 3 in Appendix 1 , according to the differences between Option B and Option A. An exploratory factor analysis of the attributes relevant to the decision to cryopreserve oocytes — risk of infertility; chances of success of the oocyte cryopreservation process; chance of initiating a pregnancy from a frozen oocyte; option of oocyte cryopreservation for a chosen period of time (years); initial registration fee to the fertility laboratory and cryopreservation (one-time payment); and annual fee for cryopreservation, payable every year (storage) — was carried out using Principal Component Analysis (rotation method: Varimax with Kaiser normalization). This analysis yielded two factors: 1. Factor_Risk = mean(risk of infertility; chances of success of the oocyte cryopreservation process; chance of initiating a pregnancy from a frozen oocyte; option of oocyte cryopreservation for a chosen period of time). 2. Factor_Price = mean(initial registration fee to the fertility laboratory and cryopreservation; annual fee for cryopreservation, payable every year). The estimated regression functions are denoted by Equations 1 and 2. CA was estimated in accordance with a function of the form:

ΔV = α1·(risk of infertility) + α2·(chances of success of the cryopreservation process) + α3·(chance of initiating a pregnancy from a frozen oocyte) + α4·(storage-period option) + α5·(initial registration fee, one-time) + α6·(annual storage fee) + α7·Factor_Risk + α8·Factor_Price + α9·WTP + α10·traditional + α11·religious + α12·age + α13·education + e + u

When using the CA technique, it is important to evaluate whether individuals appear to understand the technique and engage with it seriously. This study tested for internal (theoretical) consistency and validity ( 152 ). To check internal consistency, the rationality of the choices made was tested: if one scenario is clearly ‘better’ than another, respondents are expected to choose that scenario. In choice 7, the expectation was that all respondents would prefer the second scenario over the first. Respondents who answered inconsistently were assumed either not to have understood the questionnaire or not to have taken it seriously, and their responses were omitted from the analysis. The premise of CA is that individuals have continuous preferences, so that a deterioration in the level of one attribute can always be compensated for by an improvement in another. The regression analysis results were used to test the internal validity of the CA, i.e., the extent to which the independent variables being tested are what led to the predicted results. Given that a higher risk of infertility implies a problem, one would expect the coefficient of the attribute ‘importance of the risk of infertility’ to be positive in the regression equation for WTP for cryopreservation; the coefficient of the cost attribute to be negative; and the coefficient of the attribute ‘personal monthly income’ to be positive. A statistical descriptive analysis was performed to investigate the social and demographic characteristics of the respondents who participated.
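As a concrete illustration of the consistency screen and the factor construction just described, the sketch below drops respondents who failed the dominated-choice test and computes the two factor scores as the simple means the text defines. The column names and toy values are hypothetical, not the study's variables.

```python
import pandas as pd

# Toy respondent table. 'choice_7_correct' marks whether the respondent
# preferred the unambiguously better scenario in the dominated choice;
# the 'imp_*' columns are attribute-importance items. All names and
# values are hypothetical placeholders.

df = pd.DataFrame({
    "choice_7_correct":      [True, True, False, True],
    "imp_infertility_risk":  [5, 3, 4, 2],
    "imp_success_chance":    [4, 4, 5, 3],
    "imp_pregnancy_chance":  [5, 2, 4, 3],
    "imp_storage_option":    [3, 3, 2, 4],
    "imp_initial_fee":       [2, 5, 3, 4],
    "imp_annual_fee":        [1, 4, 2, 5],
})

# (1) Internal consistency: omit respondents who chose the dominated option.
df = df[df["choice_7_correct"]].copy()

# (2) Factor scores as the means of the risk- and price-related item groups,
#     mirroring the Factor_Risk and Factor_Price definitions above.
risk_items = ["imp_infertility_risk", "imp_success_chance",
              "imp_pregnancy_chance", "imp_storage_option"]
price_items = ["imp_initial_fee", "imp_annual_fee"]
df["Factor_Risk"] = df[risk_items].mean(axis=1)
df["Factor_Price"] = df[price_items].mean(axis=1)
print(df[["Factor_Risk", "Factor_Price"]])
```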
Table 3 summarizes the social and demographic characteristics of the research sample. Respondents’ age, education, personal income, degree of religious observance, and marital status were included as demographic variables in the model. The CA investigation encompassed ten unique scenarios, each of which offered two alternatives (Option A and Option B). Option A maintained consistent attributes across all scenarios, whereas Option B varied across scenarios. A total of 10 pairwise comparisons were generated by this design, which were then employed in the Conjoint Analysis (CA) to investigate decision-making preferences. Table 3 of Appendix 1 specifies the 12 different scenarios (cases) in terms of the difference in characteristics vs. the baseline scenario (Option A). Table 4 of Appendix 1 lists, for each of the 12 scenarios (cases), the percentage of participants who chose Option A and the percentage who chose the alternative. Table 5 in Appendix 1 lists the mean and standard deviation of the scenario parameters for scenarios that were chosen versus scenarios that were not. The values are shown in absolute terms, rather than as differences from Option A, as absolute values are easier to understand intuitively. The purpose of this table is to judge the relative differences in parameters between scenarios that were chosen and those that were not. It can be seen that risk of infertility and initial registration fee negatively affected choice (they were lower in chosen scenarios); chances of success, chance of initiating a pregnancy from a frozen oocyte, and the storage-period option positively affected choice; and the annual cryopreservation fee had almost no effect. Table 6 in Appendix 1 provides descriptive data regarding the differences in choice parameters between Option B and Option A, categorized by the option selected. Table 7 in Appendix 1 presents the Principal Component Analysis. In addition to the Principal Component Analysis and factor loadings, two binary logistic regressions were used to estimate the change in utility in moving from one scenario to another. Binary logistic regression assesses the effect of variables on a binary outcome; in this case, it is used to assess the effect of the difference in scenario variables, between each scenario and the baseline scenario, on the probability of choosing the baseline scenario. The worth of this statistical model can be measured by the C-Index: the closer this index is to 1, the better the model’s ability to discriminate between individuals who chose the baseline scenario and those who chose another scenario. Table 4 shows the results of a binary logistic regression analysis. Prior to analysis, the difference in the initial registration fee was divided by 100 to produce a sensible odds ratio. This only influences the estimates (betas) and odds ratios; it has no effect on significance or on the standardized estimates. The standardized estimates can be used to derive the relative importance of the various factors (odds ratios depend on the unit of measure and cannot be compared between parameters). All variables, except the difference in yearly price, were significant. An odds ratio above 1 means that a larger difference between the parameters in Option B vs. Option A leads to a higher probability of choosing Option A; an odds ratio below 1 means that a larger difference leads to a lower probability of choosing Option A (see Table 8 in Appendix 1 ).
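A minimal sketch of this estimation step, assuming simulated data: a binary logit of the choice on attribute differences, with odds ratios from the exponentiated coefficients and discrimination summarized by the C-index, which equals the area under the ROC curve. The study itself used SAS 9.4; this Python version is illustrative only.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Simulated stand-in for the choice data: y = 1 if the baseline Option A
# was chosen; X holds attribute differences (e.g., success chance, risk,
# and the initial fee already divided by 100, as in the text).

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
true_beta = np.array([0.8, -0.5, -0.6])
p = 1 / (1 + np.exp(-(X @ true_beta)))
y = rng.binomial(1, p)

Xc = sm.add_constant(X)
model = sm.Logit(y, Xc).fit(disp=False)
print(model.params)            # estimated betas
print(np.exp(model.params))    # odds ratios

# C-index = ROC AUC: the probability that the model ranks a randomly drawn
# chooser of Option A above a randomly drawn non-chooser.
c_index = roc_auc_score(y, model.predict(Xc))
print(f"C-index = {c_index:.3f}")
```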
The model’s discriminatory ability (C-Index) was 72.9% (see Table 9 in Appendix 1 ). From the p-values in Table 4 , we can see that all scenario parameters, except the difference in annual fee, had a significant effect on the probability of choosing the baseline scenario. From the standardized estimates, we can see that the differences in chances of success of the cryopreservation process, chance of initiating a pregnancy from a frozen oocyte, and initial registration fee to the fertility laboratory had similar effect sizes (ranging from 0.28 to 0.31 in absolute value), while the difference in risk of infertility had a much smaller effect (0.09). Tables 8 – 10 , located in Appendix 1 along with detailed explanations, provide an in-depth statistical analysis of the factors influencing decisions around oocyte cryopreservation. Table 8 in Appendix 1 highlights the odds ratio estimates and Wald confidence intervals for key attributes such as registration fees, chances of success, and cryopreservation duration, revealing their significant impact on decision-making. Table 9 in Appendix 1 presents the association between predicted probabilities and observed responses, demonstrating moderate predictive accuracy with a concordance rate of 69.7% and a c-statistic of 0.729. Table 10 in Appendix 1 further illustrates the choice patterns across predicted probability thresholds, where participants with higher predicted probabilities (over 0.5) tended to select the first option, underscoring the influence of key predictive factors. Table 5 shows the results of a binary logistic regression analysis with added factors and demographic variables: the risk and price factors from part A of the study, yearly WTP, age, religiosity, traditionalism, education, and income. Of these additional parameters, Factor_Risk, age, and income were significant, but they had little effect on the model, as can be seen from the small standardized estimates (0.04 to 0.08) and the small increase in the C-Index (from 72.9% in the previous model to 74.5% in this model). The findings indicate that participants prioritize factors such as improved chances of future successful pregnancies and reduced anxiety about age-related fertility decline. This suggests that oocyte cryopreservation holds significant perceived value for them, offering considerable benefits that enhance their reproductive autonomy and overall well-being. The study employs statistical methods, including binary logistic regression and Conjoint Analysis (CA), to examine the factors that influence women’s decisions regarding oocyte cryopreservation. The findings emphasize several critical factors that influence these decisions, including financial considerations, reproductive outcomes, and policy implications. The results suggest that the perceived likelihood of attaining a successful pregnancy and concerns regarding future infertility are the most impactful factors in the decision to undergo oocyte cryopreservation. Women are more likely to consider cryopreservation if they believe it will substantially increase their chances of conceiving in the future.
This demonstrates that the decision is influenced not only by current circumstances but also by expectations of future fertility and reproductive autonomy. The decision-making process is significantly influenced by financial factors. The results of the analysis indicate that women with higher incomes are more likely to choose oocyte cryopreservation. For many individuals, the expenses associated with the procedure, which encompass initial retrieval, storage, and future in vitro fertilization (IVF), may be prohibitive. This implies that financial constraints restrict access to this technology, rendering it a more viable alternative for individuals with higher economic resources. The study also underscores the significance of personal circumstances and social conditions. Women who perceive themselves as having a high risk of infertility, whether as a result of age or medical conditions, are more inclined to pursue oocyte cryopreservation. Furthermore, decisions can be influenced by societal factors, including the public’s perception of fertility preservation and the availability of supportive policies. The Conjoint Analysis quantifies preferences by analyzing various aspects of oocyte cryopreservation, including the probability of successful pregnancy, risks, and costs. The results indicate that women prioritize attributes associated with reproductive outcomes over immediate financial considerations. This suggests that the prospective long-term advantages of preserving fertility are perceived as outweighing the initial expenses. The binary logistic regression analysis identifies the factors that are most predictive of the decision to cryopreserve oocytes: age, income, and perceived risk of infertility are all substantial predictors. The analysis offers a model for understanding how these variables interact to influence decision-making, providing insights into the groups most likely to contemplate oocyte cryopreservation. In recent years, there has been a significant increase in the use of both medical and non-medical female fertility preservation methods. The present study focuses on social oocyte cryopreservation among women, performed for personal, professional, or financial reasons and motivated by the desire to preserve reproductive capacity, which naturally decreases with age ( 12 , 13 , 21 , 151 – 153 ). While there is a considerable amount of literature on social oocyte freezing, empirical studies focusing on the detailed behavioral and economic factors influencing women’s decisions to undergo social oocyte cryopreservation are relatively limited. This study aims to fill this gap by exploring these aspects, shedding light on an area where further research is needed. Additionally, the current study is pioneering in offering a thorough analysis of social oocyte cryopreservation, with a primary emphasis on planned behavioral aspects and the inclusion of relevant economic factors in the decision-making process. The study examines the factors that influence the intentions and subsequent behaviors of oocyte cryopreservation. The expanding availability of oocyte cryopreservation presents a distinctive opportunity to examine the willingness and inclination of individuals to make use of this technology. The primary theoretical framework for this analysis is the Theory of Planned Behavior (TPB), which establishes a direct correlation between beliefs and behavior ( 55 – 57 ).
TPB is a suitable paradigm for comprehending decisions related to fertility preservation, as it posits that intentions are strong predictors of actual behavior ( 56 ). Furthermore, this investigation incorporates an economic stated preference approach with TPB to economically quantify preferences, utilizing CA to identify the combinations of attributes that most significantly influence decision-making ( 150 , 154 , 155 ). To effectively investigate economic stated preference in the context of women’s decision-making regarding oocyte cryopreservation, it is essential to conduct a more thorough examination of the economic framework. This entails assessing the specific attributes that influence these decisions and their relative significance. The economic stated preference framework provides an essential perspective for analyzing the trade-offs women assess when contemplating oocyte cryopreservation. This technique enables women to evaluate several factors, including the probability of future pregnancy success, infertility risks, and the related financial implications. The findings of the present study demonstrate that women value long-term reproductive autonomy, including the potential for future parenthood, more than immediate financial limitations, highlighting a societal trend in which reproductive timing increasingly aligns with career and personal aspirations ( 156 , 157 ). This conclusion corroborates previous research indicating that women perceive oocyte cryopreservation as a means of securing reproductive autonomy despite the substantial initial expenses ( 40 ). The present study further emphasizes that women view cryopreservation as providing psychological reassurance and alleviating anxiety associated with fertility decline. The evidence indicates that women who opted for cryopreservation experienced an enhanced sense of control over their reproductive future, accompanied by a notable decrease in the pressure linked to biological aging ( 158 ). The findings affirm that reproductive autonomy is pivotal in these decisions, aligning with extensive feminist discourse on empowerment via reproductive choice ( 113 , 157 ). Furthermore, the role of evidence-based counseling in improving decision-making cannot be overstated: it provides women with the necessary information and support, enhancing their confidence in making educated reproductive decisions, as the current research indicates. Incorporating the Theory of Planned Behavior (TPB) into economic models provides a comprehensive framework for understanding women’s fertility preservation choices. The TPB holds that behavioral intentions, influenced by attitudes, subjective norms, and perceived behavioral control, significantly predict actual behavior ( 56 ). According to the present study, strong social support and a favorable attitude toward oocyte cryopreservation had a significant impact on women’s decisions. The role of social support in women’s fertility preservation decisions is crucial, and this paper’s findings highlight the need for understanding in this area. Women who perceived greater cultural acceptability or familial support were more inclined to prioritize egg freezing despite financial obstacles ( 158 ). The present findings indicate that perceived behavioral control, especially in overcoming financial or logistical obstacles, was pivotal in decision-making.
Notwithstanding these challenges, women who were confident in their capacity to manage the cryopreservation process were more inclined to undertake the procedure. This underscores the importance of integrating economic factors with behavioral insights to comprehensively understand the complexity of reproductive decision-making. This integration is crucial in providing an informed perspective on women’s fertility preservation decisions. Integrating the Theory of Planned Behavior into the economic model enhanced the comprehension of the interplay between financial, social, and psychological elements influencing women’s decisions on fertility preservation. The study’s findings demonstrate that financial constraints significantly impede oocyte cryopreservation, thereby impacting reproductive equity. A significant percentage of women in the lower-income category considered the expenses of cryopreservation excessive, while acknowledging the long-term reproductive benefits ( 157 , 159 , 160 ). This financial barrier intensifies inequalities in access to fertility preservation technologies, highlighting the necessity for policies that promote equitable access ( 40 ). The data indicate a significant disparity between the desire to preserve fertility and the available financial resources, highlighting the need for governmental measures, such as subsidies or insurance coverage, to alleviate these obstacles. The findings strongly support the establishment of financial assistance programs to enhance accessibility for a diverse demographic. By alleviating the economic barriers to cryopreservation, reproductive equity and autonomy can be fostered, ensuring that fertility preservation options are within reach for all women, regardless of their financial circumstances. This potential impact of financial assistance programs offers hope for a more equitable future in reproductive healthcare. The present research indicates significant interest among women in oocyte cryopreservation, with participants demonstrating robust support for enhancing access to the treatment; however, current financial barriers hinder this, making public financing an important option to consider ( 161 , 162 ). Currently, Israel’s national health insurance covers medically essential therapies but excludes elective fertility preservation, including social oocyte cryopreservation. Based on the findings, governments should consider including cryopreservation in public health benefits, empowering more women to make informed decisions and preserve their fertility in line with their reproductive objectives ( 162 ). An essential finding of the present research is that financial constraints constitute the principal obstacle preventing women from pursuing oocyte cryopreservation. International data indicate that subsidized fertility preservation enhances accessibility, especially for women who may postpone childbearing for personal or professional reasons but lack the financial resources to preserve their fertility at the ideal moment. Aligning policy with potential users’ economic interests and reproductive goals is crucial to democratizing access to fertility preservation services and guaranteeing reproductive equity ( 157 ). The research underscores the advantages of integrating oocyte cryopreservation into public health financing, improving accessibility for a broader population, especially individuals encountering financial barriers.
The findings indicate that, in the absence of financial support, cryopreservation is unattainable for many women, exacerbating existing inequalities in reproductive healthcare. Offering financial assistance via subsidies or insurance schemes might equalize access and align healthcare policy with women’s reproductive objectives, as demonstrated by the experiences of women in the present research ( 76 , 163 ). Such a regulatory change could be expected to diminish the future need for more intensive and expensive reproductive treatments. Enhancing the accessibility of oocyte cryopreservation for a broader demographic enables healthcare institutions to address women’s reproductive needs and promote equitable access more effectively. The study revealed considerable psychological advantages associated with oocyte cryopreservation. Women who chose to freeze their oocytes reported an enhanced sense of control over their reproductive prospects and a significant decrease in anxiety associated with fertility loss. The perceived emotional relief was often identified as a primary incentive for undertaking the procedure ( 54 ). The present findings highlight the necessity of considering not only the medical and economic dimensions of cryopreservation but also the psychological and emotional benefits it offers. The psychological effects of oocyte cryopreservation are not uniform: while some women may experience a sense of control and reduced anxiety, others may remain anxious about success rates and future reproductive outcomes. This variability underscores the need for individualized and thorough counseling to address these apprehensions. The emotional advantages, however, are often evident, underscoring the procedure’s significance in promoting reproductive autonomy and mental health. Decision support tools based on the economic stated preference framework could help women better understand the financial and reproductive implications of cryopreservation. Personalized financial modeling, which can demonstrate the impact of various choices on individual circumstances, can make the decision-making process more transparent and enable women to make more informed decisions ( 164 ). Ensuring equitable access to SEF is of utmost importance, alongside counseling and ethical considerations. Widespread public discourse, including in popular media and social networks, has been deeply affected by the utilization and availability of SEF. Financial considerations for egg freezing (EF) are significant: for many people, the cost of EF is prohibitively high and prevents access ( 165 – 169 ). SEF is a costly procedure, with expenses ranging from $15,000 to $20,000 per cycle, and it is usually not covered by insurance, rendering it an out-of-pocket cost ( 170 ). While the debate over insurance coverage for assisted reproductive technologies is vital, coverage for oocyte cryopreservation is chiefly limited to medical cases rather than elective or non-medical reasons. Consequently, insurance considerations are frequently omitted from SEF-specific financial discussions. A considerable number of patients require financial assistance in the form of loans or financial aid to afford the medications, procedures, and storage of retrieved oocytes required for cryopreservation. The aforementioned expenses cover only the first retrieval and do not account for any subsequent costs associated with thawing the oocytes for in vitro fertilization.
Additionally, patients may be required to repeat the cryopreservation and IVF process multiple times in order to achieve the desired number of children. Supplementary expenses beyond the initial charges may pertain to the use of donor sperm or the testing of partner sperm prior to embryo development. As a result, numerous individuals may be unable to afford SEF ( 171 ). The study underscores the necessity of addressing economic disparities in the availability of oocyte cryopreservation. Women from lower socioeconomic backgrounds encounter numerous obstacles, including financial constraints, inadequate information, and insufficient social support ( 172 , 173 ). The analysis of the CA data in this study indicates that the likelihood of success of oocyte cryopreservation and of initiating a pregnancy from frozen oocytes weighed substantially more heavily in respondents’ choices than the reduced risk of age-related fertility decline. The relative significance of these factors indicates that greater emphasis should be placed on enhancing the success rates and outcomes of oocyte cryopreservation procedures, rather than solely concentrating on the perceived risk of infertility. This insight, which underscores the importance of prioritizing and effectively communicating success rates to patients, is essential for healthcare providers and policymakers. The results also indicate that the perceived probability of, or concern about, future infertility, financial capacity, and the perceived likelihood of conceiving a healthy child through the use of cryopreserved oocytes were all significant determinants. The perceived probability of conceiving a healthy child through the use of cryopreserved oocytes is a compelling outcome that demands further investigation. This perspective has the potential to encourage women to opt for cryopreservation at an earlier age, as they may perceive it as a proactive approach to increasing the likelihood of having healthy children, rather than relying solely on natural conception. This trend has the potential to transform the traditional narrative surrounding oocyte cryopreservation, establishing it as a favored method for assuring reproductive and genetic health rather than merely a backup for age-related fertility decline. This realization could have substantial implications for the manner in which fertility preservation is communicated and perceived, indicating a need for targeted education and counseling that addresses the advantages and disadvantages of this method ( 174 , 175 ). Furthermore, the research underscores the significance of age as a factor, as women’s fertility naturally decreases with age; consequently, some women choose cryopreservation as a preventive measure against age-related fertility decline ( 12 , 151 – 153 ). This is consistent with the findings of Stevenson et al. ( 152 ), who found that the decision to pursue oocyte cryopreservation is substantially influenced by knowledge and perceptions about fertility decline. Similar patterns of motivation and concern among women contemplating oocyte cryopreservation for non-medical purposes are revealed when the present paper’s findings are compared to those of other studies, including those conducted by Tan et al. ( 154 ) and Stoop et al. ( 155 ). In 2014, Tan et al. ( 154 ) reported that Singaporean female medical students predominantly contemplated social oocyte freezing because of concerns about age-related fertility decline and the absence of a partner, a finding consistent with the results of the present analysis.
In the same vein, Stoop et al. ( 155 ) found that the fear of future infertility was a significant factor in the decision of women in Belgium to undergo social oocyte cryopreservation. This study also highlights a paradoxical finding: younger women who cryopreserve their oocytes are less likely to use them later, as they often conceive naturally, having preserved their oocytes only as a precaution. This trend is consistent with Seyhan et al. ( 151 ), who reported that many women viewed cryopreservation as a “backup plan” rather than a primary strategy for childbearing. The findings have several implications for clinical practice, healthcare policy, and ethical considerations. Clinicians and counselors should recognize the critical role that sensitivity to success rates plays in patient decision-making. Transparent communication about the success rates of different cryopreservation options and personalized counseling can enhance decision quality and patient satisfaction. This is supported by the study by Stevenson et al. ( 152 ), which emphasizes the importance of patient education and counseling in fertility preservation decisions. Regarding healthcare services and marketing, clinics offering oocyte cryopreservation should present their success rates ethically and accurately to inform patient choice. Competitive advantages should be highlighted in an ethical and precise manner to enable patients to make informed decisions. Policymakers should also consider developing guidelines that mandate the transparent reporting of success rates and other performance metrics for fertility preservation options, as suggested by Stoop et al. ( 155 ). The study also identifies a prevalent lack of knowledge and comprehension among potential users, which contributes to the underutilization of oocyte cryopreservation. These gaps must be addressed through specifically designed educational initiatives and public health policies. Additionally, the broader uptake of fertility treatments is impeded by the financial investment, technical complexity, and psychological distress associated with them, underscoring the necessity of enhanced accessibility and supportive policies. In essence, the study demonstrates that women are predominantly motivated to preserve their fertility by concerns about future reproductive outcomes. Although financial obstacles are substantial, the perceived advantages of having the option to conceive at a later age often outweigh the costs. Public policies providing financial assistance for oocyte cryopreservation are crucial. Subsidies, insurance coverage, or integration into national health programs might mitigate the cost burden, allowing a broader demographic to contemplate this choice. Enacting such legislation adheres to the tenets of reproductive justice, guaranteeing that all women, irrespective of socioeconomic background, can make informed choices regarding their reproductive futures. Reducing financial obstacles gives more women access to fertility preservation technology, promoting reproductive autonomy and facilitating diverse family planning options. Public policies that provide financial support for oocyte cryopreservation could thus help ensure broader access to this technology and empower more women to make informed decisions about their reproductive futures.
The study’s findings indicate that respondents place a high value on oocyte cryopreservation and perceive it as an effective technique for improving reproductive autonomy, reducing anxiety associated with fertility decline, and enhancing their overall well-being by expanding reproductive options. Using Conjoint Analysis (CA), the study contributed to the quantification of women’s preferences by demonstrating that attributes such as the probability of a successful pregnancy and future reproductive opportunities are highly valued. The nuanced understanding of how variations in success rates affect patient choices, highlighted in this study, emphasizes the need for clear, ethical, and effective communication and practices in the field of fertility treatments. These insights should guide clinicians, healthcare providers, and policymakers in their efforts to support patients in making the most informed and beneficial decisions regarding their fertility options. This paper presents novel, previously unpublished data offering behavioral and economic insights into women’s perceptions of oocyte cryopreservation, and provides valuable input for the development of female reproductive health policy. For women who want children, fertility education is key. Natural conception or donor insemination is the preferred route, and couples who have decided to have children should start trying early. For those for whom conventional family planning is not possible, however, the chance of motherhood should not be denied, and they should be encouraged to be proactive in preventing infertility ( 176 ). The aim should be to increase awareness among women of reproductive age regarding age-related fertility decline. Funding strategies could potentially be developed in the future to prevent age-related fertility decline, as preventative medicine has been developed in so many other fields. Algorithms could also be developed to individually assess fertility status and cultivate a pro-fertility mentality in a realistic context ( 177 ). Representativeness and Sample Bias: The study’s sample was restricted to Jewish Israeli women aged 18-65 from major urban centers, which may not be representative of the broader population or of diverse cultural contexts. This limitation may affect the generalizability of the findings to women from other backgrounds or countries. Self-Reported Data: The study relies on self-reported data, which are susceptible to biases, including recall bias, social desirability bias, and respondents’ reluctance to disclose intimate or sensitive information accurately. Preliminary Information: Participants in this study were provided with basic information regarding oocyte cryopreservation, explaining its advantages, risks, and the diverse circumstances in which it is considered, in order to ensure that respondents were adequately informed. Nevertheless, this could have influenced their responses by predisposing them to view oocyte cryopreservation more favorably. Cross-Sectional Design: The research employs a cross-sectional design, which captures data at a single point in time. This design restricts the capacity to establish causality or observe changes in attitudes or behaviors over time, which are essential for understanding decision-making processes related to fertility preservation.
Conjoint Analysis Constraints: Conjoint Analysis (CA) elicits preferences by requiring respondents to make hypothetical choices between predetermined scenarios. This may not accurately represent the intricacies of real-world decision-making and may oversimplify the factors that influence women's decisions regarding oocyte cryopreservation.
Raynaud's phenomenon refers to a condition characterized by a complex set of symptoms linked to impaired blood flow, usually triggered or worsened by cold temperatures, emotional stress, or substances that mimic the effects of the sympathetic nervous system. It affects approximately 3-5% of the general population, with over 80% of cases representing primary Raynaud's phenomenon (RP), which is not associated with systemic autoimmune disease. Secondary RP is associated with autoimmune disease, most frequently systemic sclerosis (SSc). Its main complications include digital ulcers, necrosis, and ischemia. Nailfold capillaroscopy is a non-invasive, reproducible, and effective in vivo technique used to visualize and evaluate the microcirculation (microscopic vessels). It can be performed in any anatomical location where terminal capillaries are oriented parallel to the skin. Various instruments can be used, including dermatoscopes, ophthalmoscopes, and digital videocapillaroscopes that allow precise optical magnification of up to 200x. This technique is highly useful in studying rheumatic diseases and differentiating between primary and secondary RP, enabling the identification of vascular anomalies such as changes in capillary density, capillary enlargement, giant capillaries, and microhemorrhages, which are significant in the disease's natural course. SSc is a systemic autoimmune disease characterized by immune-mediated small vessel vasculopathy and fibrosis of the skin and various organs. Nailfold videocapillaroscopy (NVC) findings pathognomonic for SSc have been described. We report the case of a 42-year-old patient with recent-onset RP associated with digital ulceration, without clinical or immunological manifestations suggestive of autoimmune disease, in whom NVC findings provided the diagnosis of pre-scleroderma. As this case report involves a single patient and is presented with de-identified data, institutional ethics approval was not required. The patient gave informed consent for the publication of this case, including the use of images and medical history. A 42-year-old woman with no significant medical history presented to our rheumatology outpatient clinic at Mayo Clinic on May 20, 2020, with a five-month history of bilateral RP affecting her hands and feet, with approximately three episodes per day, and periungual erythema. She also reported noticing a black spot under the nail bed of her right thumb three months prior, associated with local inflammatory signs. Seven days later, however, she developed pain, inflammation, and necrosis in the distal phalanx of her right index finger. During the rheumatological evaluation, a review of systems revealed no myalgia, asthenia, weight loss, arthralgia, morning stiffness, or symptoms of dryness, and she reported only mild symptoms of gastroesophageal reflux. Physical examination showed normal blood pressure; no alopecia, rash, skin fibrosis, or lymphadenopathy; and no joint swelling or tenderness. Cardiopulmonary and gastrointestinal examinations were unremarkable. The extremities showed no edema or neurological alterations, but active necrosis was present in the right index finger. Laboratory tests revealed mild thrombocytosis, with an otherwise normal complete blood count. Antibody testing was negative for antinuclear antibodies (ANA), antiphospholipid antibodies, and scleroderma-specific antibodies.
Other conditions, including systemic lupus erythematosus, Sjögren's syndrome, mixed connective tissue disease, antisynthetase syndrome, cryoglobulinemia, and paraproteinemia secondary to hematological neoplasia, were excluded (Table 1 ). NVC was performed to evaluate her RP, revealing a few giant capillaries, microcapillaries, and hemorrhages, findings highly suggestive of systemic sclerosis and consistent with an early scleroderma pattern. Treatment was initiated with 2% nitroglycerin cream (for three months), hydroxychloroquine 200 mg daily, atorvastatin 40 mg daily, and nicardipine 20 mg every 8 h, which continues to date. Interstitial lung disease and pulmonary hypertension were ruled out through chest CT and transthoracic echocardiogram. The patient showed complete resolution of digital necrosis (over three months), with significant improvement of RP during a four-year follow-up. In SSc, there is a close pathophysiological and clinical correlation that follows progressive states of endothelial injury and dysfunction. Vascular changes are therefore of high importance when establishing a diagnosis. According to the American College of Rheumatology and the European League Against Rheumatism (ACR/EULAR) 2013 classification criteria, at least nine points are required for classification, based on items such as pitting scars, RP, digital ulcers, capillaroscopic abnormalities, telangiectasias, and pulmonary hypertension. However, the clinical spectrum of SSc is quite heterogeneous and may present an unpredictable natural course, with some individuals developing mild and stable forms for years, while others develop rapidly progressive or refractory forms with fatal outcomes. Rubio-Rivas et al. report cumulative survival rates of individuals with SSc of 74.9% and 62.5% at five and ten years, respectively, from the time of diagnosis, highlighting the need for tools that enable early detection and diagnosis. Very early diagnosis of systemic sclerosis (VEDOSS) classification criteria were developed to address this gap. However, when analyzing its variables (RP, puffy fingers, positive serology, and capillaroscopic abnormalities), limitations become apparent in identifying some individuals with the disease who may lack these classification findings. This is especially relevant considering that, in SSc patients, the presence of RP suggests established vascular dysfunction that precedes the evolution of puffy fingers, sclerodactyly, and the extension of skin involvement. Therefore, it is evident that very early clinical manifestations already reflect disease progression that could potentially be irreversible. Hence, identification and treatment in early and potentially subclinical phases of the disease offer a narrow but crucial therapeutic window, allowing intervention before the onset of sclerodactyly, skin fibrosis, or organ damage. Even so, identifying patients in the oligosymptomatic phase that precedes VEDOSS represents a real clinical challenge, although initiating early treatment may offer a window of opportunity for the prevention of progressive or severe involvement. In almost all patients with limited cutaneous SSc, the first symptom is RP, often preceding any other symptom of scleroderma by two to five years. In this population, the preclinical phase would be defined by the absence of any non-Raynaud's symptoms — that is, RP itself without other manifestations of scleroderma — as was the case with our patient. However, there is still no clear consensus on its precise definition.
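For orientation, the additive logic of the 2013 ACR/EULAR classification can be sketched as a simple scoring function. The item weights below are recalled from the published criteria rather than taken from this report, so they should be verified against the original; the patient dictionary at the end is a simplified reading of this case.

```python
# Illustrative sketch of the additive 2013 ACR/EULAR scoring for SSc
# classification (threshold: >= 9 points). Weights are recalled from the
# published criteria and should be checked against the original; within
# the "skin thickening" and "fingertip lesion" domains, only the
# higher-scoring item counts.

def acr_eular_ssc_score(findings: dict) -> int:
    score = 0
    if findings.get("skin_thickening_proximal_to_mcp"):
        score += 9  # sufficient for classification on its own
    # Skin thickening of the fingers: count only the higher-scoring item.
    if findings.get("sclerodactyly"):
        score += 4
    elif findings.get("puffy_fingers"):
        score += 2
    # Fingertip lesions: count only the higher-scoring item.
    if findings.get("pitting_scars"):
        score += 3
    elif findings.get("digital_tip_ulcers"):
        score += 2
    score += 2 if findings.get("telangiectasia") else 0
    score += 2 if findings.get("abnormal_nailfold_capillaries") else 0
    score += 2 if findings.get("pah_or_ild") else 0
    score += 3 if findings.get("raynaud") else 0
    score += 3 if findings.get("ssc_antibodies") else 0
    return score

# Simplified reading of this case: RP, a digital tip ulcer, and abnormal
# nailfold capillaries, with negative antibodies -> 7 points, below the
# classification threshold.
patient = {"raynaud": True, "digital_tip_ulcers": True,
           "abnormal_nailfold_capillaries": True}
s = acr_eular_ssc_score(patient)
print(s, "classifiable" if s >= 9 else "not classifiable")
```

On this simplified reading, the patient scores below the classification threshold, which is consistent with the pre-scleroderma framing of the case.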
Nonetheless, in this scenario, accessible and minimally invasive diagnostic aids such as NVC are of great help in an early diagnostic approach, as there is an established association between disease duration and microvascular involvement. In their cohort, Shenavandeh et al. did not find an association between capillaroscopic patterns and cutaneous subtypes of SSc but did observe a significant association when analyzing disease duration. Interestingly, the early scleroderma pattern was more frequently observed in subjects with limited SSc. However, in the context of VEDOSS, and even in preclinical SSc, capillaroscopic abnormalities have not been adequately characterized, limiting the interpretation of that study in this population. Nonetheless, in their preliminary analysis of NVC in VEDOSS (the CAPI-VEDOSS experience), Cutolo et al. found a higher frequency of the early scleroderma pattern in this population, as was seen in our patient, suggesting a possible role for NVC in the very early stages of the disease. Still, further studies are needed to reach an evidence-supported conclusion. Regarding the immunological profile of these patients, the findings in the literature are extremely varied and controversial. For example, in their cohort, Salazar et al. compared subjects with ANA-positive versus ANA-negative SSc, finding that the latter presented notably less cutaneous, pulmonary, and especially vasculopathic involvement. This differs widely from the presentation in our patient, who, despite having negative ANA, showed severe microvascular involvement (digital ulceration, giant capillaries, and nailfold bed hemorrhages).

In the present case, the diagnostic approach was challenging, particularly due to the absence of findings suggestive of a specific autoimmune disease as the underlying cause of Raynaud's phenomenon. Among the main differential diagnoses, systemic lupus erythematosus was ruled out given the absence of multisystemic or cutaneous involvement and the negativity of antinuclear antibodies. Symptoms indicative of mixed connective tissue disease were also absent, and cryoglobulinemia was excluded. Similarly, Sjögren's syndrome was ruled out based on the absence of sicca symptoms as well as cutaneous, neurological, or immunological abnormalities. Neoplastic processes were also excluded given the lack of symptoms suggestive of malignancy, considering the patient's age, sex, and risk factors. Finally, no paraproteinemia was identified on protein electrophoresis. Although features such as swollen fingers, sclerodactyly, microstomia, telangiectasias, acro-osteolysis, and skin fibrosis were absent, and the specific autoimmune profile was negative for scleroderma, nailfold videocapillaroscopy proved highly useful. This technique facilitated the diagnosis of pre-scleroderma by identifying an early scleroderma pattern, providing evidence of microvascular damage in the context of Raynaud's phenomenon secondary to systemic sclerosis. This finding is particularly relevant given the lack of unified criteria for VEDOSS and the even greater scarcity of information on pre-scleroderma, a concept still under development.
Lastly, the treatment of SSc depends on the clinical manifestations of each patient, as there is no unified management strategy for very early scleroderma, pre-scleroderma, or isolated microvascular involvement beyond the use of calcium channel blockers or, depending on the severity of vascular obstruction from recurrent and prolonged vasospasm, vasodilators. Other pharmacological groups, such as statins, have been considered for their anti-inflammatory effects and their reduction of C-reactive protein, low-density lipoprotein concentration, tumor necrosis factor-alpha, and interferon-gamma. Although there is no evidence supporting the use of hydroxychloroquine in the treatment of severe secondary RP, it could play an immunomodulatory role in the vascular lumen. In their study, Basta et al. evaluated the response to hydroxychloroquine in subjects with SSc compared with individuals who did not receive the intervention. After three months, they found that participants treated with the antimalarial exhibited a significant reduction in the NEMO score, microhemorrhages, microthrombosis, giant capillary score, and levels of E-selectin, VCAM, and endothelin-1. These findings support the hypothesis that hydroxychloroquine may be a therapeutic option for preventing microvascular complications in systemic sclerosis. However, further studies with greater statistical power are necessary to establish this intervention as a standard treatment for patients with SSc. The present case highlights the unusual presentation of our patient, which differs markedly from the typical clinical manifestations of systemic sclerosis and even more so from those of patients with very early systemic sclerosis. This underscores the heterogeneity of the disease and emphasizes the importance of recognizing the concept of "pre-scleroderma," characterized by subtle disease manifestations, even in the absence of immunological alterations. Based on the successful outcome of the present case, we propose the following diagnostic and therapeutic approach, focusing on Raynaud's phenomenon as an epiphenomenon of autoimmunity and microvascular damage.

In summary, we present the case of a patient with Raynaud's phenomenon and digital ulceration as a sentinel event, accompanied by an NVC pattern highly suggestive of early scleroderma, in the absence of other non-Raynaud's manifestations or positivity for specific antibodies. This report underscores the clinical significance of Raynaud's phenomenon as the first manifestation of systemic sclerosis and highlights the crucial role of NVC in the initial diagnostic approach to pre-scleroderma. Moreover, it suggests hydroxychloroquine as a potential treatment strategy for managing and preventing the progression of microvascular damage in this population, in combination with other adjuvant medications.
|
Review
|
biomedical
|
en
| 0.999998 |
PMC11695196
|
Balanitis xerotica obliterans (BXO), a male genital variant of lichen sclerosus et atrophicus (LSA), was first described in 1928 by Stühmer. Although its exact etiology is unknown, growing evidence suggests roles for autoimmune influence, genetic predisposition, inflammatory insults, various infections, and trauma. Balanitis xerotica obliterans is most commonly found in patients aged 40-60 years. Areas affected by BXO usually involve the foreskin and penile glans. Common physical examination findings include erythematous changes or white hypopigmented lesions. Most patients present with phimosis, with varying degrees of difficulty or inability to retract the foreskin. Meatal stenosis and subsequent urethral stricture disease are common outcomes of this condition. The differential diagnosis includes neoplastic processes, autoimmune diseases, contact dermatitis, psoriasis, Zoon balanitis, leukoplakia, and fixed drug reactions. Histopathologically, the prevalent lesions found in BXO are epithelial hyperplasia, atrophy, penile intraepithelial neoplasia, basal cell vacuolization, lamina propria sclerosis, and variable patterns of lymphocytic infiltration. Management of BXO remains challenging. When only the genital skin is involved, topical steroids and immunomodulators have shown varying success rates. However, treatment becomes more challenging when the urethra is also involved. Surgical options include meatoplasty, urethral dilatation, and urethroplasty, but urethral strictures associated with BXO have a higher recurrence rate after surgical interventions. Recently, the topical application of different immunomodulators and steroids has been evaluated for its role in urethral stricture disease associated with BXO. Tacrolimus is an immunomodulator that inhibits the production of interleukin-2 (IL-2) and consequent T-cell activation, thereby controlling the inflammatory process; it is used in many inflammatory skin diseases. The aim of this study is to evaluate the role of the topical application of tacrolimus in urethral strictures associated with BXO.

This was a prospective study done in the Department of Urology of Indira Gandhi Institute of Medical Sciences, a tertiary care center in Patna, eastern India, between April 2022 and June 2024. Male patients who were >18 years of age, presented with lower urinary tract symptoms due to urethral stricture and biopsy-proven BXO, and provided consent to participate in the study were included. Patients with complete urethral obstruction, recurrent urethral stricture, and known cases of benign prostatic hyperplasia, neurogenic bladder, any malignancy, obstructive uropathy, raised serum creatinine, urinary bladder stones, and immunocompromised conditions were excluded. The study was approved by the Institutional Ethics Committee of Indira Gandhi Institute of Medical Sciences. For each patient, a detailed history was taken, a physical examination was done, and an International Prostate Symptom Score (IPSS) was recorded. Uroflowmetry, ultrasonography (kidney, ureter, and bladder), serum creatinine, retrograde urethrogram (RGU), and micturating cystourethrogram (MCU) were performed in all cases. All patients included in the study were given a local application of 0.1% tacrolimus twice daily for six weeks. This local application of tacrolimus included application over the prepuce and glans penis as well as intraurethral instillation for its application on the urethral mucosa. At six weeks, patients were re-evaluated.
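As a minimal illustration of how the per-patient measurements described above could be organized, the sketch below defines a simple record with the study's key variables. The field names, example values, and the responder rule (improvement in both IPSS and Qmax) are illustrative assumptions for exposition, not the authors' actual data-handling.

    # Hypothetical record structure for the measurements collected in this study.
    # Field names and the responder rule are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class BxoPatient:
        age: int
        stricture_cm: float   # stricture length on retrograde urethrogram
        ipss_pre: int         # IPSS before tacrolimus (0-35; higher = worse)
        ipss_post: int        # IPSS after six weeks of 0.1% topical tacrolimus
        qmax_pre: float       # maximum urinary flow rate before treatment, mL/s
        qmax_post: float      # Qmax after six weeks, mL/s

        def improved(self) -> bool:
            """Assumed responder rule: both symptom score and flow improve."""
            return self.ipss_post < self.ipss_pre and self.qmax_post > self.qmax_pre

    # Example values broadly consistent with the group means reported below.
    p = BxoPatient(age=42, stricture_cm=1.5, ipss_pre=18, ipss_post=11,
                   qmax_pre=12.5, qmax_post=16.0)
    print(p.improved())  # True -> continue tacrolimus once daily for three months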
Symptomatic improvement and adverse effects were asked about, and a local examination was done. The IPSS scoring, uroflowmetry, ultrasonography, and RGU-MCU were repeated. Any change in IPSS or maximum urinary flow rate (Qmax) was noted, as well as changes in the uroflowmetry pattern and improvement in the skin color of the glans and prepuce. In case of improvement, patients were continued on tacrolimus once daily for three months, after which they were again evaluated for further improvement and kept in follow-up. All patients had a follow-up of at least six months. Statistical analysis was done using IBM SPSS version 20.0 (IBM Corp., Armonk, NY). The mean and standard deviation of numerical variables were calculated. A paired t-test was used to compare the effect of tacrolimus on IPSS and Qmax. A p-value < 0.05 was considered statistically significant.

During the study period (with a mandatory follow-up of six months), a total of 53 patients were included. The mean (±standard deviation) age of the patients was 40.43±5.43 years. Most of the patients were in the age range of 41-50 years. On RGU, 40 (75.5%) patients had a stricture <2 cm in length, and 13 (24.5%) patients had a stricture ≥2 cm in length. The mean pre-intervention Qmax was 12.00±1.43 mL/s, and the mean post-intervention Qmax was 15.26±3.14 mL/s (p<0.001), a statistically significant difference. The mean pre-intervention IPSS was 18.55±2.28, and the mean post-intervention IPSS was 13.04±4.72 (p<0.001), also a statistically significant difference, as seen in Table 1. Out of our 53 patients, 21 (39.6%) needed surgical intervention (meatoplasty, single-stage buccal mucosal graft urethroplasty, or perineal urethrostomy), as their response to tacrolimus was not satisfactory. All 13 (24.5%) patients with a stricture ≥2 cm in length needed surgical intervention. Of the 40 (75.5%) patients with strictures <2 cm in length, seven had no satisfactory response to tacrolimus and were managed by surgical intervention. One patient with a stricture <2 cm in length needed surgical intervention in the follow-up period due to the recurrence of symptoms and stricture. The mean pre-tacrolimus Qmax in patients with a <2 cm stricture was 12.60±1.01 mL/s, and the mean post-tacrolimus Qmax was 16.82±1.43 mL/s (p < 0.001), a statistically significant difference, whereas for patients with a ≥2 cm stricture, the mean pre-tacrolimus Qmax was 10.15±0.80 mL/s, and the mean post-tacrolimus Qmax was 10.46±1.71 mL/s (p = 0.472), a difference that was not statistically significant, as seen in Tables 2-3. The mean pre-tacrolimus IPSS in patients with strictures <2 cm was 17.85±2.06, and the mean post-tacrolimus IPSS was 10.87±2.86 (p < 0.001), a statistically significant difference, whereas for patients with a ≥2 cm stricture, the mean pre-tacrolimus IPSS was 20.69±1.44, and the mean post-tacrolimus IPSS was 19.69±2.56 (p = 0.115), a difference that was not statistically significant. None of our patients reported any major adverse effects. Some patients reported mild irritation of the prepuce and glans skin, and mild skin erythema was noted in a few patients.
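To illustrate the paired pre- versus post-treatment comparison reported above, here is a minimal sketch in Python using scipy. Because the paper reports only group means and standard deviations, the per-patient arrays below are simulated stand-ins, so the exact t and p values are illustrative only.

    # Minimal sketch of the paired t-test comparing pre- vs post-tacrolimus Qmax.
    # The per-patient values are simulated; the study reports only summary stats.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 53  # number of patients in the study

    # Simulate paired Qmax values (mL/s) roughly matching the reported
    # means/SDs (pre 12.00 +/- 1.43, post 15.26 +/- 3.14).
    qmax_pre = rng.normal(12.0, 1.4, n)
    qmax_post = qmax_pre + rng.normal(3.3, 2.8, n)

    t_stat, p_value = stats.ttest_rel(qmax_post, qmax_pre)
    print(f"t = {t_stat:.2f}, p = {p_value:.2g}")  # p < 0.05 -> significant

The paired design matters here: each patient serves as his own control, so the test operates on within-patient differences rather than on the two group means independently.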
Management of urethral strictures associated with BXO is still a challenge, with the progressive nature of the disease and its high rate of recurrence further complicating treatment. Depending on the stricture length and location, urethral dilatation, meatoplasty, internal urethrotomy, one-stage or two-stage urethroplasty, and perineal urethrostomy are surgical options. Kulkarni et al. have reported encouraging results for urethroplasty in long-segment strictures associated with BXO, but urethroplasty is associated with significant adverse events, including erectile dysfunction, donor-site complications, urethral diverticulum, and chordee. Moreover, urethroplasty is a major surgery requiring patient fitness for the intervention. Therefore, conservative management of urethral strictures associated with BXO is highly desired. Pharmacological treatment has proven highly effective for BXO involving the genital skin. Hengge et al., in their multicenter study conducted in 2006, reported topical tacrolimus ointment to be an effective and safe treatment for long-standing active lichen sclerosus. In their study, clearance of active lichen sclerosus was reported in 43% of patients at 24 weeks of treatment, and a partial response was seen in 34% of patients. Topical corticosteroids (e.g., clobetasol propionate ointment) are a known and effective treatment for BXO involving the glans and prepuce. Similarly, Kim et al. reported on the safety and efficacy of topical tacrolimus (a calcineurin inhibitor) in the management of genital lichen sclerosus. Tausch et al. also reported good results when using topical clobetasol in BXO involving the skin and meatus. In addition to the encouraging pharmacotherapy results of topical corticosteroids on genital skin involved with BXO, their use has also been evaluated for urethral strictures associated with BXO or genital lichen sclerosus. In 2008, Ebert et al. conducted a pilot study to determine the safety and efficacy of tacrolimus 0.1% ointment in the postoperative period in proven cases of LSA; topical application of tacrolimus 0.1% was found to be safe and effective for disease control during the postoperative period. Results of a 2010 study by Karatas et al. strongly indicated that tacrolimus-eluting stents may be useful for the management of recurrent urethral stricture. In 2011, Mallick et al. studied patients attending the outpatient department with typical clinical features of BXO; their study demonstrated symptomatic relief in 53.33% of cases treated with tacrolimus ointment. Dey et al. used intraurethral instillations of tacrolimus 0.03% in 20 patients with urethral stricture and biopsy-proven BXO, with 75% of the patients responding favorably to the treatment; four of their patients did not respond well, and one patient required urethroplasty due to recurrent urinary tract infections. Choudhury et al. reported that both clobetasol and tacrolimus were effective in improving symptom score, Qmax, and local external appearance; although post-intervention intergroup differences in IPSS were not significant, post-intervention intergroup differences in Qmax were statistically significant in favor of clobetasol (p = 0.007).

Our study evaluated the safety and efficacy of the local application of tacrolimus 0.1% in the management of urethral strictures associated with BXO. The use of tacrolimus significantly improved Qmax and IPSS, as has also been previously reported in the literature. We also evaluated whether the effect of tacrolimus depends on stricture length. Application of tacrolimus was not effective in patients with strictures ≥2 cm in length, and these patients required surgical intervention. Of the 40 patients with strictures <2 cm in length, only eight (seven due to no response and one due to recurrence during follow-up) required surgical intervention.
Study limitations

This study is a single-group study, and it had a mandatory follow-up of only six months. Additional comparative studies with long-term follow-up are needed to fully understand the risk of recurrence after an initially satisfactory improvement.

Tacrolimus, being an immunomodulator, could be an effective conservative treatment option for short-segment (<2 cm) urethral strictures due to BXO. Its topical application is feasible and safe, and its use avoids the complications of surgical interventions. However, further studies with longer-term follow-up are necessary; such studies may further establish whether tacrolimus has an effective and satisfactory role in these conditions.
|
Other
|
biomedical
|
en
| 0.999997 |