PMC11696398

Spiro-conjugated molecules are promising structures in which two π-systems are orthogonally bonded to one common sp3 carbon. Their structural symmetry and rigidity, combined with small reorganization energies, have led to great interest in optoelectrical applications. 1−3 As illustrated by Simmons and Fukunaga, 4 the unique orthogonal π-systems reduce intermolecular aggregation and enhance carrier mobility compared to planar analogues, which is beneficial for organic light-emitting diodes (OLEDs), 5 organic field-effect transistors (OFETs), 6 and organic photovoltaics (OPVs). 7 Conventionally, building up a 9,9′-spirobifluorene (SBF) core involves nucleophilic addition of a lithiated biaryl intermediate to 9-fluorenone, followed by Lewis acid-catalyzed intramolecular Friedel–Crafts alkylation. 8 Fused thiophenes are common building blocks in the development of organic semiconductors, as they possess better geometrical planarity and stronger carrier-transport capability than fused-benzene analogues. 9−14 Construction of the 4,4′-spirobi[cyclopenta[2,1-b;3,4-b′]dithiophene] (SCT) core follows a similar approach to that for SBF. 15 However, in our previous work, a diol side product was collected while synthesizing SCT-cored derivatives. 16 Regrettably, we did not investigate the chemistry of that diol. In addition, we found that the intramolecular Friedel–Crafts alkylation is concentration dependent and competes with intermolecular alkylation. To address this issue, we employed (3,3′-dibromo-4,4′-dihexyl-[2,2′-bithiophene]-5,5′-diyl)bis(trimethylsilane) (Br2-2TC6-TMS2) as a precursor to build up a branched diol intermediate, which was further treated with a Lewis acid to form a novel structure. The alkyl side chains effectively improve the solubility in common organic solvents and prevent lithiation at the 4-position of the thiophene ring. 17,18 To our surprise, the diol underwent intramolecular dehydration only toward a dihydrooxepine-based core, with a dispiro conformation arranged at its 2,7-regiopositions. Dispiro building blocks are shape-persistent architectures that include orthogonal squares, tubes, and ladder structures. 19,20 For example, Wei and co-workers synthesized two H-shaped molecules, TBPDSFDITF and TDOF-DSFDITF, whose rigid conformations give a high quantum efficiency of 80%. 19 In addition, Poriel et al. developed a dispiro molecule, (1,2-b)-DSF-IFs, with high thermal stability (Tg = 350 °C), which has been applied in blue OLEDs. 21 Takagi's group prepared 9-fluorene-type trispirocyclic compounds as hole-transporting materials (HTMs) for electroluminescent (EL) device applications. 22 In comparison with a single spiro π-system, dispiro-based π-systems can serve as better chromophores for stronger light absorption, and their more rigid skeletons provide higher thermal stability as well as easier carrier transport. 23,24 Such characteristics could give them better performance than ordinary spiro architectures when used as optoelectrical materials. In 2002, Tsuji and co-workers successfully synthesized a racemic hexaarylethane derivative with a helical π-skeleton in four steps. Electrochemical tests show that this molecule exhibits strong electron-donating character, forming redox pairs with its oxidized form, and it could potentially be used as an electrochiroptical material.
25 Yamashita in 2004 synthesized bithiophene-hexaarylethane, which shows strong electrochemical stability. The presence of thiophene oligomer makes this structure easy to modify and possibly be used as a molecular wire. 26 Based on literature review, we found that no examples regarding dispiro-ladder-type conformations bearing oxepine-based heterocycles are reported. Therefore, it is still worthwhile to enrich the family of spiro-moieties. In this work, the formation of diol and subsequent intramolecular dehydration toward dihydrooxepine was found to be the major selectivity for cascade steps. Single-crystal X-ray diffraction provided a clear view for special arrangement of the two cyclopentadithiophene (CPDT) units in the dispiro-based skeleton, which is quite different from the orthogonal arrangement. Thus, this work discusses the chemistry along with optoelectrical properties in detail. As shown in Scheme 1 , 3,3′,5,5′-tetrabromo-4,4′-dihexyl-2,2′-bithiophene ( 1) was synthesized in good yield via a base-catalyzed halogen dance (BCHD) reaction as described in the literature. 27 Lithiation of 1 with two equivalent n -BuLi and the following addition of stoichiometric trimethylsilyl chloride (TMS-Cl) led to the formation of the precursor Br 2 -2TC6-TMS 2 (2) . In our expectation, treatment of 1 with stoichiometrically controlled n -BuLi would selectively remove α-site bromide and the following substitution with trimethylsilyl affords 2 . However, based on TLC and 1 H NMR studies, we found that the Li–Br exchange on the β-site occurred simultaneously in minor selectivity even though addition of n -BuLi was carefully controlled, and the resultant β-selective side product showed closed polarity to 2 with an R f value over 0.9 in hexane, enhancing the difficulty for column chromatographic purification. Therefore, we decided to use this crude product 2 directly in the subsequent reactions and attempted to isolate the afterward intermediates. In the effort to prepare precursors for the synthesis of spiro compounds, two pathways were investigated. In the first one, treatment of 2 with n -BuLi and ketone 3 may afford 6a and 6b . Similarly, in another path, 2 is converted into ketone 4 , which then reacts with lithiated 5 to afford molecules 7a and 7b . According to pioneering works and our previous synthetic experience, 16 , 27 we proposed that fast addition of two equivalent n -BuLi (2 equiv) in one step would favor rapid Li–Br exchange on both thiophene β-sites of a bithiophene structure and generate two nucleophilic centers. Therefore, diol molecules 6b and 7b would be the major selected products after nucleophilic addition with 3 . If slow dropwise addition of two equivalent n -BuLi is introduced step by step (1 equiv +1 equiv), the subsequent nucleophilic attack would first occur on one nucleophilic center, followed by a second-step Li–Br exchange of the residue bromine. In this situation, formation of 6a and 7a would be more favored. However, in practice, we found that only the diol intermediate 6b with three bulky groups was the major selective product whose structure was confirmed by 1 H NMR and MADLI-TOF-MS study, regardless of whether n -BuLi was added by fast addition or by step-by-step dropwise addition. 
Interestingly, the obtained 1 H NMR spectrum of 6b shows two environmental β-site thiophene-based protons at 7.42 ppm (H a ) and 6.90 ppm (H b ), respectively, regarding the 4H-cyclopenta[2,1-b:3,4-b’]dithiophene (CPDT) moiety, suggesting that a steric effect within 6b causes a large torsion angle between 2,2-bithiophene and CPDT conformations . We ultimately confirmed the formation of 6b by identifying the integration ratio of 1:1:1 for H a , H b , and H c (−OH group, 4.32 ppm), and a signal peak at 1150.55 Da accounting for the presence of 6b on MALDI-TOF-MS spectrum . 7a was collected in only 10% best yield but demonstrated poor experimental reproducibility ( Scheme 2 ). The presence of 7a was confirmed by MALDI-TOF-MS for a target molecular weight of 813.84 Da , but no molecular signals for either 6a or 7b were detected by any spectroscopic methods. We proposed this selectivity is subject to the orientation of the hexyl group. When the hexyl group was on the β-position of the ketone site, the plausible steric hindrance of 4 prohibited the nucleophilic attack by lithiated 5 to the carbonyl group and hence caused low conversion of the ketone. 27 It is remarkable that when the hexyl group was on the β-site of the bithiophene ring and was adjacent to the aromatic C–Br bond, the steric hindrance of 2 seemed negligible and the electron-donating character of the hexyl group even facilitated the rapid Li–Br exchange on both symmetric sites so that it was kinetically hard to obtain a single activated nucleophilic center by stoichiometric control. Additionally, other aromatic ketones such as 9-fluorenone and 4,5-diazafluoren-9-one were employed as substrates to enrich dispiro building blocks similar to compound 9 ( Schemes S2 and S3 ). Regrettably, both six-membered aromatic ketones failed to give the corresponding tertiary diols, which can be reasoned by the enhanced steric hindrance of the electrophilic center when the thiophene moiety is substituted with benzene and pyridine rings. Besides, we found that 4,5-diazafluoren-9-one exhibited poor solubility in THF at −78 °C, leading to almost 100% recovery of this substrate. 9-Fluorenone was recovered with approximately 90% recovery rate. Based on several attempts ( Table S1 ), we ultimately chose 6b , the tertiary diol, with the best yield of 90% and convincible experimental reproducibility as a key precursor for construction of the 2,7-dihydroxepine core bearing spiro-conformation. With 6b in hand, the necessity of bromination prior to dehydration is to avoid deprotection of the trimethylsilyl group and subsequent α-site polymerization under the catalysis of Lewis acid. In addition, bromination also provides an alternative possibility for further deviation via coupling reactions. This step was complete upon dropwise addition of NBS in DMF solution and achieved 69% isolated yield. Notably, the two β-site protons on the CPDT conformation of 8 are downshifted and become closer to each other, as observed in the 1 H NMR spectra . This observation could be explained by the reduced conformational torsion angle after the substitution of trimethylsilyl groups with the less-hindered bromine atoms. To minimize the side selectivity toward intermolecular dehydration, dropwise addition of 8 in dilute dichloromethane solution into dilute BF 3 –OEt 2 solution gave rise to 2,7-dihydrooxepine-cored spirodithiophene molecule 9 ( Scheme 2 ) with a best yield of 88%. 
The 1 H NMR spectra indicate that 9 is conformationally symmetric, as only one group of aromatic protons are present at 6.34 ppm, which is different from the spectra of 6b and 8 with two groups of aromatic protons. Except aromatic and aliphatic protons, no proton signals for the −OH group are observed. The MALDI-TOF-MS spectrum with a peak at 1167.36 Da accounts for the presence of the target product. The peaks around 1090.46 Da are identified as penta-brominated species. To explain the role of BF 3 –OEt 2 in the cyclization, our proposed mechanism ( Scheme S1 ) suggests that the intramolecular dehydration starts from O -borylation of one −OH group (blue highlighted), which results in the rapid generation of the oxonium ion. Meanwhile, the strong electron-withdrawing character of one fluorine atom on BF 3 tends to bond with another hydrogen on the unreacted −OH group (pink highlighted), leading to B–F bond cleavage and elimination of one HF molecule. The O -borylation enables the formed −OBF 2 to be severed as a good leaving group, which can be easily removed intramolecularly via the S N 2 mechanism, and a dihyrooxepine core bearing spiro-conformation is readily formed. Subsequent deprotonation of oxonium ion afforded 2,7-dihydrooxepine scaffold 9 and difluoro boric acid ( BF 2 OH ) as the side product. The single crystal of 9 was prepared by liquid–liquid slow diffusion in a THF-methanol dual-solvent system, in which THF served as a good solvent and methanol served as a poor solvent. As shown in Figure 4 , single-crystal X-ray diffraction (SC-XRD) analysis confirms the formation of a dihydrooxepine core along with dispiro-conformation on its 2,7-site. As expected, the seven-membered ring is not coplanar but is arranged as a boat conformation, and the mean dihedral angle between C 2 –O-C 7 and C 3 –C 4 -C 5 –C 6 fragments is measured to be 71.45°. Besides, it is obviously seen that both spiro-conformations are nonorthogonal. This is subject to the conformational rotation of two CPDT groups through the sp 3 carbon because of the steric hindrance with their neighboring hexyl groups. Thus, the two CPDT groups are twisted around each other with a dihedral angle of 72.31°. Upon proof of the desired structure, the other residual peaks in the MALDI-TOF-MS spectrum of 9 can be explained as fragment signals rather than impurities. DSOCT-Br 6 (9) was further derivatized in two steps. First, the Suzuki coupling reaction with 4-hexyl-5-(4,4,5,5-tetramethyl-1,3,2-dioxaborolan-2-yl)thiophene-2-carbaldehyde ( B-TCHO ) afforded 10 in a yield of 75%. Second, the sequent Knoevenagel condensation with 2-(3-oxo-2,3-dihydro-1H-inden-1-ylidene)malononitrile (IC) and 2-(5,6-difluoro-3-oxo2,3-dihydro-1H-inden-1-ylidene)malononitrile (FIC) using pyridine as a base catalyst afforded A–D–A-type cyclic conjugated compounds 11 and 12 in a yield of 75 and 83%, respectively ( Scheme 3 ). In the first derivation step, we were able to ulteriorly confirm the successful generation of the 2,7-dihydrooxepine-spirodithiophene framework by spectroscopic analysis. For instance, the 1 H NMR spectrum of 10 obviously shows two different environments of protons with an integrated ratio of 1:2 for aldehyde groups (H a and H b ), π-bridge thiophen units (H c and H d ), and hexyl groups, which matches the spatial orientation of all groups within the molecular framework. In addition, the exact mass of 1864.12 Da found in the MALDI-TOF-MS spectrum is consistent with the structure of 10 . 
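The MALDI-TOF "calcd" values quoted throughout this work can be cross-checked directly from the molecular formulas. The short Python sketch below recomputes the monoisotopic mass of 10 (C104H120O7S12) from standard atomic masses; it is an illustrative check added here, not part of the original characterization workflow, and the small electron-mass correction for the [M]+ ion is neglected.

```python
# Hypothetical cross-check of the MALDI-TOF "calcd" value for compound 10
# (C104H120O7S12); monoisotopic atomic masses are standard literature values.
MONOISOTOPIC_MASS = {"C": 12.000000, "H": 1.007825, "O": 15.994915, "S": 31.972071}

def monoisotopic_mass(formula: dict) -> float:
    """Sum the monoisotopic masses of all atoms in the formula."""
    return sum(MONOISOTOPIC_MASS[el] * n for el, n in formula.items())

compound_10 = {"C": 104, "H": 120, "O": 7, "S": 12}
calc = monoisotopic_mass(compound_10)
print(f"calcd [M]+ for C104H120O7S12: {calc:.2f} Da")        # ~1864.57 Da
print(f"deviation from found (1864.12 Da): {calc - 1864.12:+.2f} Da")
```

The same helper can be pointed at the other molecular formulas reported in the experimental section below.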
Based on a comprehensive study, it is certain that we have obtained the key conformation bearing both dihydrooxepine and spirodithiophene building blocks. As shown in Figure 5 , compounds DSOCT-(TIC) 6 ( 11 ) and DSOCT-(TFIC) 6 ( 12 ) showed a main absorption band over 500–800 nm. Compared to our previous results for linear analogues CPDT-(TIC) 2 and CPDT-(TFIC) 2 ( 10 ) the spiro compounds present larger molar absorptivity (151,100 vs 102,000 L mol – cm –1 and 215,400 vs 117,600 L mol –1 cm –1 , respectively) ( Table S2 ). The band gaps for both molecules are determined by cyclic voltammetry (CV) to be 1.46 and 1.40 eV, respectively, consistent with the optical data . Fluorination at the terminal site lowers the HOMO/LUMO level, as well as the band gap of DSOCT-(TFIC) 6 , facilitating easier charge excitation and separation in organic solar cells, and a lower open circuit voltage ( V oc ) is observed. 28 − 30 Additionally, fluorination at the terminal site presumably enhances the intermolecular π–π stacking, facilitating efficient carrier transportation, 28 − 30 which is beneficial for a higher current density and fill factor of solar cells. 28 , 29 Therefore, an overall improved power conversion efficiency (PCE) of fluorinated compound-based solar cells could be observed. In our devices investigation, the PCEs for PM6 : DSOCT-(TFIC) 6 -based organic solar cells demonstrated better performance than those for PM6: DSOCT-(TIC) 6 -based devices as a result of the higher current density and fill factor, which is consistent with our expectation ( Table S3 ). To further understand the structure–property relationship of the two synthesized molecules, theoretical calculation was performed. To simplify the calculation, all hexyl groups on the DSOCT core were replaced by the methyl group . Based on molecular orbital simulation, both DSOCT-(TIC) 6 and DSOCT-(TFIC) 6 demonstrate dispiro structures with degenerate HOMO and LUMO orbital sets . One can see the π orbital electron density nature of these HOMO and LUMO orbitals. The calculated energy gap between the HOMO and LUMO orbitals of the gas-phase DSOCT-(TIC) 6 is 1.96 eV. With the fluorination, both the HOMO and LUMO orbitals are stabilized while the LUMO is a bit more stable. Comparing the two optimized structures, the bond length, bond angle, and dihedral angle have little difference. The distance between the oxygen atom in the acceptor and the sulfur atom on the π bridge to form S···O intramolecular interactions is 2.70 Å. The TD-DFT modeling results show that DSOCT-(TIC) 6 has strong absorption peaks at 686.83 nm, while the fluorinated molecule DSOCT-(TFIC) 6 has a strong absorption peak at 703.05 nm . This is consistent with a previous study that shows a red shift upon the fluorination. Comprehensively, the two dispiro-based acceptor molecules DSOCT-(TIC) 2 and DSOCT-(TFIC) 2 demonstrated stronger absorptivity and a moderate band gap in comparison with the recently reported “star molecules” such as ITIC derivatives 28 , 31 − 33 and Y-series. 29 , 34 − 36 The three-armed π-systems serve as a better chromophore for light harvesting for a solar cell compared to the recently reported two-armed π-systems. A possible explanation for the low PCEs of the dispiro-molecule-based solar cells is the large torsion angles among the three π-systems. This would lead to conformational distortion of molecules in a blend film, resulting in significant “trap-assist” nonradiative decay of charge carriers. 
37 − 39 To improve the device efficiency through molecular design, it is essential to enhance the coplanarity of the entire structure. 40 , 41 For instance, introduction of intramolecular noncovalent “conformational locks” such as S···O, S···F, S···N interactions would alleviate rotation of single bonds; 42 , 43 alternatively, using the “fusing strategy” to construct conjugated fused-ladder type oligomers would also improve the coplanarity of the molecular skeleton. 31 − 36 In summary, our recent work has demonstrated a novel synthetic approach for a 2,7-dihydrooxepine core bearing a dispiro-conjugated framework. Their structural formation is based on a key step for which a major selectivity toward a diol intermediate is more preferred. Such selectivity is attributed to the enhanced nucleophilicity of the aromatic anion by the electron-donating character of adjacent alkyl groups. The mechanistic study shows that a fast proton transfer from one −OH group to another is a key step that facilitates dissociation of HF and fluoroboric acid. Peripheral functionalization of the core at terminal sites gives rise to the corresponding organic semiconductor materials with strong light absorption and fluorescence. Their optical band gap and electrochemical band gap are consistent with computational studies. Fluorination on terminal sites not only reduces the energy level and band gap of DSOCT-(TFIC) 6 but also enhanced the current density and PCEs of the PM6: DSOCT-(TFIC) 6 -based organic solar cells. To further optimize the solar cell performance, structural modifications such as introduction of a noncovalent “conformational lock” or using the “fusing strategy” to improve molecular coplanarity, which will reduce nonradiative decay of the charge carriers in the blend film, may be considered. All chemicals and solvents were reagent grade. The dry solvent was collected from PURESULV Solvent Purification system PS-MD-5ON7 (Innovative Technology). Chromatography was performed on silica gel 60 (particle size = 300–400 mesh) and TLC was performed on an aluminum substrate coated with silica gel 60F524 (Merck, layer thickness = 0.2 mm). Nuclear magnetic resonance (NMR) spectra were obtained using TMS as an internal reference in Bruker Ascend 400 Hz in deuterated chloroform (CDCl 3 ) and dichloromethane (DCM), and spectra were referenced to the deuterated solvent peak at 7.26 and 77.16 ppm or 5.3 and 53.52 for proton and carbon NMR, respectively; all peaks were labeled and integrated accordingly. MALDI-TOF-MS spectra were obtained using Bruker Auto Bending Speed LRF with trans -2-[3-(4- tert -butylphenyl)-2-methyl-2-propenyl]malononitrile (DCTB) as a matrix substrate. Single-crystal data collection was performed on a Bruker D8 VENTURE Photon II diffractometer with graphite monochromated Mo Kα radiation at room temperature, operating at 50 kV and 30 mA. Compounds 3 , 4 , and 5 were synthesized previously according to the literature. 16 , 27 To a solution of diisopropylamine (13 mL, 92.55 mmol, 1.5 equiv) in freshly dried THF (100 mL) was carefully added n -BuLi (2.5 M in hexane, 74.04 mmol, 30 mL, 1.2 equiv) at −78 °C. After stirring for 20 min, 2,5-dibromo-3-hexyl-thiophene (20 g, 61.7 mmol, 1 equiv) in dried THF (50 mL) was added dropwise to the in situ generated LDA solution over 30 min and was further stirred for 1 h. Anhydride CuCl 2 (8.3 g, 61.7 mmol, 1 equiv) was then added in one portion, forming a dark blue solution, which was slowly warmed to rt and stirred for 19 h. 
The solution was diluted with PE and filtered through silica gel to give a clear yellow solution, which was concentrated to afford the product as yellow oil without further purification (19.6 g, 91% yield). 1 H NMR (600 MHz, CDCl 3 ) δ [ppm] = 2,66 (t, J = 8 Hz, 4H), 1.57–1.54 (m, 4H), 1.45–1.32 (m, 12H), 0.92–0.84 (t, J = 6.7 Hz, 6H); 13 C{ 1 H} NMR (151 MHz, CDCl 3 ) δ [ppm] = 141.5, 128.6, 114.6, 111.1, 31.6, 30.4, 29.1, 28.6, 22.6, 14.1. To a solution of 2 (18.55 g, 28.5 mmol, 1 equiv) in dried THF (50 mL) was added n -BuLi (2.5 M, 57.1 mmol, 22.8 mL, 2 equiv) over 30 min at −78 °C and was stirred for 20 min. TMS-Cl (7.6 mL, 59.9 mmol, 2.1 equiv) in THF (20 mL) was added quickly and the mixture was slowly warmed to rt and stirred overnight. The reaction mixture was quenched with saturated NH 4 Cl solution and the organic phase was separated and concentrated. The vicious mixture was diluted with PE, washed with water, dried with Na 2 SO 4 , and concentrated. The brown oil was further filtered through silica gel using PE as an eluent to afford the crude product mixture as a yellow viscous oil (16.55 g). This oil was directly used for the next step without any further purifications. 1 H NMR (600 MHz, CDCl 3 ) δ 2.67 (t, J = 8 Hz, 4H), 1.61–1.54 (m, 4H), 1.47–1.39 (m, 4H), 1.34–1.33 (m, 8H), 0.90 (t, J = 6 Hz, 6H), 0.36 (s, 18H); 13 C{ 1 H} NMR (151 MHz, CDCl 3 ) δ [ppm] = 148.3, 135.2, 133.9, 116.5, 31.7, 31.4, 30.6, 29.6, 22.5, 13.9. To a solution of freshly prepared 2 (7g, 10.99 mmol, 1 equiv) in dried THF (100 mL) was added n -BuLi (2.5 M, 22.01 mmol, 9 mL, 2 equiv) at −78 °C for 20 min. After stirring of this clear yellow solution for an additional 20 min at this temperature, a solution of 3 (5.5 g, 16.48 mmol, 1.5 equiv) in dried THF (30 mL) was added dropwise to the mixture. The orange suspension was then slowly warmed to ambient temperature and was further stirred overnight. After quenching with a saturated NH 4 Cl solution, the organic phase was separated and concentrated to a dark brown vicious oil. This oil was diluted with DCM (50 mL), washed with water (100 mL), dried with Na 2 SO 4 , and concentrated. Further purification of the crude oil using 15–20% DCM/PE on silica gel afforded the product as a light-yellow powder (7.1 g, 61% yield). 1 H NMR (600 MHz, CDCl 3 ) δ [ppm] = 7.41 (s, 2H), 6.88 (s, 2H), 4.29 (s, 2H), 2.01 (t, J = 4.8 Hz, 2H), 1.39 (t, J = 4.8 Hz, 2H), 1.19–1.08 (m, 4H), 1.00–0.85 (m, 8H), 0.82 (t, J = 7.4 Hz, 6H), 0.81–0.76 (m, 2H), 0.76–0.67 (m, 2H), 0.32 (s, 18H), 0.28 (s, 18H), 0.25 (s, 18H); 13 C{ 1 H} NMR (151 MHz, CDCl 3 ) δ [ppm] = 159.5, 157.3, 149.0, 143.4, 143.0, 142.8, 142.5, 139.9, 138.8, 135.0, 129.6, 128.8, 78.7, 32.1, 31.9, 30.5, 30.3, 22.6, 14.1; MS (MALDI–TOF): calcd for C 56 H 86 O 2 S 6 Si 6 m / z = 1150.36 [M] + , found: m / z = 1150.55 [M] + . To a solution of 6b (6.5 g, 5.64 mmol, 1 equiv) in a mixed solvent of dried chloroform (50 mL) and DMF (10 mL) was added dropwise a solution of NBS (6.52 g, 36.7 mmol, 6.5 equiv) in dried DMF (10 mL) at −25 °C. To this, one drop of acetic acid was added and the reaction mixture became a brown solution. The solution was slowly warmed to ambient temperature and stirred overnight in dark. After completion, the resultant mixture was diluted with DCM (150 mL) and washed several times with water to remove DMF. The combined organic phase was dried and concentrated to afford a dark oil. 
Further purification of the dark oil via chromatography by eluting with 25% DCM/PE afforded the product as a light brown solid (4.7 g, 69% yield). 1 H NMR (600 MHz, CDCl 3 ) δ [ppm] = 7.04 (s, 2H), 6.83 (s, 2H), 3.61 (s, 2H), 1.99 (td, J = 13.1, 4.8 Hz, 2H), 1.67 (td, J = 13.0, 4.5 Hz, 2H), 1.24–1.17 (m, 4H), 1.11–1.02 (m, 4H), 0.98–0.90 (m, 4H), 0.86 (t, J = 7.4 Hz, 6H), 0.84–0.79 (m, 2H), 0.73–0.63 (m, 2H); 13 C{ 1 H} NMR δ [ppm] = (151 MHz, CDCl 3 ), 154.3, 152.6, 140.1, 137.9, 137.6, 135.3, 133.7, 125.2, 125.0, 113.8, 113.5, 112.4, 80.4, 31.7, 30.2, 29.2, 28.7, 22.6, 14.2; MS (MALDI–TOF): calcd for C 38 H 33 Br 6 O 2 S 6 m / z = 1185.58 [M] + , found: m / z = 1185.37 [M] + . A solution of 8 (4 g, 3.35 mmol, 1 equiv) in DCM (200 mL) was dropwise with BF 3 –OEt 2 (2.38 g, 16.8 mmol, 5 equiv) over 30 min at ambient temperature with vigorous stirring; a dark green solution was formed. This solution was further stirred at room temperature overnight, followed by quenching with a saturated NaHCO 3 solution. The organic phase was dried with Na 2 SO 4 and concentrated. The resulting dark solid was purified via chromatography using PE as an eluent to afford the product as an off-white powder (3.5 g, 88% yield). 1 H NMR (600 MHz, CDCl 3 ) δ [ppm] = 6.34 (s, 4H), 1.87 (t, J = 8.4 Hz, 4H), 1.23–1.13 (m, 4H), 1.08–1.00 (m, 4H), 0.90–0.86 (m, 4H), 0.83 (t, J = 7.4 Hz, 6H), 0.82–0.76 (m, 4H); 13 C{ 1 H} NMR (151 MHz, CDCl 3 ) δ [ppm] = 150.9, 141.1, 138.6, 137.6, 135.1, 112.4, 111.0, 84.9, 31.7, 29.7, 29.5, 28.8, 22.6, 14.1; MS (MALDI–TOF): calcd for C 38 H 30 Br 6 OS 6 m / z = 1167.57 [M] + , found: m / z = 1167.36 [M] + . A solution containing 9 (1.73 g, 1.47 mmol, 1 equiv), PH( t -Bu 3 )BF 4 (51.2 mg, 0.176 mmol, 0.12 equiv), and 4-hexyl-5-(4,4,5,5-tetramethyl-1,3,2-dioxaborolan-2-yl)thiophene-2-carbaldehyde (B-TCHO) (6 g, 18.6 mmol, 13 equiv) in THF (30 mL) was mixed with K 2 CO 3 solution (2M, 20 mL). The mixture was stirred and purged with argon for 20 min. Pd(PPh 3 ) 4 (60 mg, 0.09 mmol, 0.06 equiv) was then quickly added to the mixture and the resulting yellow-brown solution was further purged for an additional 10 min. After refluxing for 48 h, the dark mixture was separated. The organic phase was diluted with DCM and washed with water (200 mL). When concentrated under reduced pressure, the crude product was purified via chromatography using 50%DCM/PE as the eluent to afford an orange powder. Further purification by gel permeant chromatography afforded the pure product as orange crystals (2.06 g, 75% yield). 1 H NMR (400 MHz, CDCl 3 ) δ [ppm] = 9.86 (s, 2H), 9.79 (s, 4H), 7.65 (s, 2H), 7.55 (s, 4H), 6.77 (s, 4H), 2.77 (t, J = 7.8 Hz, 8H), 2.60 (t, J = 7.6 Hz, 4H), 1.96–1.92 (m, 4H), 1.70–1.58 (m, 12H), 1.40–1.35 (m, 12H), 1.31–1.19 (m, 36H), 0.97–0.92 (m, 4H), 0.86 (t, J = 7.2 Hz, 12H), 0.79 (t, J = 6.6 Hz, 6H), 0.64 (t, J = 7.2 Hz, 6H); 13 C{ 1 H} NMR (100 MHz, CDCl 3 ) δ [ppm] = 182.8, 182.2, 153.8, 144.6, 143.1, 142.4, 140.9, 140.3, 140.2, 140.2, 139.0, 138.9, 138.0, 137.1, 136.9, 136.8, 128.7, 124.2, 84.7, 31.6, 31.6, 31.4, 30.4, 30.1, 29.8, 29.7, 29.2, 29.1, 28.9, 27.1, 24.9, 24.6, 22.6, 22.5, 22.4, 14.1; MS (MALDI–TOF): calcd for C 104 H 120 O 7 S 12 m / z = 1864.57 [M] + , found: m / z = 1864.12 [M] + . A solution of DSOCT-(TCHO) 6 (500 mg, 0.27 mmol, 1 equiv) and 2-(3-oxo-2,3-dihydro-1H-inden-1-ylidene)malononitrile (410 mg, 2.16 mmol, 8 equiv) in chloroform was stirred at 50 °C for 15 min, and 0.1 mL of pyridine was added. 
The solution gradually turned dark blue and was refluxed overnight. After being cooled to room temperature, the solution was concentrated and the resultant crude product was purified via chromatography using 50% DCM/PE as the eluent to give a dark blue powder. Further purification using gel permeation chromatography and subsequent recrystallization in methanol afforded the product as dark blue crystals (590 mg, 75% yield). 1H NMR (400 MHz, CDCl3) δ [ppm] = 8.79–8.65 (m, 6H), 8.65–8.54 (m, 6H), 7.85–7.66 (m, 18H), 7.63 (s, 6H), 7.03 (s, 4H), 2.82 (t, J = 8.4 Hz, 8H), 2.70 (t, J = 7.7 Hz, 4H), 2.20–2.16 (m, 4H), 1.71–1.65 (m, 12H), 1.39–1.34 (m, 12H), 1.28–1.22 (m, 24H), 1.05–0.92 (m, 4H), 0.90–0.75 (m, 30H), 0.64 (t, J = 7.2 Hz, 6H); 13C{1H} NMR (100 MHz, CDCl3) δ [ppm] = 187.9, 160.1, 148.5, 147.8, 145.7, 144.3, 141.9, 141.0, 139.9, 138.0, 137.1, 136.8, 135.2, 134.5, 125.2, 123.7, 122.5, 117.4, 114.5, 114.2, 69.3, 31.6, 30.6, 29.9, 29.7, 29.4, 29.1, 22.6, 22.6, 22.5, 14.1; MS (MALDI–TOF): calcd for C176H143N12O7S12 m/z = 2919.79 [M + H]+, found: m/z = 2919.86 [M + H]+. A solution of DSOCT-(TCHO)6 (500 mg, 0.27 mmol, 1 equiv) and 2-(5,6-difluoro-3-oxo-2,3-dihydro-1H-inden-1-ylidene)malononitrile (496 mg, 2.16 mmol, 8 equiv) in chloroform was stirred at 50 °C for 15 min, and 0.1 mL of pyridine was added dropwise. The solution gradually turned dark blue and was refluxed overnight. After cooling to room temperature, the solution was concentrated and the resulting crude product was purified via chromatography using 50% DCM/PE as the eluent to give a dark blue powder. Further purification using gel permeation chromatography and subsequent recrystallization in methanol afforded the product as dark blue crystals (700 mg, 83% yield). 1H NMR (400 MHz, CDCl3) δ [ppm] = 8.72 (s, 6H), 8.59–8.44 (m, 6H), 7.71–7.64 (m, 6H), 7.61–7.52 (m, 6H), 7.05 (s, 4H), 2.83 (t, J = 7.8 Hz, 8H), 2.70 (t, J = 7.9 Hz, 4H), 2.21–2.16 (m, 4H), 1.75–1.63 (m, 12H), 1.38–1.33 (m, 12H), 1.30–1.20 (m, 24H), 1.04–0.94 (m, 4H), 0.92–0.74 (m, 30H), 0.66 (t, J = 7.3 Hz, 6H); 13C NMR (100 MHz, CDCl3) δ [ppm] = 185.8, 158.0, 153.3, 149.1, 148.4, 146.3, 144.7, 141.4, 138.0, 137.6, 137.2, 136.6, 134.5, 125.3, 121.7, 115.1, 114.0, 107.7, 70.2, 31.6, 31.5, 30.5, 29.9, 29.7, 29.4, 29.1, 28.8, 22.6, 22.5, 14.1; MS (MALDI–TOF): calcd for C176H132F12N12O7S12 m/z = 3135.67 [M + H]+, found: m/z = 3135.52 [M + H]+.
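As a small aid for following the quantities quoted in the experimental section above, the sketch below re-derives the equivalents and the n-BuLi volume for the LDA-mediated halogen-dance step; the helper functions are illustrative additions, and the numerical inputs are taken directly from the procedure.

```python
# Illustrative consistency check of the equivalents and volumes quoted in the
# experimental section (values taken from the LDA / n-BuLi halogen-dance step).
def equivalents(mmol: float, mmol_limiting: float) -> float:
    """Molar equivalents relative to the limiting reagent."""
    return mmol / mmol_limiting

def volume_ml(mmol: float, molarity: float) -> float:
    """Volume of a stock solution (mL) that delivers the requested mmol."""
    return mmol / molarity

limiting = 61.7  # mmol of 2,5-dibromo-3-hexylthiophene (1 equiv)

print(equivalents(92.55, limiting))   # diisopropylamine -> 1.5 equiv
print(equivalents(74.04, limiting))   # n-BuLi           -> 1.2 equiv
print(volume_ml(74.04, 2.5))          # 2.5 M n-BuLi     -> ~29.6 mL (quoted as 30 mL)
```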
PMC11696408

Over the past decades, the importance of battery technology has increased significantly, as there is an ever-growing need for portable, compact, high-capacity devices that store chemical energy and convert it into electrical energy. There are many battery types, for example heavy-metal–acid batteries, but arguably lithium-ion (Li-ion) batteries have by far the most applications in portable electronics and electric vehicles. Batteries made of lithium are much lighter than nickel-based ones and are also more durable, since no crystals form in the battery at all. Mainstream Li-ion batteries contain liquid electrolytes but exhibit many drawbacks, such as limited voltage, poor mechanical strength, and flammability. 1−3 Solid-state batteries (SSBs), on the other hand, are potentially much safer. 4,5 SSBs can utilize metallic lithium for the anode, making it possible to achieve high energy density, and employ a separator that ideally allows only lithium ions to pass through. 6 Since its discovery in the early 1990s, 7 LiPON (lithium phosphorus oxynitride, LixPOyNz) has been one of the most popular solid-state electrolytes used for planar lithium-ion microbatteries. The success of LiPON thin-film electrolytes can be attributed to their excellent properties, such as small thickness, good ionic conductivity at room temperature, high electronic resistivity, and unmatched long-term durability in terms of cycling performance and elastic energy storage capability. 8−11 Bates et al. showed that the ionic conductivity of LiPONs increases significantly with increasing atomic percentage of N. 7 There are several ideas about how nitridation affects the structure, based on the fact that an increasing number of N atoms promotes cross-linking through the formation of doubly (Nd) and triply (Nt) coordinated N bridges between P atoms. 12,13 According to one theory, Li-ion mobility is caused by mixed-anion effects created by nitriding. 14,15 From an electrostatic point of view, the different covalency of P–N bonds compared with P–O bonds would affect the interaction with Li+ and cause different ionic conductivities. 16 Lacivita et al. investigated Li+ mobility in the amorphous LiPON electrolyte using ab initio molecular dynamics methods. They found that the mobility is strongly influenced by the chemistry and connectivity of phosphate polyanions near Li+. 17 Complementing molecular dynamics with infrared spectroscopic experiments, it was determined that N forms both bridges between two phosphate units and nonbridging apical N (Na). 18 According to the study by Yu et al., in addition to the fact that nitrogen is built into the structure of the deposited film and increases the electrical conductivity, it is electrochemically and mechanically stable; thus, LiPON can also form a barrier against dendrites growing out of the Li anode. 8 Based on previous studies, it can be said that mechanical stress causes roughening of the anode, which creates metallic protrusions that lead to the formation of dendrites. 19 Li forms dendrites during repeated cycling that may lead to short circuits, thermal runaway, and explosion hazards. 20 However, since this phenomenon has been in the focus of attention, investigations have taken new directions, and these studies have paved the way toward safer batteries.
According to some studies, mechanical behaviors of the involved constituents play a critical role in the formation and suppression of Li dendrites and the corresponding interfacial stability. 19 , 21 Jana and Garcia investigated dendrite morphology and concluded that growth is a direct product of the competition between the rate of Li deposition and the plastic deformation of Li under pressure, 22 that is, the morphology of lithium is strongly dependent on the charge rate and feature size. There is a theory that dendrite formation can be prevented if the shear modulus of the electrolyte is about twice that of the metal anode and this value may be sufficiently high to mechanically suppress dendrite formation at the lithium/LiPON interface in thin-film batteries. In the study of Glenneberg et al. the morphological and electrochemical changes of LiPON under different external stress situations were investigated in a unique way. They employed bending experiments and observed that decreasing bending radii lead to a decrease in the LiPON resistance and also to reduced activation energies for the lithium migration as a result of the internal stress within the electrolyte layers, due to bending. 23 Kalnaus et al. investigated the resistance to cracking (fracture toughness) of LiPON by nanoindentation. 24 During the nanoindentation it was observed that the localized stress supporting the indenter tip can be relieved by three major mechanisms: densification (which appeared recoverable at room temperature), constant volume (isochoric) shear flow, and formation of new surfaces via fracture 25 and observed ductility and the ability to strain recovery 26 in this material (it was not possible to induce cracks). In this paper, authors focus on the micromechanical properties of LiPON thin films, since these parameters could crucially affect the electrochemical performance of SSBs. It was aimed at finding a possible explanation for a less-known strain recovery capability of LiPON, which could play a main role in the ion conductivity of the solid-state electrolyte. LiPON thin films are commonly deposited using reactive sputtering of a Li 3 PO 4 target in an N 2 atmosphere 27 , 28 or physical vapor deposition (PVD), such as sputtering. 29 Our layers were prepared according to the synthesis protocol described in our previously published paper. 23 These layers were sputtered via RF-Sputtering using a 4″ Li 3 PO 4 target (Plasmaterials Inc.). In order to deposit the LiPON onto smooth synthetic sapphire substrate (due to its chemical and mechanical resistivity) an RF power of 120 W was used, while having a sputter pressure of 2 × 10 –1 Pa and a gas flow of 100 sccm (Standard cubic centimeters per minute) dry nitrogen. The Li/P ratio widely utilized in the literature and known to influence the structure-was indirectly controlled, and stemming from the target’s composition and PVD process parameters. Sputtering for a total of 5 h led to a LiPON thickness of around 1 μm, which was verified by FIB-SEM studies. Based on XPS-studies a composition of Li 2.13 PO 2.47 N 0.67 was determined for the sputtered LiPON, 23 which is in perfect agreement with literature values. 30 , 31 According to the apparatus supplier (MBraun), the used 4-in. target in our setup (fixed substrate-to-target distance, substrate carrier rotation at 30 rpm) allows for homogeneous lateral distribution (both in-plane and thickness). 
The obtained layer exhibited exceptional homogeneity, with no detectable variations in thickness or chemical composition. The in situ indentations were carried out at room temperature inside an MBraun MB200B glovebox with an Ar atmosphere and oxygen and water contents below 0.1 ppm. A custom-made nanoindenter was used without any integrated load or strain feedback loop. Instead of the traditional control modes, a constant platen velocity, which characterizes the average strain rate, was applied during the tests, as in previous studies. 32,33 This natural-like control allows precise investigation of the stress-releasing and stress-accommodating mechanisms. During deformation, one end of a spring (spring constant k = 1.72 mN/μm) was attached to the indenter tip while the other end was moved at a constant (platen) velocity v_p. The control of the spring involves both loading and unloading phases, each executed at identical platen velocities but in opposite directions. Between these phases a holding phase was carried out, lasting half the duration of the loading phase. A total of 122 nanoindentation experiments were conducted (Table 1), employing platen velocities ranging from 1 to 40 nm/s. Three distinct indenter tips were utilized: two spherical ones with radii of 2 μm (named "Spherical 2") and 10 μm (named "Spherical 10"), as well as a sharp Berkovich indenter. A spherical tip with radius R is indented to a depth h by an applied force F. Assuming that the material is elastically isotropic and behaves according to Hertzian theory, 34 the indenter has a contact radius a with the material, given by

a = (3FR/(4E_r))^(1/3) (1)

where E_r is the reduced Young's modulus given by

1/E_r = (1 − ν_i^2)/E_i + (1 − ν_s^2)/E_s + C_d (2)

Here, E_i and E_s are the moduli of the indenter tip and the specimen, respectively; similarly, ν_i and ν_s are the Poisson ratios of the tip and the specimen; 34,35 finally, the additive term C_d is due to the additional elasticity originating from the frame and the natural imperfection of the sample-supporting system of the device. The load–displacement function F(h) in the early loading regime can be given as

F(h) = (4/3) E_r R^(1/2) h^(3/2) (3)

In the case of Berkovich indentation, the conventional approach for the elasticity calculation is outlined in ref 36. However, in this paper, in order to match the results of the spherical indentations, an alternative method was employed to describe the elastic regime. In total, 110 indentation experiments were carried out with spherical tips, and E_s and ν_s were obtained from literature data: the modulus was found to be E_s = 73 GPa on average, and the Poisson's ratio of LiPON was found to be ν_s = 0.25. 26,36,37 C_d′ was then considered as a fitting parameter and could be calculated from eq 3. In the case of the 10 μm spherical indenter, the parameter was determined as C_d′(Sph10) = 0.196 ± 0.03 GPa−1, while for the 2 μm tip C_d′(Sph2) = 0.088 ± 0.01 GPa−1 was obtained. To estimate the radius and the C_d′ parameter (the elastic contribution of the device) for the Berkovich indentations, it was assumed that C_d′ depends on the sharpness of the indenter tip, a less sharp tip contributing more to the E_r value through its lower penetration capability. We used the product R^(1/2)E_r in eq 3 as the fitting parameter. Moreover, based on the two types of spherical indentations, it could be seen that C_d′ changes by the same multiplier as R^(1/2).
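To make the fitting procedure concrete, the following sketch shows how eq 3 could be fitted to the early, fully elastic part of a measured load–displacement curve to extract the R^(1/2)E_r product; with E_r fixed from the literature modulus and Poisson ratio, eq 2 then gives the device term C_d′. This is a minimal illustration under stated assumptions (units of μm, mN, and GPa; SciPy for the fit), not the authors' actual analysis code, and the data arrays are placeholders. The Berkovich case discussed next reuses the same product as its fitting parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

def hertz_force(h_um, sqrtR_Er):
    """Eq 3: F = (4/3) * E_r * R**0.5 * h**1.5, with the product R**0.5 * E_r
    treated as one fitting parameter; with h in um and E_r in GPa, F is in mN."""
    return (4.0 / 3.0) * sqrtR_Er * h_um**1.5

# h_um, F_mN: displacement (um) and force (mN) of one indentation, restricted
# to the early elastic regime (hypothetical placeholder data with added noise).
h_um = np.linspace(0.0, 0.35, 50)
F_mN = hertz_force(h_um, 12.3) + np.random.normal(0.0, 0.005, h_um.size)

(sqrtR_Er_fit,), _ = curve_fit(hertz_force, h_um, F_mN, p0=[10.0])
print(f"fitted R^1/2 * E_r = {sqrtR_Er_fit:.2f} um^1/2 GPa")
```

With R known for the spherical tips, E_r follows from the fitted product, and rearranging eq 2 then isolates the device compliance term.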
Thus, in the case of the Berkovich tip, it was assumed that R_Berk^(1/2) and C_dBerk′ change by the same multiplier m_2&Berk relative to the parameters of the 2 μm spherical tip, and R_Berk^(1/2)E_r(Berk) = 12.34 μm^(1/2) GPa was obtained from the fitting. This yielded R_Berk = 0.46 μm, C_dBerk′ = 0.042 GPa−1, and m_2&Berk = 0.48. The novel deformation phenomenon reported in this study is described as follows. Figure 1 plots a representative load–displacement curve in purple. The displacement was calculated as the position of the indenter tip relative to the initial tip–sample contact. This striking deformation phenomenon starts with a fully elastic regime. In that early stage of the loading, a Hertzian curve of eq 3 can be fitted perfectly, as indicated by the green curve with an arrow in Figure 1. This phase is followed by a sudden deformation event characterized by a notable increase in the displacement from 0.4 to 0.7 μm. Following this event, the deformation is elastic again and follows a shifted Hertzian curve. However, during unloading, strain recovery is observed at a smaller force than that of the initial strain burst, i.e., the curve exhibits hysteresis. Remarkably, no residual deformation is detectable after the indentation, suggesting the absence of any conventional "irreversible" plastic deformation. The duration and magnitude of this type of deformation event can vary significantly, as explained below. Table 1 summarizes the conducted experiments and some of the calculated parameters, separated by platen velocity and tip type. The columns named "Ratio" show the number of experiments executed at a given platen velocity and the number of indentations in which the deformation instability detailed above was unambiguously observed. In a given row, the values of the A, F_y, h_y, and Slope parameters represent averages. These values characterize the stored energy during the cycle (A, the area of the hysteresis loop), the force and displacement at the onset of the plastic event (F_y, h_y), and the rate of the event (Slope). The most important velocity dependence in our data is the presence of the deformation instability. To define the occurrence of instability, we based our analysis on the fits shown in Figure 1. The peculiar nature of these instabilities is that the sudden displacement burst is followed by the Hertzian curve corresponding to the initial elastic region, shifted along the displacement axis by Δh. After fitting the initial elastic region, the Hertzian curve associated with the unloading part can generally be identified unambiguously, except for the parameter Δh. As a criterion for the existence of instability, we chose an artificially selected threshold value for Δh: if the Δh between the two fitted curves on the load–displacement graph exceeded 200 nm, we considered it an indication of instability. Based on Table 1, these instabilities occur at high platen velocities in the case of the spherical tip with a 10 μm radius, at slower velocities for the 2 μm spherical indentation, and not at all for the sharp Berkovich tip. On the other hand, none of the investigated parameters show a significant dependence on the platen velocity. In Figure 3, thirty representative loading parts of the indentation curves obtained with the 2 μm radius spherical tip (organized by platen velocity) are shown for visualization.
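A minimal sketch of how the Δh criterion described above could be applied in practice is given below: the pre-burst loading branch and the unloading branch are each fitted with eq 3, the latter shifted along the displacement axis by Δh, and a cycle is flagged as unstable when Δh exceeds 200 nm. Function and variable names are illustrative choices, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

DH_THRESHOLD_UM = 0.2  # 200 nm instability criterion from the text

def hertz(h, sqrtR_Er, dh=0.0):
    """Eq 3 shifted along the displacement axis by dh (um, mN, GPa units)."""
    return (4.0 / 3.0) * sqrtR_Er * np.clip(h - dh, 0.0, None) ** 1.5

def burst_size(h_load, F_load, h_unload, F_unload):
    """Return dh between the pre-burst loading fit and the unloading fit."""
    (k,), _ = curve_fit(lambda h, k: hertz(h, k), h_load, F_load, p0=[10.0])
    (dh,), _ = curve_fit(lambda h, dh: hertz(h, k, dh), h_unload, F_unload, p0=[0.3])
    return dh

# Hypothetical usage with measured arrays from one loading-unloading cycle:
# dh = burst_size(h_elastic, F_elastic, h_unloading, F_unloading)
# unstable = dh > DH_THRESHOLD_UM
```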
These curves show that there are instabilities with different yielding points, rates, and strain burst sizes; the complete loading–unloading curves are shown in Figures S2–S4. Numerous parameters can be associated with this reversible instability for characterization, allowing some fundamental conclusions to be drawn. The global yielding of the indentations, represented by the gray dashed horizontal lines in Figures 3, S1, and S4a, decreases with increasing tip sharpness, with registered values of 0.75, 0.35, and 0.25 mN. Furthermore, in the case of the spherical tips, the stored energy (proportional to the A values) during the reversible deformation cycle decreases for sharper tips. Additionally, the duration of these events (inversely proportional to the Slope) also decreases for sharper tips. In our investigation, we explored the mechanical properties of the solid-state electrolyte LiPON. Given that both elasticity and plasticity play pivotal roles in determining the durability of LiPON-based batteries, 21,22,38 our research involved nanoindentation experiments employing diverse tip shapes and strain rates. Utilizing a specially designed nanoindenter, 33 our control method differed from traditional methods that typically use force or strain control. This departure from convention allowed us to unveil a novel deformation event linked to the intricate structure of the examined material. The strain recovery, an integral aspect of these complex deformation properties, has been previously documented in the literature. 24 Previous studies have highlighted the exceptional elastic energy storage capacity of LiPON, 23 emphasizing its resistance to crack formation and identifying potential deformation mechanisms such as hydrostatic densification and isochoric shear when surface pop-in events occur. 24 Guided by our results, we propose an alternative deformation mechanism to explain the reversible instability. Assuming that the P(O,N)4 tetrahedra exhibit mobility within the amorphous Li matrix, akin to internal friction in a viscous medium, the local accommodation of these tetrahedra enables volume reduction (local deformation). This phenomenon arises from the higher density of the tetrahedra compared to the pure amorphous Li matrix. Moreover, the reversible instability observed in our study can be elucidated by considering the frictional mobility of these tetrahedra as well: as the external stresses decay, the tetrahedra are able to regain their initial homogeneous distribution. The initiation of tetrahedral motion must occur at a certain force value F_y. Once initiated, avalanche-like cascade movements occur, representing a necessary condition for measurable deformation (and these stochastic properties generally accompany physical instabilities). This cascade effect propagates among neighboring tetrahedra, whereby the disappearance of a tetrahedron from its position creates a temporary vacancy, resulting in a higher density gradient in its proximity. This density gradient may provide the driving force to overcome the initial frictional forces, contributing to the cooperative tetrahedra movement. Assuming the mobility of the P(O,N)4 tetrahedra, even the chemical properties can change during deformation. According to simulations, 17 if the Li/P ratio decreases, the tetrahedra can connect (increasing the number of N_d bonds) and affect the Li+ conductivity.
If so, the deformation-induced local P(O,N)4 accommodation can also affect the Li+ diffusion (via the increased number of N_d bonds), which can explain the observation of Glenneberg et al. 23 They observed, with increasing bending deformation, a decrease in the LiPON resistance and reduced activation energies for lithium migration. The cyclic properties of this deformation mechanism can be interpreted with the help of Table 1. In the case of the less sharp Spherical 10 indentations, the stored energy during a cycle (proportional to A) is higher, which may result in a bigger activated volume via deeper h_y values. This stress-affected volume under the tip has lower inhomogeneity compared to the Spherical 2 tip, which may prevent the cooperativity of the tetrahedra. This could have caused the longer duration of the events and the positive values of the Slope parameters (slow deformation rate during the events), which could indicate the tendency toward cooperativity. The reduced cooperativity of the tetrahedra in the case of sharp (or sharper) tip geometries could be attributed to the greater inhomogeneity of the induced stress field under the tip. This implies that the volume activated by a less homogeneous stress field contains smaller regions with mechanical stresses exceeding the threshold needed to initiate the movement of the tetrahedra. This assumption can also explain the lack of instability in the Berkovich indentation, even though the tip possesses a nonperfect geometry. Understanding the mechanical behavior of LiPON films is crucial for further technological development, not only because of the durability of batteries but also because the ion conductivity depends on the deformation state of the LiPON. In this study, the authors reported the mobility of the P(O,N)4 tetrahedra within the amorphous Li matrix, akin to friction in a viscous medium. This capability can explain not only the reported experiments (an unstable and sudden deformation event followed, most of the time, by total strain recovery) but also previously described phenomena such as the enormous elastic energy storage capability, resistance to fracture, and deformation-dependent electrochemical properties. Generally, a strain rate-dependent instability can be explained by a cooperative phenomenon, as demonstrated by previous studies. 33,39 This cooperation exists between tetrahedra exposed to a sufficiently high stress field. Since this field depends on the tip geometry, it can be inferred that the sharper the tip, the lower the probability that the instability occurs. The varying level of cooperativity among tetrahedra can elucidate the absence of instability in the case of sharp indentation. Additionally, this novel deformation mechanism has not been previously reported in the literature, as this study employed spherical-headed indentation controlled by a different method to unveil the unstable deformation.
PMC11696410

Photoelectrochemical (PEC) sensors show potential for clinical diagnostics and environmental monitoring, offering low detection limits by minimizing background signals. This is possible owing to the separation between the readout source and the excitation source, which, in this case, is light. 1 The miniaturization and cost reduction of these devices require the use of compact light sources, printed electrodes, and photoactive nanomaterials that operate under low-power irradiation. 2 Titanium dioxide (TiO2) is used in PEC analysis due to its photoactivity, cost-effectiveness, photostability, biocompatibility, and low toxicity. 3 TiO2 exists in three main crystal structures: anatase, which is stable at low temperatures; brookite, typically found in minerals but challenging to synthesize; and rutile, which is stable at higher temperatures. 4 Platforms with enhanced photoactivity have been reported by combining TiO2 with graphene-based materials. These composites offer large specific surface areas and improved conductivity, making them ideal for photocatalysis applications. For example, reduced graphene oxide with TiO2 nanoparticles was used for the photocatalytic degradation of the pollutant 4-nitrophenol in water. 5 Graphene/TiO2 core–shell nanofibers with embedded graphene nanofibers were evaluated for phenol photodegradation. 6 Few-layer graphene oxide encapsulated with TiO2 nanoparticles was used in the photocatalytic degradation of the organic water pollutant rhodamine B, with a degradation rate 3-fold that of pure TiO2. 7 Three-dimensional (3D) architectures, such as graphene foam (GF), are attractive for their conductive network and high porosity, which minimize steric hindrance for immobilizing biomolecules with preserved activity 8 and improve photoelectrochemical performance. The production of composites and reproducible films with TiO2 is challenging because of its low dispersibility. To address this limitation, we present a scalable method for synthesizing brookite on a graphene foam electrode (TiO2/GF) without the need for thermal annealing. Thermal annealing is commonly used to increase the crystallinity and improve the properties of semiconductor films. 9,10 However, this process can be energy-intensive, especially for large-scale applications. In contrast, the method presented here eliminates the need for thermal annealing, potentially reducing energy consumption and simplifying the synthesis process. We compared the photoelectrochemical performance of TiO2/GF with that of laboratory-produced electrodes made with carbon ink (CNPs) and modified under the same conditions. The tests were conducted with 0.1 M ascorbic acid (AA), since it is used extensively as a probe in photoelectrochemical immunosensors, 11 aptasensors, 12 and genosensors. 13 The integration of a miniaturized, user-friendly, 3D-printed system with the TiO2/GF electrode demonstrates significant potential for on-site applications. Graphene foam electrodes (GF; Gii-Sens) were purchased from Integrated Graphene Ltd. Printed carbon electrodes (PCEs) were manufactured according to the procedure described by Martins et al. 14 A 20% (w/v) titanium(III) chloride solution was acquired from Thermo Scientific (England, United Kingdom).
l -ascorbic acid (AA, ≥ 98%), potassium chloride (KCl, ≥ 99%), sodium bicarbonate (NaHCO 3 , ≥ 99.8%), sodium chloride (NaCl, ≥ 99%), sodium phosphate dibasic (Na 2 HPO 4 , ≥ 98%), potassium phosphate monobasic (KH 2 PO 4 , ≥ 99%), potassium hexacyanoferrate(II) trihydrate (K 4 [Fe(CN) 6 ]·3H 2 O, 99%), and potassium hexacyanoferrate(III) (K 3 [Fe(CN) 6 ], 99%) were obtained from Sigma-Aldrich (England, United Kingdom). The silver–silver chloride conductive ink used for the pseudoreference electrode (Ag|AgCl) was obtained from TICON (Sorocaba, Brazil). Ultrapure water, provided by a Thermo Fisher system, had a resistivity of 18.2 MΩ cm. The phosphate-buffered saline (PBS) solution was formulated at the following concentrations: 137 mM NaCl, 10 mM Na 2 HPO 4 , 1.8 mM KH 2 PO 4 , and 2.7 mM KCl. Raman spectroscopy was performed by using a Renishaw Qontor confocal Raman microscope with a 532 nm excitation wavelength. Scanning electron microscopy (SEM) images were obtained with a JEOL JSM-7900F microscope operating at an accelerating voltage of 5.0 kV. Electrochemical impedance spectroscopy (EIS) measurements were carried out using a CompactStat system from Ivium Technologies (The Netherlands), while all other electrochemical tests were conducted with a Metrohm Autolab potentiostat (model PGSTAT12). Brookite TiO 2 was electrodeposited onto GF (or PCE) by using an electrochemical cell with temperature control, featuring a silver–silver chloride electrode (3.0 M KCl) as the reference and a printed carbon as the counter electrode. A 25 mM TiCl 3 solution, adjusted to pH 2.5 and heated to 80 °C, was employed. 15 Electrodeposition was carried out at 1.5 V for 10, 20, and 30 min, resulting in TiO 2 -10/GF (or TiO 2 -10/PCE), TiO 2 -20/GF, and TiO 2 -30/GF electrodes, respectively. The electrodes were then air-dried at room temperature. One carbon electrode was then coated with silver–silver chloride conductive ink to serve as a pseudoreference electrode (Ag|AgCl). Scheme 1 illustrates the 3D-printed portable photoelectrochemical system used for the photocurrent measurements, which includes a 3 W LED light (410 nm, 350 mW cm –2 ), a relay module to control the ON-OFF illumination cycles, and a cover to avoid external light interference. Further details can be found in our previous work. 1 Transient current measurements were performed with a potential of 0 V vs the open-circuit potential (OCP). Linear sweep measurements used a potential range from −0.2 to 0.5 V versus Ag|AgCl at a scan rate of 2 mV s –1 . ON-OFF cycles of 20 s for transient current curves, 10 s for linear sweep, and 60 s for the OCP measurements were adopted. The photoelectrochemical experiments were conducted in a PBS solution containing 0.1 M AA. EIS was performed with a 5 mM solution of [Fe(CN) 6 ] 3– / 4– (containing 5 mM K 4 [Fe(CN) 6 ] and 5 mM K 3 [Fe(CN) 6 ]) in 0.1 M KCl, from 1 Hz to 10 kHz with a 0 V bias versus OCP. All experiments were conducted with a 100 μL solution volume. The Raman spectra for the TiO 2 -10/GF and TiO 2 -10/CNPs electrodes in Figure 1 a show bands at 153, 252, 322, 412, and 633 cm –1 , characteristic of brookite TiO 2 . 16 − 18 For graphene foam, the Raman spectra display bands at 1352, 1588, 2693, and 2942 cm –1 , associated with D, G, 2D, and 2D’ vibrational modes . The D peak corresponds to the disordered structure of carbon black (amorphous carbon), while the G peak is associated with the high-frequency vibration of the carbon network. 
The 2D and 2D’ peaks are attributed to the interactions between two layers of graphene and to disordered graphene/nanographene, respectively. 19 The TEM image in Figure 1 c for TiO 2 /GF reveals multilayer graphene structures decorated with TiO 2 nanoparticles. Electron diffraction analyses confirm the brookite phase, as shown in Figure 1 d. The SEM images in Figure 2 a–c show that the GF electrode exhibits an interconnected microporous network, enabling electrolyte ions to penetrate into the graphene electrode. 20 In contrast, the CNPs electrode, composed of graphite and carbon nanoparticles, has a more compact surface. Figure 2 d–f shows that the GF electrode retains a significantly larger surface area compared to the CNPs electrode, even after TiO 2 electrodeposition. The cross-sectional images and EDS mapping in Figure 3 a–e show the TiO 2 -10/GF and GF electrodes. The graphene foam electrodes have a carbon layer 37.5 ± 2.5 μm thick and a TiO 2 layer 4.8 ± 0.8 μm thick. This TiO 2 layer is 2.8 times thicker than the electrodeposited TiO 2 on CNPs (1.7 μm), likely due to better penetration of TiO 2 into the porous graphene structure. However, TiO 2 particles primarily form on the top surface rather than within the GF film, as evidenced by the mapping images. Figure 3 f(i) shows the GF, TiO 2 -5/GF, TiO 2 -10/GF, and TiO 2 -30/GF electrodes before exposure to AA, emphasizing the impact of different electrodeposition times on the TiO 2 layer, while Figure 3 f(ii) shows the same electrodes after 5 min of exposure to 0.1 M AA. The amount of electrodeposited material increases with time. Electrodes prepared for up to 10 min have a uniform film, while 30 min of TiO 2 deposition results in a nonuniform coating. After interaction with the AA solution, all TiO 2 /GF electrodes exhibited a color change from gray to yellow, indicating that electrons in the conduction band are altering the reflected light 21 (vide infra). The XPS spectra of TiO 2 -10/GF before and after exposure to AA are shown in Figure 3 g. The pristine sample exhibits two peaks at 464.73 and 458.98 eV, consistent with the Ti 4+ oxidation state. 22 − 24 Following acid exposure, a shift to higher binding energies (464.93 and 459.20 eV) is observed, suggesting charge transfer from the AA ligand to the TiO 2 conduction band. Cyclic voltammograms were recorded in a PBS solution (pH 7.4) at 50 mV s –1 for the CNPs, GF, TiO 2 -10/CNPs, and TiO 2 -10/GF electrodes. As shown in Figure 4 a, the background current of graphene foam remains mostly unchanged after TiO 2 electrodeposition, a similar observation being made for the carbon substrate in Figure 4 b. A prominent reduction peak at −0.5 V can be assigned to oxygen adsorbed onto GF. 25 The carbon oxidation potentials are very close: 0.45 V for GF and 0.44 V for CNPs. The currents associated with carbon oxidation and water oxidation (potentials above 1.0 V) are higher for GF, most likely because of its larger surface area, as confirmed by the SEM images. To assess the electrochemically active surface area, voltammograms were obtained in a 0.1 M KCl solution containing 5 mM [Fe(CN) 6 ] 3–/4– . Figure 4 c shows more reversible redox pairs for GF than for CNPs electrodes, with a potential difference (Δ E ) of 0.12 V for GF (calculated as Ep a – Ep c ) and 0.53 V for CNPs. Here, “Ep a ” represents the anodic peak potential, and “Ep c ” denotes the cathodic peak potential.
The anodic peak current (Ip a ) and cathodic peak current (Ip c ) are both 1.36 μA cm –2 for GF, whereas for CNPs, Ip a is 0.63 μA cm –2 and Ip c is −0.56 μA cm –2 . These values indicate that the electrochemically active surface area of GF is 2.3 times that of CNPs, as inferred from the Randles-Sevcik method. Moreover, the voltammograms show a decrease in peak current intensity as the electrodeposition time increases from 5 to 30 min. Further characterization of the electrodes after exposure to AA, shown in Figure 4 d, revealed oxidation and reduction peaks in the −0.7 to 0.3 V range. The TiO 2 surface has Ti atoms with incomplete coordination, making them highly reactive. 26 These Ti atoms form charge transfer (CT) complexes with electron-donating ligands, causing a red shift in absorption. 26 − 28 The XPS spectra of TiO 2 -10/GF before and after exposure to AA, shown in Figure 3 g, corroborate this observation. Scheme 2 illustrates the mechanism where AA is oxidized to dehydroascorbic acid (DHA) and subsequently desorbs from the electrode surface, 29 as evidenced by the disappearance of peaks in the cyclic voltammogram. Figure 5 presents the EIS spectra obtained in a 0.1 M KCl solution containing 5 mM [Fe(CN)6] 3– / 4– for the GF, TiO 2 -5/GF, TiO 2 -10/GF, and TiO 2 -30/GF electrodes. The Nyquist plots in Figure 5 a display a small semicircle at high frequencies, indicating kinetic control of the charge transfer process, and a linear region at low frequencies, representing diffusional control of the electroactive species. The GF electrode exhibits an incomplete semicircle, whereas the TiO 2 -10/GF electrode shows a more defined semicircular pattern, as shown in the inset of Figure 5 a. This pattern is characteristic of high surface area electrodes, where increased capacitance can lead to distortion or disruption of the semicircular shape in the Nyquist plot. 30 In addition, the electrodeposition of TiO 2 does not enhance the charge transfer resistance significantly, which is the behavior expected for semiconductors. This is attributable to the electrode’s high porosity and easy access of the [Fe(CN) 6 ] 3– / 4– redox probe to the conductive graphene surface. The ohmic resistance of the TiO 2 -5/GF and TiO 2 -10/GF electrodes (116 and 121 Ω, respectively) is slightly increased compared to the GF electrode (105 Ω), likely resulting from TiO 2 accumulation on the GF surface. However, the resistance decreases to 112 Ω for the TiO 2 -30/GF electrode. The Bode plot in Figure 5 b shows an increase in impedance at low frequencies with longer electrodeposition times. This is attributed to the formation of a thicker diffusion layer, which reduces the available surface area for the TiO 2 deposition. EIS analyses of the TiO 2 -10/GF electrode were conducted before and after exposure to an AA solution under light irradiation. The Nyquist plots in Figure 5 c show an increase in the ohmic resistance. The Bode plot in Figure 5 d reveals a decrease in the total impedance and a shift of the maximum frequency ( f max ) to lower values. This shift in f max , which is related to the electron lifetime (τ e ) in the material through the equation τ e = 1/(2π f max ), 31 , 32 indicates an increased electron lifetime. A lower f max suggests more time for electrons to participate in chemical reactions before recombining. The increased photocatalytic current intensity, resulting from a reduced rate of charge carrier recombination, supports this observation. 
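The two quantitative comparisons above (the relative electrochemically active surface area from the Randles–Sevcik treatment and the electron lifetime from the Bode-plot maximum frequency) reduce to short calculations. The Python sketch below is a minimal illustration, not part of the original workflow: it assumes that the Randles–Sevcik prefactor, diffusion coefficient, probe concentration, and scan rate are identical for both electrodes, so the area ratio collapses to a ratio of mean peak currents, and the f max values used for the lifetime estimate are hypothetical rather than measured values from this work.

```python
import math

# Peak current densities from the [Fe(CN)6]3-/4- voltammograms (uA cm-2)
ip_gf = (1.36, 1.36)      # anodic, |cathodic| for graphene foam
ip_cnp = (0.63, 0.56)     # anodic, |cathodic| for the carbon-ink electrode

# Randles-Sevcik: ip = 2.69e5 * n^(3/2) * A * D^(1/2) * C * v^(1/2); with n, D, C, and v
# shared between the two electrodes, the active-area ratio equals the peak-current ratio.
area_ratio = (sum(ip_gf) / 2) / (sum(ip_cnp) / 2)
print(f"ECSA(GF)/ECSA(CNPs) ~ {area_ratio:.1f}")   # ~2.3

def electron_lifetime_s(f_max_hz: float) -> float:
    """Electron lifetime tau_e = 1/(2*pi*f_max) from the Bode-plot peak frequency."""
    return 1.0 / (2.0 * math.pi * f_max_hz)

# Hypothetical f_max values (Hz) before and after AA exposure, for illustration only
for label, f_max in (("before AA", 10.0), ("after AA", 2.5)):
    print(f"{label}: tau_e ~ {electron_lifetime_s(f_max) * 1e3:.0f} ms")
```

A lower f max therefore corresponds directly to a longer electron lifetime, consistent with the reduced charge-carrier recombination discussed above.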
Thus, the use of AA enhances the photocatalytic efficiency of the TiO 2 -10/GF electrode, making it an attractive probe for developing advanced immunosensors, aptasensors, and genosensors. A photograph of the system used for photocurrent measurements is shown in Figure 6 a. Figure 6 b displays the transient current curves obtained in 0.1 M PBS solution under visible LED light irradiation (410 nm) for the CNPs, GF, and TiO 2 -10/GF electrodes. The photocurrents were 0.04, 0.12, and 0.52 μA cm –2 for the CNPs, GF, and TiO 2 -10/GF electrodes, respectively. Figure 6 c shows the curves in the presence of a 0.1 M AA solution. The GF and CNPs electrodes do not show a significant increase in the photocurrent when AA is added. In contrast, the TiO 2 -modified electrodes exhibit a boost in photocurrent, which can be attributed to the reduction in charge carrier recombination, as discussed in the previous section. The photocurrents for TiO 2 -5/GF, TiO 2 -10/GF, TiO 2 -30/GF, and TiO 2 /CNPs are 58.0, 170.4, 114.4, and 82.0 μA cm –2 , respectively. Since the applied potential can affect the stability and selectivity in photoelectrochemical measurements, mainly due to the contribution of faradaic current, the OCP was studied in the presence and absence of 0.1 M AA. Figure 6 d shows the OCP values for GF and TiO 2 -10/GF electrodes with and without AA (in PBS solution) during 60-s ON/OFF cycles. The potential changes are minimal without AA for the TiO 2 -10/GF electrode. With AA, the OCP ranges from −0.11 V (dark) to −0.39 V (light). Initially, the OCP values in the presence and absence of AA are close, but they do not return to the starting potential after the cycles begin. Attempts to extend the cycle times led to solution evaporation caused by heat from prolonged LED activation, an issue not seen in shorter cycles. Figure 6 e,f show that the electrochemical oxidation of AA begins just after −0.05 V. Experiments were consistently performed at OCP values from −0.05 to −0.07 V. No photocurrent gain is observed when increasing the potential from −0.2 to −0.05 V. The increase in potential only raises the current attributable to faradaic processes, with similar photocurrent at both high and low potentials. Therefore, performing measurements at the equilibrium potential helps achieve a lower baseline and avoids interference from the electrooxidation of AA and organic compounds in the sample. This work presents the electrosynthesis of a photoactive TiO 2 phase on graphene foam electrodes without the need for thermal annealing. The low-temperature electrodeposition method partially embeds TiO 2 into the porous graphene foam, resulting in a photocurrent of 170 μA cm –2 (normalized to geometric area), approximately 2.1 times the value for traditional carbon-based printed electrodes (82 μA cm –2 ). TiO 2 -10/GF outperforms TiO 2 /CNPs electrodes, demonstrating that graphene foam enhances photocurrents and holds promise for TiO 2 -based photoelectrochemical platforms. Although TiO 2 was successfully electrodeposited into the graphene foam film, only the top layer was effectively modified (due to nucleation and growth at the surface). One may expect enhanced photoanodes if TiO 2 can be incorporated deeper into the foam structure. This is challenging and will require further work. Our findings also support the development of biosensors utilizing AA as a probe in conjunction with compact, low-power visible light sources, making the device suitable for point-of-care applications.
PMC11696418 | Dedicated studies on gas–water distribution in tight sandstone gas (TSG) reservoirs are essential for the development these reservoirs, as previous conventional formation water studies may not fully describe the complex geological conditions in tight sandstone reservoirs. 1 − 3 TSG reservoirs are plagued with high water production and complex gas–water relationship—both of which are key to the development of gas reservoirs. 4 − 6 In conventional gas reservoirs, gas is mostly sitting above the water due to buoyancy pressure. 7 The distribution of fluids is greatly affected by the distance between the source rocks and the reservoir, hydrocarbon generating potential, and the sealing capacity of the caprock. 8 , 9 In addition, reservoir properties (such as porosity and permeability) also control the gas–water distribution at the microscopic scale. 10 , 11 However, the circulation of internal fluids in TSG is complicated and the abnormal phenomenon of gas–water inversion sometimes occurs. 12 Previous evidence indicates that gas–water distribution in TSG is the result of multiple factors including tectonic evolution, hydrocarbon source rock potential, reservoir physical properties, and fracture dynamics. 13 , 14 Massimo et al. suggested that the major factor that controls gas–water distribution is structure and reservoir physical properties. 15 Gas in reservoirs is mostly concentrated on structural highs, mostly juxtaposed against traps, as seen in most gas reservoirs in the Ordos Basin. 16 , 17 Higgs et al. found that the properties of TSG are affected by the deep burial process, with obvious mechanical and chemical compaction, which is significantly dependent on the porosity formed by mineral alteration and results in gas saturation being positively correlated with reservoir porosity. 18 The presence of fractures in sandstone reservoirs have a significant impact on fluid migration pathways and plays a crucial role in the evolution of gas–water distribution. 19 , 20 Different source rock formations may have significant differences in gas production irrespective of the source rock properties. 21 − 24 In addition, TSGs have strong heterogeneity and relatively low physical properties, which lead to disproportionate gas reservoirs. 25 − 27 Recently, significant progress had been made in TSG research; however, these studies are limited to a single controlling factor, while ignoring the fact that the influence on the gas–water distribution may be multifaceted. 28 Western Yishan (WY) in the Ordos Basin is the most valuable gas base in China. 29 However, high water production and a complicated gas–water relationship have impaired the exploration and development process, 30 , 31 although many reports have documented the features of formation water and the diagenesis of reservoir rocks as well as gas–water distribution in strata. 32 , 33 However, the main controlling factors and accumulation process remain unclear. 
Hence, we investigated the following: (1) the macrocontrolling factors for gas–water distribution in TSG reservoirs such as hydrocarbon generation intensity, structure, sandstone distribution, and diagenesis; (2) microcontrolling factors that contribute to gas–water distribution in TSG reservoirs such as reservoir physical properties, pore-throat structure, and migration index; (3) natural gas accumulation processes [there are two periods of gas accumulation (175–200Ma; 105–140Ma), hydrocarbon generation intensity essentially determined the volume of gas accumulated in the reservoir, longer continuous charging is more conducive to the formation with high gas content, and gas reservoir is typical of a system that is charged during cementation]. The Ordos Basin is located in central China, to the west of the Lvliang Mountains and east of the Helan Mountains. 34 The Ordos Basin can be divided into six tectonic settings, namely, the Yimeng uplift, the western fault-folded zone, the Tianhuan Depression, the Weibei uplift, the Jinxi folded zone, and the Yishan slope, of which the Yishan slope is a gently sloping morphology . 35 The slope and adjacent Tianhuan Depression areas are considered primary gas reservoirs in the basin. 36 The study area, the WY Slope, is in the midwestern part of the Ordos Basin and covers approximately 10000 km 2 . It is adjacent to Sulige gas field in the north and connected with the Tianhuan Depression in the west. The upper Paleozoic strata in the WY Slope is a sequence of clastic rock sedimentary systems with marine and continental transition facies. 37 The Permian strata are successively developed into Taiyuan Formation (P 1 t), Shanxi Formation (P 2 s), and Lower Shihezi Formation and Upper Shihezi Formation (P 2 h), with a total deposited thickness of roughly 500 m. The source rocks are essentially coal beds and dark mudstone of the Taiyuan Formation and Shanxi Formation, which have the characteristics of being extensively hydrocarbon-generating. 38 TSG is derived from the Shanxi and Lower Shihezi Formations. 35 The first to seventh Members (He 1 to He 7) of the P 2 h are the regional cap rocks. The main gas producing strata are the first Member of the Shanxi Formation (Shan 1) and the eighth Member of Xiashihezi Formation (He 8), among which He 8 can be further subdivided into Upper and Lower He 8. The Shan 1 Formation developed as shallow meandering-river delta deposits, while the He 8 Formation developed as shallow braided-river delta deposits, with regional conformable contacts . 39 In order to establish the correlation between the physical properties of reservoirs and gas accumulation, we used sealed core drilling methods to obtain cores and perform correlation analysis. 40 , 41 A total of 148 rock samples from 35 wells were selected for test analysis (29 samples from Upper He 8, 82 from Lower He 8, and 37 from Shan 1 Member were randomly selected). The selected core samples were placed into the gas gathering device and immediately vacuum sealed for 5 min. This was followed by degassing for 72 h, and the amount of vacuum is recorded alongside collection of the extracted gas and water. The distillation extraction method is used to extract all bound water and remaining movable water from the dissolved samples, and gas saturation of samples is calculated by eq 1 and eq 2 . 
In eqs 1 and 2 , V 1 is the volume of water collected from the core after decompression, V 2 is the volume of water separated by distillation of the gas, V 3 is the volume of water collected during the final distillation of the core, M is the weight of the dry core sample, φ is the effective porosity, S w is the water saturation, and S g is the gas saturation. The porosity and permeability of the samples were analyzed with an overburden-pressure porosity–permeability instrument (PoroPDP-200). A total of 85 source rock samples (23 mudstones and 62 coal samples; 34 from the Taiyuan Formation and 51 from the Shan 2 Formation) were collected for further study. The organic carbon content of the source rocks was tested with a Rock-Eval 2 analyzer (France). The samples were crushed, passed through an 80 mesh sieve, and then combusted in a high-temperature oxygen flow so that the total organic carbon was converted into CO 2 . The total organic carbon content (TOC) was detected by an infrared detector. Pyrolysis analysis was performed at a constant temperature of 300 °C for 3 min to liberate the free hydrocarbons (S 1 ), followed by a programmed temperature rise of 50 °C/min from 300 to 600 °C to obtain the pyrolysis hydrocarbons (S 2 ); the processed samples were then combusted to obtain S 4 . Finally, TOC can be calculated from TOC = 0.83 × (S 1 + S 2 + S 4 ). The hydrocarbon generation intensity integrates all parameters related to the hydrocarbon generation capacity of the source rocks and is calculated using eq 3 . 42 , 43 In eq 3 , Q gas is the hydrocarbon generation intensity, ×10 8 m 3 /km 2 ; H is the source rock thickness, m; ρ is the strata density, g/cm 3 (ρ mud = 2.6 g/cm 3 , ρ coal = 1.55 g/cm 3 ); TOC is the residual organic carbon content, wt %; C k is the organic carbon recovery coefficient, 1.5; 33 and K is the hydrocarbon generation rate, mL/g TOC (K mud = 120 mL/g TOC , K coal = 265 mL/g TOC ). Thin sections were impregnated with blue-dye resin, and a Zeiss Axio Scope A1 microscope was used to examine the sections (n = 435; 113 samples from Upper He 8, 201 from Lower He 8, and 121 from the Shan 1 Member were randomly selected) and count points (450 points per slice) to quantify the clastic framework grains, authigenic minerals, and interstitial materials. The types of pore space, their distribution characteristics, the diagenetic stages, and the development of dissolution pores were identified. The pore-throat structures of the samples were tested with an ASPE-730 constant-rate porometer. Mercury was injected into the samples at a quasi-static rate (5 × 10 –5 mL/min), which ensured that the interfacial tension (480 mN/m) and contact angle (140°) remained unchanged during the experiment. A high-precision pressure sensor recorded the change in pressure with the amount of mercury injected, and a mercury injection curve was generated. When the maximum pressure set for the experiment was reached, the mercury inlet valve was shut and the system was left to stand for 15 min before mercury extraction took place. The pressure changes during mercury extraction were recorded by the sensor until no further changes in volume were observed. In addition, according to the capillary force formula, the capillary radii corresponding to the different pressures applied during the experiment were calculated, and the pore-size distribution histogram was obtained with eq 4 .
Eq 4 is the capillary (Washburn) relation, P c = 2σ|cos θ|/ R c , 44 where P c is the capillary pressure, Pa; σ is the interfacial tension, 480 mN/m; θ is the wetting angle, 140°; and R c is the pore radius, μm. The migration distance of gas is determined by the change in hydrocarbon components that results from the conditions along the migration path. 45 , 46 For components with the same molecular weight (such as iC4 and nC4), the component with the smaller effective molecular diameter (iC4, 5.278 Å) interacts relatively weakly with mineral surfaces, while the component with the larger effective molecular diameter (nC4, 5.784 Å) interacts relatively strongly. 47 The diffusion coefficient of iC4 is also higher than that of nC4. Therefore, the iC4/nC4 ratio increases along the migration direction: with increasing migration distance, R 3 increases and R 4 decreases, so the migration index (ΔR 3 ) increases, as shown in eq 5 , where R 3 = iC 4 /nC 4 and R 4 = iC 4 /nC 3 . A total of 120 gas samples were selected from the Shan 1 and He 8 reservoirs (30 samples from Upper He 8, 61 from Lower He 8, and 29 from the Shan 1 Member were randomly selected). The components were detected by a CP-4900 gas chromatograph with a scanning rate of 1250 u/s and an average consumption of 5 μL of gas per sample. The samples were depressurized and dry-filtered before being placed in the chromatograph. The experiments followed standard industry procedures, as described by Yu et al. 17 From our study, we observed that the percentage of gas wells is highest in Shan 1, accounting for 84%, and that water wells are few and scattered. In the Lower He 8, gas wells accounted for 59%, gas–water wells for 35%, and dry wells for 6%. Water wells are mainly scattered in the central and northern parts of the study area. The number of gas wells is lowest in the Upper He 8, accounting for 29%, while gas–water wells and water wells are distributed throughout the region. From Shan 1 to He 8, gas wells and water wells are distributed across the plane without an obvious pattern. The presence of mudstone and tight sandstone in the formation results in a complex gas–water distribution with no obvious, uniform demarcation interface. However, the lateral connectivity of the sand bodies presents a pattern of upper gas and lower water intervals. The horizontal and vertical gas–water distribution characteristics show that the water layers are not controlled by the regional structure and that there is no uniform gas–water boundary on the plane. Hydrocarbon generation intensity is the amount of hydrocarbon generated by the source rocks per unit area and is related to the thickness of the source rocks and the abundance, type, and maturity of the organic matter. 48 , 49 We calculated the hydrocarbon generation potential from geochemical parameters and the thicknesses of the coal seams and dark mudstone. The central and southern parts of the study area have relatively low hydrocarbon generation intensities, while the northern part has relatively high values, with the hydrocarbon generation intensity reaching 20 × 10 8 m 3 /km 2 . The Shan 1 reservoir is closer to the source rocks and mainly produces gas, while the Upper He 8 produces high volumes of water, which suggests that the hydrocarbon generation intensity partly controls the vertical gas–water distribution.
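To make the petrophysical quantities used in this section concrete, the Python sketch below implements the saturation, TOC, generation-intensity, and pore-radius calculations described in the methods. Because the algebraic forms of eqs 1–3 are not reproduced in the text, the sketch assumes the conventional forms: water saturation as the total recovered water divided by the pore volume, gas saturation as its complement, and generation intensity as the product H·ρ·TOC·C k ·K with a unit-conversion factor; eq 4 is the capillary (Washburn) relation given above. All numerical inputs in the example calls are illustrative and are not data from this study.

```python
import math

def toc_from_pyrolysis(s1, s2, s4):
    """Total organic carbon (wt %) from Rock-Eval fractions: TOC = 0.83*(S1 + S2 + S4)."""
    return 0.83 * (s1 + s2 + s4)

def saturations(v1_ml, v2_ml, v3_ml, pore_volume_ml):
    """Water and gas saturation from sealed-core water volumes (assumed forms of eqs 1 and 2:
    Sw = (V1 + V2 + V3)/Vp and Sg = 1 - Sw, with Vp derived from phi and the dry mass M)."""
    sw = (v1_ml + v2_ml + v3_ml) / pore_volume_ml
    return sw, 1.0 - sw

def generation_intensity(h_m, rho_g_cm3, toc_wt_pct, k_ml_per_g_toc, ck=1.5):
    """Hydrocarbon generation intensity in 1e8 m3/km2 (assumed form of eq 3:
    Q = H * rho * TOC * Ck * K; the 1e-2 factor converts the mixed units)."""
    return h_m * rho_g_cm3 * (toc_wt_pct / 100.0) * ck * k_ml_per_g_toc * 1e-2

def pore_radius_um(pc_pa, sigma_n_per_m=0.480, theta_deg=140.0):
    """Pore-throat radius (um) from capillary pressure (Pa) via eq 4, Pc = 2*sigma*|cos(theta)|/Rc;
    the magnitude of cos(theta) is used because the mercury contact angle exceeds 90 degrees."""
    return 2.0 * sigma_n_per_m * abs(math.cos(math.radians(theta_deg))) / pc_pa * 1e6

# Illustrative example: a 5 m coal seam with TOC ~ 70 wt % gives an intensity near the
# ~20e8 m3/km2 level mentioned above; a 1 MPa intrusion pressure maps to ~0.7 um throats.
print(generation_intensity(5.0, 1.55, toc_from_pyrolysis(2.0, 60.0, 22.3), 265.0))  # ~21.6
print(pore_radius_um(1.0e6))                                                        # ~0.74
```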
From the distribution of wells in the study area, water production was common in the areas with low hydrocarbon generation intensity in the southeast, while high gas production was associated with high hydrocarbon generation intensity in the northeast. This suggests that hydrocarbon generation intensity is one of the main controlling factors of the gas–water distribution in this area. Structure plays an essential role in controlling the migration and accumulation of gas. The gas reservoirs have poor lateral continuity, with gas layers, gas–water layers, and water layers mostly intersecting and superimposed . However, we can see that the well located on a structural high surface showed gas production in Shan 1 and He 8 reservoirs. The gas layer of the Shan 1 Formation exists in isolated sand bodies cut off by tight sandstone layers in the downdip direction. On the other hand, the gas layer in the Lower He 8 was connected with the gas–water layer in the downdip Well Y13, and the same is true in the Upper He 8, which suggested Well Y11 has good reservoir-forming conditions. The Upper and Lower He8 of Y19 are both trapped within tight sandstone and mudstone units, as well as in the downdip direction, leading to low gas migration, while Shan 1 Formation in well Y19 was located in a structural high of a connected sand body, resulting in high gas accumulation. We also can see that well Y22 located in the structural low state produces gas/water . Based on exploitation practice of this gas reservoir, the upper part of the connecting sand body consisted of gas only, while the lower part consisted of a gas–water layer. This suggests that the distribution of gas and water within this sand body is primarily subject to lithology and structure. The superimposed relationship of sand bodies from Shan 1 to He 8 in the WY is divided into four main types: isolated, vertically superimposed, laterally tangential superimposed, and horizontally bridged sand bodies . The isolated sand bodies generally represented a single channel that underwent rapid deposition. 50 The vertically superimposed sand bodies showed a weak swing in amplitude of a single channel where sediment supply is sufficient and stable, while a strong oscillation will evolve to a laterally tangential superimposed sand body. 51 The horizontally bridged sand bodies represent a distributary channel with changing flow direction in multiple periods, but the duration of a single stage channel and sediment deposition time is relatively long. 52 The superposition relationships represented gas accumulation conditions, leading to different gas and water enrichments in sand bodies. In this study, vertically superimposed and laterally tangentially superimposed sand bodies showed high gas saturation. This suggests that both sand bodies have good lithophysical property combination, due to the scouring and superposition of multilevel distributary channels. Although vertical and lateral connectivity of the isolated sand body is poor in Shan 1, it is adjacent to source rocks with a sufficient gas supply and good sealing property, which is conducive for successful accumulation of natural gas. Therefore, distribution of sand bodies is a crucial factor influencing gas–water distribution as shown in Figure 5 . Pore structure and reservoir properties were controlled by lithofacies type and diagenetic modifications. 53 Diagenetic imprints in Shan 1 and He 8 units , showed that the influence of compaction, dissolution, and cementation is common in the region. 
This greatly determines rock porosity and permeability, as shown in Figure 6 . Diagenetic facies refer to the comprehensive product of the original sediment after its adjustment to diagenesis and its evolution in the burial diagenetic environment; they reflect the combined characteristics of rock particles, cements, fabrics, pores, and fractures. 54 , 55 According to the diagenetic characteristics of this area, the reservoir can be classified into four types of diagenetic facies: Classes I, II, III, and IV. The effect of dissolution gradually weakens, while the effect of cementation increases, from Class I to Class IV. The results indicate that gas is mainly distributed in regions with Class I and II diagenetic facies, while dry and water wells are mostly concentrated in regions with Class IV diagenetic facies. Hence, we conclude that diagenesis controlled the pore development of the reservoir and is also one of the key controlling factors for the distribution of gas and water. Sand bodies provide the migration pathways as well as the gas storage units for the Upper Paleozoic gas reservoirs. The correlation between physical properties and gas saturation is weak for the Upper He 8 and Shan 1, while the Lower He 8 shows a positive correlation, which illustrates that gas saturation increases with reservoir porosity and permeability. Although the sand bodies of Shan 1 are poorly connected, they generally have good gas-bearing potential because of their proximity to the source rocks and a sufficient gas supply. In contrast, the Upper He 8 sand bodies show good physical properties but an insufficient gas supply as a result of the migration distance between the source rock and the reservoir. The Lower He 8 reservoir is at a moderate distance from the hydrocarbon generation center compared with the Upper He 8. When the hydrocarbon source supply and reservoir sealing do not differ much, the greater the porosity and permeability, the higher the gas saturation. In this study area, the differences in hydrocarbon generation intensity and diagenesis are insignificant, but the fluids vary, with ① producing gas, ② gas/water, and ③ water. We observed that local microscale structures in the study area cannot cause such distinct differences in fluid accumulation at the different well locations. Three samples from the Lower He 8 reservoir in wells Y11, Y13, and Y19 were tested by mercury injection analysis. The results illustrate that sample ① has the lowest irreducible water saturation and the best connectivity, whereas sample ② has the worst. Sample ③ shows a high displacement pressure, which indicates poor reservoir connectivity. Areas with a complex pore-throat network have poor gas-charging ability, which in turn leaves unexpelled formation water in the reservoir. Gas displaced water in the reservoirs with better pore-throat connectivity; therefore, gas wells are mainly situated in such well-connected reservoir areas. Also, in the late stage of accumulation, the presence of natural gas retarded cementation and better preserved the original pore space of the reservoir. 30 The migration index (ΔR 3 ) is a good indicator of gas migration conditions in the study area. As the gas migration distance increases, the migration index (ΔR 3 ) increases significantly. The migration index distribution in Figure 10 shows that there are more water wells in areas with a low migration index, while gas wells are largely spread across regions with a high migration index.
That can be explained by the narrow throat network of water-wet reservoirs, in which gas–water displacement is difficult, making gas migration in such water-wet reservoirs difficult. The burial history and the hydrocarbon generation and expulsion history of the study area show that there are two periods of gas accumulation. 56 The first stage occurred in the Early Jurassic (175–200 Ma), when the coal-measure source rocks began to generate and expel hydrocarbons that migrated into the adjacent reservoirs and accumulated. At this onset expulsion stage, the charging intensity was weak, with temperatures between 90 and 120 °C required for the onset of thermal transformation of the kerogen. The second period occurred in the Early Cretaceous (105–140 Ma), at temperatures of 120–150 °C. This suggests that the source rocks had reached a mid-to-high maturity phase and generated and expelled large amounts of hydrocarbons, with the natural gas transported and charged through the transport system. 57 This period represents the peak of hydrocarbon accumulation in the study area. However, this period coincides with the middle to late diagenetic stage, with strong cementation and about 6.0% porosity, which is close to the present porosity values. Therefore, the formation process of the Upper Paleozoic gas reservoir is typical of a system that is “charged during cementation”. In the Early Cretaceous, natural gas produced from the Permian source rocks accumulated in the Shan 1 reservoir and displaced the formation water that originally existed there. These reservoirs were adjacent to the underlying source rocks and had a high natural gas charging intensity and are therefore generally gas-bearing, even in sand bodies with poor physical properties. However, accumulation gradually decreases toward He 8, where the Lower He 8 is mainly composed of gas/water layers and the Upper He 8 reservoirs are mostly water-wet. This is due to the large distance between the Upper He 8 and the source rocks, which results in an insufficient gas supply and weak charging. Thus, only partial gas–water replacement occurs in this interval. The gas reservoirs in the study area lack edge and bottom water because the reservoirs are tight and the lithology and physical properties vary laterally. Also, the formation is relatively flat; therefore, distinct differentiation of gas and water is nearly impossible, so no distinct gas–water boundary forms. In addition, the reservoirs had already become relatively tight during accumulation, and formation water mostly remains in the reservoirs in the form of irreducible water, which also helps explain this phenomenon. Through our comparison and comprehensive analysis, we concluded that the accumulation mechanism in this TSG with low hydrocarbon generation intensity in the western Yishan Slope is as follows: (1) Hydrocarbon generation intensity essentially determined the volume of gas accumulated in the reservoir. Given the lower gas supply in local areas with low hydrocarbon generation intensity, longer continuous charging is more conducive to forming intervals with high gas content. (2) Local microscale structure contributed to gas–water differentiation in the study area. (3) The superimposed sand bodies were scoured by multistage distributary channels and formed good sand bodies with vertically combined physical properties, which enhances gas enrichment. (4) The physical properties control the gas content of the reservoirs.
Gas enrichment is enhanced in reservoirs with relatively good pore-throat connectivity, while sand bodies with relatively poor physical properties exhibit poor charging ability and therefore generally low gas content. (1) Accumulation in the reservoirs was subject to hydrocarbon generation intensity and diagenesis. The Shan 1 reservoir is close to the source rocks, and gas migrates vertically over a short distance. The gas charging intensity is high and the formation water is almost completely displaced by natural gas, thus forming the main gas-bearing layers. Conversely, the He 8 reservoir is at a significant distance from the hydrocarbon generation center, with insufficient gas accumulation potential; in this case, water or dry layers are generally prevalent. In addition, regions with strong dissolution and weak cementation are also favorable areas for gas accumulation because of their good reservoir physical properties and pore connectivity. (2) The gas–water distribution was associated with the reservoir physical properties and the pore-throat connectivity during the accumulation period. Areas with better physical properties had favorable pore-throat connectivity and low capillary pressure, thus readily forming gas layers. Reservoirs with poor physical properties have complex pore-throat structures and strong capillary pressure, resulting in the retention of original formation water. (3) The burial history and the hydrocarbon generation and expulsion history show two periods of gas accumulation (175–200 Ma; 105–140 Ma); during the second period the source rocks had reached mid-to-high maturity, large amounts of hydrocarbons were generated and expelled, and the natural gas was transported and charged through the transport system. Longer continuous charging is more conducive to forming intervals with high gas content. Large faults are infrequently developed in the study area, only a limited number of small- and medium-sized faults occur in tectonically active regions, and the available seismic data are currently insufficient. Consequently, the influence of faults has not been thoroughly investigated here; we hope this work facilitates further discussion of their impact on gas reservoir distribution in future research.
PMC11696428 | In the last few decades, TiO 2 has been considered to be one of the most promising catalysts due to its nontoxicity, strong oxidation properties, and cost-effectiveness. 1 − 5 Unfortunately, the photocatalytic capacity of TiO 2 is very limited on account of its broad bandgap, which can only exhibit catalytic activity under ultraviolet light. 6 , 7 Nonmetallic element (e.g., C, N, S, F, and P) doping is an effective way to improve the photocatalytic activity of TiO 2 . 8 − 12 These nonmetal elements can reduce the bandgap of TiO 2 through forming impurity energy levels and improving its visible light absorption. Meanwhile, defects such as Ti 3+ and oxygen vacancies can also improve the photocatalytic activity of TiO 2 effectively. 13 , 14 Oxygen vacancies can introduce an electron state vacancy band below the conduction band, thereby reducing the band gap, 15 while Ti 3+ can promote local excitation by achieving a three-dimensional transition from the gap state to the empty excited state inside the material. 16 Asahi et al. 17 reported that N-doping could dramatically increase the light absorption and activity of TiO 2 . Since then, the visible light catalytic activity of nitrogen-doped TiO 2 has been widely explored. 18 − 22 Nitrogen ions have a similar size to oxygen ions (0.171 nm for N 3– ions and 0.132 nm for O 2– ions) and a small ionization energy. 23 Nitrogen doped into TiO 2 can form an interstitial nitrogen or substitutional nitrogen. Substituted N always causes the 2p levels of N and O to mix, thus narrowing the bandgap of TiO 2 . However, interstitial N may produce a middle state higher than the maximum VB, which could also reduce the bandgap. 24 Experimental and theoretical calculations demonstrate that the nitrogen concentration significantly affects the optical properties of TiO 2 . 25 − 27 Lower nitrogen doping levels result in only a slight narrowing of the bandgap. However, there is an obvious bandgap narrowing at high N doping concentrations, especially with homogeneous doping in bulk TiO 2 , where the dopant and the TiO 2 have full and long-range coupling and could reduce the band gap effectively. 28 So far, the synthesis of high N-doping in bulk TiO 2 remains challenging in experiments due to the substantially higher formation energy needed for the process. Surprisingly, Liu et al. gave a new insight by predoping interstitial boron into TiO 2 and then doping with nitrogen, red TiO 2 was synthesized, which exhibited full visible light spectrum absorbance. 29 The red color of TiO 2 is due to deep layer nitrogen doping and oxygen vacancy. Is there any other element predoping that can also promote the doping of nitrogen into TiO 2 ? Research shows that sulfur could be doped into the TiO 2 lattice and change its geometric structure. 30 Usually, the substitution of the Ti 4+ ions (ionic radius, 0.068 nm) with the S 6+ ions (ionic radius, 0.029 nm) is chemically more favorable compared to the substitution of the O 2– ions (ionic radius, 0.132 nm) with the S 2– ions (ionic radius, 0.17 nm). Yang et al. used first-principles calculations to show that sulfur easily replaces Ti 4+ ions in the TiO 2 lattice. 31 In this work, the S-doped TiO 2 nanoparticles were synthesized hydrothermally from an industrial TiOSO 4 solution, and then, nitrogen doping was realized by using a simple high-temperature calcination process. The prepared nanoparticles (particle size of about 7 nm) showed a red color and had excellent visible light absorption. 
First-principles simulations showed that the presence of sulfur in TiO 2 can significantly reduce the formation energy of nitrogen doping. This may be because the presence of sulfur in TiO 2 can weaken the bonding energy of Ti–O bonds and contribute to nitrogen doping. The electron paramagnetic resonance test shows that red TiO 2 has high oxygen vacancy and Ti 3+ . The excellent visible light absorption of red TiO 2 results from the synergetic effect of oxygen vacancies, Ti 3+ , and nitrogen doping. This study is advantageous for the preparation of TiO 2 with enhanced visible light absorption capabilities. The industrial TiOSO 4 solution used in this work was obtained from a TiO 2 pigment factory, and its primary compositions are TiO 2 = 189 g/L, m (effective H 2 SO 4 )/ m (TiO 2 ) = 1.87. Commercial anatase TiO 2 and rhodamine B (Rh.B) were obtained from Fuchen Chemical Reagent Co., Ltd. (Tianjin, China). Methylene blue (MB) was obtained from Chengdu Chron Chemical Reagent (Chengdu, China). Melamine was obtained from Tianjin Kemiou Chemical Reagent Co., Ltd. (Tianjin, China). S-TiO 2 was synthesized by using a hydrothermal route. In a typical synthesis, 92 mL of industrial TiOSO 4 and 50 mL of water were heated to 96 ± 1 °C, respectively. Afterward, the TiOSO 4 solution was dropped in preheated water within 20 min (using a peristaltic pump) under constant stirring. After mixing, the solution was poured into a 200 mL Teflon-lined autoclave and aged at 110 °C for 3 h. After the reaction, the precipitate was centrifuged, washed with deionized water, and dried at 60 °C. Finally, the solid was sintered in air at 400 °C for 2 h, with a heating rate of 10 °C/min. The resulting sample was labeled as ST. The red TiO 2 was prepared through simple high-temperature calcination in air. First, 0.5 g of the as-prepared ST and a certain amount (0.8, 1, 1.2, or 1.5 g) of melamine were ground for 20 min thoroughly. Subsequently, the mixture was placed in a 50 mL crucible, capped, and calcined at 550 °C for 2 h at a heating rate of 10 °C/min. The obtained products were labeled as SNCT- X ( X was the mass of melamine). For comparison, 0.5 g of commercial anatase TiO 2 and 1 g of melamine were ground and calcined under the same conditions (the product was denoted as CMT). The synthetic process of red TiO 2 is depicted in Figure 1 . The activity of ST and SNCT- X was evaluated by degrading 10 mg/L model compounds, Rh.B and MB. A 300 W xenon lamp with an optical filter to cut off the short-wavelength components was used as the light source (λ ≥ 420 nm). The horizontal distance between the catalyst and the lamp was 35 cm. For every photocatalytic experiment, 0.05 g of the catalyst was added to 100 mL of solution. Before irradiation, the mixed solution was kept in the dark for 30 min, and then, the lamp was turned on. A specified amount of the sample was taken and centrifuged at constant intervals. The supernatant solution was analyzed by using a Vis spectrophotometer (Model 721, Shanghai Xinmao Instrument Co., Ltd. China) at λ = 554 nm (for Rh.B) and 665 nm (for MB) to study the degradation extent of the model compound. Transient photocurrent response and electrochemical impedance spectroscopy measurements were tested by an electrochemical workstation (CHI760E instruments, Shanghai Chenhua Instrument Co., Ltd. China) in a standard three-electrode setup with a Pt plate as the counter electrode, Ag/AgCl as the reference electrode, and FTO glass as the working electrode. 
The sample areas of 0.2826 cm 2 of the working electrodes were immersed in the electrolyte. The electrolyte was 0.2 M Na 2 SO 3 . A 300 W Xe lamp as the light source (>420 nm) was used for the photocurrent tests. The X-ray powder diffraction (XRD) patterns were tested by the Malvern X’Pert3 Powder, with Cu Kα1 irradiation. Raman spectra were obtained by a Renishaw inVia with the wavenumber between 100 and 900 cm –1 . Fourier transform infrared (FTIR) spectroscopy was tested by using a Thermo NICOLET 380. X-ray photoelectron spectroscopy (XPS) was carried out by a Thermo Scientific K-Alpha. Scanning electron microscopy (SEM) pictures were obtained on a ZEISS MERLIN SU8010 electron microscope. Transmission electron microscopy (TEM) images were taken using the FEI Tecnai F20 electron microscope. A Shimadzu UV-3600 spectrophotometer was used to test the optical properties. The surface area and pore size distributions of materials were measured using the Micromeritics ASAP 2460 nitrogen adsorption apparatus. The content of S was determined by a LECO CS230 carbon sulfur analyzer. The Bruker EMX PLUS electron paramagnetic resonance spectroscopy (EPR) was used to test the paramagnetic species. First-principles simulation is selected to calculate formation energy and substitution formation by using CASTEP 32 in the Materials Studio package. The ultrasoft pseudopotentials are selected to calculate the interaction between the electron and the nucleus. GGA and PBE functionals are used to describe the exchange-correlation energy of electrons. The lattice parameters were optimized and relaxed with a cutoff energy of about 314 eV. The k -point spacing of 0.07 Å –1 was used to sample the Brillouin zone. The energy is converged to 1.0 × 10 –6 eV/atom. The XRD patterns are shown in Figure 2 a. All the peaks of ST, CMT, and SNCT- X revealed similar characteristic peaks at 2 theta = 25.28°, 37.80°, 48.05°, 53.89°, 55.06°, and 62.68°, which correspond to anatase TiO 2 . 33 , 34 Moreover, the XRD patterns did not show any other impurities, except for the sample SNCT-1.5. Compared to the ST, the samples SNCT-0.5 and SNCT-0.8 have higher crystallinity, especially SNCT-0.5, in which the diffraction peak intensity increased and the width of the (101) crystal plane peaks became sharper. This is due to further high-temperature treatment of the sample ST, promoting an increase in its crystallinity. The peaks of SNCT- X become broader and weaker with the increase of melamine usage, indicating that the incorporation of nitrogen into TiO 2 decreases its crystallinity. Previous studies have proved that doping with nitrogen will reduce the crystallinity of TiO 2 . 35 , 36 This is probably due to the presence of defects on the grain boundary, which causes lattice strain and inhibits the grain growth. 37 We can also see that there is a small peak at 2 theta = 27.4° for the sample SNCT-1.5. This is due to the excess melamine decomposition of the resulting g-C 3 N 4 . 38 The grain sizes of samples were calculated using the Debye–Scherrer equation. 39 The crystallite sizes of ST, SNCT-0.5, SNCT-0.8, SNCT-1, SNCT-1.2, and SNCT-1.5 are 6.8, 16.2, 12.3, 7.3, 7.2, and 6.8 nm, respectively. UV–vis diffuse reflectance absorption of ST, SNCT- X , and CMT is shown in Figure 2 b. In comparison to ST, all the SNCT- X samples showed a marked increase in absorption strength in the visible region. In addition, the visible absorption intensity changes significantly with the increase in melamine dosage. 
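The crystallite sizes quoted above follow from the Debye–Scherrer equation, D = Kλ/(β cos θ), applied to the anatase (101) reflection. The short Python sketch below is a hedged illustration of that calculation; it assumes Cu Kα radiation (λ = 0.15406 nm) and a shape factor K = 0.9, and the FWHM in the example call is a representative value rather than a measured width from this work.

```python
import math

WAVELENGTH_NM = 0.15406   # Cu K-alpha wavelength (assumed)
K_SHAPE = 0.9             # Scherrer shape factor (assumed)

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float) -> float:
    """Crystallite size from the Debye-Scherrer equation D = K*lambda/(beta*cos(theta)),
    with beta the peak FWHM in radians and theta the Bragg angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K_SHAPE * WAVELENGTH_NM / (beta * math.cos(theta))

# A ~1.2 degree FWHM on the anatase (101) reflection (2 theta = 25.28 degrees)
# corresponds to a crystallite size of roughly 7 nm, the size range reported above.
print(f"D ~ {scherrer_size_nm(1.2, 25.28):.1f} nm")
```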
The samples SNCT-0.5 and SNCT-0.8 only exhibited tail shoulder absorption, which is similar to previous reports of nitrogen-doped titanium dioxide. 40 , 41 The sample CMT also shows tail shoulder absorption in the visible region, similar to that of SNCT-0.5 and SNCT-0.8. With the further increase in the mass of melamine, the samples showed the overall translational absorption, which is probably because of the formation of band-to-band absorption. 42 , 43 The samples SNCT-1 exhibited the best visible light absorption, much better than ST and CMT. Further increasing the amount of melamine (the samples SNCT-1.2 and SNCT-1.5) results in a decrease in light absorption, possibly due to excess melamine decomposition leading to residual organic matter in the sample. The optical photographs of ST, SNCT- X , and CMT are also shown in Figure 2 b. The CMT shows a pale yellow color, similar in color to the well-known nitrogen-doped TiO 2 . For the sample SNCT- X , with the increase of melamine content, the color changed from the white of ST to pale yellow and further to red. We believe that the red color of the samples is attributed to the synergetic effects of Ti 3+ , oxygen vacancy, and high nitrogen doping, which will be discussed later. In order to more intuitively compare the optical properties of white and red TiO 2 , Figure 3 shows the light absorption and bandgap properties of ST and SNCT-1. As depicted in Figure 3 a, compared to the ST with an absorption edge of about 410 nm, the absorption edge of SNCT-1 is extended to about 672 nm, which almost covers the total visible region. The bandgap of samples was estimated using the (α h ν) 2 – h ν relationship 44 and is shown in Figure 3 b. In comparison to ST (bandgap energy of 3.25 eV), SNCT-1 has a much lower optical bandgap (2.10 eV), which is beneficial for visible light responsiveness. Figure 3 c shows the XPS valence band spectra of ST and SNCT-1, and the obtained valence band (VB) values are 2.41 and 1.89 eV, respectively. Because the bandgap of the sample is bound, the bottom potential of the conduction band (CB) of ST and SNCT-1 can be deduced to be −0.84 eV and −0.21 eV, respectively. The corresponding band structures are illustrated in Figure 3 d. Compared to ST, the VB position of SNCT-1 moves in the direction of the negative potential by 0.52 eV, and the CB position shifts in the direction of the positive potential by 0.63 eV. First-principles simulation results are shown in Figure 4 . As depicted in Figure 4 a, the calculated formation energy of nitrogen-doped TiO 2 at different N/Ti ratios is obviously reduced by the existence of S. In particular, the formation energy significantly decreases from 0.36 to 0.15 eV at the N/Ti ratio of 25 at. %, even falling by more than half. The calculated substitution energy of nitrogen-doped TiO 2 shows a different trend. At low N/Ti ratios (6.25 and 12.5 at. %), the substitution energy of nitrogen-doped TiO 2 with S is higher than that without S, but at high N/Ti ratios (18.75 and 25 at. %), the opposite is true. These results indicate that predoped S is beneficial for the preparation of highly nitrogen-doped TiO 2 . The morphology of the samples was analyzed by SEM. Figure 5 a,d reveals the SEM images of ST and SNCT-1. It is seen that the two samples exhibited a spherical shape and a high extent of agglomeration. Further morphology analysis was conducted using TEM and HRTEM. As depicted in Figure 5 b,e, the samples ST and SNCT-1 have similar TEM images. 
The diameters of ST and SNCT-1 were ∼7 nm, which is similar to XRD analysis. The HRTEM images clearly show the crystal lattice of ST and SNCT-1. The ∼0.35 nm interplanar spacing corresponds to the (101) plane of anatase TiO 2 . Figure 5 g–l illustrates the energy dispersive spectra (EDS) of SNCT-1. Apparently, there is a copresence of Ti, O, N, and S elements in SNCT-1, but the distribution of sulfur is relatively sparse, which is due to the high-temperature calcination resulting in the escape of sulfur. In addition, the distribution of the N element in the sample is relatively uniform, which proves that TiO 2 is doped in the sample. Raman spectra were used to explore the crystal properties of white and red TiO 2 . Figure 6 a shows the Raman spectra of ST and SNCT-1. The two samples have similar Raman spectra, and both have four apparent vibrational peaks at 147 cm –1 , 395 cm –1 , 515 cm –1 , and 640 cm –1 . These peaks correspond to anatase TiO 2 , 45 indicating that ST and SNCT-1 consist of a pure anatase phase. In addition, the peak strength of SNCT-1 is significantly lower than that of ST, which may be due to nitrogen doping leading to the formation of defects on the grain boundary of TiO 2. It has been shown that the material defects can affect the vibration mode of Raman. 46 The FT-IR spectra of ST and SNCT-1 are revealed in Figure 6 b. The absorption peak at 3424 cm –1 belongs to the stretching vibrations of the hydroxyl group, and the peak at 1623 cm –1 represents the bending vibrations of surface-adsorbed water. 47 The peak at about 1140 cm –1 belongs to the stretching vibrations of the S–O bond, and the band at 1050 cm –1 represents the Ti–O–S peak, which confirms that S exists in the ST sample. 48 These two peaks disappear completely in the SNCT-1 sample, which is because the high-temperature calcination of ST leads to the escape of S. 49 A broad peak between 700 and 1000 cm –1 is the librational band of adsorbed water. 50 The absorption band between 400 and 800 cm –1 belongs to Ti–O stretching vibration. 51 Compared to ST, SNCT-1 shows new peaks at 1236–1562 cm –1 , mainly belonging to the stretching and bending vibrations of N–H and C–N, 52 , 53 which confirm that the N element is successfully doped into TiO 2 . The surface areas and pore size distributions were tested by N 2 adsorption–desorption measurements. As depicted in Figure 6 c, the typical class IV Langmuir adsorption–desorption isotherm with H3 hysteresis loops exists for ST and SNCT-1, which indicates that mesopores are formed in both samples. 54 The surface areas of ST and SNCT-1 are approximately 186.8 m 2 /g and 157.7 m 2 /g. The decrease of surface areas for SNCT-1 is due to the high-temperature calcination. Figure 6 d shows the pore size distributions of ST and SNCT-1. Obviously, ST and SNCT-1 have similar pore size distributions, and their average pore diameters (calculated by the BJH method) are 4.2 and 5.2 nm. The BJH adsorption cumulative pore volumes of ST and SNCT-1 were 0.258151 and 0.225901 cm 3 /g, respectively. The surface elements and oxidation states were analyzed using XPS. As displayed in Figure 7 a, the survey spectrum of ST clearly showed the peaks of Ti, O, S, and C; however, the survey spectrum of SNCT-1 clearly showed the peaks of Ti, O, N, and C. This result shows that the N element was successfully doped into titanium dioxide. The S peak did not exist in SNCT-1 due to its low content, which is consistent with the EDS analysis and FT-IR analysis. 
The Ti 2p spectra of ST and SNCT-1 are shown in Figure 7 b. The Ti 2p peak of ST is only divided into two peaks at 464.5 and 458.8 eV, which belong to Ti 4+ 2p 1/2 and Ti 4+ 2p 3/2 , respectively. 55 For SNCT-1, the Ti 2p peaks shift toward lower binding energy and can be divided into two more peaks at 458.3 and 463.9 eV, which can be assigned to Ti 3+ 2p 3/2 and Ti 3+ 2p 1/2 . 56 From Figure 7 c, the O 1s spectrum of ST was divided into three peaks at 530.0 eV (belonging to lattice oxygen such as the Ti–O bond), 531.1 eV (belonging to oxygen in SO 4 2– ), 57 and 532.0 eV (assigned to surface hydroxyl groups or absorbed water). 58 The O 1s spectrum of SNCT-1 was divided into two peaks at 529.8 and 531.8 eV, which correspond to the peaks of 530.0 and 532.0 eV of the ST sample, but shifted by about 0.2 eV, may be due to the doping of the N element. The disappearance of the O 1s peak at about 531.1 eV in SNCT-1 is because of the escape of sulfur. Figure 7 d shows the S 2p XPS spectra of ST and SNCT-1. The S 2p XPS spectra of SNCT-1 have no obvious signal due to the high-temperature calcination of ST, which leads to the escape of S. The S 2p XPS spectra of ST could be fitted with two peaks at 168.4 and 169.6 eV, which correspond to S 2p 3/2 and S 2p 1/2 , respectively. Generally, these peaks are ascribed to S 6+ such as sulfur in SO 4 2– . 59 , 60 The N 1s XPS spectra of the sample SNCT-1 could be fitted with three peaks . The peak at 397.4 eV can be assigned to the Ti–N bond. 61 The peak at 398.7 eV is commonly referred to as oxygen atoms being replaced by N atoms to form the N–Ti–N bond. 40 In addition, the peak at 400.0 eV is referred to as the Ti–O–N bond. 62 The high-resolution C 1s spectra of the samples SNCT-1 were divided into three different peaks . The peak at 284.8 eV can be attributed to the adventitious carbon pollution in the XPS measurement, whereas the peaks at 286.0 and 288.0 eV correspond to the C–O bond and C=O bond, respectively. 63 The photocatalytic activity of ST and SNCT-1 was evaluated toward the decolorization of Rh.B and MB. For comparison, the activity of CMT was also evaluated under the same conditions. Figure 8 a reveals the photocatalytic degradation activity of Rh.B. Apparently, only negligible degradation of Rh.B was observed without a catalyst, indicating that Rh.B is relatively stable. Compared to CMT and ST, SNCT-1 shows better photocatalytic activity in the visible light degradation of Rh.B. As can be seen from Figure 8 b, the linear dynamics curve of ln( C 0 / C ) vs time is consistent with the first-order dynamics of the Langmuir–Hinshelwood (L–H) model, indicating that photodegradation is a pseudo-first-order reaction. 64 The calculated values of rate constants for all of the samples are shown in Figure 8 c. Obviously, the SNCT-1 sample exhibited the highest rate constant , which is far above that of the ST and CMT . The photocatalytic degradation of MB is provided in Figure 8 d. Compared to CMT and ST, SNCT-1 also shows obviously better photocatalytic activity in the visible light degradation of MB. As can be seen from Figure 8 e, the linear dynamics curve of ln( C 0 / C ) vs time is consistent with the first-order dynamics of the Langmuir–Hinshelwood (L–H) model. The calculated values of rate constants for all of the samples are shown in Figure 8 f. The SNCT-1 sample exhibited the highest rate constant , which is far above those of the ST and CMT . 
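The rate constants above come from fitting the pseudo-first-order Langmuir–Hinshelwood model. The Python sketch below shows one way to extract k by an ordinary least-squares fit of ln( C 0 / C ) against irradiation time; the C / C 0 values in the example are hypothetical and serve only to illustrate the fit.

```python
import math

def first_order_k(times_min, c_over_c0):
    """Pseudo-first-order rate constant (min^-1) from a least-squares fit of
    ln(C0/C) versus irradiation time (Langmuir-Hinshelwood, dilute limit)."""
    y = [math.log(1.0 / c) for c in c_over_c0]
    n = len(times_min)
    sx, sy = sum(times_min), sum(y)
    sxx = sum(t * t for t in times_min)
    sxy = sum(t * yi for t, yi in zip(times_min, y))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Hypothetical C/C0 values for a dye degraded under visible light (illustrative only)
t = [0, 20, 40, 60, 80]            # irradiation time, min
c = [1.00, 0.62, 0.38, 0.24, 0.15]
print(f"k ~ {first_order_k(t, c):.4f} min^-1")   # ~0.024 min^-1
```

Comparing the fitted slopes for the different samples then gives the relative rate constants summarized in Figure 8 c,f.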
To sum up, SNCT-1 shows much better photocatalytic activity in the visible light degradation of Rh.B and MB, especially in the degradation of MB. The excellent photocatalytic activity of the SNCT-1 sample is attributed to its lower bandgap, which greatly improves the visible light absorption. The stability of the SNCT-1 sample was investigated by cycling experiments, and the results are shown in Figure 9. Between tests, the sample was recovered from the solution by centrifugation, washed, and dried at 80 °C for 10 h. The photodegradation efficiencies of Rh.B were 92.2%, 82.1%, and 79.4% in the three cycles, respectively; the decrease in activity may be caused by partial deactivation of the catalyst. Electrochemical measurements were performed to better understand the origin of the improved photocatalytic behavior. The transient photocurrent response curves are shown in Figure 10a. SNCT-1 produced a markedly higher photocurrent than ST, suggesting that SNCT-1 harvests solar light more efficiently; in general, a higher photocurrent leads to higher photocatalytic activity. 65 Figure 10b illustrates the electrochemical impedance spectroscopy (EIS) profiles of ST and SNCT-1. Compared with ST, SNCT-1 exhibited a smaller EIS arc radius, suggesting that N-doping reduced the internal carrier transfer resistance and thereby sped up electron transport. Room-temperature EPR was used to assess the oxygen vacancies in ST and SNCT-1. As depicted in Figure 11a, almost no EPR signal was measured in ST, whereas a significant signal appears in the SNCT-1 sample. SNCT-1 shows two obvious EPR peaks at g = 2.0010 and g = 1.9813. The former is widely recognized as arising from oxygen vacancies, 66, 67 and the latter is attributed to Ti3+. 14, 68 These results demonstrate that treating ST with melamine produced a large number of oxygen vacancies and Ti3+ species. Research shows that oxygen vacancies and Ti3+ can produce impurity levels just below the CB, which increases the visible light absorption. 69 In addition, we think that the red color of SNCT-1 mostly originates from the Ti3+ and oxygen vacancies. DMPO-EPR measurements were further used to detect the generation of active species under illumination (Figure 11b,c). As shown in Figure 11b, no characteristic signals could be detected in the dark, indicating that no active species are generated in the dark, while clear •O2− signals appeared after 5 min of visible light, indicating that •O2− radicals are formed. 70 Figure 11c shows no obvious •OH signals in the dark, but strong •OH signals are detected under visible light irradiation. 71 These experiments confirm that •OH and •O2− radicals are the important active species in the photocatalysis of SNCT-1. The photocatalytic mechanism of the red TiO2 was put forward and is shown in Figure 12. EDS mapping and XPS analysis have confirmed that the red TiO2 contains a large amount of nitrogen. A new N 2p energy level can be formed above the O 2p valence band, which reduces the bandgap of TiO2. Furthermore, according to the EPR and XPS results, the red TiO2 contains abundant oxygen vacancies and Ti3+, which lead to the formation of an intermediate energy level below the conduction band. 72
The coexistence of the N 2p energy level, oxygen vacancies, and Ti3+ gives the red TiO2 excellent visible light absorption and efficient separation of the photogenerated electron–hole pairs. Under visible light irradiation, the excited electrons and adsorbed oxygen can form superoxide anion radicals (•O2−). 73 In the meantime, the holes can oxidize surface hydroxide to generate hydroxyl radicals (•OH). 74 Therefore, Rh.B and MB can be degraded by these active species. We have used a novel route to synthesize a red TiO2 photocatalyst by first synthesizing S-doped TiO2 and then calcining it with melamine. The predoped S could decrease the formation energy of substitutional nitrogen in TiO2, which is beneficial for achieving a high nitrogen doping level. The red TiO2 has a small particle size (around 7 nm) and a low bandgap (2.10 eV) and therefore exhibits excellent visible light absorption. EPR and XPS analyses show that the red TiO2 contains abundant oxygen vacancies and Ti3+. The synergetic effect of Ti3+, oxygen vacancies, and the nonmetallic dopants leads to the bandgap narrowing of TiO2. The red TiO2 exhibits much better photocatalytic activity in the visible light degradation of Rh.B and MB. | Study | biomedical | en | 0.999999 |
PMC11696440 | Dermal application is a way in which active ingredients can be applied effectively and efficiently through the skin. 1 When the active substance is applied dermally, it has advantages such as providing good patient compliance, being noninvasive, creating minimal drug–drug interactions, ease of application, reducing systemic side effects in case the disease originates from the skin, and providing continuous/controlled release at the site of action. 2 Dermal application may result in reduced pharmacological efficacy due to poor skin penetration of the active ingredients. It has been reported that various nanotechnological approaches such as liposomes, solid lipid nanoparticles, niosomes, transfersomes, ethosomes, nanostructured lipid carriers, nanoemulsions, dendrimers, and micelles can overcome these disadvantages. 3 An alternative approach used for this purpose is to develop a nanosuspension formulation. In nanosuspensions, active substance nanocrystals are pure active substance particles smaller than 1000 nm and are stabilized with appropriate surfactants and/or polymers. 4 The nanometer-sized stabilized particles of the active substance can be absorbed more quickly and easily through the skin and enter the underlying tissues. In nanosuspensions, active substances barely soluble in water have a large-surface area, and therefore both the dissolution rate and water solubility of the active substance increase. They provide accumulation of active substances in the skin in nanoparticulate form, increasing skin penetration and bioavailability of drug molecules by causing an increased concentration gradient. 5 , 6 In addition, it has been reported that these systems increase the dermal pharmacological effect of the active substance as it accumulates in skin appendages and skin layers, especially in epidermis. 7 , 8 The pH of an intact skin surface is generally 5.5, which is considered the classic cutaneous pH. This acidic pH value usually varies between 4 and 6 due to many factors such as age and gender. 9 pH is an important parameter affecting the rate of absorption of acidic and basic drugs, and the nonionized form of the drug penetrates better through the skin. The movement of ionizable particles in aqueous solutions is largely dependent on pH. 10 , 11 When the pH of the nanosuspensions is close to the pH of the stratum corneum, the nanosuspensions are in a nonionic form and the permeability of the drugs increases. 10 For this reason, in our study, the poorly water-soluble (nonionized) base form of lidocaine was used and nanosuspension formulations were developed. Lidocaine is a local anesthetic agent that is practically insoluble in water. 12 In percutaneous or dermal applications, LID penetrates the stratum corneum and desensitizes pain receptors in the skin. Disadvantages such as polymorphism and low bioavailability seen in crystalline pharmaceuticals limit the transdermal application of LID. 13 With drug carrier systems such as nanosuspensions, it is possible for an active substance to penetrate the skin more easily and to provide sustained effect with slow release of the drug substance. In recent years, targeting topically applied active substances to different skin layers as particulate carriers has become an important research topic. 14 For this purpose, many drug carrier systems such as nanoethosomes, 15 solid lipid nanoparticles, 14 microemulsions, 16 nanostructured lipid carriers, 17 silica nanoparticles, 18 and liposomes 19 have been prepared containing LID. 
In this study, a nanosuspension formulation of LID was prepared to benefit from the advantages of nanosuspensions. Nanosuspensions generally consist of active ingredient nanocrystals, surfactant- or polymer-type stabilizers, and a liquid dispersion medium. 20 The type and amount of stabilizing agents have a significant impact on the physical stability and in vivo behavior of a nanosuspension. The most commonly used stabilizers include poloxamers, polysorbates, cellulose derivatives, povidone, and lecithin. 21 In this study, LID nanosuspension formulations were prepared using the media milling method. Compared with other nanosuspension production methods, this technique offers advantages such as high flexibility in handling, simplicity, high reproducibility, low use of excipients, low batch-to-batch variation, and easy scale-up. 22, 23 When preparing nanosuspensions by media milling, many process parameters need to be optimized, such as bead size, milling time, milling speed, and bead volume. 24 For this purpose, 2^3 (two-level, three-factor) factorial designs with three repetitions were performed separately for each stabilizer using Design Expert software to determine the most appropriate process parameters. The approach of using design of experiments (DOE) within quality by design (QbD) allows pharmaceutical researchers to obtain products in a shorter time with fewer experiments. 25 DOE helps identify and classify (critical or noncritical) the various formulation and process parameters that affect product quality. Interactions between input variables can be detected and quantified with a well-implemented DOE, which also makes it possible to predict the desired quality attributes over the design space. 26 The choice of experimental design depends on the objectives of the experiment and the number of factors to be investigated. In this study, the aim was to develop a nanosuspension formulation that would allow LID to accumulate in the skin and exert a greater anesthetic effect. The base form was preferred because it penetrates the skin more easily, accumulates in the stratum corneum, and maintains its local anesthetic effect for a long time. To increase its dermal efficacy, LID nanosuspensions were prepared using an experimental design approach. The effects of the wet media milling process parameters on the PS, PDI, and ZP values of the nanosuspensions were determined. The effect of the nanosuspensions on the dermal bioavailability of LID was examined by permeation and skin accumulation experiments. Additionally, the effectiveness of the formulations was evaluated in vivo by the tail flick test. Lidocaine base was kindly donated by VEM Pharmaceuticals (Turkey). POL 407 was kindly provided by BASF (Turkey), and PVA was provided by Wacker (US). All other reagents used were of analytical grade. Nanosuspensions were prepared by a wet media milling method using a Retsch PM100 mill. A 50 mL milling vessel was used; the coarse suspension, the beads, and the empty headspace each occupied approximately one-third of the vessel. POL (0.5%) was dissolved completely in distilled water under magnetic stirring. LID (2%) was added to this solution and mixed with an Ultra-Turrax homogenizer at 15,000 rpm for 10 min. This coarse suspension was added to the milling bowl together with the beads, the bowl was placed in the device, and milling was carried out at the specified speed and duration.
At the end of the milling period, the nanosuspension and beads were separated from each other using metal sieves. For the PVA formulations, PVA (0.125%) was added to distilled water heated to 80 °C and stirred magnetically until completely dissolved. LID was added to the cooled solution and homogenized with an Ultra-Turrax at 15,000 rpm for 10 min; the remaining steps were the same as those used for the preparation of the POL/LID nanosuspensions. In general, the stabilizers in the determined amounts were dissolved in distilled water, LID was added, and the mixture was homogenized with an Ultra-Turrax for 10 min at 15,000 rpm. To prepare nanosuspension formulations by the wet media milling method, the process parameters were optimized using an experimental design. Design Expert Version 8 software was used to determine the optimal process parameters for each stabilizer. For the nanosuspensions containing POL or PVA, 2^3 (two-level, three-factor) factorial designs with three repetitions were performed. The process parameters used in the prepared formulations are listed in Table 1. The process parameters of the nanosuspensions were chosen based on the relationship between the independent and dependent variables: the independent variables were milling speed, milling time, and bead size, and the dependent variables were PS, PDI, and ZP. Nanosuspensions were prepared using the process parameters specified by the experimental design, and their PS, PDI, and ZP values were measured. Nanosuspensions were lyophilized using a Christ Alpha 1–2LD freeze-dryer. Trehalose (2.5%) was added to 2 mL of nanosuspension. After being frozen at −80 °C for 2 h, the samples were dried in the lyophilizer at −50 °C and 0.021 mbar for 48 h. The obtained powders were then dispersed in distilled water, and the PS, PDI, and ZP values were measured again. PS, PDI, and ZP values were measured using a Malvern Zetasizer; for each measurement, 20 μL of nanosuspension was diluted to 2 mL with distilled water. DSC analysis was performed on the lyophilized nanosuspensions, stabilizer–LID physical mixtures, and coarse LID powder using a Shimadzu DSC-60 instrument. Samples weighed on a precision balance were sealed in aluminum pans. DSC thermograms were recorded between 25 and 300 °C at a heating rate of 10 °C/min under nitrogen. XRD analysis was performed on the lyophilized nanosuspensions, stabilizer–LID physical mixtures, and coarse LID powder with a Rigaku Ultima-IV powder diffractometer, operated at 40 kV over a scanning range of 5–120° (2θ). The spectra of the systems were examined by FTIR spectroscopy using the lyophilized nanosuspensions; FTIR analysis was also performed on coarse LID powder and the excipients. The analysis was carried out over the scanning range of 600–4000 cm−1 using the ATR technique. The morphological properties of the POL and PVA nanosuspensions, stabilizer–LID physical mixtures, and coarse LID powder were examined by SEM (Quanta 400F Field Emission). Samples coated with gold–palladium were imaged at an acceleration voltage of 5–20 kV. The LID concentration was analyzed with a UV spectrophotometer (Cary 60, Agilent Technologies, USA) at a wavelength of 265 nm in pH 7.4 phosphate buffer. The UV method was validated for specificity, linearity, range, accuracy, precision, and robustness. For drug content determination, lyophilized nanosuspension powder was weighed (W) and dissolved in a known volume (V) of pH 7.4 phosphate buffer.
The dispersion was stirred magnetically at 250 rpm, and the concentration of LID (C) was determined with a UV spectrophotometer. The drug content was calculated using eq 1: 27

Drug content (%) = (C × V / W) × 100  (1)

Solubility studies were performed on coarse LID powder, stabilizer–LID physical mixtures, and lyophilized nanosuspensions. Distilled water or pH 7.4 phosphate buffer was placed in a glass vial, an excess amount of the powder material was added, and the mixture was vortexed for 5 min. The samples were then stirred magnetically in a water bath for 48 h at 37 °C and centrifuged at 15,000 rpm for 10 min, and the supernatant was filtered through a 0.45 μm membrane filter and analyzed with a UV spectrophotometer. The study was conducted in three parallel runs. 28, 29 In vitro release studies were performed using Franz diffusion cells fitted with a dialysis membrane (cutoff: 14,000 Da). pH 7.4 phosphate buffer was used as the release medium, and the study was carried out at 37 ± 0.5 °C with stirring at 500 rpm. Sink conditions were maintained by taking the saturation solubility of the active substance into account, so that the concentration in the release medium remained well below the saturation solubility at every time point. Samples were taken at specified times over 48 h, and the withdrawn volume was replaced with fresh pH 7.4 phosphate buffer. The samples were filtered and analyzed by UV spectrophotometry. To investigate the possible mechanisms of LID release from the formulations, the in vitro release data were fitted to zero-order, first-order, Higuchi, Hixson–Crowell, and Korsmeyer–Peppas kinetic models using DDSolver software. The equations of the kinetic models are given in Table 2. Ex vivo skin permeation studies were carried out with the approval of the Gazi University Animal Experiments Local Ethics Committee. Skin samples were taken from healthy male albino Wistar rats (230 ± 10 g) in the Experimental Animals Laboratory of the Gazi University Faculty of Pharmacy. The back areas of the sacrificed rats were shaved without damaging the skin, the entire back skin was removed, and the excised skin was wiped with distilled water. The skin was then mounted on the Franz diffusion cell with the stratum corneum facing the donor compartment and the dermal layer facing the receptor compartment. The formulation was placed in the donor compartment, and the receptor compartment was filled with 2.5 mL of pH 7.4 phosphate buffer. Samples were withdrawn at predetermined times and replaced with fresh pH 7.4 phosphate buffer. The samples were filtered through a 0.2 μm membrane filter and analyzed by HPLC. From these data, the lag time, steady-state flux (Jss), and permeability coefficient (Kp) were calculated using eq 2, 30 with the lag time determined as the point where the linear part of the permeation curve intersects the x-axis: 31

Kp = Jss / Cv  (2)

where Kp is the permeability coefficient, Jss is the steady-state flux, and Cv is the total donor concentration. After the ex vivo study, the skins were removed, and the donor sides of the diffusion cells were washed with distilled water. The weighed skins were cut into small pieces, 2 mL of methanol was added to the skin pieces in tubes, and they were left for 12 h. The tubes were then vortexed for 60 s and centrifuged at 15,000 rpm for 10 min. The filtered supernatant was analyzed by HPLC, and the amount of LID remaining in the skin was calculated from the measured concentrations, taking into account the weight of the skin samples. 32, 33
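For illustration, the permeation parameters defined above can be obtained from a cumulative permeation profile by fitting its linear portion: the slope gives Jss, the x-axis intercept gives the lag time, and Kp follows from eq 2. The sketch below is only a schematic example under these assumptions; the function name, the choice of the linear region, and the numerical values are hypothetical and are not taken from this study.

import numpy as np

def permeation_parameters(times_h, q_cum, donor_conc, linear_from=3):
    """Estimate steady-state flux, permeability coefficient, and lag time
    from cumulative permeation data (a sketch of the usual treatment,
    not the exact calculation used in the study).

    times_h     : sampling times (h)
    q_cum       : cumulative amount permeated per unit area (ug/cm^2)
    donor_conc  : total donor concentration Cv (ug/cm^3)
    linear_from : index of the first point of the assumed linear region
    """
    t = np.asarray(times_h, dtype=float)[linear_from:]
    q = np.asarray(q_cum, dtype=float)[linear_from:]
    slope, intercept = np.polyfit(t, q, 1)   # fit of the linear part
    j_ss = slope                             # steady-state flux (ug/cm^2/h)
    k_p = j_ss / donor_conc                  # Kp = Jss / Cv (cm/h)
    lag_time = -intercept / slope            # x-axis intercept of the fit
    return j_ss, k_p, lag_time

# hypothetical cumulative permeation profile
t = [0, 1, 2, 4, 8, 12, 24, 48]
q = [0, 2, 6, 15, 40, 65, 140, 290]
jss, kp, tlag = permeation_parameters(t, q, donor_conc=20000.0)
print(f"Jss = {jss:.1f} ug/cm^2/h, Kp = {kp:.2e} cm/h, lag time = {tlag:.2f} h")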
Lyophilized nanosuspensions were kept in stability cabinets at 4 ± 2 °C, at 25 ± 2 °C/60 ± 5% relative humidity, and at 40 ± 2 °C/75 ± 5% relative humidity. Samples were taken at certain time intervals, and their characteristic properties (PS, PDI, ZP, and drug content) were examined. Male Wistar albino rats, 10 weeks old and weighing 150–200 g, were used in the in vivo studies. The animals were purchased from Kobay D.H.L. A.Ş. and were maintained under controlled conditions (temperature 20 ± 2 °C; humidity 55 ± 10%; 12/12 h light/dark cycle). In vivo studies were carried out at the Gazi University Faculty of Pharmacy Experimental Animals Laboratory with the approval of the Gazi University Animal Experiments Local Ethics Committee (dated 22.04.2021, approval number 79,983). The animals were randomly divided into three groups of six animals each; the experimental groups are summarized in Table 3. To evaluate the local anesthetic effect, a tail-flick test was performed using a tail-flick measuring device (Ugo Basile, Varese, Italy). The formulations were applied topically to the tail at 16.5 mg of LID per kg. Tail flick latency was determined by applying radiant heat to the dorsal surface of the rats' tails. The cutoff value of the device was set to 10 s to prevent heat-induced tissue damage. For each rat, reaction times were recorded at 0, 0.5, 1, 2, 4, 6, and 8 h. The % analgesic effect was calculated using eq 3: 34, 35

% analgesic effect = [(reaction time − baseline reaction time) / (cutoff time − baseline reaction time)] × 100  (3)

GraphPad Prism 5.0 software was used for the statistical analyses. The t-test was used to compare two groups, and ANOVA was used for comparisons of more than two groups; a difference was considered significant at p < 0.05. In vitro and in vivo studies were conducted in three parallel runs. The wet media milling method is one of the important methods used in nanosuspension preparation. Compared with other nanosuspension production methods, this technique offers simplicity, easy scale-up, and the ability to obtain a narrow particle size distribution. 22 Within the scope of our study, an experimental design was used while developing the nanosuspension formulations. Different design types, such as factorial, Box–Behnken, and central composite designs, can be used; 2^3 factorial designs were used in our study. Factorial designs allow the experimenter to determine which factors are important and at which levels. The most commonly used is the two-level full factorial design, described as a 2^k design, where 2 represents the number of levels (high and low) of each factor and k represents the number of factors. 36 The process parameters that need to be optimized when preparing nanosuspensions by wet media milling are generally stated in the literature to be bead size, milling time, and milling speed. 24 To determine the most suitable process parameters for each stabilizer, 2^3 (two-level, three-factor) factorial designs with three repetitions were constructed using Design Expert software. After the formulations were prepared, the PS, PDI, and ZP values, which were defined as the dependent variables, were measured. The compositions of the formulations prepared using the different process parameters are shown in Table 4. In one study, agomelatine nanosuspensions were prepared by optimizing the bead size (0.1, 0.5, and 1 mm), surfactant concentration (10, 25, and 40%), and milling speed (300, 450, and 600 rpm) parameters with an experimental design.
As a result of that experimental design, it was reported that optimum nanosuspensions were obtained with a 0.1 mm bead size, 10% stabilizer concentration, and 450 rpm milling speed, giving PS, PDI, and ZP values of 210 ± 3 nm, 0.164 ± 0.01, and −17.2 ± 0.8 mV, respectively. 37 In a study using a Box–Behnken experimental design, etodolac nanosuspensions were successfully prepared by the bead milling method. Bead size (0.1, 0.5, 1 mm), milling time (1, 2.5, 4 h), and milling speed (200, 400, 600 rpm) were selected as the independent process variables. It was reported that the PS, PDI, and ZP values improved with a 0.5 mm bead size, 400 rpm milling speed, and 2–2.5 h milling time; in this way, nanosuspensions stabilized with PVP K30 were obtained with PS, PDI, and ZP values of 188.5 ± 1.6 nm, 0.161 ± 0.049, and 14.8 ± 0.3 mV, respectively. 38 In the present study, a first-order model with interaction terms was chosen to fit the experimental data and determine the optimal nanosuspension. The model equation (eq 4) is

Y = b0 + b1X1 + b2X2 + b3X3 + b12X1X2 + b13X1X3 + b23X2X3  (4)

where Y refers to a dependent variable such as PS, PDI, or ZP; b0 is the intercept; b1, b2, and b3 are the main-effect coefficients; b12, b13, and b23 are the interaction coefficients; and X1, X2, and X3 are the coded factors for bead size, milling time, and milling speed, respectively. According to the statistical analysis of the POL/LID nanosuspensions, single factors or two-factor interactions of the process parameters have significant effects on the PS, PDI, and ZP values (Table 5). The aims were for the particles to be nanosized, the PDI values to lie in a narrow range (0.1–0.5), and the ZP magnitude to be ≥20 mV. Examination of the three-dimensional response surfaces showed that PS and PDI decreased with small bead size, low milling speed, and short milling time, and ZP values close to −30 mV were obtained. From the factorial design in which POL was used as the stabilizer, the model equations given below were obtained, where A is the bead size (mm), B is the milling time (h), and C is the milling speed (rpm), and AB, AC, and BC are the interactions between the variables. When the statistical analysis of the formulations stabilized with PVA was performed, all of the main process parameters and their binary interactions were found to have significant effects on PS (Table 6). As shown in Figure 2, smaller PS was obtained with a long milling time, small bead size, and low milling speed. The interaction between bead size and milling time had a significant effect on PDI; when the milling speed was kept constant, nanosuspensions with a narrower PDI range were produced with a small bead size and long milling times. All pairwise interactions had significant effects on the ZP values (Table 6). From the factorial design in which PVA was used as the stabilizer, the model equations given below were obtained, where A is the bead size (mm), B is the milling time (h), and C is the milling speed (rpm), and AB, AC, BC, and ABC are the interactions between the variables. As a result of the experimental design, the most suitable process parameters for both stabilizer types were determined to be a 0.5 mm bead size, 300 rpm milling speed, and 2 h milling time. The PS, PDI, and ZP values of the nanosuspensions prepared using these process parameters are given in Table 7 and Figures 3 and 4. The results show that the POL/LID nanosuspensions have smaller PS, narrower PDI, and more suitable ZP values than the PVA/LID nanosuspensions. In one study, mirtazapine nanosuspensions were prepared by using different polymers and their mixtures.
When the active ingredient stabilizer ratio was 1:1, PS and PDI values of nanosuspensions prepared with POL were found to be 444 nm, 0.214, and nanosuspensions prepared with PVA were found to be 691 nm, 0.293, respectively. Supporting our study, better results were obtained with POL. 39 Attari et al. prepared nanosuspensions containing different concentrations of stabilizers. It was concluded that among the prepared nanosuspensions, those stabilized with POL had lower particle size than those stabilized with PVA. 40 Like our study, Sahu et al. prepared nanosuspensions stabilized with PVA in their research. The zeta potential of nanoparticles was found to be negative and in the range of 5–18 mV. It has been stated that the negative charge may be due to the ionization of the carboxyl group in an aqueous environment. Additionally, the average PS of felodipine nanosuspensions was found to be 60–330 nm and the PDI to be 0.3–0.5. 41 In another study, an olmesartan medoxomil nanosuspension using POL 407 was prepared by a combination of milling and probe sonication. Nanosuspensions were obtained with a size of 469.9 nm and exhibited negative ZP (−19.1 mV). 42 Figure 5 a shows the DSC thermograms of the POL/LID nanosuspension, and Figure 5 b shows the DSC thermograms of the PVA/LID nanosuspension. DSC analysis is performed to determine whether the drug substance showed any polymorphic change or incompatibility in the prepared formulations. The melting degrees of coarse LID powder and LID in the prepared nanosuspensions were found to be similar. The structure of LID was preserved in the prepared nanosuspension formulations, and no incompatibility was observed between excipient and LID. According to the literature review, the excipients used in our study are generally utilized in nanosuspension studies and do not show incompatibility. 43 , 44 XRD measurements are frequently encountered on nanosuspension formulations to examine the crystal structure and polymorphic change properties of the active substance. 45 In a study, XRD measurements were made using paliperidone coarse powder, its physical mixture, and nanosuspension form. At the end of the study, characteristic peaks of the active substance are observed in the physical mixture and nanosuspension formulations. As a result, it was stated that the crystal structure of the active substance was preserved and did not undergo polymorphic change. 46 Similarly, within the scope of our study, XRD measurements of lyophilized nanosuspensions, 1:1 physical mixtures of excipients and LID, and LID powder were made . It was observed that the crystal structure of the LID powder was preserved in both nanosuspension formulations. As a result, it was observed that the nanosuspension preparation method and lyophilization process did not change the crystal structure of the LID. In nanosuspension, the FTIR examination is generally performed to detect the interaction of excipients and active substances. ( 47 − 49 ) The effect of the preparation process and the stabilizers used on the chemical structure of the nanosuspensions can be examined by FTIR analysis. FTIR spectra of lyophilized nanosuspensions prepared within the scope of this study are shown in Figure 7 . The characteristics of LID are the aromatic CH stretching peak around 3000 cm –1 , the NH bending peak around 1600 cm –1 , the CO stretching peak around 1500 cm –1 , and the OH bending peak around 1250 cm –1 in POL and PVA nanosuspensions. The results are compatible with literature information. 
50 From the FTIR results, it can be concluded that the lyophilization and wet media milling processes do not cause any effect, and the LID does not undergo polymorphic change. Morphological examination for nanosuspensions is an important examination that shows the shape and physical state of the particles. SEM images of nanosuspension formulations, physical mixtures, and LID coarse powder were taken to determine morphological properties. When the SEM image of the LID powder is examined in Figure 8 , quite heterogeneous coarse particles are observed, similar to the literature information. 50 Similarly, irregularly shaped active substance and stabilizer particles are seen in physical mixtures . POL/LID and PVA/LID nanosuspensions were obtained in a spherical shape and nanosize . It has been shown in many studies that coarse powder particles and nanosuspension particles of an active substance differ morphologically as well as in size ( ( 46 , 47 , 51 , 52 ) ) . Reducing the coarse powder to a nanometer size increases the solubility of the active substance. This feature is one of the most important advantages of nanosuspensions. The results of solubility studies conducted on coarse LID powder, stabilizer-LID physical mixture, and lyophilized nanosuspensions are shown in Table 8 . The solubility of lyophilized nanosuspensions was found to be higher than that of both physical mixtures and coarse powder. The solubility of the POL/LID nanosuspension was found to be 76% higher than the coarse powder and 44% higher than the physical mixture. For the PVA/LID nanosuspension, these rates were 59 and 34%, respectively. The solubility increased further with the use of POL as a stabilizer. Obtaining active substance particles in the nanometer range increases the saturation solubility and the dissolution rate of the drug because of the larger surface area. As a result of increased solubility, the concentration gradient between the active substance and physiological membranes increases, and the thermodynamic activity increases. This results in higher passive diffusion. 53 There are many nanosuspension studies on this subject. 54 − 56 Shah et al. conducted a comparative solubility study between lumefantrine nanosuspensions and crude active substances. At the end of the study, the saturation solubility of the rough lumefantrine powder was found to be 212.33 μg/mL, while the nanosuspension form was found to be 1670 μg/mL, a 7.8-fold increase was observed. 55 In another study, lutein nanosuspensions were prepared for their dermal use. In the solubility study, an increase in the solubility of the prepared nanosuspensions was observed compared to lutein powder. 57 Assem et al. conducted a comparative solubility study of beclomethasone dipropionate nanocrystals for the treatment of atopic dermatitis. As a result, it was concluded that nanosuspensions have higher solubility compared to coarse powder. 58 When the in vitro release profiles were examined , the amount of % LID released by POL/LID and PVA/LID nanosuspensions at 48 h was 1.3 times and 1.17 times higher than their coarse suspensions, respectively. In addition, it was determined that the % LID amount passing through the dialysis membrane at 12th, 24th, and 48th hours with POL nanosuspension was significantly higher than with PVA nanosuspension ( p < 0.05). The difference between the in vitro release rates of coarse suspensions and nanosuspensions is due to particle size. 
According to the Ostwald–Freundlich equation, the solubility of an active substance increases as its particle size decreases. 59 According to the Noyes–Whitney equation, as the surface area increases, the dissolution/release rate and the total amount of dissolved/released substance also increase. 60 Elmowafy et al. developed luteolin nanosuspensions with antioxidant and anti-inflammatory effects using different stabilizers. In their in vitro release study, the nanosuspension formulations (617.3 ± 25.6 and 468.1 ± 18.6 nm) showed significantly more release than coarse luteolin, and the nanosuspensions with smaller particle sizes showed better release; this was attributed to the particle size difference rather than to the stabilizer. 61 Shen et al. developed nitrofurazone nanosuspensions. In their in vitro release study, at the end of 2 h, the nanosuspension form (89.5%) showed a significant increase in dissolution compared with the physical mixture (26.3%) and the active ingredient powder (23.8%). 62 In another study, Mitri et al. prepared lutein nanosuspensions with a particle size of approximately 429 nm; in an in vitro release study using a cellulose nitrate membrane, the nanosuspensions showed higher release than the coarse suspensions. 57 These results support our findings. DDSolver provides a number of statistical criteria for evaluating dissolution models. Among these parameters, adjusted R2, AIC, and MSC are the most popular and most widely used for identifying the best-fitting dissolution model; a mathematical model describes the release profile better as adjusted R2 approaches 1, AIC decreases, and MSC increases. 63 The statistical parameters of the models describing LID release are listed in Table 9, with the values of the best-fitting model for each formulation shown in bold. For both stabilizers, the release from the physical mixtures follows first-order kinetics, whereas the release from the nanosuspension formulations is best described by Hixson–Crowell kinetics. Chirumamilla et al. developed meropenem nanosuspensions and conducted release studies; when the kinetic models were evaluated, the release model switched from first-order kinetics to the Hixson–Crowell model as the particle size decreased. This was explained by the change in the surface area-to-volume ratio with time, which is the probable reason for the increased solubility and dissolution of poorly soluble active ingredients upon nanonization. 64 Similarly, in our study, the coarse forms are compatible with first-order kinetics and the nanosuspensions with Hixson–Crowell kinetics. Skin penetration of the active substance from nanosuspensions is increased because of their higher saturation solubility compared with μm-sized crystals. 65 This increases the concentration gradient between the dermal formulation and the skin, subsequently leading to a higher diffusion flux. In addition, since nanomaterials are quite adhesive, they increase the residence time on the skin. 66 In the ex vivo skin permeation profiles obtained using rat skin, the % LID passing through the skin at 48 h with the POL and PVA nanosuspensions was 1.7 times and 1.57 times higher, respectively, than with their coarse suspensions. In addition, the % LID passing through the skin at 48 h with the POL nanosuspension was significantly higher than that with the PVA nanosuspension.
This can be attributed to the particle size difference and to the penetration-enhancing effect of POL 407. POL 407 is a nonionic surfactant, and it has been stated that it can interact with the skin, disrupting the lipid barrier of the horny layer and increasing skin permeability. 61 Pireddu et al. compared a diclofenac acid nanosuspension with the coarse powder in an ex vivo study using mouse skin; the amount of diclofenac passing through the skin after 24 h was higher for the nanosuspension than for the coarse form. 67 From the ex vivo study, the flux, permeability coefficient, and lag time values were calculated (Table 10). The lag times obtained in our study were similar to those reported in the literature for nanosuspension formulations. 6 When the flux values of the formulations are compared, the POL/LID nanosuspension is 1.65 times higher than its coarse suspension, and the PVA/LID nanosuspension is 1.36 times higher than its coarse suspension; the flux of the POL/LID nanosuspension was 1.38 times higher than that of the PVA/LID nanosuspension. In addition, the permeability coefficients of the nanosuspension formulations were higher than those of the coarse suspensions. In one study, the skin permeation of glabridin from a nanosuspension, a physical mixture, and a coarse suspension was compared ex vivo using Franz diffusion cells and Sprague–Dawley rat skin. Glabridin permeation was higher from the nanosuspension formulation than from the physical mixture and coarse suspension, and the flux values were likewise higher for the nanosuspension. 68 When Romero et al. compared cyclosporin-A nanosuspensions and coarse suspensions in an ex vivo study on pig ear skin, the nanosuspension form also gave better results. 69 Applying the active ingredient in a nanoscale formulation can increase skin permeability through various mechanisms. Nanoparticulate systems have a large surface area that increases the saturation solubility and dissolution rate of the active substance. Additionally, they can increase diffusion by creating a high concentration gradient between the formulation and the skin. 61 They can easily pass through the stratum corneum and penetrate the dermal sublayers through sweat glands and hair follicles. It is thought that the storage effect, which occurs through accumulation in hair follicles, contributes to the penetration of nanosuspensions through the skin. 70 Additionally, it is important that a dermal nanosuspension has a negative ZP value. It has been reported that negatively charged particles can spread over the skin easily and diffuse into the lower layers more readily because of electrostatic repulsion from the anionic skin surface. 71 The nanosuspension formulations developed in this study have negative ZP values, and it was concluded that this has a positive effect on the passage of the active substance through the skin and on its accumulation in the skin. For nanosuspension formulations from which a local effect is expected, the aim is for the active ingredient to accumulate in the skin and not to pass into the systemic circulation; therefore, studies showing that the active substance accumulates in the skin are very important. As a result of the ex vivo permeation study, the amount of LID remaining in the skin was also determined.
The statistical analysis showed that the amount of LID remaining on the skin at the end of the 48th hour with both POL and PVA nanosuspensions was significantly different compared to the coarse suspensions. Additionally, it was determined that the amount of LID remaining on the skin at the end of the 48th hour with POL nanosuspension was significantly different from that with PVA nanosuspension ( p < 0.05; n = 3). Because of the poor solubility, after penetration of a few drug molecules in solution through a biological membrane, further dissolution of active crystals is not rapid enough to replace the penetrating molecules. Consequently, the rate-limiting step for the absorption of such drugs is the dissolution rate. In contrast to this situation, nanocrystals have increased dissolution rates due to their large-surface area and higher saturation solubility than the coarse active substance. The active ingredient molecules dissolved in the nanosuspension system, penetrated the skin causing the larger concentration gradient, and diffused in the stratum corneum by creating an accumulation area. 72 Since nanosuspensions penetrate the skin better, they accumulate more in the skin than coarse powder. After entering the skin, a better local effect is achieved because the dissolution rate of nanosuspensions is greater than that of the coarse powder. In their ex vivo permeation study, Manca et al. aimed to increase the accumulation of active substances in the skin by using quercetin nanosuspensions. At the end of the study, nanosuspension formulations provided more quercetin accumulation in the stratum corneum, epidermis, and dermis compared to the coarse suspension. 5 Skin penetration of active substances in nanocrystal form increases because their size is at a level that allows them to move within the skin compared to μm-sized crystals. In addition, their high-surface area makes it easier for them to dissolve in the tissue while they diffuse through the skin. 66 The amount of LID in the lyophilized nanosuspension prepared with both stabilizers was found to be 92.75 ± 2.35% for POL/LID and 90.57 ± 1.78% for PVA/LID. One reason why nanosuspensions are effective drug formulations is that they generally offer relatively high drug loading. 10 In a study, a nanosuspension formulation of Rutin, one of the plant secondary metabolites with antioxidant properties, was developed using the media milling method. Drug loading capacity was found as 97.66 ± 3.33%. 73 The issue of stability is an inevitable problem encountered in the development of nanosuspension technology, and pharmaceutical industrial application is also the limiting step in the development of nanosuspension formulations. Nanosuspensions have a large-surface area due to their particle size and high-surface energy, causing agglomeration of the particles. Nanosuspension increases the dissolution of the active ingredient and can cause nanoparticle growth. Flocculation or nanoparticle growth during the manufacturing process or shelf life of nanosuspensions directly affects dissolution and in vivo performance due to the formation of larger particles. 74 The main function of the stabilizer used in nanosuspensions is to obtain physical stability of formulation by surrounding the active substance particles and providing a steric or ionic barrier. This barrier prevents Ostwald ripening and aggregation of the nanoparticles. 21 In our study, while preparing nanosuspensions, POL and PVA were used as stabilizers based on preliminary studies. 
POL is approved by the FDA as a pharmaceutical ingredient and is one of the most widely used hydrophilic nonionic surfactants. Additionally, it has been widely studied because of surrounding the surface of nanocrystals. 75 PVA is a well-established excipient used in various biomedical and pharmaceutical products due to its nontoxicity, noncarcinogenicity, and bioadhesion properties. PVA also acts as a good stabilizer for nanosuspensions, increasing the system’s stability by providing a steric barrier. 76 Long-term stability results of the POL/LID nanosuspension are shown in Figure 14 . It was observed that the PS, PDI, and ZP values of the POL nanosuspension did not change statistically for 12 months at 4 ± 2 and 25 ± 2 °C ( p > 0.05). Similarly, Mishra et al. developed hesperetin nanosuspensions for dermal application. Poloxamer-stabilized nanosuspensions have been reported to be stable at room temperature. 77 Stability results of PVA/LID nanosuspensions are listed in Figure 15 . PS, PDI, and ZP results of PVA nanosuspensions up to 12 months at 4 ± 2 °C and up to 6 months at 25 ± 2 °C were found to be similar to the initial ones. As a result, it was observed that POL nanosuspension maintained its physical stability for 12 months at 4 ± 2 and 25 ± 2 °C, and PVA nanosuspension maintained its physical stability for up to 12 months at 4 ± 2 °C and up to 6 months at 25 ± 2 °C. Based on these data, it was concluded that the physical stability of nanosuspensions in which POL was used as a stabilizer was better. Tail flick testing was applied to determine analgesic and anesthetic effectiveness. This test is a frequently used test to detect the anesthetic effect. There are many studies in which the tail flick test was used to determine the anesthetic effectiveness of LID. ( 78 − 81 ) Within the scope of this study, parameters such as latent time and analgesic effects are examined. The change in latent time of the formulations depending on time is given in Figure 16 . Starting from the 30th minute, POL/LID nanosuspension significantly prolonged the tail flick time and showed a better analgesic-anesthetic effect than the control group and the POL/LID physical mixture ( p < 0.05). In nanosuspensions, active substances barely soluble in water have a large-surface area, and therefore, both the dissolution rate and water solubility of the active substance increase. They provide active substance accumulation in the skin in nanoparticulate form, increasing skin penetration and bioavailability of active substance molecules by causing an increased concentration gradient. 5 , 6 In addition, it has been reported that these systems increase the pharmacological effect of the active substance on the skin as it accumulates in hair follicles and increases the time it stays on the skin. 7 According to the data on the change of the analgesic effect over time , it was observed that the nanosuspension form of LID had a better anesthetic effect compared to its coarse form. It is seen that the detected analgesic effect reaches its maximum level at the 120th minute. It is also supported by the literature that nanosuspensions increase the effectiveness of the active substance compared to coarse suspensions. 35 , 82 According to the literature review, evaluation is also made by calculating the area under the latent duration–time graph obtained from the tail-flick test. ( 81 , 83 , 84 ) Shin et al. developed different LID gel formulations. 
They evaluated the anesthetic-analgesic effect using the area under the time-dependent tail flick graph (AUC) and tail flick time values. 81 In our study, AUC values of the formulations were calculated based on latent period and % analgesic effect ( Table 11 ). According to the data obtained, the LID nanosuspension showed a higher analgesic-anesthetic effect than the coarse suspension. LID nanosuspension formulations were successfully prepared by the wet milling method. Using the experimental design, nanosized stable nanosuspensions were prepared for both stabilizers (POL and PVA). DoE is a suitable approach for optimizing process parameters in the bead milling method. More stable and smaller particle-sized nanosuspensions were produced by using POL as a stabilizer. POL is an advantageous polymer in the production of nanosuspension formulations. Permeation of the active substance through the skin and release from the dialysis membrane were higher in nanosuspension formulations than that in coarse powder. In addition, thanks to nanosuspensions, the active substance accumulates more in the skin, increasing the local effect. Thus, the dermal bioavailability also increases. In this study, process parameters were optimized through experimental design in the production of nanosuspension formulations by the wet milling method. The effect of POL and PVA in nanosuspension formulation was investigated, and POL was found to be more advantageous. Additionally, in the in vivo study, it was concluded that nanosuspensions increased the analgesic/anesthetic effect compared to coarse suspensions. | Review | biomedical | en | 0.999994 |
PMC11696535 | When making decisions under uncertainty, knowing the probabilities of different outcomes simplifies thinking about how people may approach choice problems by allowing us to apply the principles of rational decision theory . This family of theories gives us clear guidelines about how one should decide, enabling straightforward hypotheses for what goes on in the decision-maker’s mind. Under ambiguity, however, decision-makers cannot calculate risk. This introduces important difficulties in understanding how people make decisions with incomplete information, which incidentally happens to be the case with most everyday life decisions. As an example toy model, take Ellsberg’s famous demonstration: in a one-shot gamble to choose between a risky urn of 50 red (good) and 50 blue (bad) tokens and another ambiguous urn of 100 tokens with an unknown red/blue proportion, people tend to prefer choosing the former (risky) option over the latter (ambiguous) one . Such “ambiguity aversion” may be interpreted to mean that individuals believe that the number of winning tokens in the ambiguous gamble must be fewer than in the risky one. This notion of subjective belief – called ambiguity attitude – about the likely structure of one’s ignorance could help us understand how the agent may fill out the missing information necessary to make a choice. In many real-life situations, ambiguity does not necessarily become the complete absence of all information, but it can also indicate partially missing information. In such cases, one has to make up one’s mind with whatever partial information one has. “Partial” ambiguity attitude has been recently studied by manipulating the relative size of the ambiguity while keeping the valence of the information neutral. Ambiguity aversion is also observed in the face of partial ambiguity. Examining ambiguity aversion under partial ambiguity raises important and new questions. Available information often has some valence, sometimes promising benefit and other times cautioning against loss, pushing us toward or away from embracing the ambiguity vs. risk. In one study employing theoretical methods and behavioral experiments, asymmetric effects of positive and negative news were found. When available information supported a favorable outcome, ambiguity tolerance increased. However, unfavorable information did not affect the ambiguous attitude. Similar asymmetric treatments of positive and negative cues for decision-making under risk have been widely interpreted as the underlying cognitive basis of optimism bias . By employing an experimental paradigm that combined risky and ambiguous decision-making, we examined how subjective probability may be constructed from positive vs. negative partial information as the participants chose between a risky option and another partially ambiguous option. We quantified ambiguity attitude in humans by comparing preferences between varying risky and partially ambiguous gambles. Each trial of our experiment presented a choice between playing a risky or a partially ambiguous gamble with the same payoff size. We systematically and orthogonally manipulated the proportion of ambiguity/information and the valence of information by changing the proportion of good/bad news (i.e., positive vs. zero rewards). By applying a staircase method borrowed from sensory psychophysics, we estimated the risky equivalent of each partially ambiguous gamble. 
This equivalent risky gamble allowed us to infer each participant’s subjective fractionation of ambiguity . Following the earlier works on optimism bias under risk , we predicted that greater ambiguity tolerance should be observed when available information has positive vs. negative valence. Our results, however, demonstrated a much more nuanced behavior indicative of a flexible form of skepticism: when ambiguity size was tractable, subjective belief was sensitive to the valence of information; if the information was promising, ambiguity aversion increased, skeptically balancing the promising prospects of available evidence against the hazards of what might be hidden from the view. Conversely, when the information was disappointing, ambiguity tolerance increased, cautiously encouraging the participant to be more adventurous than what the available information guaranteed. When ambiguity was large, ambiguity attitudes were not affected by the valence of information. A total of 77 healthy participants (mean age = 27.4, SD = 4.3) were recruited in the study, consisting of 36 females (19–37 years old) and 41 males (20–35 years old). Participants were from a wide range of academic disciplines, either at the graduate level or in the last semester of their first degree. Participants received monetary payment based on their decisions at the end of the experiment (see monetary payment). All participants signed an informed written consent. The research was approved by Human Research Ethics Committee of the University of Tehran. Participants were individually assessed for attitudes toward ambiguity. Each participant was briefed about the “game” and payment scheme. Participants knew that there was no “right answer” to any of the choices that they would face, and they were only required to report their preferences. They were informed that some of their decisions would be randomly used to calculate their monetary reward. Hence, their choices would not result in any loss. Each participant had played a training session before the experiment to become familiar with the task procedures. Our experiment consisted of 270 two-alternative forced-choice (2-AFC) trials that presented a choice between playing a risky or an ambiguous gamble with the same payoff. Two gambles were presented simultaneously on the computer screen as pie charts, which indicated the number of different tokens in each virtual urn, and both urns contained 100 tokens . The red and blue areas of the pie charts represent the ratio of red and blue tokens. Participants were told that they would “win” if a red token was drawn from their chosen virtual urn. The known proportion of tokens was also shown numerically. Pie charts were rotated randomly to avoid using visual alignment in decisions between gambles. To introduce ambiguity, a portion of one pie chart was blocked by gray. Participants were informed that each ambiguous gamble had an underlying winning ratio assigned to it, which was hidden from the participant in the gray section. In this way, calculating the expected value of the ambiguous gamble was impossible. Participants were assured that a priori winning ratios were fixed during the experiment and would not be changed by experimenters. We systematically varied the properties of ambiguous gambles across trials . We crossed three Ambiguity Sizes, AS (25, 50, and 75%), with three ratios of winning tokens over a total number of tokens in the known part, which we call the Known Winning Ratio, KWR (0.2, 0.5, and 0.8). 
Different values of KWR imply different probabilities of winning for participants. For example, KWR = 0.2 specified that in the known part of the urn, 20% of the tokens were winning tokens and 80% were null, so the delivered information was asymmetric in favor of losing the gamble. Following the same rationale, KWR = 0.5 implies an equal probability of winning vs. not winning. Finally, KWR = 0.8 meant that the information available to the participant favored winning. The experiment was conducted in 3 sessions, each consisting of 30 blocks of 3 trials (270 trials in total). In each session, a fixed KWR was employed. Three ambiguous gambles of different ambiguity sizes were proposed within each block in randomly interleaved order. The order in which the KWRs were assigned to sessions was randomized across participants. Participants had unlimited time to respond and did not receive feedback on the trials. To estimate the equivalent risky gamble corresponding to each ambiguous gamble, we employed an approach similar to the variable-step-size staircase method commonly used in psychophysics studies. The winning ratio of the risky gamble was adjusted adaptively across the session by a stochastic approximation staircase. If the participant preferred the risky gamble over the ambiguous one, the winning ratio of the risky gamble was decreased in the next corresponding trial; if he/she preferred the ambiguous gamble, the winning ratio of the risky gamble was increased. The changes in the winning ratio of the risky gamble were restricted to the AS. Thus, as the winning ratio of the risky gamble was changed depending on the participant's choices, the staircase covered the Ambiguity Size and the subjective fractionation of the ambiguous part was estimated. The staircase started by proposing a risky winning ratio equal to the number of winning tokens in the known part of the ambiguous gamble plus half of the AS. The initial step size of the staircase was equal to a third of the AS and decreased whenever the participant reversed his/her choice. The decrement of the step size followed a harmonic series (i.e., AS/4, AS/5, …, AS/10) and remained constant once it reached AS/10. The choice of a large initial step size and its progressive decrement guaranteed the convergence of the staircase to the Point of Subjective Ambivalence (PSA). The minimum winning ratio proposed by the staircase was equal to the number of winning tokens in the known part of the ambiguous gamble, while the maximum winning ratio was obtained by counting all of the ambiguous part as winning tokens. After the experiment, the participants completed the Revised Life Orientation Test (LOT-R). We used this questionnaire to measure trait optimism/pessimism, and its results did not affect the participants' payment. People might not perform realistically in hypothetical situations. Hence, we informed participants that we would randomly select one trial from each session (3 trials in total) and play the gamble chosen in that trial for their monetary payment at the end of the experiment. We labeled the numbers from 1 to 100 with red/blue colors according to the proportion of tokens in the chosen gamble of that trial and asked participants to pick a number between 1 and 100. If the color assigned to the number was red, we paid them 100 K Rials (equivalent to $3). Each participant also received 100 K Rials for participating in the experiment. We anticipated that there might be a range of strategies that could explain the ambiguity-resolving behavior.
We simulated 1,000 agents for each of our suggested strategies (see details of cognitive strategies in the result section). The probability that the agent chooses the ambiguous gamble is calculated by a single logistic function: P(choose ambiguous) = e^(β·p2) / (e^(β·p1) + e^(β·p2)), where p1 is the probability of winning in the risky gamble, and p2 is the subjective probability of winning in the ambiguous gamble. β is the slope of the logistic function and governs choice noise. For β near zero, the ambiguous and risky gambles are chosen with nearly equal probability; for large β, the choice becomes nearly deterministic, so the probability of choosing the ambiguous gamble tends to 1 whenever its subjective winning probability exceeds that of the risky gamble. We used β = 0.2 for simulating random selection. Based on staircase results, we derived the corresponding AA in all nine experimental conditions for each simulated agent. The risky gamble consisted of winning (red) and null (blue) tokens, where winning resulted in a 100 K rials payoff. The ambiguous gamble was similar to the risky gamble, but a portion of tokens was not disclosed to participants, and they did not know the ratio of winning and null tokens in this undisclosed proportion. Gambles were not played until the end of the experiment. Participants reported their preference between ambiguous and risky gambles at each trial. No feedback was given about gambling outcomes during the experiment. Figure 1A depicts a sample trial consisting of a risky gamble (left pie chart) with fully known probabilities of outcomes: a 40% chance of winning and a 60% chance of getting nothing. The right pie chart depicts a symmetric ambiguous gamble with partially known probabilities of outcomes (25% < chance of winning < 75%). The winning ratio of the risky gamble varied systematically across trials to determine how the Known Winning Ratio (KWR) and Ambiguity Size (AS) influenced the participant's choice. AS is the fraction of the ambiguous gamble covered by the gray sector. The experimental design combined three levels for AS [Ambiguity Size: small (25%), medium (50%), large (75%)] with three values for KWR [Known Winning Ratio: negative valence of information (0.2), neutral (0.5), positive valence of information (0.8)], giving rise to nine conditions. Figure 1B shows the nine conditions resulting from the 3 × 3 design. Subjective attitudes toward ambiguity were elicited using a staircase technique with variable step size. For example, a run of the staircase for a designated ambiguous gamble (A) is shown in Figure 1C. Every time the participant chooses the risky gamble (R), the staircase proposes a risky gamble with an increased number of null tokens in the next step. Every time the participant chooses the ambiguous gamble, the staircase updates the risky gamble with an increased number of winning tokens. We defined the Point of Subjective Ambivalence (PSA) between ambiguous and risky gambles as the average of the last 15 risky winning ratios proposed to the participant within a run of 30 trials. We used the PSA to infer how the participant must have fractionated the ambiguity into win (n_s{W}) and null (n_s{N}) subcomponents. We then calculated the Ambiguity Attitude (AA) for each participant in each condition as shown in Equation 1. AA is a number between 0 and 1. A value of 0.5 shows that the participant split the ambiguous part equally between winning and null tokens (Ambiguity Neutrality). Values higher than 0.5 indicate that the participant divided the ambiguity in favor of the winning tokens.
Values lower than 0.5 show that the participant interpreted the ambiguity negatively, favoring null tokens (Ambiguity Aversion). By employing various KWRs , we offered negative/neutral/positive valence of the information to the participants to measure the effect of the valence of the information on ambiguity attitude. Previously, the ambiguity attitude has been studied only under neutral information, where the probability of winning and not winning represented by partial information was equal . Our work goes beyond those previous studies by introducing different valences of information. We tested our hypothesis about the impact of the valence of information on the relative likelihood that participants attach to the ambiguous part. We predicted that the ambiguity attitude would be greater in positive than negative valence of information . A 3-way mixed ANOVA (KWR: negative, neutral, and positive; AS: small, medium, and large; gender: male and female; AA: dependent variable) was employed, showing a significant main effect of KWR [ F (2,679) = 12.18, p = 6.3e-6]. There was also a significant effect of gender [ F (1,679) = 7.24, p = 0.01], but no main effect of ambiguity size [ F (2,679) = 0.14, p = 0.87] and no significant interaction between the independent variables ( Supplementary Table S1 ). The main effect of KWR on AA revealed a marked asymmetry in resolving ambiguity in different conditions. We calculated the average of AAs at a fixed KWR for each participant. A comparison between AA in positive vs. negative conditions (KWR = 0.2 vs. 0.8) confirmed that the AA in positive conditions was significantly less than the AA in negative conditions [paired t -test; t (76) = 3.43, p = 0.001] . People were less ambiguity-tolerant in positive conditions relative to negative conditions. The ambiguity tolerance decreased as the information was more favorable. We concluded that dividing the structure of ignorance was biased with respect to the given evidence. People assume that the structure of the hidden part would be different from the structure of available evidence and fill out the missing bit of information differently when dealing with ambiguity. This kind of pessimism about given information in the domain of ambiguity needs more explanation and analysis. Additional analysis showed that, on average, female participants were more ambiguity-averse than males . The lack of a main effect of AS on AA indicated that the size of ambiguity had not affected the Ambiguity Attitude. This was consistent with previous studies on decision-making under partial ambiguity . To explain how the subjective structure of probability distribution in the hidden part was biased with the accessible information in the known part, we defined the Optimism Index (OI) by the following Equation 2 : Let us explain how the novel optimism index works. An optimistic person has a positive optimism index. This means that she has more ambiguity tolerance (less ambiguity aversion) in favorable conditions than in unfavorable conditions. Therefore, for an optimistic person, the higher KWR results in a higher ambiguity attitude (positive optimism index). Inversely, a pessimistic person has a negative Optimism Index. Her ambiguity aversion in the positive condition is bigger than in the negative condition. If the information is biased toward the winning, she assumes fewer winning tokens in the hidden part. A realistic person does not change her ambiguous attitude due to the valence of information. 
In other words, the proposed information cannot change her subjective belief about the distribution of tokens in the hidden part. The newly introduced Optimism Index measures people's sensitivity to the given information. We acknowledge that the ambiguity attitude itself also reflects an optimism/pessimism trait to some degree. However, we should emphasize that the Optimism Index measures something different. When we call someone ambiguity averse, she generally dislikes ambiguous options and perceives ambiguity as undesirable. Here, in contrast, the Optimism Index measures how she changes her subjective probability in line with the accessible data; for example, it captures whether information biased toward winning leads to reduced ambiguity aversion. From this definition, we understand that someone could have a positive optimism index and, at the same time, be ambiguity-averse. To calculate the OI, we regressed AA on KWR values for each AS for each participant. Figure 2C shows the regression line for each Ambiguity Size pooled across all participants. Figure 2D shows the optimism index for each level of AS separately for male and female participants and the entire dataset. In our empirical data, a two-way ANOVA (dependent variable: Optimism Index) with factors of AS and gender showed that there was no main effect of gender [F(1,225) = 2.47, p = 0.12] but a marginally significant main effect of AS [F(2,225) = 2.92, p = 0.056] on slopes. A comparison of the OI at AS = 75% against zero confirmed no significant difference [one-sample t-test against zero; AS = 75%: t(76) = −1.42, p = 0.16]. When the ambiguity size was large, people tended to be more realistic. Moreover, the OIs in the small and medium ambiguity size conditions were significantly less than zero [one-sample t-test against zero; AS = 50%: t(76) = −3.14, p = 0.0024; AS = 25%: t(76) = −3.99, p = 1.00E-04] (Supplementary Table S2). When the ambiguity size was tractable, people tended to be pessimists. Additional control measures showed that optimism indexes were not correlated with participants' trait optimism (LOT-R) (AS = 25%: r = 0.03, p = 0.81; AS = 50%: r = 0.1, p = 0.39; AS = 75%: r = 0.18, p = 0.13). To develop a rigorous theoretical framework for decision-making under ambiguity with asymmetric information, we require a weighting distortion function that can accommodate both symmetric and asymmetric information scenarios. To identify this function, we adopted the approach of basing the distortion function on the observed indifference between a risky gamble with a known winning probability of (1 − AS) × KWR + AS × AA and an equivalent ambiguous gamble (AS, KWR). We begin by introducing three well-established distortion functions from the literature that are applicable to our data. Subsequently, we present our proposed distortion function, which is inspired by one of these existing functions and further informed by our empirical findings. The first model is the inverse S-shaped distortion function introduced by Abdellaoui et al. In Equation 3, α represents the index of insensitivity, and β represents the index of pessimism. The second weighting function considers the effect of ambiguity in a linear manner. In Equation 4, γ, the fitted ambiguity aversion parameter, ranges from −1 to 1, with 1 indicating maximum aversion. This differs slightly from our definition of ambiguity attitude (AA), which ranges from 0 to 1, with 1 indicating maximum ambiguity seeking. Finally, the third model, employed by Hsu et al., incorporates the effect of ambiguity through an exponential structure.
In Equation 5, γ is the parameter that measures ambiguity aversion. We now turn to the development of our proposed model, which was informed by our empirical results. We constructed a generalized linear mixed-effects model with a group-level intercept, as shown in Equation 6, treating AS and KWR as independent variables and AA as the dependent variable. Consistent with our previous 3-way ANOVA (Supplementary Table S1), linear model fitting revealed significant effects of the intercept and KWR (p < 0.001) but not AS (p = 0.74). Therefore, we built a revised linear model with KWR as the sole independent variable and an intercept term (Equation 7). Fitting this model to the entire dataset, we obtained a coefficient of −0.16 for KWR and an intercept of 0.56, both of which were statistically significant. The negative coefficient for KWR aligns with our behavioral findings, demonstrating a negative relationship between ambiguity attitude (AA) and KWR. Furthermore, a Pearson correlation analysis yielded a correlation coefficient of ρ = −0.17, confirming a significant negative correlation between AA and KWR. The foregoing analysis adopted a fixed-effects framework for the entire dataset. To incorporate potential heterogeneity in the AA-KWR relationship across individuals, we estimated subject-specific models based on Equation 7, allowing for individual-level parameter variation. Leveraging our established understanding of the equivalent chance of winning for an ambiguous gamble (AS, KWR), we can propose a weighting function with the structure shown in Equation 8. Furthermore, the objective probability is obtained by dividing the ambiguous part equally between the possible outcomes (Equation 9). Combining these gives Equation 10, and finally, by substituting Equation 7 into Equation 10, we obtain Equation 11. We define w_empirical as the probability of winning in the risky equivalent gamble for each ambiguous gamble (AS, KWR). Note that here, w_empirical is calculated from the behavioral data, with AA extracted from the subject's behavior for each ambiguous gamble individually (w_empirical = (1 − AS) × KWR + AS × AA_empirical). We then fit the w_empirical vector (the nine conditions of the experiment) to the proposed w function to determine the best-fitting parameters for each subject separately. To compare the performance of our proposed weighting function, we evaluated it against three prominent weighting functions commonly used in research on decision-making under ambiguity. Having all models, we fitted the w_empirical vector to each weighting function for each subject individually. We calculated the error of each model for each of the 77 subjects. Subsequently, we employed a one-way ANOVA to compare the error values across the different models. Each column in the ANOVA analysis consisted of the error of one model for all 77 subjects. The results indicated a significant error difference between the models (F = 3.39, p = 0.018). A post hoc t-test comparing our model with the next best-performing model (the inverse S-shaped distortion function [Equation 3]) revealed a significant difference between them (CI = [0.013, 0.025], SD = 0.02, p < 0.001, df = 76). Please refer to the Supplementary material for details on the model comparison.
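Because the display equations themselves are not reproduced here, the following Python sketch only illustrates, under our reading of the text, how the per-subject fit of the proposed weighting function could be carried out: the empirical weight in each condition is taken as w_empirical = (1 − AS) × KWR + AS × AA_empirical, the proposed model assumes AA is linear in KWR (cf. Equation 7), and the parameters are recovered by least squares. All names, the choice of squared error, and the synthetic example subject are our own assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

AS_LEVELS = [0.25, 0.50, 0.75]
KWR_LEVELS = [0.2, 0.5, 0.8]
CONDITIONS = [(a, k) for a in AS_LEVELS for k in KWR_LEVELS]  # the nine (AS, KWR) cells

def w_proposed(params, asz, kwr):
    """Proposed weight under our reading: AA is linear in KWR (cf. Equation 7),
    and the equivalent winning probability is (1 - AS)*KWR + AS*AA (cf. Equation 8)."""
    b0, b1 = params
    aa = b0 + b1 * kwr
    return (1.0 - asz) * kwr + asz * aa

def fit_subject(w_empirical):
    """w_empirical: nine values, one per condition, ordered as in CONDITIONS."""
    def residuals(params):
        return [w_proposed(params, a, k) - w for (a, k), w in zip(CONDITIONS, w_empirical)]
    fit = least_squares(residuals, x0=[0.5, 0.0])
    rmse = float(np.sqrt(np.mean(fit.fun ** 2)))
    return fit.x, rmse

# Synthetic subject whose AA follows 0.56 - 0.16*KWR (the group-level estimate reported above)
w_obs = np.array([(1 - a) * k + a * (0.56 - 0.16 * k) for a, k in CONDITIONS])
params, err = fit_subject(w_obs)
print(params, err)   # should recover roughly (0.56, -0.16) with near-zero error
```

Per-subject errors computed this way for each candidate weighting function could then be entered into the one-way ANOVA described above; the parametric forms of the three competing distortion functions are not reconstructed here.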
The Supplementary file contains predictions of each fitted distortion function for selected subjects, a boxplot of the error distribution for the four competing models, and a detailed report of the t-tests comparing our model to the other models. In light of the proposed model, we can compare the empirical results to a number of plausible, cognitively inspired hypotheses for the mental process shaping our subjects' decision-making under ambiguity. A key strength of this approach is that the predictions drawn from these seemingly similar hypotheses are radically different once they are applied to the context of the experimental setup. As a result, even a qualitative comparison of the data to the predictions communicates our point sufficiently. If the participants interpreted every ambiguous gamble as a 50–50 risky urn, then the subjective ambivalence for each of the nine conditions would indicate that (n_S{W} + n_O{W} = n_S{N} + n_O{N} = 50). This would be equivalent to assuming maximum variance in the ambiguous gamble. A participant following this extremely simple and intuitive strategy would compare the probability of winning in a risky gamble with 50%. Remarkably, such a simple strategy would correspond to a very elaborate pattern of different negative slopes for the relationship between KWR and AA for different ambiguity sizes. Participants may instead fractionate the ambiguous part in the same proportion as the known part. The choice would involve comparing the ratio of winning tokens in the known part with the ratio of winning tokens in the corresponding risky gamble, where the quantitative outcome is (n_S{W}/n_S{N} = n_O{W}/n_O{N}). This strategy predicts a unique positive slope for the relationship between AA and KWR. A paranoiac participant may assume that the proportion of tokens in the ambiguous part is the inverse of the proportion displayed in the known sector. The quantitative outcome is (n_S{W}/n_S{N} = n_O{N}/n_O{W}), which corresponds to predicting the same fixed negative slope for all ambiguity sizes. Participants may also not incorporate the partial information to resolve ambiguity, always splitting the ambiguous part in half; participants following this strategy would not adjust their belief in response to variation of the partial information, so AA would be independent of KWR except for a fixed intercept indicating either "Ambiguity Aversion" or "Ambiguity Seeking." Finally, a useful null hypothesis is one in which the agent simply chooses between the two gambles at random. These predictions were obtained by simulating an agent adopting each strategy (see Method) and calculating the simulated agent's AA. Both Variance Maximization (VM) and Reverse Extrapolation (RE), as illustrated in Figure 3B, exhibit a decreasing relationship between AA and KWR. However, the VM weighting function remains constant at 0.5, independent of KWR and AS. Upon visual inspection, the observed variation of AA with respect to KWR resembles the VM plot in Figure 3B. However, a formal analysis reveals a stronger alignment between the structure of our proposed weighting function and the RE cognitive strategy (see Supplementary material for the AA of the different cognitive strategies). Comparing our proposed model with the RE strategy, we observe structural similarities, with the primary difference in the coefficients associated with AA. The coefficient of KWR in AA_RE is −1, which is more extreme, leading to significant variations between conditions.
Conversely, the coefficient of KWR in our fitted model is more moderate (−0.16), resulting in a smoother variation. Additionally, the intercept of our fitted model is approximately 0.56, which is close to 0.5, suggesting a strategy in which subjects may simply ignore the information and divide the unknown probability equally between winning and losing. From this perspective, we can interpret the subjects' strategy as a tempered version of RE, where the variation of AA is centered and confined to a narrower range around 0.5, thus exhibiting similarities to the VM strategy in visual representation. A further hypothesis drawn from the Variance Maximization strategy is that if the available information is already consistent with maximum variance (i.e., KWR = 0.50), the participant should have a much simpler task requiring much less cognitive effort to disambiguate the unavailable information. This would lead to the prediction that response times should be shorter when KWR = 0.50 than when KWR ≠ 0.50. On the other hand, many previous studies have shown that choice response times in human and non-human primates increase with variance in the evidence. These previous studies would predict maximum response time at KWR = 0.50. We analyzed the response times of choices between risky and ambiguous gambles (RTs longer than 20 s were excluded from the analysis). A one-way ANOVA on RTs indicated a main effect of KWR [F(2,76) = 2.73, p = 0.05]. There was a significant difference between the conditions with negative/positive valence of information (KWR = 0.2 and 0.8) and the neutral condition (KWR = 0.5) [paired t-test; 20% vs. 50%: t(76) = 3.44, p = 0.0009; 80% vs. 50%: t(76) = −3.14, p = 0.002, Figure 4]. Not much is known about the role of information in constructing subjective belief under ambiguity, where the probability distribution over uncertain events is partially or completely unknown. To address this question, our study focused on how individuals use the evidence to disambiguate what they do not know. We combined a staircase procedure commonly used in sensory psychophysics with a classical risky choice paradigm from behavioral economics to estimate the participants' ambiguity attitudes. We directly elicited participants' ambiguity attitudes from their revealed preferences between risky and ambiguous gambles in the context of an adaptive staircase. To test our main hypothesis, we introduced a novel approach by employing partial ambiguity, which goes beyond previous studies of decision-making under ambiguity. Some recent studies have investigated how information could change ambiguity aversion: they taught people about the Ellsberg paradox and their own potentially suboptimal decisions under ambiguity, and the results showed that this intervention reduced participants' ambiguity aversion. Generally, this showed that information can modify suboptimal human strategies in the face of ambiguity. Peysakhovich and Karmarkar employed both empirical and theoretical methods to investigate how favorable and unfavorable information can influence the perceived value of ambiguous options. They showed that information added in favor of winning raises the perceived value of an ambiguous gamble in the eyes of gamblers. However, no effect was found when the information favored losing the gamble. Their valuable work was distinct from our work in some aspects.
First, they measured willingness to pay (WTP), balancing two factors: ambiguity aversion and subjective likelihood estimates, while we only focused on ambiguity aversion and how it could be swung by asymmetrical partial information. Second, most of their conditions were special cases that were excluded from our task, for example, (0 and 25%) and (50 and 0%) in their task, which equals KWR = 0 and KWR = infinity, respectively. Previous studies used a parametric computational model to interpret the ambiguity attitude. A softmax function has often been employed to model the probability of choosing the ambiguous gamble. Those previous works estimated the ambiguity attitude by applying a non-linear optimization constrained by the participant’s choice, which requires numerous assumptions about the shape of the distribution . In our study, we fixed the monetary payment for both gambles and changed the winning ratio of risky gambles. Our non-parametric method based on the staircase procedure empowered us to directly measure the ambiguity attitude. Therefore, we do not make any assumptions that may have affected the computational analyses. Our results also showed a valence-dependent asymmetry in how people handle promising and disappointing information to decide what they do not know. People do not fully trust the available evidence when they face ambiguity. Promising information pushes people to change their beliefs skeptically as they balance the promising prospects of available evidence against the hazards of what might be hidden. Conversely, disappointing information fails to thwart people from being adventurous about what might be hidden versus what the evidence suggests. In an unknown environment, people might have interpreted the evidence as a deceptive effort, as if somebody might have wanted to lead them on to take a bad risk or lose some benefits . Our results are consistent with these previous reports on context’s impact on valence’s role in the ambiguity domain. Although there are a number of advantages to holding positive expectations, there seem to be obvious disadvantages to ignoring negative information, such as underestimating risks. The asymmetric belief formation has been blamed for a host of disasters, such as overly aggressive medical decisions , ill-preparedness in the face of natural catastrophes , and financial collapse . Moreover, positively biased views of the self can lead to error and cost, as shown, for example, in overconfident traders . In an unknown environment, such a pessimistic attitude could help us handle information better when deciding what we do not know. A previous study concluded that individuals with greater ambiguity tolerance have a greater tendency to trust other people during social decisions . Although some previous studies in the non-social domain illustrated that individuals with higher ambiguity tolerance are more optimistic about the future according to the LOT-R test , our new measure showed that the amount of optimism or pessimism about life is not related to the optimism in the ambiguity domain. Future studies will be needed to disentangle the relationship between personality and behavior in ambiguity. | Other | other | en | 0.999995 |
PMC11696536 | In the past three decades, China has witnessed rapid economic growth alongside accelerated urbanization, yet significant economic disparities persist between urban and rural areas. This imbalance has driven numerous rural residents to migrate to cities in pursuit of better job opportunities, leaving their children behind in rural areas, often under the care of grandparents or other relatives. These children, known as ‘left-behind children (LBC)’, number 41.77 million according to the seventh national population census conducted by the National Bureau of Statistics of China in 2020, representing 14.03 percent of the nation’s child population. Separated from their parents for extended periods, LBC inhabit an environment where their physical and mental needs may go unmet . Comparative studies with non-LBC suggest a heightened susceptibility to psychological challenges , including emotional instability, social withdrawal, loneliness, and feelings of inferiority. These issues not only hinder the growth and development of LBC but also pose a potential threat to social harmony and stability. Social anxiety disorder is a significant issue among LBC, with a detection rate of 36.1%, compared to just 20.2% among non-LBC . For the unique cohort of LBC, enduring prolonged absence of parental companionship and emotional nurturing, their social well-being often heavily relies on peer interaction and support . However, due to deficits in social skills, many LBC exhibit a tendency towards withdrawal and avoidance of peer communication , exacerbating their symptoms of social anxiety . Social anxiety manifests as heightened anxiety, nervousness, or fear in specific interpersonal situations, characterized by nervous demeanor, fearfulness, excessive apprehension, aversion to eye contact, discomfort, and avoidance during social interactions . Prolonged separation from parents deprives LBC of direct emotional support and security, contributing to emotional loneliness and helplessness . This emotional deficit may lead LBC to display nervous, insecure, and avoidant behaviors in social contexts, increasing the risk of social anxiety disorder . Social anxiety, as a chronic condition, not only exacerbates over time but also elevates the risk of other severe mental health issues, such as moderate to severe insomnia, suicide attempts, substance abuse, and depression . Thus, advancing treatment modalities for LBC with social anxiety disorder is imperative for promoting their healthy development. In recent years, research has highlighted the importance of fostering social interactions and collaborative experiences in interventions aimed at addressing social anxiety in children. Evidence suggests that increasing opportunities for positive social interactions among peers can alleviate symptoms of social anxiety . With this in mind, the current study explores two distinct, peer-focused approaches—Interactive Video Games (IVG) and LEGO Game Therapy (LGT)—to reduce social anxiety in LBC by fostering structured, collaborative environments. In this study, IVG are employed as a means to create structured, peer-based social experiences. The games encourage active social engagement, helping children develop social skills, build relationships, and reduce anxiety through repeated interactions in a safe, controlled virtual environment . While IVG encompass a variety of game modalities, the intervention focuses on games that require interaction between players, thereby promoting the development of social competencies. 
Previous studies have shown that engaging in IVG can improve decision-making and cognitive abilities across various age groups, including older and younger adults , while also fostering positive emotional experiences, enhancing self-esteem, promoting well-being, and reducing anxiety . Instead of classifying games strictly as cooperative, competitive, or exergames, this study emphasizes the social interactions that can occur in various gameplay types. Cooperative games involve players working together toward a shared goal, while competitive games involve players competing against each other. Exergames, which promote physical activity, can also be cooperative or competitive, depending on their design. What sets the games used in this study apart is their ability to facilitate social interactions among children, whether through collaboration or competition, thereby helping them develop important social skills in an engaging way. A recent study comparing these game types found that cooperative gameplay was more effective in reducing anxiety and fostering verbal communication among children . Additionally, LGT has become a widely utilized method in recent years for enhancing children’s social and communication skills . This form of play therapy involves peer-based social interventions, wherein participants collaborate in social division of labor, construct LEGO models, and engage in verbal, visual, and non-verbal social communication . Through this process, individuals develop crucial social skills such as communication and empathy. Lego therapy is frequently employed in the treatment of children with autism spectrum disorder (ASD) and has shown efficacy in reducing parent-reported separation anxiety and self-reported social anxiety among children with ASD . In addition to this, LEGO games are used to improve the communication and social skills of children and teenagers of all ages, allowing children to experience a sense of mastery and achievement, and making it easier for them to explore, build and express themselves . Given the complexity of social anxiety among LBC, researchers have increasingly advocated for combination interventions, which integrate multiple therapeutic techniques to address the multifaceted needs of children. Combination approaches, which often blend structured virtual interactions with hands-on, physical activities, are considered particularly effective for students with diverse learning styles and social skill levels . By incorporating both IVG and LGT, this study seeks to leverage the unique strengths of each approach—IVG’s ability to engage children in immersive, interactive digital environments, and LGT’s emphasis on physical, collaborative play. This dual approach is designed to provide both virtual and real-world platforms for social skill-building, allowing LBC to build confidence, competence, and meaningful peer connections across different settings. The combined approach may yield more comprehensive results, as it addresses social anxiety from multiple angles, offering a dynamic range of interactions that cater to children’s varied preferences and needs. Despite ongoing efforts, the most effective methods for reducing social anxiety in LBC remain elusive, with limited evidence available regarding specific intervention strategies. Exploring diverse therapeutic approaches can enhance intervention outcomes by addressing multiple aspects of a child’s social and emotional experience. 
Virtual games capture children's attention with interactive features, while hands-on LEGO play supports direct, physical social interaction, accommodating different preferences and learning styles. These approaches encourage a variety of social interactions, potentially enhancing engagement and supporting the development of social skills. However, few studies have examined how these interventions impact social anxiety among LBC. Therefore, this study aims to investigate whether different types of social interactions can reduce social anxiety in LBC. By fostering collaboration, emotional connection, and skill development, these interventions have the potential to address the core challenges associated with social anxiety and promote healthier social and emotional development in this vulnerable population. This study employed an experimental design and was conducted by a research team comprising an experienced professor with expertise in sports rehabilitation and clinical psychology, three postgraduate students in sports science, and three graduate students in psychology. The study employed a rigorous approach to data collection throughout its implementation. Initially, baseline measurements were taken for all participants to record their social anxiety scores prior to the intervention. Subsequently, after 12 weeks of intervention, the same measurement process was repeated to evaluate the intervention's effectiveness. Finally, a follow-up assessment was conducted six weeks after the intervention concluded to assess the sustainability of the intervention's effects. The study received approval from the Biomedical Research Ethics Committee of Hunan Normal University and adhered to the ethical principles outlined in the Declaration of Helsinki for research involving human subjects. Written informed consent was obtained from all participants prior to their enrollment in the study. The study was conducted in a rural primary school (Huangdu School) located in Shaodong City, Hunan Province. The participants were selected from 8 different classrooms within the same school. Prior to the intervention, the children were familiar with each other as they attended the same school, though they were distributed across different classrooms. This existing familiarity allowed for a baseline level of social anxiety before the intervention began. Participants were randomly assigned to either the treatment groups (IVG, LGT, and combined intervention groups) or the control group. Children in the treatment groups were pulled out of their regular classroom activities for the intervention sessions, which were held four times per week. Each session took place in a designated intervention room, separate from the regular classroom setting, to create a focused and consistent environment for the therapy. The control group continued their regular school activities without any intervention. The treatment groups followed their standard school curriculum when not participating in the study, ensuring that their academic activities were minimally disrupted. Before starting the study, researchers used G*Power 3.1.9.7 software to determine the sample size. It was estimated that a minimum of 17 cases per group would be necessary, based on a power value of 0.85, an alpha level of 0.05, and an effect size of 0.44. The sample included 84 LBC, with 46 identifying as male and 38 as female. Their ages ranged from 9 to 11 years, with a mean age of 10.405 years (SD = 0.873).
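The sample-size estimate reported above can be approximated in Python. The sketch below assumes a four-group one-way ANOVA with Cohen's f = 0.44, α = 0.05, and power = 0.85; because G*Power offers several ANOVA families (including repeated-measures designs), the per-group figure obtained this way is an approximation of, not a substitute for, the original calculation, and it may differ slightly from the reported 17 per group.

```python
from math import ceil
from statsmodels.stats.power import FTestAnovaPower

# A priori sample-size estimate roughly analogous to the G*Power calculation above:
# Cohen's f = 0.44, alpha = 0.05, power = 0.85, four groups.
analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.44, alpha=0.05, power=0.85, k_groups=4)
print(ceil(n_total), "participants in total, i.e. about", ceil(n_total / 4), "per group")
```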
Hunan Province is known for its high number of LBC, particularly in Shaodong City. The inclusion criteria were as follows: (1) LBC in rural areas who have experienced parental separation for more than 2 years; (2) aged between 9 and 11 years old; (3) self-reported total social anxiety test score of ≥8; (4) no intake of antidepressant in the past 3 months; (5) absence of serious physical ailments or mental disorders; (6) no prior participation in LGT or IVG training. Exclusion criteria included: (1) gross or fine motor disorders; (2) receiving medication or psychological counseling for mental health issues. This study recruited a total of 94 LBC. After excluding 10 participants who did not meet the inclusion criteria, the remaining 84 participants were sorted alphabetically by last name and randomly assigned to four equal-sized groups (n = 21 each) using an online randomization tool ( http://www.randomizer.org ): IVG group, LGT group, combined intervention group, and the control group (CG). During the intervention, all of the LBC participated, with no absences reported. The entire intervention spanned 12 weeks, comprising three 45-min sessions per week. Participants formed teams of 7 based on personal preferences for game partners. Prior to each session, a 5 to 10-min warm-up session, led by a student assistant, was conducted. During this period, the research assistant ensured the IVG classroom’s cleanliness, organized the teaching environment, and verified equipment functionality. The classroom accommodated two X-BOX 360 Kinect™ game consoles (Microsoft, Redmond, Washington, USA), facilitating multiplayer games for two to four children simultaneously. The Xbox 360 Kinect™ is a motion-sensing input device developed by Microsoft for use with the Xbox 360 video game console. While two to four children played the game, the other three to five children either watched, discussed strategies, or waited for their turn. The session facilitator kept everyone engaged by encouraging group discussions about the game and explaining the players’ in-game choices. However, children who were not actively playing might have experienced a different level of engagement compared to those playing at the moment. The facilitator also managed the order of play to ensure that each child had an equal opportunity to participate. In a 45-min session, each child received about 20 min of actual gameplay. These consoles were connected to projectors to ensure all participants had clear visibility of the gameplay. The intervention unfolded in two phases: during the initial six weeks, participants engaged with the “Adventures” mode on Kinect, which includes a variety of interactive games designed to promote physical activity and teamwork. This mode is characterized by its immersive environments and engaging narratives that require players to complete specific tasks or challenges. The five games included—"Fruit Ninja,” “Reflex Ridge,” “20,000 Leaks,” “River Rush,” and “Rally Ball”—were selected for their ability to encourage movement and collaboration among players. Each game was structured to gradually increase in difficulty, helping participants build their skills over time. Participants began with “Fruit Ninja,” which required quick reflexes to slice fruit while avoiding bombs, establishing a baseline of engagement and motor skills. The sequence of games was designed to promote increasing levels of coordination and teamwork, with “Rally Ball” concluding the Adventures phase as a culmination of the skills learned. 
Subsequently, in the following six weeks, participants transitioned to the second phase, involving Kinect’s “Sports” mode. This mode offered a different set of challenges, focusing on traditional sports simulations that provided a competitive element. The six sports simulation games—"Tennis Ball,” “Basketball,” “Skiing,” “Football,” “Badminton,” and “Boxing”—were carefully selected to promote teamwork and physical fitness. In this phase, participants began with “Tennis Ball,” which emphasized hand-eye coordination and strategic placement, before moving on to “Basketball,” where they practiced shooting and passing in a team setting. Each game was designed to build upon the previous skills learned, culminating in “Boxing,” which required coordination and reflexes. Participants underwent a 12-week Lego play intervention, conducted thrice weekly, with each session lasting 45 min. The 21 participants were divided into three groups, each comprising two suppliers, two architects, and three engineers, overseen by a professionally trained therapist (student researcher). Throughout the intervention, the engineer delineated the required parts, while the parts supplier located the corresponding blocks, subsequently passing them to the architect for assembly under the engineer’s guidance. The building goals for each session were selected based on the participants’ developmental levels and interests, promoting creativity and teamwork. Examples of these goals included building a bridge, a vehicle, and a community center, all of which encouraged collaboration and problem-solving. Each goal was designed to be challenging yet achievable, providing a sense of accomplishment. At the start of each session, participants were informed of their building goal, and visual aids were used to show the desired outcome. Clear signals indicated when to change roles; for instance, after completing a significant part of the project, participants were encouraged to switch roles, allowing everyone to experience different functions within the team. Role changes occurred every 10 min or after specific tasks were completed, helping to keep everyone engaged and fostering a sense of shared responsibility. The therapist (student researcher) determined the developmental level of each group’s tasks by assessing the participants’ cognitive and social abilities throughout the intervention. While all groups had the same overall building goals, the complexity of the tasks varied to match each group’s developmental level. For example, one group might build a more complex bridge with advanced features, while another group focused on a simpler structure. This variation ensured that all tasks were suitable and challenging for the participants. Throughout the intervention, activity rules were prominently displayed on a bulletin board in the Lego therapy room, with strict adherence mandated for all participants. Examples of these rules included: ‘If you break it, you must fix it,’ ‘If you cannot fix it, seek assistance,’ and ‘If others are using it and you need it, ask first,’ among others. These rules aimed to instill positive habits and nurture a sense of community among participants. The process of group intervention is presented in Table 1 . The participants in this group engaged in a 12-week training program, meeting three times per week, which included sessions on IVG and LGT, as described above. 
Each session consisted of 45 min of interactive video game training, followed by a 30-min break, and concluded with 45 min of LEGO game training. The control group comprised a subset of 21 rural LBC randomly selected from the study population. They did not receive any specific intervention during the 12-week study period but were monitored similarly to the intervention groups. The control group served as a reference to evaluate the effectiveness of the intervention strategies employed in this study. First, basic information about the participants and their parents was collected, including general demographic information such as gender, age, parental marital status, parental outings, and parental frequency of returning home. Parental outings refer to the periods when parents are away from home engaging in activities outside the family environment, which may include work commitments, or social engagements. The Social Anxiety Scale for Children (SASC) for LBC was developed by La Greca . Chinese scholar Wang made Sinicization revision . The scale is applicable to children and adolescents aged 7 to 16 and contains 10 questions (for example: I am afraid of doing something I have not done before in front of other children), including two dimensions of Fear of Negative Evaluation (FNE) and Social Avoidance and Distress (SAD). The score was scored on a three-level scale: never =0, sometimes =1, always =2, and the higher the total score, the more severe the social anxiety. According to the Chinese urban norm, a score of ≥8 is considered as social anxiety . In this study, the Cronbach’s α of this scale was 0.892, and Cronbach’s α of all dimensions was greater than 0.7. Statistical analysis was conducted using GraphPad Prism (9th edition), and results were presented as mean ± standard deviation (M ± SD). Initially, differences in baseline characteristics among the four groups were assessed using either the chi-square test or one-way ANOVA. Subsequently, a repeated-measures ANOVA, employing a 4 (group: IVG group, LGT group, combined intervention group, control group) × 3 (time: pre-intervention, post-intervention, 6-week follow-up after intervention) design, was utilized to evaluate the effects of different intervention programs on the social anxiety levels of LBC. The significance level for all statistical tests was set at 0.05, with p < 0.05 (*) denoting statistical significance. Table 2 outlines the basic characteristics, overall self-reported social anxiety scores, and their dimensions for the four groups of participants. The findings indicated no significant differences in baseline social anxiety scores and their respective dimensions among the four groups ( p > 0.05). Table 3 presents detailed data on the total scores and two dimensions of social anxiety among the four groups at baseline and post-intervention, aiming to investigate the specific effects of different intervention strategies on social anxiety in LBC. Statistical analysis revealed a significant group-time interaction effect for the total score of SASC [ F (6,80) = 8.55, p < 0.01, η 2 = 0.24], indicating an interactive influence of intervention and time factors on social anxiety scores. Similarly, significant interaction effects were observed for the FNE factor [ F (6,80) = 4.32, p < 0.01, η 2 = 0.14] and SAD factor [ F (6,80) = 4.16, p < 0.01, η 2 = 0.14]. Given the presence of these interaction effects, further simple effect analyses were conducted. 
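Although the original analyses were run in GraphPad Prism, the 4 (group) × 3 (time) mixed design described above can be illustrated with a short Python sketch using the pingouin package (a recent version is assumed); the long-format table, its column names, and the input file are hypothetical and introduced only for illustration, not taken from the study's materials.

```python
import pandas as pd
import pingouin as pg

# Long-format data assumed: one row per participant per time point.
# Column names are hypothetical: 'subject', 'group' (IVG/LGT/Combined/Control),
# 'time' (T0/T1/T2), and 'sasc_total' (total SASC score).
df = pd.read_csv("sasc_long.csv")  # hypothetical file

# 4 (between: group) x 3 (within: time) mixed ANOVA on the SASC total score
aov = pg.mixed_anova(data=df, dv="sasc_total", within="time",
                     subject="subject", between="group")
print(aov)

# Simple-effects style follow-up: pairwise group comparisons within each time point
post = pg.pairwise_tests(data=df, dv="sasc_total", within="time",
                         subject="subject", between="group", padjust="bonf")
print(post)
```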
As depicted in Figure 1 , social anxiety scores and scores on all dimensions exhibited a gradual decrease over intervention time across different intervention groups, with distinct variations observed among the groups. In our examination of time-related factors, we observed significant changes in the total score of SASC [ F (2,80) = 47.83, p < 0.01, η 2 = 0.37], FNE factor [ F (2,80) = 40.14, p < 0.01, η 2 = 0.33], and SAD factor [ F (2,80) = 25.82, p < 0.01, η 2 = 0.24] over time. Notably, the total score on the SASC, as well as scores for the FNE and SAD factors in Group CG, exhibited no significant change over time ( p > 0.05). This finding suggests a relatively stable social anxiety status within this group throughout the intervention period. Conversely, participants in the IVG group, LGT group, and combined intervention group demonstrated significantly lower total social anxiety scores, FNE scores, and SAD scores at T 1 and T 2 compared to T 0 , with these differences achieving statistical significance ( p < 0.05). These results suggest a positive impact of these interventions on reducing social anxiety levels. In the analysis of group factors, significant effects were observed for the SASC total score [ F (1,80) = 15.27, p < 0.01, η 2 = 0.36], FNE factor [ F (1,80) = 9.37, p < 0.01, η 2 = 0.26], and SAD factor [ F (1,80) = 5.51, p < 0.01, η 2 = 0.17]. Initially, at baseline T 0 , no significant differences were found between intervention and control groups regarding SASC total score, FNE factor, and SAC factor ( p > 0.05). However, during the intervention period, the control group exhibited significantly higher SASC scores compared to the other three groups ( p < 0.05). Furthermore, the combined intervention group demonstrated significantly lower scores than the IVG group and LGT group at both T 1 and T 2 ( p < 0.05). Regarding FNE scores, the control group also scored significantly higher than the other three groups at both T 1 and T 2 ( p < 0.05). However, the combined intervention group scored lower than the Lego group at T 1 ( p < 0.05), with no significant difference between the two groups at T 2 . Concerning SAD scores, the control group showed significantly higher scores than the other three groups at T 1 ( p < 0.05). Notably, the only group with no significant difference from the control group at T 2 was the LGT group. Additionally, the SAD score of the combined intervention group at T 1 was significantly lower than that of the LEGO game group ( p < 0.05). While the LGT group and the IVG group did not exhibit significant differences in total SASC score and scores in each dimension ( p > 0.05), the IVG group scored lower than the LGT group at both T 1 and T 2 . This study represents the inaugural evaluation of the varying efficacy in ameliorating social anxiety among LBC through 12 weeks of IVG, LGT, and a combined intervention. The findings illustrate distinct alterations in social anxiety scores among the participants following each intervention. Notably, the combination intervention appeared to have the strongest effect on reducing social anxiety; however, it is difficult to attribute this improvement solely to increased peer engagement, as the combined intervention differed in duration and format from the other types. The effectiveness of interactive video games in reducing social anxiety among LBC echoes findings from research involving typically developing children. 
In an era dominated by digital engagement, motion-based, cooperative interactive video games, particularly those played in person with friends, serve as a valuable platform for children's recreation and social interaction. This study highlights the potential of these specific types of interactive games to reduce social anxiety among LBC, offering insights into their therapeutic value for this demographic. The games in our study were carefully selected for their emphasis on collaboration and physical activity, aiming to foster engagement and social skills through shared, real-time experiences. In this intervention, a total of 11 games were used (Adventures mode: "Fruit Ninja," "Reflex Ridge," "20,000 Leaks," "River Rush," "Rally Ball"; Sports mode: "Tennis," "Basketball," "Skiing," "Football," "Badminton," and "Boxing"). In contrast to conventional physical activities, interactive video games inherently engage children more actively, fostering their participation and enjoyment. These games provide a plethora of sensory stimuli, offering visually and auditorily enriching experiences, while also enabling children to assume diverse roles and navigate varied scenarios within the game environment, thereby fostering curiosity and an inclination toward exploration. At a mechanistic level, the anxiety-reducing effects of interactive video games bear similarities to physical activity. During gameplay, the release of neurotransmitters, including endorphins, in the child's brain plays a pivotal role in mood regulation. These neurotransmitters have been associated with enhanced emotional states, reduced anxiety symptoms, and the induction of feelings of happiness and relaxation. Furthermore, engaging in challenging and successful experiences during gameplay promotes a balance of neurotransmitters such as dopamine, norepinephrine, and serotonin, thereby enhancing children's positive emotional experiences. It is noteworthy that the majority of game schemes employed in this study involve cooperative gameplay with two or more participants. This format of play, which emphasizes in-person interaction, provides children with a real social environment where they can engage with others in a secure and structured setting. When examining the impact of video game play on social interactions, it is important to distinguish between playing alone or online with peers and participating in face-to-face social play. Both types of play can promote social connections, but they may have different effects on social skills and emotional well-being. Research shows that in-person play often leads to richer social interactions because it allows for non-verbal communication cues, such as body language and eye contact, which are essential for developing interpersonal skills. In contrast, while virtual play offers opportunities to connect, it may lack these important cues, resulting in a different quality of interaction. For instance, studies have found that individuals who primarily communicate online report higher levels of social anxiety than those who engage in face-to-face interactions, indicating that a lack of in-person contact may hinder the development of vital social skills. Moreover, playing video games in isolation can increase feelings of loneliness and disconnection.
Conversely, multiplayer gaming can create a sense of community and shared purpose, even in a virtual setting. This suggests that while both virtual and in-person play have their advantages, the context of social interactions greatly affects their outcomes. Through gameplay, children are encouraged to collaborate towards shared objectives or interact based on common interests. This collaborative and interactive process not only enhances mutual communication among children but also cultivates their teamwork and problem-solving skills . Upon successfully achieving play objectives, children receive affirmative feedback from their peers, including praise and encouragement, which enhances their psychological assurance and mitigates feelings of self-doubt and anxiety commonly experienced in real social contexts. Future research should investigate whether interactive video game play that occurs virtually with others yields similar effects in reducing social anxiety. At the same time, interactive video games include more choices, which means endless possibilities. LGT emerges as a significant intervention in reducing social anxiety levels among LBC. This discovery not only enhances our theoretical and practical understanding of psychological interventions for this demographic but also aligns positively with pertinent research involving autistic children. While previous studies primarily focused on the efficacy of LGT in enhancing children’s social skills , this study delves deeper into its favorable impact on alleviating social anxiety among LBC. LGT presents a novel and effective intervention for LBC. Within the game, each child assumes a specific role, possesses clear responsibilities, and follows instructions to collectively accomplish tasks. This mode of play effectively simulates a small social circle, facilitating the gradual adaptation and acclimatization of children to interaction with others within the game. Participation in LGT empowers LBC to not only enjoy cooperative endeavors but also gradually cultivate self-assurance and efficacy throughout task completion . Through these activities, they acquire essential communication, sharing, and problem-solving skills, thereby strengthening their interpersonal interactions in real-world scenarios. Furthermore, LGT underscores the stimulation of children’s creativity and imagination . Within the game, children construct an array of models tailored to their preferences and ideas, fostering innovative thinking and problem-solving abilities. Concurrently, collaborative efforts with peers foster a sense of teamwork and the joy of achievement, further enhancing their inclination towards social engagement and self-assurance. The combined intervention demonstrated promising potential in reducing social anxiety among LBC by integrating the benefits of both IVG and LGT over a longer intervention period. These findings underscore the value of diverse, holistic approaches to mental health interventions that merge mind–body activities to support social engagement. While previous studies have often examined single interventions, such as either digital or physical play, these approaches may not fully address the multiple aspects of social anxiety. In this study, both IVG and LGT were utilized as structured social experiences that encouraged in-person interactions in varied settings. 
IVG provided opportunities for social skill practice through collaborative gameplay in a simulated environment, fostering initial comfort in peer interactions, while LGT emphasized hands-on interaction and the development of practical social skills. Rather than aiming to determine which method is superior, the combined approach in this study illustrates how different forms of social interaction can complement one another. The extended duration of the combined intervention may have also contributed to its effectiveness, as the sequential nature of IVG followed by LGT allowed children additional time to practice and reinforce social competencies. This approach may help bridge the gap between virtual and physical social settings, offering LBC varied opportunities to practice and strengthen social skills in ways that single interventions may not fully capture. Firstly, this study used self-report scales to measure social anxiety in LBC. While self-reports can gather a lot of information, they may be influenced by personal biases, which could affect accuracy. Future research could include physiological measures to provide more objective data on children’s reactions to social situations. Secondly, the study used a single-blind design, where the researcher was unaware of certain details but the participants were not. This could lead to participants being influenced by expectations when filling out the self-report scales. Future studies should consider a double-blind design to reduce these biases. Thirdly, another limitation is the different task complexities in the LGT intervention. While all groups had the same overall building goals, the specific tasks were tailored to each group’s developmental level. This could affect how comparable the results are between groups. Future research should use standardized tasks or account for developmental differences more carefully. Fourthly, it is essential to acknowledge the inherent limitations in comparing the Lego Therapy and IVG interventions. These interventions differ significantly in their nature, promoting either physical or digital play, and in the degree to which they focus on building versus physical movement. Additionally, the assignment of social roles within Lego Therapy creates unique social dynamics that are not present in the IVG intervention. These differences make it challenging to determine which intervention is more effective at decreasing social anxiety and under what circumstances. Furthermore, the Combination Intervention presents additional challenges in comparison. With over double the intervention time of the other two approaches, it is unclear whether its efficacy stems from the synergistic effect of combining both types of play or simply from the increased duration of intervention. These factors must be considered when interpreting the results, as they could significantly impact the outcomes. Moreover, there was also a difference in participation time between the IVG and LGT. In the IVG, children played in pairs, so not everyone was actively engaged for the full 45 min. In contrast, all participants in the LGT were involved throughout the session. This difference in engagement time could have impacted the results, so future research should create more balanced participation structures or examine how different levels of engagement affect outcomes. A key limitation of this study was the lack of counterbalancing between the order of the IVG and LGT interventions. 
Participants either experienced IVG or LGT first, which might have affected their experiences in the second intervention. Without counterbalancing, observed effects could be influenced by the order of interventions. Future studies should use a counterbalanced design to control for this potential bias. Finally, the study was based on a small group of LBC from one city in China. This raises concerns about how the findings might vary in different regions or cultures. Future research should aim to include a larger and more diverse sample of LBC to enhance the generalizability of the results. This study explored the impact of IVG and LGT on reducing social anxiety in LBC, with a focus on how different forms of social interaction might contribute to alleviating anxiety symptoms. Findings suggest that both IVG and LGT offer significant benefits in reducing social anxiety, with the combination intervention showing additional promise by integrating the strengths of both approaches. The results underscore the value of diverse, holistic approaches to mental health interventions, highlighting the rehabilitative potential of combining virtual and hands-on social experiences as a therapeutic strategy for this population’s mental health needs. Importantly, each of the individual interventions demonstrated efficacy, suggesting that either IVG or LGT can be a viable option for reducing social anxiety in LBC. Encouraging the adoption of both standalone and combined intervention strategies could enhance clinical practice for addressing social anxiety in LBC. Furthermore, adapting these interventions to consider the unique circumstances and cultural backgrounds of specific regions could improve their relevance and effectiveness, ensuring that the approach aligns with the distinct needs of LBC across diverse geographical settings.
PMC11696537

One of the major factors for reduced cancer mortality is early detection through imaging-based screening ( 1 ). Recently, deep learning (DL) based methodologies have been employed to screen medical images ( 2 – 7 ). DL-based screening tools have not only been shown to feature high classification accuracy ( 8 ) and consistency ( 9 ) but also potentially allow for scalability: automated analysis of medical images significantly speeds up the diagnostic process and thus renders it possible to scan larger populations or conduct real-time assessments, thereby supporting timely procedures. However, DL-based image analysis requires extensive training datasets and has been shown to suffer from generalization issues under distribution shifts ( 10 – 12 ). These limitations are particularly problematic in medical contexts, where training data often is scarce and the need for robust generalization is critical due to the substantial variability in image quality encountered in real-world settings. Acquiring training data for supervised learning in medical image analysis poses significant challenges, including stringent privacy regulations, the need for expert annotation, and the requirement that the dataset represent the breadth of pathological conditions and demographic variations. Furthermore, once models are trained, learned representations must generalize to account for the considerable variability in medical image quality, which can be affected by diverse factors ranging from the technical specifications and calibration of imaging devices across different healthcare facilities to patient-specific factors. Anatomical variations across individuals, along with involuntary movement during image capture, introduce additional sources of noise. Consequently, ensuring robustness against distribution shifts is essential for the successful integration of DL models into the clinical environment ( 13 ). A distribution shift occurs when a classifier encounters an out-of-distribution test dataset whose statistical properties differ from those of the training data, posing challenges to the model's ability to generalize across new, unseen conditions. As a result, classification performance can deteriorate sharply, as the learned representations may overfit to the specific features present in the training data ( 10 , 14 ). In clinical settings, this can lead to a higher rate of misdiagnoses, missed findings, or false positives. In contrast, the human visual system exhibits remarkable robustness to variations in image quality, noise, and other distortions and can therefore maintain high recognition accuracy even under challenging conditions ( 10 , 15 – 17 ). In deep learning, a common practice to address such generalization challenges is the use of data augmentation strategies, whereby additional synthetic data are generated by applying transformations to existing images—such as rotation, scaling, and flipping—or by simulating common artifacts and variations [for reviews on data augmentation techniques used in medical imaging see ( 18 , 19 )]. This approach helps create a more diverse dataset that mimics a wider array of real-world conditions without the need for extensive new data collection [e.g., see ( 20 )]. By incorporating augmented data, DL models can be trained to be more resilient to the natural inconsistencies and discrepancies found in medical imaging.
Although several reviews on the effects of data augmentation in medical imagery exist ( 18 , 19 , 21 , 22 ), to the best of our knowledge, no systematic investigation has addressed how these strategies impact robustness to distribution shifts. Understanding the robustness impacts of specific data augmentation strategies is key to ensuring that deep learning models can reliably adapt to the diverse and unpredictable conditions encountered in clinical practice. Here, we evaluate the robustness of three common data augmentation strategies to distribution shifts introduced by different types of parametric noise. The chosen data augmentations range from basic transformations to more complex strategies. Simple augmentations include rotation, flipping, and brightness & contrast adjustments, which provide varied versions of the original images. More advanced methods include Pixel-space Mixup ( 23 ) and Manifold Mixup ( 24 ). Pixel-space Mixup creates new training samples by blending pairs of images and their labels directly in pixel space, helping the model learn smoother decision boundaries. Manifold Mixup extends this concept further by blending representations in deeper network layers rather than raw pixel data, thereby introducing intermediate states at a feature level. Additionally, to determine how well the augmented models align with human diagnostic abilities under distribution shifts, we tested four trained radiologists on the out-of-distribution data (840 collected psychophysical trials in total) and compared their performance against the models. The involvement of medical professionals serves as a valuable benchmark for the models' diagnostic accuracy, allowing us to directly compare the effectiveness of DL-augmented interpretations with that of human experts when confronted with out-of-distribution data. As a second contribution, we present DreamOn, a novel generative adversarial network (GAN)-based data augmentation approach designed to enhance model robustness. GANs have previously gained recognition as a data augmentation strategy in various domains, especially in medical imaging [e.g., ( 25 , 26 )]. This has been motivated by a lack of available large, labeled training datasets for certain medical imaging modalities or specific medical conditions. However, in this study, we extend the traditional use of GANs by implementing a novel interpolation technique between classes, rather than simply generating synthetic samples. This was inspired by the process of dreaming in humans, where episodic memories are recombined to generate novel visual experiences during REM (Rapid Eye Movement) sleep [e.g., ( 27 )]. We mimic this process by first teaching a GAN to create images of a single class. Once trained, we introduce a pair of classes to the Generator, with the classes being combined in varying proportions rather than being weighted equally. This prompts the Generator to synthesize images that blend characteristics from both classes. This interpolation process is crucial because it generates additional images that sit near the decision boundaries between classes, making these images more challenging to classify. Previous studies [e.g., ( 28 )] have demonstrated that training a classifier on challenging images near decision boundaries can help the model establish more robust boundaries. This approach reduces the likelihood of overfitting to specific features and minimizes the influence of spurious correlations within the data.
Consequently, this should help the model generalize better, particularly in high-noise environments, where maintaining performance is typically more difficult. In line with this prediction, DreamOn-augmented datasets resulted in substantial across-the-board improvements in image classification accuracy under high-noise conditions compared with other data augmentation strategies. While expert radiologists outperformed all models in high-noise settings, DreamOn augmentation helped to narrow the gap between expert radiologists and deep learning models when handling out-of-distribution data. The experimental design was structured to compare different off-the-shelf data augmentations and to test the hypothesis that DreamOn enhances model robustness compared to other data augmentation strategies. This was achieved by evaluating classification performance on the publicly available Breast Ultrasound Image Dataset [BUSI, see ( 29 )], consisting of 780 labelled breast ultrasound images. As a comparison to DreamOn, we employed Manifold Mixup, Pixel-space Mixup, and more straightforward techniques such as rotation, flipping, and brightness & contrast changes. To assess the impact of these augmentation techniques on robustness, we introduced three types of parameterized noise—Gaussian, speckle, and salt & pepper—each applied at seven intensity levels to obtain different test sets featuring a distribution shift. The different models were compared based on their ability to maintain high balanced accuracy and low expected calibration error (ECE) across noise levels. Additionally, the inclusion of the DreamOff control dataset allowed us to determine whether the observed improvements were due to the interpolation strategy used in DreamOn rather than just adding GAN-generated images to the training set. Lastly, four trained radiologists served as a benchmark by evaluating a subset of the test data, allowing us to put the model results into perspective. This comparison provided a clearer understanding of the deep learning models' robustness relative to human expertise, especially under high-noise conditions. We implemented and evaluated three common data augmentation strategies known to enhance the robustness of DL classifiers. Firstly, in what we call standard data augmentation (SDA), we applied random rotation (–15° to +15°), random horizontal flips, as well as random adjustments in brightness and contrast, to training images; these transformations have been reported to be among the most effective in medical imaging ( 21 ). Random rotations and horizontal flips were included to simulate variations in patient positioning and imaging angles. Brightness and contrast are parameters that depend on the patient and examined tissue, but they can also be adjusted by the physician to some extent on the ultrasound device and may vary between different devices. Note that vertical flips were not used here, as this would not have been consistent with the shape of ultrasound images (i.e., an increase in the field of view with increasing depth, displayed from top to bottom). Secondly, we used pixel-space Mixup, where training examples are created by linearly interpolating between random pairs of samples across classes at the pixel level, together with their corresponding labels ( 23 ). Lastly, Manifold Mixup extends pixel-space Mixup to the feature level, interpolating between representations at various latent layers of the network ( 24 ). Note that this was done during training and therefore with changing weights.
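To make the SDA pipeline above concrete, the following is a minimal torchvision sketch. The rotation range and horizontal-flip-only choice follow the description; the brightness/contrast jitter magnitudes are not reported in the text and are illustrative assumptions.

```python
# Minimal sketch of the standard data augmentation (SDA) pipeline described above,
# implemented with torchvision. The +/-15 degree rotation and horizontal flip follow
# the text; the brightness/contrast jitter strengths are assumed values.
import torchvision.transforms as T

sda_transform = T.Compose([
    T.RandomRotation(degrees=15),                  # random rotation in [-15, +15] degrees
    T.RandomHorizontalFlip(p=0.5),                 # horizontal flip only (no vertical flip)
    T.ColorJitter(brightness=0.2, contrast=0.2),   # assumed jitter strength
    T.ToTensor(),
])
```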
The mixing proportions were determined by mix(x, y) = λ · x + (1 − λ) · y, where λ is a random value drawn from a Beta distribution, λ ∼ Beta(α, α), and x and y are two inputs. In addition to these off-the-shelf data augmentations, we evaluate a novel approach that combines the use of GANs to generate novel synthetic data with a biologically inspired idea: during REM sleep, it is thought that previous episodic memories are recombined to internally generate novel visual experiences [e.g., see ( 27 , 30 )]. Here we mimic this process by feeding the generator of a fully trained conditional GAN with interpolated class labels and segmentation masks. To find out whether standard data augmentation can be combined with DreamOn to further improve robustness, we also applied standard data augmentation (as described above) to the DreamOn images ( DreamOn + SDA ). To implement DreamOn, we closely followed the approach proposed by Iqbal and Ali ( 31 ), where a GAN is trained on medical images. However, we extended the described method with a conditional GAN model similar to Odena et al. ( 32 ), allowing input of the desired class label so that newly generated synthetic images preserve a given target class. This is because the dignity (i.e., the benign or malignant character) of an ultrasound finding is not conveyed solely by the mass shape but also by other factors. Providing the generator with class information therefore allowed such features to be learned. Additionally, the segmentation mask that was fed to the generator was synthesized by a separate GAN trained only on the BUSI segmentation masks. This enabled the synthesis of interpolated segmentation masks. To generate interpolated images, two non-zero weights were assigned to two classes such that they summed to 1. See Figure 1 for three examples. Since the classes of the BUSI dataset are not balanced, assigning uniformly random weights to classes when synthesizing DreamOn images could potentially lead to an unfair advantage compared to the other data augmentation methods. To account for this potential confounder, we constructed the image generation pipeline such that the average weight input per class over the whole DreamOn dataset matched the true proportions of the BUSI dataset (normal: 17%, benign: 56%, malignant: 27%). The ground-truth label of each DreamOn image was identical to the two non-zero weights used for its generation; the weight of the third, unused class was set to zero. The whole DreamOn pipeline is depicted in Figure 2 . As has been shown before, introducing such out-of-distribution (o.o.d.) data to training imagery can itself lead to improved robustness ( 33 ). To test whether a potential increase in robustness can be linked to the interpolations rather than simply to adding o.o.d. data, we employed an additional dataset of images created by the DreamOn architecture, except that only one class per image was used as input. We call this control dataset DreamOff . For the detailed model architecture and training pipeline, see Figures S1 and S2 in the Supplementary Material . The code is available at https://github.com/lucle4/DreamOn . The BUSI dataset consists of 780 labelled breast ultrasound images (normal: 133, benign: 437, malignant: 210), each with its corresponding segmentation mask ( 34 ). We randomly split the dataset into training (600), test (90), and validation (90) subsets.
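The mixing rule given at the start of this section can be sketched directly for the pixel-space Mixup baseline. The code below is a minimal PyTorch illustration, assuming image batches and one-hot labels; the value of α is not reported here and is an assumed hyperparameter.

```python
# Minimal pixel-space Mixup sketch (PyTorch), following mix(x, y) = lam*x + (1 - lam)*y
# with lam ~ Beta(alpha, alpha). One-hot labels are blended with the same lam, as in
# standard Mixup. alpha = 0.4 is an assumed value, not taken from the paper.
import torch
import numpy as np

def mixup_batch(images, one_hot_labels, alpha=0.4):
    lam = float(np.random.beta(alpha, alpha))        # mixing coefficient
    perm = torch.randperm(images.size(0))            # random pairing within the batch
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * one_hot_labels + (1.0 - lam) * one_hot_labels[perm]
    return mixed_images, mixed_labels
```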
To enable maximal comparability between the different data augmentation strategies, all training datasets consisted of two parts: first, the 600 original (non-augmented) images; and second, 600 augmented images which we manipulated/generated according to the respective approach (SDA, pixel-space Mixup, Manifold Mixup, DreamOn, DreamOn + SDA). Overall, we note that data augmentation approaches operating on the feature level, such as Manifold Mixup and DreamOn, can interpolate features at higher semantic levels than pixel-wise data augmentation methods. For comparison, we also included a dataset that contains only original BUSI images (no data augmentation used; referred to hereafter as Vanilla ). The composition of all training datasets is given in Table 1 . In all datasets (including testing and validation), the class proportions were held constant (normal: 17%, benign: 56%, malignant: 27%). To test model robustness, we created three different test datasets by applying different noise types—Gaussian, speckle, and salt & pepper—to the test dataset, each with six intensity levels. These noise types were specifically chosen because they are representative of common distortions encountered in ultrasound imaging. Gaussian noise simulates random fluctuations that can occur due to electronic interference, speckle noise reflects granular noise patterns typical in coherent imaging systems like ultrasound, and salt & pepper noise models impulse noise that can result from sudden disturbances or transmission errors. By using these noise types, our robustness evaluation is designed to closely mimic the challenges faced in real-world ultrasound imaging, ensuring that our model's performance is assessed under conditions that are likely to be encountered in practical scenarios [see ( 35 )]. See Figure 3 for some examples. With each ascending level, the noise intensity doubles, with the highest level calibrated such that most models perform at chance level (i.e., ∼33% accuracy). Each training dataset (see Table 1 ) was used to train a ResNet-18 model from scratch [for architectural details, see ( 36 )] in PyTorch ( 37 ). As has been shown before, ResNet-18 can be successfully used for classifying medical imagery ( 38 ). We used the Adam optimizer and cross-entropy loss with common hyperparameters (epochs = 100; batch size = 20; learning rate = 0.001; β1 = 0.9, β2 = 0.999) without any fine-tuning. Model parameters were initialized randomly. Each ResNet-18 was trained for five runs to account for random variations. The checkpoint that reached the highest balanced accuracy on the validation dataset was used for testing. Balanced accuracy, which is calculated as the average accuracy per class to account for class imbalance, serves as our primary metric for model performance. We report and compare the median balanced accuracy across the five training runs for each training strategy to draw our main conclusions. Training of the classifiers as well as the GAN was performed on UBELIX ( http://www.id.unibe.ch/hpc ), the HPC cluster at the University of Bern, using an NVIDIA A100 GPU. The code for the different training strategies is available at https://github.com/lucle4/DreamOn . To benchmark the performance of the different models against human experts, we presented noisy images to n = 4 trained radiologists from the University Hospital of Bern. Of the participating radiologists, 2 were female and 2 were male, with a median experience of 18 years (SD = 15.1).
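The three noise types can be sketched as simple image-corruption functions. Only the doubling-per-level scheme follows the description above; the base magnitudes (standard deviation, corrupted-pixel fraction) are not given in the text and are illustrative assumptions.

```python
# Illustrative sketch of the three noise types used to build the distribution-shift test
# sets. Images are assumed to be float arrays in [0, 1]. The intensity doubling per level
# follows the text; base_sigma and base_fraction are assumed values.
import numpy as np

def add_noise(img, kind, level, base_sigma=0.01, base_fraction=0.005, rng=np.random):
    scale = 2 ** (level - 1)                      # intensity doubles with each level
    if kind == "gaussian":                        # additive, electronic-interference-like
        noisy = img + rng.normal(0.0, base_sigma * scale, img.shape)
    elif kind == "speckle":                       # multiplicative, granular noise
        noisy = img * (1.0 + rng.normal(0.0, base_sigma * scale, img.shape))
    elif kind == "salt_pepper":                   # impulse noise
        noisy = img.copy()
        mask = rng.random(img.shape) < base_fraction * scale
        noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    else:
        raise ValueError(f"unknown noise type: {kind}")
    return np.clip(noisy, 0.0, 1.0)
```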
In a forced-choice image classification task, they had to classify 210 Gaussian noise images (30 images per noise level). Gaussian noise was used for testing due to its standard use in assessing robustness, thereby providing a reliable benchmark for comparing the performance of deep learning models and human experts in a controlled environment [e.g., ( 33 )]. We assessed model robustness using two main performance metrics: balanced accuracy and expected calibration error (ECE). Balanced accuracy is defined as the mean of the per-class accuracies, accounting for class imbalance in the dataset. It is a suitable metric for our study because it evaluates model performance across all classes, ensuring that improvements in robustness are not biased by the predominant class. The ECE measures the difference between the predicted confidence levels and the actual outcomes, providing insight into how well-calibrated the model's predictions are. Well-calibrated predictions indicate that the model's confidence aligns with its accuracy, an important factor in medical imaging where decision-making should reflect a reliable estimation of uncertainty. Both metrics were used to assess the stability of model performance under various noise levels, which serves as a proxy for robustness against real-world image distortions. To establish a threshold above which model performance could be considered significantly better than chance, we used the Clopper-Pearson method to calculate the upper bound of the 95% confidence interval around chance-level accuracy. For each noise condition, we compared the model's balanced accuracy against this threshold, considering performance significant if it exceeded this value. Across all three noise types and for all models, the balanced accuracy decreases as a function of noise intensity. However, this was not the case for the radiologists, for whom performance increased from noise level 1 to 3. Looking at the results in more detail, several patterns emerged. First, on original images (i.e., no noise), all DL models outperformed radiologists in terms of their median accuracy, indicating that in an environment with no added noise, model predictions are more accurate than human judgments. In this setting (original images), Manifold Mixup and standard data augmentation outperform the other data augmentation strategies as well as the vanilla model. Second, in the low noise regime (levels 1–3), Mixup approaches as well as standard data augmentation continue to dominate—outperforming DreamOn and the vanilla model as well as radiologists. Third, in the high noise regime (levels 4–6), however, the situation reverses: here, radiologists outperform all DL models, indicating a robustness gap between human experts and models. Compared to all other evaluated data augmentation approaches, DreamOn features the highest median balanced accuracy in the high noise regime (best performing in 6 out of 9 high noise levels, see Table 2 ), thereby reducing the robustness gap between human observers and models. Notably, there is a clear superiority of DreamOn compared to DreamOff. It can, therefore, be safely argued that it is the interpolation that led to better performance rather than the mere introduction of GAN artifacts and thus of additional o.o.d. data. Interestingly, adding SDA to DreamOn images does not lead to further improvement in robustness. On the contrary, at high noise levels this model performs among the worst.
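Returning to the two evaluation metrics defined earlier in this section, the sketch below shows how balanced accuracy and ECE can be computed. The ECE binning scheme (10 equal-width confidence bins) is an assumption, as the text does not specify it.

```python
# Sketch of the two evaluation metrics described above. Balanced accuracy is the mean of
# the per-class accuracies; ECE is the sample-weighted gap between per-bin confidence and
# per-bin accuracy. The number of bins (10) is an assumed choice, not taken from the paper.
import numpy as np

def balanced_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    per_class_acc = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(per_class_acc))

def expected_calibration_error(confidences, correct, n_bins=10):
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap            # weight by the fraction of samples in the bin
    return float(ece)
```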
While DreamOn may not achieve the highest accuracy in low-noise and no-noise conditions, it exhibits the greatest robustness against noise, with the lowest drop in accuracy as noise levels increase, and consistently outperforms other methods in the high noise regime, where maintaining stable performance is crucial for real-world medical imaging applications. Additionally, we were interested in determining the extent to which models can sustain a performance significantly above chance under increasing levels of noise. Treating single image classification trials as independent Bernoulli trials, we calculated binomial 95% confidence intervals using the Clopper-Pearson method ( 39 ). This statistical approach enables us to establish the minimum performance threshold above which models can be considered to significantly exceed chance performance. For a chance level of p = 1/3 and n = 90 classification trials (corresponding to the size of the test dataset), the upper bound of the one-tailed 95% confidence interval is ∼0.411. Comparing median model performances with this threshold, we find that DreamOn performs significantly above chance for all but one noise level (salt & pepper, level 6) (see Table 1 ). Remarkably, under extreme noise conditions (noise level 6), no other model surpassed the chance-level threshold, with the sole exception of the SDA model. However, it is important to note that the SDA model's performance did not consistently exceed chance across most other high noise conditions. To further quantify robustness, we calculated the difference between the highest and lowest median balanced accuracy reached by each model. This metric provides a direct quantification of how consistent a model's performance is across varying datasets. A smaller difference indicates that the model maintains its accuracy level regardless of changes in the data, signifying higher stability. When comparing this relative drop in median accuracy (Δ) across data augmentation strategies, we find that DreamOn features the lowest difference irrespective of the noise type, and thus shows the most stable performance. This is in line with the radiologists, who show an even lower Δ in the Gaussian noise condition. When examining the ECE, we find a pattern similar to that of the balanced accuracy. Across all noise types and models, the ECE increases with increasing noise intensity, indicating reduced model calibration as a function of noise intensity. In practical terms, this means that under higher noise levels, the confidence scores provided by the models do not reliably reflect the true probability of a correct prediction, mostly leading to overconfident classifications. While model calibration generally declines in high-noise settings, DreamOn produces comparably well-calibrated probability estimates, with confidence levels that closely align with actual prediction accuracy even under noise. Only Manifold Mixup performs similarly in these challenging conditions. Taken together, maintaining above-chance performance in high-noise settings and preserving calibration indicate that DreamOn enhances the model's ability to make accurate predictions with reliable confidence estimates even under distribution shifts. To investigate the inter-rater reliability of the radiologists, we calculated Fleiss' Kappa ( 40 ). For noise levels 0–4, κ was between 0.544 (noise level 4) and 0.681 (noise level 2) per level, corresponding to moderate to substantial agreement.
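The above-chance threshold reported earlier in this section can be reproduced approximately by treating the 90 test trials as Bernoulli trials at chance level p = 1/3 and taking the 95th percentile of the resulting binomial. The paper describes a Clopper-Pearson bound, so the binomial-quantile check below is a closely related, but not necessarily identical, formulation.

```python
# Reproducing an above-chance threshold for n = 90 trials at chance level p = 1/3.
# The 95th percentile of the chance binomial is 37 correct trials, i.e. 37/90 ~= 0.411,
# consistent with the ~0.411 threshold reported above. (The paper states a Clopper-Pearson
# bound; this binomial-quantile version is one way to obtain an essentially equivalent cutoff.)
from scipy.stats import binom

n, p = 90, 1 / 3
threshold_count = binom.ppf(0.95, n, p)      # -> 37.0
threshold_accuracy = threshold_count / n     # -> ~0.411
print(threshold_count, round(threshold_accuracy, 3))
```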
For noise levels 5 and 6, κ was 0.464 and 0.380, corresponding to fair to moderate agreement ( 41 ). Thus, this consistency analysis indicates that the agreement among radiologists generally decreases as a function of noise intensity. This pattern suggests that even experienced professionals can struggle to maintain diagnostic accuracy under heavy noise. The observed variability among human raters can result from factors such as the complexity of certain images, the potential for increased subjective interpretation, and the noise's impact on key features critical for diagnosis. Nevertheless, in high-noise scenarios, even the worst-performing radiologist performs better than all DL models evaluated in this study. This indicates that while DreamOn effectively narrows the robustness gap between expert radiologists and deep learning models, the remaining gap is not a mere product of differences among radiologists but highlights the fundamental challenges in replicating human diagnostic resilience in adverse conditions. We conducted a comprehensive investigation of the effects of different popular data augmentation strategies on the robustness of a ResNet-18 model trained to classify breast ultrasound images. We also compared the models' performance with human experts in the field. Our results indicate that DreamOn—our proposed GAN-based data augmentation method that generates REM-dream-inspired synthetic data—can notably improve the model's robustness, thus narrowing the gap between human observers and DL models in the high noise regime. While all models experienced a decline in accuracy with increasing noise, DreamOn consistently outperformed other methods in the most challenging noise settings, demonstrating a notable improvement in robustness compared to standard approaches. It was the only method that maintained performance significantly above chance across nearly all noise levels. This robustness, coupled with its stability (evidenced by the smallest decrease in performance from no-noise to high-noise conditions, Δ), positions DreamOn as a well-suited strategy for enhancing deep learning models in noise-intensive medical image analysis. However, despite DreamOn's robust performance in high-noise environments, we observed a drop in accuracy in low-noise regimes. This reduction in performance could be attributed to the introduction of unnecessary complexity, where the challenging interpolations generated by DreamOn might lead the model to overfit on ambiguous examples rather than optimizing for cleaner, more straightforward cases. In such settings, the model could become overly specialized in handling difficult scenarios, resulting in a trade-off where robustness in high-noise environments comes at the expense of accuracy in low-noise or clean data conditions. Although DreamOn's performance in low-noise and no-noise settings is not as strong as that of some other augmentation methods, this should be viewed in the context of real-world medical imaging scenarios, where noise is often unavoidable. A model that excels in clean environments but rapidly deteriorates under noisy conditions may not be as useful in practice. DreamOn's strength lies in its ability to maintain accuracy as noise levels increase, exhibiting the lowest drop in performance across varying noise intensities. This robustness is critical in medical image analysis, where the ability to produce reliable results under suboptimal conditions is often more valuable than peak performance in ideal scenarios.
Therefore, DreamOn's superior performance in high-noise environments suggests it is a more reliable choice for applications where image quality cannot always be guaranteed. Additionally, when combining DreamOn with standard data augmentation (SDA), we noticed a performance drop compared to using either strategy alone. This may be due to conflicting learning signals: while DreamOn encourages the development of robust decision boundaries by creating difficult, boundary-challenging cases, SDA introduces broader variability through transformations like rotations and flips, which do not necessarily increase difficulty. The model might struggle to reconcile these different types of data, leading to suboptimal performance when both strategies are employed together. These observations highlight the complex interactions between different data augmentation techniques and underscore the need for further investigation into their combined effects. Furthermore, the superior robustness of DreamOn compared to other data augmentation methods highlights the potential of GAN-based techniques in enhancing the generalization capabilities of deep learning models in medical imaging [for a review, see ( 42 )]. The interpolation of class labels and segmentation masks enables the model to learn from a range of image variations not provided by traditional augmentation methods. The improvement in model robustness indicates that DreamOn could assist in preparing models to manage the inconsistencies and variability found in clinical settings. This enhanced robustness in high-noise environments suggests that such AI-driven tools could be particularly valuable as complementary aids to radiologists. By integrating models like DreamOn into diagnostic workflows, it is possible to develop AI systems that can assist in analyzing challenging cases where image quality is compromised, thereby enhancing the overall diagnostic accuracy and confidence of radiologists. However, it is important to note that it is uncertain how well the findings related to the employed noise types can be generalized to real-world noise stemming from different imaging equipment and protocols, or patient-specific factors such as movements or biological variability (e.g., tissue density). While radiologists outperformed all models at higher noise levels, which emphasizes the ongoing importance of human expertise in medical image analysis, the lower accuracy of radiologists on original images without added noise perturbations might reflect the models' ability to detect subtle patterns not readily apparent to the (trained) human eye. We also note that, similar to REM dreams, the semantic meaning of the produced interpolations might not directly correlate with reality. This is because diagnostic work-up is done along the lines of specific guidelines that assign findings to discrete categories. There are benign lesions that mimic malignancy and vice versa, and some lesions indeed have an intermediate appearance between malignant and benign (which is ultimately what makes them suspicious). But there is no continuum between these categories ( 43 ) of the kind produced by DreamOn. Nonetheless, such augmented samples help in enhancing model robustness and act as an effective regularization component ( 21 , 44 , 45 ). We advocate that for clinical setups, while accuracy is important for deep learning models, their robustness and reliability might be even more important to ensure time-effective and trustworthy human-in-the-loop AI-assisted clinical workflows.
In this regard, the proposed DreamOn data augmentation offers a promising starting point for developing a stable framework for clinical situations where suboptimal imaging conditions occur. In the present study, we only investigated the robustness of one DL architecture (ResNet-18) and only employed a single medical dataset. Even though clinically relevant, the BUSI dataset is relatively small (780 unique images). Future research should thus focus on employing the DreamOn augmentation strategy for a wider variety of DL models, medical datasets, and additional types of perturbations to assess its robustness across more varied and complex noise conditions. It is also important to note that other advanced data augmentation strategies, such as additional GAN-based methods [e.g., ( 46 )], further Mixup variants [e.g., ( 47 )], and data augmentation with transformers [e.g., ( 48 )], were not covered in this study. Future research should explore these strategies to further validate and potentially enhance the robustness of our approach. Furthermore, the DreamOn approach could be improved by integrating other generative approaches such as diffusion models ( 49 ). Additionally, it would be ideal to develop a model that not only exhibits increased robustness in high-noise regimes but also maintains high accuracy across the board, including in low-noise and no-noise conditions. One limitation worth noting is that radiologists' data were obtained exclusively for Gaussian noise, with other noise types not being covered. Nevertheless, it is known that humans typically perform well across different noise types in image classification tasks [e.g., ( 10 )]. Therefore, we anticipate that the radiologists' performance on the additional noise types would be similar to their performance on Gaussian noise. In conclusion, the present study illustrates that REM-dream-inspired conditional GAN-based data augmentation through class and segmentation mask interpolation presents a promising approach to enhancing the robustness of deep learning models against noise perturbations in medical imaging. By benchmarking different data augmentation strategies against expert radiologists on out-of-distribution data, our study reveals a persistent gap in robustness between models and human experts, underscoring the need for continued advancements in AI to match human diagnostic proficiency. As the field continues to advance, incorporating biologically inspired data augmentation strategies could play a significant role in supporting radiologists and improving diagnostic accuracy in clinical settings.
PMC11696584

Methicillin-resistant Staphylococcus aureus nasal surveillance (MRSA swabs) has been highlighted in recent literature and the 2019 IDSA Pneumonia Treatment Guidelines as a tool to avoid unnecessary empiric coverage for MRSA in pneumonia. 1 , 2 A MRSA swab has a 96.5% negative predictive value and can be utilized by clinicians to de-escalate intravenous vancomycin and other anti-MRSA therapies used for empiric therapy of community-acquired pneumonia (CAP) and hospital-acquired pneumonia (HAP). 1 , 2 De-escalation of vancomycin for pneumonia when MRSA swabs are negative has been shown to reduce the duration of anti-MRSA therapy without an increase in in-hospital mortality or hospital length of stay. 3 , 4 There is currently no role for positive MRSA swabs in guiding the addition of therapy for pneumonia, given the low positive predictive value of 44.8%. 1 Pharmacists are in an ideal position to be monitoring MRSA swab surveillance. The pharmacist's daily workflow includes chart review to ensure proper antibiotic use based on indications and microbiology cultures. Allowing pharmacists to drive the MRSA nasal surveillance process would make efficient use of the pharmacist's workflow and relieve tasks from the providing physicians. Pharmacist-driven MRSA nasal surveillance has the potential to reduce unnecessary MRSA coverage and reduce complications associated with inappropriate antibiotic therapy, such as adverse drug events, development of antimicrobial resistance, and drug-drug interactions, as well as reduce hospital expense. This study's purpose is to measure the impact of a pharmacist conducting MRSA nasal surveillance and vancomycin de-escalation. This study is a retrospective pre-/post-intervention study approved by the Trinity Health Of New England institutional review board. This study evaluated the effectiveness of the stewardship initiative for MRSA swabs/vancomycin de-escalation. The antimicrobial stewardship team completed a four-week initiative from April 10, 2023, to April 28, 2023, to identify all patients actively being treated with vancomycin for pneumonia who would benefit from MRSA swabs and de-escalation of vancomycin. This period was compared to a control period, March 13, 2023, to April 7, 2023, the four weeks before the initiative started. During the control period, which followed the standard of care at our institution, each unit had an assigned floor pharmacist who occasionally monitored MRSA swabs and made recommendations to providers as time allowed outside their other daily responsibilities. In the intervention group, a daily list of patients on vancomycin was generated from the electronic health record (EPIC). The stewardship pharmacist reviewed the electronic health record of each patient identified to determine the indication for vancomycin. If the indication was pneumonia, the pharmacist would make recommendations relative to the MRSA swab as necessary. Providers were contacted and asked to order a MRSA swab if a patient was on vancomycin for pneumonia and had not had one ordered or performed within the past 7 days. All patients with MRSA swabs were followed; if the swab was negative, the stewardship pharmacist contacted providers and recommended de-escalation when appropriate based on the swab and other microbiology cultures. If a patient had vancomycin therapy that crossed the control and intervention timeline, they were included in the intervention group only if the pharmacist intervened prior to discontinuation of vancomycin.
Patients were included if they were adults younger than 89 years admitted as inpatients and receiving vancomycin for community-acquired or hospital-acquired pneumonia. Those 90 years and older were excluded per institutional IRB protocol to protect patient information. Patients could be counted twice if vancomycin was discontinued and restarted for a new respiratory infection of interest. They were excluded if they were diagnosed with ventilator-associated pneumonia (VAP) or had confirmed MRSA pneumonia. The primary outcome was the percentage of patients empirically treated with vancomycin for pneumonia who had a MRSA swab ordered at the beginning of vancomycin therapy. Beginning of therapy was defined as the first 24 hours of therapy. The exception was if a MRSA swab was ordered on a Monday for vancomycin started over the weekend in the intervention group. There was no stewardship pharmacist on Saturday or Sunday, so Monday was the earliest they could intervene. If the provider accepted the intervention that Monday, it would still be considered the beginning of therapy. Secondary outcomes included the percentage of patients who had vancomycin appropriately discontinued following a negative MRSA swab; the percentage of patients who had vancomycin inappropriately continued following a negative MRSA swab; the average length of vancomycin therapy; and potential cost savings. If vancomycin therapy was given for multiple indications, all days of therapy were counted. The decision to collect data this way was made because there often was not clear documentation in providers' notes when pneumonia was no longer a concern and vancomycin was only being used for the other, non-respiratory indication. The exception would be if there was a clear gap in vancomycin therapy for two different indications, specifically a respiratory infection of interest and a non-respiratory infection of interest. For example, if a patient was started on vancomycin for pneumonia, had it discontinued for multiple days, and was restarted for cellulitis, only the first course of vancomycin would be included in the data analysis. If vancomycin was discontinued and then restarted for pneumonia, both courses would be counted, excluding the days the order was discontinued. If a patient was on an extended dosing interval (e.g., every 48 hours), the days between doses counted toward days of therapy. A cost analysis of potential savings was conducted utilizing wholesaler prices obtained by the pharmacy purchasing team and laboratory staff. The current defined daily dose of vancomycin is 2 grams. 5 Over the course of our eight-week study, we looked at 110 patients, which, when extrapolated, corresponds to approximately 715 patients over the course of a year. At our institution, we purchase 10-gram vials of vancomycin that require reconstitution by a pharmacy technician. Each vial cost $18.99 when bought at a group purchasing organization (GPO) rate. This study included a comparator group: the four weeks prior to the implemented stewardship initiative. Based on our clinical and stewardship data to date, we anticipated a 40% event rate in the pre-initiative group and a 90% event rate in the post-initiative group. Assuming an alpha of 0.05 and power of 0.8, the study required a sample size of at least 13 in each group. The collected data were utilized to determine the frequencies of the primary and secondary outcomes within the study period.
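The reported per-group sample size can be reproduced approximately with a standard two-proportion z-test formula using the anticipated 40% and 90% event rates; the exact calculator the authors used is not stated, so the formulation below is an assumption.

```python
# Approximate check of the reported sample-size calculation (40% vs. 90% event rates,
# two-sided alpha = 0.05, power = 0.8) using a common two-proportion z-test formula.
# The authors' exact method is not stated; this assumed formulation yields roughly
# 13 patients per group, consistent with the text.
from scipy.stats import norm

p1, p2, alpha, power = 0.40, 0.90, 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
p_bar = (p1 + p2) / 2
n = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
     + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2 / (p1 - p2) ** 2
print(round(n, 1))   # ~13 per group
```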
Primary and secondary outcomes were compared between the two groups using a Chi-square test, except for the secondary outcome of days of vancomycin therapy, which was compared using a t-test. A total of 131 patients were screened for the study and 110 were included in the statistical analysis (Table 1). For the primary outcome, we found a statistically significant difference favoring the utilization of a pharmacist-driven stewardship initiative (Table 2). In the control group, only 36.1% (22/61) of patients had a MRSA swab ordered at the beginning of therapy; that number rose to 83.7% (41/49) in our intervention group. For our secondary outcomes, appropriate discontinuation of vancomycin was also statistically significant, with 61.2% (30/49) of all patients in the intervention group having his or her vancomycin discontinued following a negative swab compared to 19.7% (12/61) in the control group. When only looking at patients who had a MRSA swab ordered, 54.5% (12/22) and 73% (30/41) of patients in the control group and intervention group, respectively, had their vancomycin discontinued following a negative result. Inappropriate continuation of vancomycin was also statistically significant, favoring the control group: 12.2% (6/49) of all patients in the intervention group had vancomycin inappropriately continued, compared to 1.6% (1/61) in the control group. When looking at just patients with a swab ordered, 4.5% (1/22) of patients in the control group and 14.6% (6/41) of patients in the intervention group had vancomycin inappropriately continued.

Table 1. Summary of patient screening in each group (control vs. intervention)
Total screened: 73 vs. 58
Excluded for age: 5 vs. 3
Excluded for VAP: 3 vs. 3
Excluded for receiving treatment at outside facility: 1 vs. 0
Excluded for confirmed MRSA pneumonia: 3 vs. 3
Included in analysis: 61 vs. 49

Table 2. Primary and secondary outcomes in the control (n = 61) and intervention (n = 49) groups
Primary outcome
  MRSA swab ordered at the beginning of therapy, % (n): 36.1% (22) vs. 83.7% (41), P < 0.0001
Secondary outcomes
  Appropriately discontinued vancomycin following negative MRSA swab, % (n): 19.7% (12) vs. 61.2% (30), P < 0.0001
  Inappropriate continuation of vancomycin following negative MRSA swab, % (n): 1.6% (1) vs. 12.2% (6), P = 0.0235
  Mean days of vancomycin therapy: 3.85 vs. 3.53, P = 0.32

The number of days of vancomycin therapy did not differ significantly between the two groups (3.85 vs 3.53, P = 0.32). However, if the length of therapy were reduced by one day, we estimate our site would save approximately $2,715.57 in drug cost annually. The projected savings do not include labor costs associated with reconstitution by a technician. In addition, reducing the length of therapy would reduce the number of vancomycin levels that need to be run. The current cost for a vancomycin level at our institution is $87. Robust pharmacoeconomic studies are needed to fully ascertain the cost savings potential of utilizing MRSA swabs to de-escalate vancomycin, including but not limited to savings in vancomycin expenditures, pharmacy and nursing labor for drug preparation and administration, as well as vancomycin levels for therapeutic drug monitoring. We compared the rate of MRSA swabs ordered at our institution when there was a dedicated pharmacist managing MRSA swabs and when there was not. Significantly more MRSA swabs were ordered when a pharmacist intervened, and more patients had vancomycin therapy discontinued.
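The annual drug-cost estimate above can be recovered by recombining the figures reported in the methods (2-gram defined daily dose, roughly 715 extrapolated patients per year, 10-gram vials at $18.99 each); the sketch below simply repeats that arithmetic and introduces no new data.

```python
# Reproducing the reported annual drug-cost saving from the figures given above:
# one avoided day of therapy at the 2 g defined daily dose for ~715 patients/year,
# purchased as 10 g vials at $18.99 each (GPO rate).
ddd_grams = 2            # defined daily dose of vancomycin
patients_per_year = 715  # extrapolated from 110 patients over 8 weeks
vial_grams = 10
vial_cost = 18.99

grams_saved = ddd_grams * patients_per_year          # 1,430 g avoided
vials_saved = grams_saved / vial_grams               # 143 vials
annual_saving = vials_saved * vial_cost              # $2,715.57
print(f"${annual_saving:,.2f}")
```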
This process is in line with current IDSA guidelines that highlight the usefulness of MRSA swabs to de-escalate MRSA coverage. 1 In the intervention group, there were significantly more patients who had vancomycin inappropriately continued, not due to inappropriate recommendations from the pharmacists, but rather because the increased number of MRSA swabs highlighted inappropriate practice. MRSA swab results can be utilized by antimicrobial stewardship committees to ensure providers are de-escalating therapy appropriately and to provide education and interventions as needed. We found that the days of vancomycin therapy were not statistically different between the two groups. The potential reason for this is multifactorial. Our institution utilized MRSA cultures at the time of the study, which take one to three days to result. 4 The time to obtain results is significantly longer than with the current gold standard, PCR MRSA swabs, which can result in as little as 30 minutes. 4 , 6 During the study, swabs took an average of 32.1 hours from collection time to result, delaying negative results and vancomycin de-escalation. Other similar trials that utilized PCR tests found that days of vancomycin therapy were significantly fewer in the pharmacist intervention groups. 7 , 8 In addition, antimicrobial stewardship at our site was strong prior to our initiative. Our providers are trained to reevaluate empiric antibiotic therapy after several days, with many primary teams already de-escalating vancomycin early in therapy regardless of whether a MRSA swab was conducted. Cost savings due to a pharmacist-driven stewardship program would be maximized at facilities that currently use PCR MRSA swabs. Similar studies that looked at the implementation of a pharmacist-driven MRSA nasal swab policy found reductions in vancomycin duration of anywhere from 14.5 hours to 46.6 hours. 6 – 10 The study had limitations. The rate at which MRSA swabs were collected varied. MRSA swabs are collected by nurses, and sometimes orders would go uncollected for days even after pharmacist intervention. Two patients in the control group required their MRSA swabs to be cancelled and reordered to prompt the nurse to collect them. Despite multiple days of vancomycin therapy, the sensitivity of the MRSA swab should not be greatly impacted, but more research is needed on the subject. 3 In addition, it is possible there were patients on vancomycin who were not intervened on, because only one pharmacist was involved in the intervention and was not scheduled to work on the weekends during the intervention period. Therefore, if an order was started and discontinued between Friday night and Monday morning, the pharmacist would not see the order. However, we estimate this would be a small number of patients. All patients who were started on vancomycin over the weekend were intervened on Monday morning if they were still receiving treatment. The results from this study can be used to support the widespread use of pharmacist-driven MRSA nasal surveillance protocols at other institutions. At our site, we are implementing a pharmacist-driven MRSA swab protocol, which will allow pharmacists to order MRSA swabs per protocol if the patient is receiving vancomycin for pneumonia. Our lab recently switched to PCR swabs, so it is anticipated this switch and the implementation of the new protocol will allow for less broad-spectrum antibiotic use, shorter vancomycin therapy, and decreased drug and monitoring costs.
This study may prompt future studies evaluating other downstream effects such as patient clinical outcomes, reduced vancomycin use, hospital cost, adverse drug events, drug-drug interactions, and antibiotic resistance.
PMC11696591

Antimicrobial resistance has been identified by the Centers for Disease Control and Prevention (CDC) as well as the World Health Organization as one of the leading threats to human health. 1 , 2 Overprescribing of antimicrobials is estimated to contribute to the development of 2.8 million cases of antimicrobial-resistant infections in the United States every year. 1 Antibiotic prescribing is common in ambulatory visits for acute respiratory tract infections (ARTIs), 3 and it is estimated that approximately 50% of antibiotics prescribed for these encounters are unnecessary. 4 Antimicrobial stewardship programs (ASPs) have been developed in inpatient and ambulatory settings to reduce inappropriate antibiotic prescriptions (IAPs), 5 and ambulatory ASPs became Joint Commission-mandated in most ambulatory settings in 2019. 6 Ambulatory ASPs have been shown to decrease IAPs for select ARTIs. 7 However, there remains a great need to optimize and expand ASPs and their initiatives to further reduce IAPs. One ARTI for which antimicrobials are frequently prescribed is acute uncomplicated bronchitis (AUB). 8 , 9 The most recent evidence-based guideline for the treatment of AUB includes the key recommendation to "avoid prescribing antibiotics for AUB." According to this guideline, antibiotics should be reserved for treatment of AUB only when chronic lung conditions are present or other bacterial conditions requiring antibiotic treatment are concurrently diagnosed. 10 The aim of this study was to assess if a bundle of antimicrobial stewardship interventions (ASIs) in a large healthcare system impacted the proportion of IAPs for AUB in adults in ambulatory care visits. This was a quasi-experimental quality improvement (QI) study comparing the proportion of IAPs pre- versus postintervention in a health system's ambulatory sites over a 2-year period. Institutional review board approval was sought and was waived, as the study was identified as QI (nonhuman) research. A bundle of ASIs for AUB began in January 2021 (Table 1): (1) retrospective auditing of IAPs for AUB; (2) quarterly reporting of department-, clinic-, and provider-level IAPs for AUB; (3) educational webinars on ASP and evidence-based guidelines for treatment of AUB; and (4) best practice alerts in the electronic medical record (EMR) when antimicrobials were prescribed for AUB. The preintervention period was January 1, 2020, through December 31, 2020, and the postintervention period was January 1, 2021, through December 31, 2021.

Table 1. Summary of antimicrobial stewardship interventions during the pre- and postintervention periods

Auditing (January 2021–December 2021)
  Description: Retrospective auditing of inappropriate prescribing for acute uncomplicated bronchitis.
  Comments: Automated electronic medical record report with patient- and department-level data used to assess the appropriateness of antimicrobial prescribing for acute uncomplicated bronchitis.

Quarterly reporting (report dates: March 2021, June 2021, September 2021, and December 2021)
  Description: Antibiotic prescribing reports finalized and sent to all ambulatory practices; included department-, clinic-, and provider-level inappropriate antimicrobial prescribing for acute uncomplicated bronchitis.
  Comments: Reports sent to each ambulatory site lead and practice manager or supervisor.
Educational webinar (August–September 2020)
  Topic: Introduction to ambulatory stewardship.
  Content: Overview of antimicrobial stewardship, the global burden of antimicrobial resistance, and a review of the ambulatory stewardship process and methodology.
  Comments: All ambulatory providers and practice managers invited to attend; webinars were broadcast in real time twice in 1 week, with recordings made available.

Educational webinar (December 2020)
  Topic: Management of acute uncomplicated bronchitis.
  Content: Review of evidence-based guidelines on the diagnosis, evaluation, and treatment of acute bronchitis, both pharmacologic and nonpharmacologic management of the condition, and a review of the ambulatory stewardship process and methodology.
  Comments: All ambulatory providers and practice managers invited to attend; webinars were broadcast in real time twice in 1 week, with recordings made available.

Best practice alerts active in the electronic medical record (January 2021–December 2021)
  Description: A real-time alert in the electronic medical record appeared if antimicrobials were prescribed in an encounter where a bronchitis diagnosis was entered by the provider.
  Content of alert: "This patient was prescribed an antibiotic with a diagnosis of bronchitis, a viral infection in the majority. Antibiotics are not generally indicated. The Centers for Disease Control and Prevention has recommended avoiding antibiotics unless the patient also has a chronic lung disease, immune deficiency, or an alternate bacterial infection." A prompt to remove the antibiotic order(s) appears, prepopulated to remove the antibiotic orders. If the prompt to remove the antibiotic is changed to continue the antibiotic, an acknowledged reason must be completed; choices include "Alternate bacterial infection," "Comorbidity," and "Other-see comments."
  Comments: The prompt is prepopulated with "Remove" for the antibiotic order.

Weekly automated reports were accessed from a reporting platform in the institution's EMR (Epic Hyperspace®). Ambulatory patient encounters were identified using International Statistical Classification of Diseases, 10th Revision (ICD-10) codes J20.9 and J20.8 for "bronchitis." Ambulatory encounters included visits to urgent care as well as primary care sites (internal medicine, internal medicine-pediatrics, and family medicine) and included both in-person and virtual visits. Encounters for individuals less than 18 years of age, duplicate encounters, and follow-up visits for the same instance of illness were excluded. Encounters were coded as "appropriate" if antibiotics were prescribed with the diagnosis of AUB only when specific underlying conditions were documented in the patient's EMR (chronic obstructive pulmonary disease, emphysema, pulmonary fibrosis, bronchiectasis, or immunodeficiency), if another diagnosis requiring antimicrobial treatment was made (eg, sinusitis, community-acquired pneumonia), or if no antimicrobial was prescribed with the diagnosis of AUB. Encounters were coded as "inappropriate" if an antibiotic was prescribed without an alternate diagnosis requiring antimicrobial treatment and none of the aforementioned comorbid conditions were documented in the encounter, patient history, or patient problem list. The analysis for this study included descriptive and inferential statistics for both patient demographics and monthly comparisons of the proportions of IAPs between groups. All numeric variables were nonnormally distributed between groups, are displayed as median (25th, 75th percentile), and were tested with the Wilcoxon rank sum test.
Categorical variables (facility type, sex, and ethnicity) were displayed as count (percentage) and tested via χ2 analysis. Demographic comparisons were assessed at an alpha of 0.05, while a Bonferroni adjustment was applied to the P-values for the monthly comparison to correct for multiple testing. A total of 8,176 encounters were included in this analysis. There were 4,694 encounters in the preintervention period and 3,482 encounters in the postintervention period. There was an overall decrease in IAPs for AUB preintervention compared to postintervention (44.9% vs 32.5%; P < .001). Additionally, there was a decrease in IAPs for AUB from preintervention to postintervention in the following months: March, October, and November; declines in IAPs were not statistically significant when comparing pre- to postintervention prescribing in the other months. Of note, there was an overall increase in urgent care visits from 32.6% in 2020 to 37.9% in 2021, while clinic visits decreased from 67.4% to 62.1% in the same time frame. There was an association between IAPs and facility type, with an overall higher proportion of IAPs in clinics compared to urgent care sites. There were no differences in IAP rates for AUB among demographic groups, including race, gender, or primary language in the preintervention or postintervention periods. Figure 1. Monthly comparisons of inappropriate prescriptions by group. This study corroborates previous reports of high rates of antimicrobial use for AUB in ambulatory care, 8 , 9 despite long-standing recommendations to avoid antibiotic use in AUB for most patients diagnosed as having this condition. The ASIs demonstrated a significant decline in IAPs for AUB, with salient decreases in IAPs during 3 months of the typical peak respiratory viral season in Michigan; these declines are likely due to the increased number of acute care visits for bronchitis during these months, allowing for increased power to detect statistically significant changes. Antimicrobial stewardship initiatives are institution- and/or health system-dependent and can vary significantly in their structure, duration, and impact. This study demonstrates that a bundle of ASIs can lead to a decrease in IAPs for AUB in adult patients. A previous study also demonstrated decreases in IAPs for ARTIs with a bundle of ASIs but also found that IAP rates rebounded after ASIs ceased. 7 It is currently recommended by the CDC that institutions and healthcare systems engage in improving antibiotic prescribing by developing and implementing strategies that align with evidence-based recommendations for the diagnosis and management of infections. As inappropriate prescribing may vary by practice location, focusing ASIs toward sites with higher rates of IAP may be effective. This study has several limitations. Data collection was retrospective in nature, and inclusion in the study was based on provider-selected ICD-10 codes for bronchitis, which is subject to selection bias. Inappropriate prescribing was based on whether specific underlying conditions were documented in the encounter notes, medical histories, and problem lists in the EMR, which may not be completely accurate for all patient encounters. Conversely, this study was strengthened by the data set size, which led to smaller margins of error and highly reliable results. Because the study included multiple ASIs in the bundle, it cannot be determined which intervention was the most impactful in reducing IAP for AUB. 
Lastly, the durability of the impact of the ASIs used in this study will be evaluated over time. This study adds to the growing body of evidence that ASIs can meaningfully decrease IAPs for ARTIs. Further studies are needed in different healthcare settings to confirm these findings, as well as to compare which of the many possible ASIs are most effective at reducing inappropriate antibiotic use. | Review | biomedical | en | 0.999998 |
PMC11696600 | Clostridioides difficile infection (CDI) is caused by a gram-positive, spore-forming bacterium found in soil and certain animals, including humans. 1 Symptoms range from mild-to-moderate diarrhea to severe presentations (ie, pseudomembranous colitis, toxic megacolon) or even death. 2 – 4 Risk factors include older age and healthcare exposures (eg, acute infections, antibiotic use, recent hospitalization). 5 – 7 CDI complications (eg, dehydration, diarrhea, sepsis) may necessitate hospitalization, with hospitalized patients potentially transitioning to long-term care facilities and remaining at high risk for CDI recurrence or readmission. 8 – 10 CDI recurrence rates range between 20% and 30%, 5 , 11 , 12 increasing healthcare burden and costs associated with hospital readmission/duration and death due to increasing severity of subsequent infections. 8 US CDI burden in 2019 was 58.3 and 139.1 per 100,000 person-years in 18–44- and 45–64-year-olds, respectively 13 ; among ≥65-year-olds, the annual CDI incidence rate was 385.8 per 100,000 person-years. 13 In a study using Merative MarketScan commercial and multistate Medicare/Medicaid databases, healthcare-associated (HCA) CDI rates decreased between 2011 and 2017 in 25–64-year-olds and ≥65-year-olds, whereas community-associated (CA) CDI rates increased in 25–64-year-olds (commercial insurance) and ≥65-year-olds (Medicare). 14 Similar trends were observed from 2012 to 2019 using Optum Medicare Advantage data from ≥65-year-olds when the overall percentage of HCA CDI declined from 53.2% to 47.2% and CA CDI increased from 46.8% to 52.8%. 15 Within this dataset, estimated 2018 mean CDI-associated healthcare costs among ≥65-year-olds were $13,500 per person within 2 months of follow-up. 15 Excess healthcare costs were higher for hospitalized versus nonhospitalized patients with either HCA or CA CDI. 15 CDI-associated mortality rates were up to 7.9% among elderly patients after 12 months. 15 CDI burden and associated healthcare costs are well studied among the elderly 15 ; however, data among <65-year-olds remain limited, despite previous studies suggesting a substantial burden among this age group. 16 – 18 This study expands upon existing literature by providing annual CDI incidence, along with estimated healthcare and patients’ out-of-pocket costs, and associated mortality rates among adults <65-year-olds using a commercially insured claims database. This retrospective cohort study included US adults 18–64 years old insured under commercial plans from the Optum® Clinformatics® Data Mart, which comprises members of a large national managed care company spanning all 50 states. The database includes pharmacy- and provider-submitted claims regarding approximately 12–14 million individuals annually and >65 million unique individuals between 2000 and 2020; all submitted claims are verified and de-identified before inclusion in Optum. Claims data include standard pricing for medical, pharmacy, and inpatient charges. No patient consent was obtained because this study was exempt from requirements for human subjects research owing to the use of only de-identified data. Analyses of annual CDI incidence and outcomes were conducted in 2 separate cohorts. For assessment of annual incidence, individuals were 18–64 years old, alive, and enrolled in an Optum commercial plan on January 1 of the corresponding calendar year between 2015 and 2019. 
Inclusion required continuous enrollment from January 1 of the calendar year until death, disenrollment, or end of the calendar year (whichever occurred first). The primary definition of CDI included any of the following: an inpatient claim with the International Classification of Diseases, 9th Revision (ICD-9) diagnosis code 008.45 or 10th Revision (ICD-10) diagnosis code A04.7x; an outpatient claim coded for CDI plus antibiotic therapy (nontopical metronidazole, oral vancomycin, or fidaxomicin) within ±14 days of diagnosis; or an outpatient C. difficile toxin test and antibiotic therapy within ±14 days of the test. CDI cases were required to have no prior CDI (as defined above) within ≤60 days of the CDI index date, the latter based on the Centers for Disease Control and Prevention surveillance definition of new incident CDI. 19 Because test results are not available from outpatient claims data, CDI cases were defined according to receiving the test regardless of results. Antibiotic therapy receipt indicated that the therapy was dispensed according to pharmacy claims data. For excess costs and mortality analyses, data included claims from 2015 to 2019, and individuals were required to be 18–64 years old on the index date selected between 2016 and 2018. The index date was assigned as the first CDI diagnosis (defined above) for patients with CDI and was a randomly assigned date between 2016 and 2018 for individuals without CDI. Outcomes were evaluated for CDI+ and 1:1 propensity score-matched CDI− controls who were continuously enrolled in the database for ≥12 months before the index date and had completed 12 months of continuous enrollment after the index date, unless preceded by death. CDI cases were classified as HCA or CA acquisition according to established guidelines. 7 , 15 HCA CDI included hospital-onset CDI diagnosed during a hospitalization or other healthcare facility stay, with an index date >3 days after admission. HCA CDI also included CDI following an inpatient, skilled nursing facility, hospice, long-term care facility, or nursing home stay with >1 day duration in the 4 weeks before the CDI index date. CA CDI included outpatient onset and inpatient onset within ≤3 days of admission and with no healthcare facility overnight stay in the 12 weeks before the CDI index date. Indeterminate cases were those not meeting definitions of CA or HCA CDI. Annual CDI incidence was assessed overall, by age group and by year from 2015 to 2019. CDI incidence rate was calculated as the number of CDI cases (defined above) each year between 2015 and 2019, divided by total follow-up time in person-years for the eligible study population in each index calendar year. Incidence rates are presented as the number of episodes per 100,000 person-years. For outcomes analyses, healthcare utilization, costs, and mortality were compared between CDI+ and CDI− individuals. Propensity score matching (PSM) was performed using multivariable logistic regression with 61 variables, including pre-index comorbidities and healthcare utilization and 1:1 matching of CDI+ and CDI– individuals as well as exact matching by 5 – 10-year age groups and index date 2-week windows. Greedy nearest-neighbor matching and a 0.1 caliper width of the SD of the logit of the propensity score were used for PSM of CDI+ and CDI− individuals. CDI+ patients without a suitable matched control were excluded from the analysis. 
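The matching procedure described above — a logistic-regression propensity score, 1:1 greedy nearest-neighbor matching, and a caliper of 0.1 SD of the logit of the propensity score — can be sketched as follows. This is a simplified illustration on hypothetical data with a handful of covariates rather than the 61 variables used in the study; the variable names are assumptions, and the balance check uses the standardized-difference diagnostic described in the methods passage that follows.

```python
# Minimal sketch (hypothetical data): propensity score estimation, 1:1 greedy
# nearest-neighbor matching within a caliper of 0.1 SD of the logit of the
# propensity score, and a standardized-difference balance check.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "charlson": rng.poisson(1.0, n),
    "prior_hosp": rng.integers(0, 2, n),
})
# Hypothetical exposure: CDI status loosely related to the covariates
logit_true = -4 + 0.02 * df.age + 0.4 * df.charlson + 0.8 * df.prior_hosp
df["cdi"] = rng.random(n) < 1 / (1 + np.exp(-logit_true))

covs = ["age", "charlson", "prior_hosp"]
ps = LogisticRegression(max_iter=1000).fit(df[covs], df.cdi).predict_proba(df[covs])[:, 1]
df["logit_ps"] = np.log(ps / (1 - ps))
caliper = 0.1 * df.logit_ps.std()

cases = df[df.cdi].sort_values("logit_ps")
controls = df[~df.cdi].copy()
pairs = []
for idx, row in cases.iterrows():             # greedy 1:1 matching without replacement
    d = (controls.logit_ps - row.logit_ps).abs()
    if len(d) and d.min() <= caliper:
        j = d.idxmin()
        pairs.append((idx, j))
        controls = controls.drop(j)            # each control used at most once

matched = df.loc[[i for p in pairs for i in p]]
print(f"matched pairs: {len(pairs)}")

# Balance diagnostic: |mean difference| / pooled SD, flagged if > 0.1
for c in covs:
    a, b = matched[matched.cdi][c], matched[~matched.cdi][c]
    smd = abs(a.mean() - b.mean()) / np.sqrt((a.var() + b.var()) / 2)
    print(f"{c}: standardized difference = {smd:.3f}")
```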
For outcome analyses stratified by acquisition status, indeterminate case counts were low and were therefore included among HCA cases because indeterminate cases had prior healthcare facility encounters within 4–12 weeks before CDI diagnosis. All variables, including baseline and outcome measures, were analyzed descriptively. Standardized differences were computed as absolute differences in sample means divided by the pooled SD. A standardized difference of 0.1 was used as a cutoff to indicate a clinically meaningful difference. Healthcare costs (based on standard allowable amounts estimated by Optum) and patients’ out-of-pocket costs (eg, copays, deductibles, coinsurance) within ≤2 months of the index date were evaluated by age group, CDI acquisition type, and hospitalization status; hospitalized patients were defined as those who were hospitalized at the time of CDI diagnosis or hospitalized with a CDI diagnosis code ≤60 days post-index. Mortality was evaluated by age group and CDI hospitalization status, with proportions of CDI+ and CDI− individuals who had died at 1, 2, 3, 6, and 12 months after the index date compared using McNemar tests. Analyses were conducted using Statistical Analysis Software (SAS) version 9.4 (SAS Institute, Cary, NC, USA). To address concerns regarding the potential inclusion of false-positive cases, a sensitivity analysis was performed by excluding patients who met CDI criteria with only an outpatient C. difficile toxin test and antibiotic therapy within ±14 days of the test but who did not have confirmation of CDI diagnosis within 30 days. Patients included in this sensitivity analysis were only those who had either (1) an inpatient diagnosis, (2) an outpatient diagnosis plus an antibiotic prescription filled within ±14 days, or (3) a toxin test plus an antibiotic prescription filled within ±14 days, plus a subsequent CDI diagnosis within ≤30 days of the toxin test. Among 50–64- and 18–49-year-olds, 2015 CDI incidence was 217 and 113 cases per 100,000 person-years, respectively, which decreased by 23.0% to 167 and 87 cases per 100,000 person-years, respectively, in 2019 . In both age groups, the proportion of CDI cases was higher for CA than HCA CDI, and most patients were not hospitalized . Across calendar years and age groups, 10.0%–19.8% of CDI cases had HCA CDI, 4.4%–7.1% of CDI cases had indeterminate CDI, and 76.5%–86.9% of CDI cases had CA CDI . Between 2015 and 2019, 21.8%–24.1% of 50–64-year-olds and 13.6%–14.9% of 18–49-year-olds with CDI were hospitalized . Figure 1. (A) Annual CDI incidence rate in each age group. (B) Percentage of CDI+ patients in each age group over time by acquisition type. (C) Percentage of CDI+ patients in each age group over time by hospitalization status. CDI, Clostridioides difficile infection. Between January 2016 and December 2018, 23,513,801 adults were enrolled in the Optum database and were enrolled and alive during the year before the index year . Of 4,818,391 50–64-year-olds and 13,000,024 18–49-year-olds during 2016–2018, 6,787 and 7,033 patients, respectively, were CDI+ and met inclusion criteria. Patient characteristics are summarized in Table 1 . Before matching, CDI+ patients were more likely to be female and have comorbidities than CDI− controls. Table 1. 
Patient demographic and baseline clinical characteristics before propensity score matching a 50–64 years age group 18–49 years age group Characteristic CDI+ (n = 6,787) CDI– (n = 1,363,715) CDI+ (n = 7,033) CDI– (n = 2,663,671) Age, mean (SD), y 57.2 (4.2) 56.6 (4.1) 36.1 (9.2) 34.6 (9.2) Age range, n (%), y 18–29 n/a n/a 1,812 (25.8) 841,300 (31.6) 30–39 n/a n/a 2,224 (31.6) 883,265 (33.2) 40–49 n/a n/a 2,997 (42.6) 939,106 (35.3) 50–54 1,999 (29.5) 487,531 (35.8) n/a n/a 55–59 2,436 (35.9) 488,854 (35.8) n/a n/a 60–64 2,352 (34.7) 387,330 (28.4) n/a n/a Male sex, n (%) 2,782 (41.0) 687,049 (50.4) 2,886 (41.0) 1,380,603 (51.8) US region, n (%) Northeast 555 (8.2) 119,267 (8.7) 677 (9.6) 247,703 (9.3) Midwest 2,108 (31.1) 389,051 (28.5) 2,052 (29.2) 682,453 (25.6) South 2,882 (42.5) 554,388 (40.7) 3,008 (42.8) 1,051,114 (39.5) West 1,232 (18.2) 274,859 (20.2) 1,285 (18.3) 571,517 (21.5) Type of first incident CDI case, n (%) Hospitalization or other inpatient facility 1,264 (18.6) n/a 792 (11.3) n/a Outpatient 605 (8.9) n/a 662 (9.4) n/a Toxin test and antibiotic 4,918 (72.5) n/a 5,579 (79.3) n/a Acquisition status of first incident case, n (%) Healthcare associated 1,055 (15.5) n/a 693 (9.9) n/a Community associated 5,393 (79.5) n/a 6,098 (86.7) n/a Indeterminate 339 (5.0) n/a 242 (3.4) n/a Charlson Comorbidity Index, mean (SD) 2.0 (2.6) 0.5 (1.1) 0.9 (1.7) 0.2 (0.6) Charlson Comorbidity Index, n (%) 0 2,748 (40.5) 972,584 (71.3) 4,452 (63.3) 2,363,815 (88.7) 1 1,337 (19.7) 220,347 (16.2) 1,344 (19.1) 224,249 (8.4) 2 859 (12.7) 100,607 (7.4) 489 (7.0) 50,343 (1.9) 3+ 1,843 (27.2) 70,177 (5.1) 748 (10.6) 25,264 (0.9) CDI, Clostridioides difficile infection; CDI+, CDI positive; CDI–, CDI negative; n/a, not applicable. a Standardized differences for baseline demographic and clinical characteristics after propensity score matching are provided in Figures S1 and S2 . There were 6,332 CDI+ patients and matched CDI− controls in the 50–64-year age group, and 6,667 CDI+ patients and matched CDI− controls in the 18–49-year age group following 1:1 PSM . Following PSM, baseline characteristics for CDI+ cases and CDI− controls were well matched within age groups . Overall mean total healthcare costs at 2 months post-index were $18,453 for CDI+ and $6,819 for CDI− among 50–64-year-olds and $12,019 for CDI+ and $4,193 for CDI− among 18–49-year-olds, with differences of $11,634 and $7,826, respectively. Compared with CA CDI, HCA CDI was associated with higher total healthcare costs . Overall mean out-of-pocket costs for CDI+ and CDI− patients, respectively, were $990 and $417 (difference, $573) for 50–64-year-olds and $954 and $311 (difference, $642) for 18–49-year-olds. Higher out-of-pocket costs were observed with CA versus HCA CDI in both age groups (Table S1 ). Figure 2. Healthcare costs at 2 months post-index by CDI acquisition type and hospitalization status for patients (A) 50–64 or (B) 18–49 years of age. Differences between CDI+ and CDI– groups are reported. Δ, difference; CDI, Clostridioides difficile infection; CDI+, CDI positive; CDI–, CDI negative. Costs are shown in 2019 US dollars. Higher overall healthcare costs among CDI cases in both age groups were driven primarily by inpatient hospitalization costs, followed by outpatient costs (Table S1 ). Among 50–64-year-olds, mean total healthcare costs for hospitalized HCA CDI+ patients were $68,745 higher than matched CDI− controls and $37,646 higher for hospitalized patients with CA CDI . 
Among CDI+ nonhospitalized patients, mean total healthcare costs were $8333 and $2953 higher for patients with HCA and CA CDI, respectively, compared with CDI− controls. Mean total out-of-pocket costs for hospitalized HCA and hospitalized CA CDI+ patients were $722 and $1692 higher, respectively, than matched CDI− controls (Table S1 ). Among nonhospitalized CDI+ patients, mean out-of-pocket costs were $125 and $465 higher for patients with HCA and CA CDI, respectively, than for CDI− controls. Comparable findings were observed among 18–49-year-olds, although mean costs were generally lower than for the older group . Mean total healthcare costs for hospitalized CDI+ patients were $58,824 and $32,947 higher for HCA and CA CDI, respectively, than CDI− controls. Mean total healthcare costs were $5,126 and $3,564 higher in nonhospitalized HCA and CA CDI+ patients, respectively, than CDI− controls. Mean total out-of-pocket costs for hospitalized HCA and CA CDI+ patients were $1,223 and $1,886 higher, respectively, than CDI− controls (Table S1 ). Among nonhospitalized CDI+ patients, mean out-of-pocket costs were $139 and $554 higher for patients with HCA and CA CDI, respectively, than CDI− controls. Patterns of healthcare utilization based on acquisition status were similar between both age groups (Table 2 ; Tables S2 and S3 ). CDI was associated with an increased mean number of outpatient visits and a higher proportion of patients with emergency department visits across all patient groups, regardless of age, acquisition type, or hospitalization status. Similar results were observed for inpatient utilization; mean numbers of inpatient visits and days were higher among CDI+ patients versus controls, regardless of age, acquisition type, or CDI hospitalization status. Proportions of patients with outpatient prescriptions were higher among all groups of CDI+ patients versus CDI− controls (Table 2 ; Tables S2 − S3 ), except for hospitalized 50–64-year-olds with HCA CDI. Table 2. Healthcare resource utilization at 2 months post-index 50–64 years age group Aged 18–49 years age group CDI+ (n = 6,332) CDI– (n = 6,332) Difference CDI+ (n = 6,667) CDI– (n = 6,667) Difference Any outpatient visits, n (%) a 6,241 (98.6) 4,868 (76.9) 21.7 6,608 (99.1) 4,344 (65.2) 34.0 Mean number of outpatient visits per patient (SD) a 9.5 (12.1) 5.2 (8.2) 4.3 7.6 (7.8) 3.5 (6.2) 4.0 Any ED visits, n (%) 1,466 (23.2) 347 (5.5) 17.7 1,908 (28.6) 485 (7.3) 21.3 Mean number of ED visits per patient (SD) 0.3 (0.6) 0.1 (0.3) 0.2 0.4 (0.7) 0.1 (0.4) 0.3 Inpatient hospitalization, n (%) 1,418 (22.4) 322 (5.1) 17.3 1,012 (15.2) 257 (3.9) 11.3 Mean number of inpatient days per patient (SD) 2.5 (7.4) 0.5 (3.2) 2.0 1.5 (5.7) 0.3 (2.8) 1.2 Any outpatient prescription, n (%) 6,069 (95.8) 5,332 (84.2) 11.6 6,363 (95.4) 4,555 (68.3) 27.1 Mean number of outpatient prescriptions per patient (SD) 7.9 (6.6) 6.3 (6.4) 1.6 5.8 (5.5) 3.7 (5.0) 2.1 CDI, Clostridioides difficile infection; CDI+, CDI positive; CDI–, CDI negative; ED, emergency department; n, number of patients. Values presented after propensity score matching. Numbers and days spent in other inpatient facilities (skilled nursing facility, inpatient hospice facility, inpatient mental health/chemical dependence facility, or inpatient rehabilitation facility) are not shown in order to maintain patient de-identification due to small cell counts. a Excludes ED visits. 
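The mortality comparisons reported next are based on McNemar tests of paired proportions between matched CDI+ and CDI− individuals, as described in the methods. A minimal sketch on hypothetical matched-pair counts (the numbers below are illustrative only, not the study's results):

```python
# Minimal sketch (hypothetical counts): McNemar test for paired mortality outcomes
# among 1:1 propensity-score-matched CDI+ and CDI- individuals.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of matched pairs: rows = CDI+ died (yes/no), columns = CDI- died (yes/no).
# The off-diagonal (discordant) cells drive the test statistic.
table = np.array([[40, 220],    # CDI+ died & CDI- died, CDI+ died & CDI- survived
                  [90, 5982]])  # CDI+ survived & CDI- died, both survived
result = mcnemar(table, exact=False, correction=True)
print(f"McNemar chi-square = {result.statistic:.2f}, P = {result.pvalue:.4f}")

# Excess mortality is the difference in marginal proportions between the matched groups
n_pairs = table.sum()
p_cdi_pos = table[0].sum() / n_pairs
p_cdi_neg = table[:, 0].sum() / n_pairs
print(f"excess mortality = {100 * (p_cdi_pos - p_cdi_neg):.1f} percentage points")
```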
At each follow-up between 1 and 12 months post-index, mortality rates were higher in 50–64-year-olds than 18–49-year-olds among both hospitalized and nonhospitalized patients . At 12 months, overall mortality among 50–64-year-olds was 4.2% for CDI+ patients versus 2.0% for CDI− controls ( P < .001). Among 18–49-year-olds at 12 months, overall mortality was 1.2% for CDI+ patients versus 0.6% for CDI− controls ( P < .001). Among hospitalized CDI+ patients matched to CDI– controls, excess mortality rates at 12 months post-index were 11.7% and 5.8% among the older and younger groups, respectively. In both age groups, excess mortality was higher among hospitalized versus nonhospitalized CDI+ patients and gradually increased through the 12 months after the index date . Figure 3. Mortality during follow-up (A, B) by CDI hospitalization status and (C) overall for patients 50–64 or 18–49 years of age. Differences between CDI+ and CDI– groups are reported. * P < .05; † P < .001. Δ, difference; CDI, Clostridioides difficile infection; CDI+, CDI positive; CDI–, CDI negative. Some percentages are rounded to maintain patient de-identification. Of the CDI+ patients, 2,732 (50−64-year-olds) and 2,388 (18−49-year-olds) met the stringent CDI definition requiring a CDI diagnosis. Residual imbalances were observed between matched CDI+ patients and CDI− controls for baseline comorbidities in these smaller subgroups (Tables S4 and S5 ). Outcomes were generally consistent with the main analysis; however, patterns of findings for healthcare utilization and costs showed higher overall out-of-pocket costs ($1,258 among 50–64-year-olds and $1,206 among 18–49-year-olds) and total number of outpatient or inpatient visits in the sensitivity analysis compared with the primary analyses (Tables S6 , S7 , S8 ). This large retrospective cohort study showed that annual CDI incidence among 18–64-year-olds was similar in 2015 and 2016, gradually decreasing to 167 and 87 cases per 100,000 person-years in older and younger age groups, respectively, by 2019. Most CDI patients were not hospitalized, and most CDI cases were CA versus HCA. Incidence rates and trends regarding the proportion of cases that were CA versus HCA were consistent with those reported in a study of <65-year-old CDI+ patients within the Veterans Health Administration database. 20 CDI was associated with increases in total healthcare costs of $11,634 in 50–64-year-olds and $7,826 in 18 – 49-year-olds. Hospitalization drove a large portion of CDI-associated cost increases for patients with both HCA and CA infections. CDI was also associated with increased out-of-pocket costs by $573 and $642 in 50–64-year-olds and 18–49-year-olds, respectively. Compared with HCA CDI, CA CDI was associated with higher out-of-pocket costs in both age groups, regardless of CDI hospitalization status. Healthcare utilization (eg, mean number of outpatient, emergency department, inpatient visits) was higher in the CDI+ than CDI− group, regardless of age, acquisition type, or CDI hospitalization status. The only exception to this trend was for the proportion of patients with outpatient prescriptions among hospitalized 50–64-year-olds with HCA CDI, which was likely because hospitalized patients have more limited opportunities for filling outpatient prescriptions. Differences in overall mortality between the CDI+ and CDI− groups increased through 12 months after the index date and were higher among hospitalized versus nonhospitalized CDI+ patients. 
Within 12 months, CDI was associated with 2.2% and 0.6% excess mortality among 50–64-year-olds and 18–49-year-olds, respectively. Among hospitalized CDI+ patients, respective excess mortality in the older and younger age groups reached 11.7% and 5.8%, respectively; however, this may be overestimated because the matched CDI– controls were not necessarily hospitalized, particularly those who were matched with HCA hospitalized CDI+ cases. In a similar analysis of US claims data from 2010 to 2014, Zhang and colleagues reported mean excess 6-month costs of $26,663 and $21,160 for primary (nonrecurrent) CDI among <65-year-olds and ≥65-year-olds, respectively. 11 Using the MarketScan commercial database for adults 25 – 64 years of age, Sahrmann and colleagues calculated that 1-year CDI-excess costs for HCA CDI and CA CDI were $43,127 and $13,105, respectively. 18 A Canadian population study using a PSM cohort further reported a 1-year 13% mortality risk due to community-onset CDI among all-aged individuals. 21 Using Medicare claims data for ≥66-year-olds, Olsen and colleagues reported a similar CDI-excess mortality risk of 10.9% at 1-year follow-up. 22 Our results are consistent with these findings in both young adult and elderly populations and highlight the vulnerability of younger hospitalized adults with CDI, although they are at relatively lower mortality risk than older patients. Given the considerable burden of CDI among <65-year-olds and the paucity of available data, further research is needed to determine rates of recurrence, specific morbidities, and high-risk groups within younger US adults. We have previously reported that CDI is associated with septicemia and urinary tract infections among Medicaid enrollees 25–64 years old 16 ; additional studies should be conducted to characterize the prevalence of complications such as colitis and irritable bowel syndrome. Strengths of this analysis include characterization of incidence, healthcare utilization, costs, and mortality associated with CDI among US adults <65 years old, for whom existing data are limited. However, there were some important limitations. Retrospective observational studies could lead to bias owing to unmeasured confounding variables. Moreover, claims databases may be associated with underreporting or misclassification of health outcomes, 23 , 24 and the only information available regarding mortality is the month and date of death, without details of the associated cause. Because Optum includes only members covered under commercial healthcare plans, results may not fully represent the 18 – 64-year-old population. The cost analysis was limited to costs incurred ≤2 months after diagnosis. Indeterminate CDI cases were considered HCA for the costs and mortality analyses, given the small numbers of such cases. Furthermore, CDI diagnosis codes may lack the specificity required to determine whether an event of interest occurred, based on a meta-analysis of 7 studies in which positive predictive value was only 72% for the CDI ICD-9 diagnosis code. 25 The use of laboratory data in this study may help minimize under-reporting of CDI; however, this may also lead to misclassification without the availability of diagnostic testing results. Finally, it is important to note that CDI+ patients and CDI– controls were matched on propensity score where prior hospitalization status (≤90 days prior to index date) but not hospitalization status post-index was included in the propensity score model. 
Thus, where hospitalized CDI+ patients were compared with CDI– controls in the analysis stratified by hospitalization status (ie, hospitalization status at the time of diagnosis or ≤60 days post-index), most CDI– controls were not hospitalized. This methodology could have resulted in an overestimation of CDI-excess costs and mortality in the hospitalized group. Among 18–49- and 50–64-year-olds, CDI was associated with substantially higher healthcare costs and mortality compared with matched CDI− controls. Identification and prevention of CDI among younger adults who are at increased risk for infection have the potential to significantly reduce both healthcare system and patient costs and mortality. | Study | biomedical | en | 0.999997 |
PMC11696603 | Community-acquired pneumonia (CAP) is one of the most common indications for antibiotic prescribing in the inpatient setting. 1 – 8 However, antibiotic prescribing for CAP is often inappropriate, including excess duration or excessively broad empiric coverage. 1 – 3 Antibiotic stewardship (AS) programs aim to reduce guideline-discordant antibiotic prescribing and improve outcomes for patients with CAP. 7 – 9 In 2019, the American Thoracic Society (ATS) and the Infectious Diseases Society of America (IDSA) jointly published guidelines for the management of CAP, which emphasize use of antimicrobials with activity against Pseudomonas aeruginosa or MRSA in a very limited subset of inpatients at highest risk for infection with these pathogens. 10 Adapting these guidelines into treatment protocols or pathways as part of facility-based antimicrobial stewardship programs is agnostic to a patient’s race. 10 Several assessments prior to 2019 evaluating the effect of race on processes of care for inpatients with pneumonia identified little evidence of race- or ethnicity-based differences in guideline-concordant prescribing for the management of CAP. 11 , 12 However, more recent studies suggest race or other social determinants of health (SDH) may affect receipt of antibiotics in different settings. 13 – 16 Evaluating SDH factors or race as a driver of disparities in antibiotic prescribing among inpatients is challenging but crucial for ensuring equitable and effective healthcare delivery. 11 We investigated the effect of race and ethnicity on antimicrobial agent choice and intensity among inpatients with pneumonia in a large Atlanta metropolitan healthcare system. We performed a retrospective analysis of inpatients admitted from January 1, 2019, through June 30, 2022, to four acute care hospitals of Emory Healthcare (EHC, Atlanta, GA, USA). These included Hospital A (suburban, non-profit, 582 beds, 46.1% non-Hispanic Black), Hospital B (urban, non-profit, 537 beds, 71.6% non-Hispanic Black), Hospital C (suburban, non-profit, 373 beds, 32.4% non-Hispanic Black), and Hospital D (suburban, non-profit, 152 beds, 19.7% non-Hispanic Black). Patients eligible for inclusion were adult (≥18 years of age) inpatients who received at least one antibiotic during their inpatient hospitalization and were discharged from the hospital medicine service with an International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) code for pneumonia registered at discharge (i.e., all patients who had one of the following ICD-10 codes as either a primary or secondary diagnosis were considered for inclusion: pneumonia J10.0, J11.0, J12.x–J18.x, and J69.x; mycoplasma B96.0; klebsiella B96.1; ornithosis A70.x; and legionellosis A48.1). Patients admitted for ≥1 day to the intensive care unit were excluded from this analysis to reduce the likely number of providers involved in antibiotic ordering per patient encounter. Roughly 150 hospital medicine faculty worked across the 4 hospitals, including 8 providers working nights exclusively (i.e., nocturnists) and 12 Advanced Practice Providers. Patient age, sex, race, and ethnicity; ICD-10-CM discharge codes (allowing calculation of the Elixhauser score); clinical microbiology data; and cumulative days of antimicrobial agents received for the current encounter were extracted from the EHC clinical data warehouse (CDW), which covered all four hospitals. 
Race values are propagated from historical records within the EHR; however, per institutional practice, these values can be overwritten by patient-provided race values via patient portal inputs at the time of encounters. It is unknown, however, for which encounters the values were provided by patients. Race and ethnicity were assigned to mutually exclusive groups of Hispanic, non-Hispanic Black, non-Hispanic White, Other, or Unknown based on data entered into the medical record via facility-specific intake procedures. Antibiotic use data were generated based on barcode medication administration that had been validated internally and reflect administration during the inpatient admission (exclusive of doses provided in the Emergency Department). Concordance or discordance of choice of agent between the Emergency Department and the Hospital Medicine service was not assessed. Each antibiotic administration was mapped to specific dates, and cumulative days of therapy (DOT) for each agent were calculated and summed by NHSN-defined antibiotic groupings and routes (e.g., IV/PO, with exclusion of optic/otic/topical) for each encounter. 17 We present data for two groups: broad-spectrum hospital-onset infection agents (BS-HO), which we term anti-Pseudomonas agents, and agents with activity against methicillin-resistant Staphylococcus aureus (anti-MRSA agents). The anti-Pseudomonas agents in our system consisted mostly of carbapenems, piperacillin/tazobactam, and third- and fourth-generation cephalosporins. For the primary analysis, patient encounters were categorized as either having or not having received any DOT of each antibiotic group. For the secondary analysis, we evaluated intensity of antibiotic exposure among patient encounters in which patients received at least one DOT of anti-Pseudomonas agents, expressed as the percentage of patient-days receiving the agents. Administrative and clinical data were used to create proxy measures of infection severity, comorbid conditions, and established risk factors for pneumonia with P. aeruginosa. Severe infections included encounters with ≥1 blood culture positive for any bacteria (bacteremia), sepsis, or co-infection with a urinary tract infection or skin and soft tissue infection (identified through mapped ICD-10 codes on discharge). Comorbidities were summarized by calculating the Elixhauser comorbidity score. 18 In addition, we generated a second comorbidity score limited to a previously validated subset of conditions that are highly correlated with inpatient antibiotic exposure. 19 This “antibiotic prone comorbidity score” was left as an ordinal value (0, 1–2, and >2). The latter score correlated strongly with the Elixhauser score but performed better in the prediction models and was retained in model building preferentially over the Elixhauser score. Established risk factors for P. aeruginosa included any clinical culture growing P. aeruginosa in the previous year, any ICD-10 code for cystic fibrosis, and any inpatient hospitalization in the prior 90 days. Established risk factors for MRSA pneumonia were a clinical culture growing MRSA in the previous year or inpatient hospitalization in the prior 90 days. Note that we had no data on nasal surveillance testing for MRSA to consider in the predictive models. Descriptive analysis of characteristics by race was performed at the encounter level. 
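The exposure measures described above — any receipt of an agent group during an encounter, and intensity expressed as the percentage of patient-days on the agents — can be sketched from administration-level records roughly as follows. This is a hypothetical illustration; the record layout, column names, and agent lists are assumptions, not the study's data model, and the real groupings follow NHSN definitions.

```python
# Minimal sketch (hypothetical records): aggregate antibiotic administrations into
# days of therapy (DOT) per encounter for an agent group, then derive the binary
# "any receipt" flag and the intensity measure (% of patient-days on the agents).
import pandas as pd

# One row per administered dose: encounter, agent, calendar date of administration
admins = pd.DataFrame({
    "encounter_id": [1, 1, 1, 2, 2, 3],
    "agent": ["cefepime", "cefepime", "vancomycin",
              "piperacillin/tazobactam", "piperacillin/tazobactam", "azithromycin"],
    "admin_date": pd.to_datetime(["2021-03-01", "2021-03-02", "2021-03-01",
                                  "2021-05-10", "2021-05-10", "2021-07-20"]),
})
encounters = pd.DataFrame({"encounter_id": [1, 2, 3], "los_days": [5, 3, 4]})

# Illustrative group membership (the study used NHSN-defined antibiotic groupings)
anti_pseudomonal = {"cefepime", "piperacillin/tazobactam", "meropenem"}
ap = admins[admins["agent"].isin(anti_pseudomonal)]

# DOT per agent = number of distinct calendar days the agent was given;
# group DOT = sum of per-agent DOT within the encounter
dot_per_agent = (ap.drop_duplicates(["encounter_id", "agent", "admin_date"])
                   .groupby(["encounter_id", "agent"]).size())
dot_per_encounter = (dot_per_agent.groupby("encounter_id").sum()
                                   .rename("ap_dot").reset_index())

out = encounters.merge(dot_per_encounter, on="encounter_id", how="left")
out["ap_dot"] = out["ap_dot"].fillna(0).astype(int)
out["any_receipt"] = out["ap_dot"] > 0                           # primary-analysis outcome
out["pct_patient_days"] = 100 * out["ap_dot"] / out["los_days"]  # secondary intensity measure
print(out)
```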
Using univariate generalized estimating equation (GEE) logistic models to account for patients being hospitalized multiple times during the study period, we estimated the unadjusted risk of receiving anti-Pseudomonas agents or anti-MRSA agents (separate models) for each demographic and clinical characteristic. Multivariable GEE logistic regression models were then built, guided by backward selection using the “stepCriterion” function with the “qic” criterion to facilitate model selection among variables of interest (those with P < .10 on univariate analysis). The analysis of race and ethnicity focused on mutually exclusive groups: Black non-Hispanic, White non-Hispanic, and Hispanic or Latino, with the other races grouped into an “Other” category. Tests for interactions between race and other variables identified significant interactions between age and race. R version 4.3.1 was used for all analyses. The secondary analysis was limited to patients receiving at least one DOT of anti-Pseudomonas agents or anti-MRSA agents (two separate models). Poisson regression modeling was used with length of stay (LOS) as an offset to evaluate the impact of race on intensity of antibiotic use, with calculation of an incidence rate ratio for each racial group. These analyses were limited to encounters with an LOS between 2 and 14 days to minimize the impact of overly complicated, prolonged, or exceptionally short encounters. Eligible covariates were chosen as described above. This study was reviewed and approved by the Emory IRB by expedited process under 45 CFR 46.110 and 21 CFR 56.110 because it poses minimal risk and fits expedited review category F as set forth in the Federal Register. Eligible patients discharged from the Hospital Medicine service contributed 6,700 encounters with ICD-10 codes for pneumonia and complete demographic data; most patients were non-Hispanic White or non-Hispanic Black (50% and 42%, respectively), with the remaining patients Hispanic (3%) or of other race (5%) (Table 1). When stratified by race and ethnicity, the percentages of the pneumonia encounters with severe infections or co-infections were remarkably similar between groups. Certain comorbid conditions such as renal failure and cardiovascular disease, as well as the cumulative number of comorbid conditions (sum of antibiotic-prone comorbid conditions), varied slightly by race (Table 1). Factors that potentially influence antibiotic choice were relatively uncommon and varied little by race; these included co-infections (urinary tract infections, 16%; skin and soft tissue infections, 2.6%), recent isolation of P. aeruginosa from a clinical culture (2.5%), having cystic fibrosis (3.9%), and hospitalization in the prior 90 days (13%). Table 1. 
Characteristics of Inpatient Encounters of Pneumonia Hospitalizations and Discharged by Hospital Medicine Service, by Race & Ethnicity, Emory Healthcare, January 1, 2019, to June 30, 2022 Characteristic, n (%) Overall (n = 6,700) Hispanic (n = 183) Non-Hispanic Black (n = 2,840) Non-Hispanic White (n = 3,340) Other (n = 337) Age (median years) [IQR] 72.0 [58.0, 83.0] 66.0 [48.5, 80.0] 64.0 [49.0, 75.0] 77.0 [67.0, 86.0] 73.0 [62.0, 83.0] Age >40 years 5,360 (80%) 140 (77%) 2,244 (79%) 2,706 (81%) 270 (81%) Age ≤40 years 1,340 (20%) 43 (23%) 596 (21%) 634 (19%) 67 (19%) Sex Female 3,492 (52%) 95 (52%) 1,583 (56%) 1,674 (50%) 140 (42%) Male 3,208 (48%) 88 (48%) 1,257 (44%) 1,666 (50%) 197 (58%) Infection Severity Bacteremia 781 (12%) 23 (13%) 330 (12%) 394 (12%) 34 (10%) Sepsis 2,066 (31%) 56 (31%) 852 (30%) 1,042 (31%) 116 (34%) Urinary tract coinfection 1,064 (16%) 30 (16%) 399 (14%) 588 (18%) 47 (14%) Skin/soft tissue coinfection 171 (2.6%) 3 (1.6%) 55 (1.9%) 109 (3.3%) 4 (1.2%) Underlying illness (UI) Valvular disease 1,363 (20%) 37 (20%) 373 (13%) 873 (26%) 80 (24%) Peripheral vascular 633 (9.4%) 19 (10%) 242 (8.5%) 347 (10%) 25 (7.4%) Paralysis 171 (2.6%) 1 (0.5%) 88 (3.1%) 74 (2.2%) 8 (2.4%) Chronic pulmonary 2,291 (34%) 56 (31%) 948 (33%) 1,202 (36%) 85 (25%) Diabetes complicated 1,249 (19%) 40 (22%) 684 (24%) 443 (13%) 82 (24%) Renal failure 1,733 (26%) 36 (20%) 930 (33%) 688 (21%) 79 (23%) Liver disease 621 (9.3%) 23 (13%) 244 (8.6%) 319 (9.6%) 35 (10%) Sum Antibiotic Prone UI None 1,487 (22%) 54 (30%) 539 (19%) 800 (24%) 94 (28%) 1–2 3,937 (59%) 96 (52%) 1,696 (60%) 1,959 (59%) 186 (55%) >2 1,276 (19%) 33 (18%) 605 (21%) 581 (17%) 57 (17%) P. aeruginosa risks P. aeruginosa prior year 167 (2.5%) 7 (3.8%) 51 (1.8%) 105 (3.1%) 4 (1.2%) Cystic fibrosis 262 (3.9%) 14 (7.7%) 99 (3.5%) 141 (4.2%) 8 (2.4%) Recent hospitalization 854 (13%) 24 (13%) 364 (13%) 433 (13%) 33 (9.8%) Insurance Type Private 1,490 (22%) 51 (28%) 721 (25%) 664 (20%) 54 (16%) Medicaid 936 (14%) 45 (25%) 603 (21%) 206 (6.2%) 82 (24%) Medicare 4,136 (62%) 80 (44%) 1,450 (51%) 2,415 (72%) 191 (57%) Hospital A 2,044 (31%) 59 (32%) 1,014 (36%) 897 (27%) 74 (22%) B 1,573 (23%) 17 (9.3%) 1,295 (46%) 237 (7.1%) 24 (7.1%) C 1,358 (20%) 51 (28%) 182 (6.4%) 958 (29%) 167 (50%) D 1,725 (26%) 56 (31%) 349 (12%) 1,248 (37%) 72 (21%) Antibiotic Agents Any anti-Pseudomonas 3,126 (47%) 80 (44%) 1,244 (44%) 1,642 (49%) 160 (47%) Any anti MRSA 2,779 (41%) 72 (39%) 1,194 (42%) 1,380 (41%) 133 (39%) In unadjusted GEE modeling, several measures of severity, co-infection, underlying illness, and P. aeruginosa risk factors were related to receipt of anti- Pseudomonas (Table S1 ) or anti-MRSA (Table S2 ) agents. Significant factors with the highest odds ratios associated with receipt in both comparisons included recent hospitalization (OR 1.4 for anti- Pseudomonas agents, 1.4 for anti-MRSA), sepsis (OR 2.7 for anti- Pseudomonas agents, 3.2 for anti-MRSA), Medicaid or Medicare insurance (OR 1.4 for anti- Pseudomonas agents, 1.5 for anti-MRSA) (Tables S1 and S2 ). Noteworthy was the observation that odds of receipt differed between the hospitals and inconsistently by class of agents (Table S1 and S2 ). In an adjusted GEE model, accounting for facility, and insurance status, patients with skin and soft tissue co-infections, diagnosis of sepsis, more antibiotic prone comorbid conditions, and hospitalization in the prior 90 days all were independent predictors of receiving either class of agents (Table 2 ). Positive P. 
aeruginosa clinical culture in prior year (aOR 7.18; 95% CI 4.28, 12.0) and diagnosis of cystic fibrosis (aOR 1.54; 95% CI 1.12, 2.12) were additional independent predictors of anti- Pseudomonas agent use but not anti-MRSA use. Retaining these independent predictors, race was a significant predictor of receipt of anti- Pseudomonas agents. Among younger (age ≤40 years) patients with pneumonia, non-Hispanic Black patients (aOR 0.45; 95% CI 0.29, 0.70) and Hispanic patients (aOR 0.38; 95% CI 0.15, 0.93) and had a significantly lower odds of receiving anti- Pseudomonas agents compared to other patients (Table 2 ). This effect was not present among patients over 40 years. Regarding anti-MRSA agents, after adjusting for facility, insurance status, co-infections, severity, comorbidities and recent hospitalization, race was not predictive of receipt of anti-MRSA agents (non-Hispanic Black patients aOR 0.96; 95% CI 0.84, 1.10, and Hispanic patients aOR 0.97; 95% CI 0.70, 1.36) (Table 2 ). Table 2. Multivariate model estimating independent effect of patient or illness characteristic on receipt of any day of therapy of the anti-pseudomonas agents, or anti-MRSA agents during 6700 inpatient hospitalizations with Pneumonia among 5820 patients Anti- P. aeruginosa agents Anti-MRSA Agents Characteristics aOR 95% CI aOR 95% CI Age <=40 Non-Hispanic White Ref Hispanic 0.38 0.15, 0.93 Non-Hispanic Black 0.45 0.29, 0.70 Other 0.77 0.30, 1.97 Age >40 Non-Hispanic White Ref Hispanic 0.91 0.63, 1.30 Non-Hispanic Black 0.98 0.85, 1.13 Other 1.13 0.88, 1.46 All Ages (>18 years) Non-Hispanic White Hispanic 0.97 0.70, 1.36 Non-Hispanic Black 0.96 0.84, 1.10 Other 0.98 0.76, 1.26 Female 0.88 0.79, 0.99 0.86 0.77, 0.96 Private Insurance * 0.71 0.62, 0.81 0.71 0.63, 0.82 Clinical Indications or Co-infections Sepsis 2.78 2.48, 3.12 3.29 2.94, 3.68 Urinary tract coinfection 1.68 1.17, 2.41 5.21 3.48, 7.81 Skin/soft tissue coinfection 0.96 0.83, 1.12 No. of chronic conditions associated with antibiotic use ** None – – 1–2 1.25 1.09, 1.43 1.11 0.97, 1.28 >2 1.61 1.34, 1.93 1.29 1.05, 1.59 Chronic paralysis 2.11 1.47, 3.02 1.46 1.05, 2.04 Diabetes with complications 1.23 1.05, 1.44 Patient history influencing choice + S. pneumoniae culture this encounter 0.58 0.27, 1.24 + P. aeruginosa in past year 7.18 4.28, 12.0 1.29 0.92, 1.82 Cystic cibrosis ICD-10 code 1.54 1.12, 2.12 Hospitalization in past 90 Days 3.29 2.85, 3.79 2.77 2.41, 3.19 * Adjusting for Facility, baseline patient was government-insurance (Medicaid, Medicare) ** Chronic conditions included: Diabetes, Valvular Disease, Paralysis, Chronic Pulmonary Disease, Peripheral Vascular Disease, Renal Failure, Liver Disease, HIV/AIDS, Lymphoma, Rheumatoid arthritis, obesity, alcohol abuse, and drug abuse. This difference in antibiotic use by race persisted when evaluating the intensity of antibiotic exposure among the subset of patients receiving some anti- Pseudomonas agents. In the Poisson GEE model adjusting for severity, comorbid conditions, and traditional risk factors for P. aeruginosa pneumonia, the incidence rate ratio for DOT with anti- Pseudomonas agents was 0.91 (95% CI 0.87, 0.96) for non-Hispanic Black patients compared to non-Hispanic White patients (Table S3 ). This corresponds to 9% fewer DOT with anti- Pseudomonas agents during an encounter of similar duration, severity, and comorbid illness. Neither race nor Ethnicity were associated with differences in intensity of anti-MRSA agents after adjusting for relevant factors (Table S3 ). 
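The adjusted analyses above follow the modeling strategy described in the methods: GEE logistic regression for receipt of an agent class (accounting for repeat hospitalizations) and Poisson regression with length of stay as an offset for intensity, yielding incidence rate ratios. A minimal sketch in Python on hypothetical data is shown below; the study itself used R, and the variable names and data here are illustrative assumptions only.

```python
# Minimal sketch (hypothetical data): GEE logistic model for receipt of an
# anti-pseudomonal agent clustered on patient, and a GEE Poisson model with
# log(length of stay) as an offset to estimate incidence rate ratios for DOT.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "patient_id": rng.integers(0, 1500, n),     # repeat encounters share an id
    "race": rng.choice(["NH White", "NH Black", "Hispanic", "Other"], n),
    "sepsis": rng.integers(0, 2, n),
    "recent_hosp": rng.integers(0, 2, n),
    "los": rng.integers(2, 15, n),
})
df["received"] = (rng.random(n) < 0.4 + 0.2 * df.sepsis).astype(int)
df["dot"] = rng.poisson(1 + 2 * df.received)

# Binary outcome: any receipt of the agent class, exchangeable working correlation
gee_bin = smf.gee("received ~ C(race) + sepsis + recent_hosp",
                  groups="patient_id", data=df,
                  family=sm.families.Binomial(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(gee_bin.params))      # adjusted odds ratios

# Intensity among recipients: DOT with log(LOS) offset -> incidence rate ratios
users = df[df.received == 1]
gee_pois = smf.gee("dot ~ C(race) + sepsis + recent_hosp",
                   groups="patient_id", data=users,
                   family=sm.families.Poisson(),
                   offset=np.log(users["los"])).fit()
print(np.exp(gee_pois.params))     # incidence rate ratios per covariate
```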
We identified a disparity in the choice of antibiotic agents used among inpatients with pneumonia receiving care from the Hospital Medicine service, as well as slight differences in the intensity of treatment, defined as the proportion of inpatient days receiving these agents; Black patients were less likely to receive any, and received less intense courses of, agents with activity against P. aeruginosa. However, this disparity was mostly limited to younger patients. In contrast, there was no disparity between patients’ race and receipt of agents with activity against MRSA. The inconsistency of the differences in choice or intensity across patients’ race suggests the etiology of the disparities observed may be nuanced and subtle. Our data add some supporting evidence that, among younger non-Hispanic Black patients with pneumonia, choice of antibiotic may be influenced by the patient’s race. However, by no means is our study definitive. These findings complement observations in the pediatric ambulatory setting that receipt of any antibiotics for respiratory infections occurs less often among Black patients, 20 and reports of poorer processes of care among Black inpatients being evaluated for pneumonia. 21 In our study, the magnitude of the associations with antibiotic choice was very large for key categories of case-mix (e.g., severity, co-infection, risk factors for P. aeruginosa), with adjusted odds ratios often >2.0, and the residual association of choice with race was limited to patients under 40, a small subset of our patients (roughly 20%). This observation suggests the generalizability of these findings may be limited and requires evaluation in a different patient population to better understand its implications. In addition, roughly half of all patients included in this study received agents with anti-Pseudomonas activity, such exposure being fairly common, with a small difference in crude exposure between race groups. In fact, the crude difference in frequency of receipt between non-Hispanic Black patients and non-Hispanic White patients was small (a 5 percentage point difference). A strength of our study was the ability to account for the major drivers of antibiotic class choice, including disease severity, co-infection, and traditional risk factors for P. aeruginosa, by evaluating proxy metrics that directly map to clinical conditions supporting empiric use of agents with anti-Pseudomonas activity. 10 The frequency of these indicators among the study patient encounters was low (roughly 15%) despite frequent use of these agents (roughly 47%). This suggests other drivers of agent choice are likely present and unmeasured. These unmeasured factors may relate to the inclusion of some hospital-onset pneumonia or to physician beliefs and attitudes, and they deserve ongoing stewardship efforts. 6 , 13 The frequency of use of these agents may also be explained by the considerable fraction of patients who had ICD-10 codes for more than one infection (i.e., UTI and SSTI), and our evaluation did not account for timing of antibiotic therapy relative to each diagnosis. At the same time, we should recognize that similar findings were not apparent for exposure to anti-MRSA therapy. The drivers for choosing anti-MRSA therapy mirror in many ways those for anti-Pseudomonas therapy among pneumonia inpatients. 10 However, at these four hospitals, a program to roll out nasal PCR testing for MRSA detection to guide empiric therapy had begun. 
Perhaps such point-of-care testing reduced or eliminated race-specific biases in prescriber behavior of the kind we may have uncovered regarding use of agents with activity against P. aeruginosa. Although the racial disparities in exposure to specific classes of antibiotics were subtle, we believe the findings are not due to chance. However, the driver of these differences is uncertain; although we could not evaluate the impact of COVID-19 on these findings, the differences observed could be related to differences in managing suspected COVID-19 disease among the younger patient population. Several limitations are worth noting. First, the data utilized to categorize race and ethnicity were extracted from documentation in the electronic medical record (EMR) through facility-specific intake procedures. Prior research at other medical centers has shown discordance between race documented in the EMR and patient report. 22 , 23 With more complete or accurate data on race and ethnicity, it is possible that our findings would have been different. Also, our infection syndromes were defined based on billing codes (i.e., ICD-10 codes), which lack specificity and sensitivity for definitive clinical infections. Importantly, our cohort likely included roughly 10%–15% of patients inappropriately captured as having pneumonia by ICD-10 codes. 24 Finally, our inclusion criteria were agnostic to the timing of pneumonia, although the exclusion of ICU patients should have minimized the percentage of included patients with hospital-onset pneumonia; we also believe the proportion of disease classified as hospital onset would have been comparable across racial groups. Overall, our results emphasize the importance of host factors, severity of illness, and previous clinical cultures with P. aeruginosa in influencing antibiotic choice. They highlight the significance of individual health conditions and healthcare experiences, while suggesting that patients’ race and ethnicity may have some effect on the classes of antibiotic chosen by the prescriber. These findings are subtle and not consistent across antibiotic classes or patients’ race, ethnicity, or age group. They do, however, demonstrate the need for an improved ability to acknowledge and mitigate any inherent biases when prescribing antibiotics, even in clinical situations where prescribers may believe all decisions are driven solely by clinical severity and indications, such as inpatients with pneumonia. | Study | biomedical | en | 0.999997 |
PMC11696626 | Stroke remains a major global cause of death and diminished quality of life, particularly when associated with atrial fibrillation (AF), a significant public health challenge [ , , , ]. AF, the most prevalent sustained cardiac arrhythmia, affects millions worldwide and increases the risk of stroke fivefold [ , , ]. Emerging evidence indicates that the risk of ischemic stroke escalates in elderly patients with AF, rising from 4.6% at ages 50–59 years to 20.2% at ages 80–89 years, which calls for urgent attention. Numerous studies have indicated that ischemic stroke related to AF is linked to a notable risk of mortality, longer hospitalizations, and poorer functional outcomes [ , , , ]. Over the past two decades, we have witnessed significant strides in preventing AF-related strokes through anticoagulation use and management of risk factors. In 2019, the American College of Cardiology (ACC)/American Heart Association (AHA) updated guidelines to recommend oral anticoagulation for individuals with AF and CHA2DS2-VASc scores ≥2 in men and ≥3 in women. Variations in geographic location within the United States significantly influence the outcomes of stroke and AF. Regions such as the southeastern U.S., also known as the “Stroke Belt,” display a higher incidence of stroke and related mortality. This inequality is commonly attributed to variations in healthcare accessibility, socioeconomic factors, and the prevalence of comorbid conditions like hypertension and diabetes. Moreover, gender and racial disparities significantly influence the risk and outcomes of stroke in AF patients. While men generally face a higher risk of developing AF, recent studies have revealed a 1.3-fold increased risk of stroke in women with AF, even among anticoagulated patients, with women experiencing a higher annual risk rate of 2.4% compared to men [ , , ]. Furthermore, racial disparities are also apparent. Previous studies have indicated a decline in stroke incidence in the white population. In contrast, ischemic stroke incidence in the black population remains unchanged even after stratification by race and stroke subtype. These compelling findings underline the critical need to prioritize a comprehensive understanding of the relationship between stroke and AF, particularly considering the disparities in mortality rates associated with these conditions. Strokes associated with AF should be recognized as a distinct clinical entity that warrants dedicated research, separate from the broader category of ischemic and hemorrhagic strokes. The pathophysiology, risk factors, and recurrence patterns of AF-related strokes are significantly different from those of other stroke subtypes, indicating specific therapeutic requirements and implications. The embolic strokes associated with AF often necessitate customized anticoagulation strategies that are not universally applicable to all ischemic stroke patients. By investigating AF-related strokes as a separate category, we may enhance the precision of therapeutic interventions and ultimately improve patient outcomes. This study utilizes data from the CDC WONDER database from 1999 to 2020 to address these issues and better understand the mortality trends associated with stroke in AF patients. We aim to identify and characterize demographic trends and disparities in mortality rates among AF patients aged 25 and older. 
The study sourced data from the Centers for Disease Control and Prevention Wide-ranging Online Data for Epidemiologic Research (CDC WONDER) database, a highly reliable and comprehensive repository of death certificates from all 50 states and the District of Columbia from 1999 to 2020. This study utilized de-identified, publicly available datasets issued by the government and voluntarily adhered to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for reporting. Due to the nature of the data, institutional review board (IRB) approval was not required. The study included adults aged 25 years or older diagnosed with atrial fibrillation between 1999 and 2020. We examined death records from the Multiple Causes of Death Public Use registry to identify stroke-related mortality in these patients. Stroke was defined as any type of stroke, including ischemic, hemorrhagic, or both. Stroke-related mortality was counted whether stroke was the primary cause of death or a contributing factor. The cohort was identified using International Classification of Diseases (ICD) codes as follows: I48 (atrial fibrillation) and I60–I69 (stroke). Demographic data, including age, gender, and race/ethnicity, were extracted, along with information on population size, urban-rural stratification, regional delineation, state-specific classification, and year and location of death. The location of death was categorized into medical facilities (outpatient, emergency room, inpatient, death on arrival, or status unknown), home, hospice, and nursing home/long-term care facility. Race/ethnicity was classified into Hispanic and non-Hispanic White, African American, and Asian or Pacific Islander groups. Population assessment was conducted using the National Center for Health Statistics Urban-Rural Classification Scheme to define urban (large central metropolitan, large fringe metropolitan, medium metropolitan, and small metropolitan) and nonmetropolitan (micropolitan and noncore) counties according to the 2013 US census classification for reporting the place of death. Additionally, based on the 2010 US Census Bureau definitions, regions were categorized into Northeast, Midwest, South, and West. Crude and age-adjusted mortality rates (AAMRs) per 100,000 individuals were calculated to investigate nationwide mortality trends. This involved determining the total number of fatalities attributed to stroke in the population with AF for each year. As per standard practice, AAMRs were calculated by standardizing stroke-related death rates to the 2000 US standard population and are reported with 95% confidence intervals (CIs). The Joinpoint Regression Program (version 4.9.0.0, National Cancer Institute, Bethesda, MD, USA) was used to determine the annual percent change (APC) in AAMR with 95% CIs. AAMRs allow equitable comparison of mortality rates across different populations and time periods; log-linear regression models were used to discern mortality patterns and identify significant changes over time. Between 1999 and 2020, stroke in AF patients accounted for a total of 331,106 deaths among adults aged 25 years and above in the United States (Supplementary Table 1). These fatalities were distributed across various settings, with most occurring in medical facilities (43.2%), followed by 31.8% in nursing homes/long-term care facilities, 15.6% at the decedents’ homes, 5.6% in hospice facilities, and 3.7% at other locations (Supplementary Table 2). 
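The age adjustment and trend estimation described above can be sketched as follows. This is a simplified illustration with made-up age-specific counts and placeholder standard-population weights; the actual analysis used the 2000 US standard population, CDC WONDER death counts, and the Joinpoint software for segmented APC estimation.

```python
# Minimal sketch (hypothetical data): direct age standardization of stroke-in-AF
# death rates to a standard population, and an annual percent change (APC)
# estimate from a log-linear fit of rate versus calendar year.
import numpy as np
import pandas as pd

# Hypothetical age-specific deaths and population for one calendar year
year_data = pd.DataFrame({
    "age_group": ["25-44", "45-64", "65-84", "85+"],
    "deaths":    [120, 2400, 18000, 12000],
    "population": [80_000_000, 82_000_000, 45_000_000, 6_000_000],
})
# Placeholder standard-population weights (the study used the 2000 US standard)
std_weights = np.array([0.40, 0.35, 0.22, 0.03])

age_specific_rate = year_data["deaths"] / year_data["population"] * 100_000
aamr = float(np.sum(age_specific_rate * std_weights))  # age-adjusted deaths per 100,000
print(f"AAMR: {aamr:.1f} per 100,000")

# APC from a log-linear model: ln(rate) = a + b*year, APC = (exp(b) - 1) * 100
years = np.arange(1999, 2021)
rates = np.array([7.4 * (1 - 0.01) ** (y - 1999) for y in years])  # illustrative series
b, a = np.polyfit(years, np.log(rates), 1)
apc = (np.exp(b) - 1) * 100
print(f"APC: {apc:.2f}% per year")
```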
The central illustration summarizing the study's characteristics and findings is presented in Fig. 5. The age-adjusted mortality rate (AAMR) for stroke-related deaths among adults with AF showed a significant decrease from 7.4 in 1999 to 6.4 in 2020, with an Average Annual Percentage Change (AAPC) of −1.02 (95 % Confidence Interval [CI]: −1.55 to −0.53) (p-value = 0.004). Notably, there was a significant decline in AAMR from 2015 to 2018 (APC: −7.22; 95 % CI: −8.86 to −4.99), but no significant change was noted from 1999 to 2015 (APC: −0.27; 95 % CI: −0.63 to 0.10) (p-value = 0.11). Lastly, there was a striking rise in AAMR from 2018 to 2020 (APC: 4.98; 95 % CI: 1.66 to 7.99) (Supplementary Table 3). Throughout the study, adult women exhibited slightly higher AAMRs than adult men (overall AAMR for men: 6.6, 95 % CI: 6.6–6.6; for women: 7.1, 95 % CI: 7.0–7.1). The AAMR for adult men showed variable trends, with a decline from 2015 to 2018; a similar decline was noted in women (APC: −8.10; 95 % CI: −9.86 to −4.80, p-value = 0.01). Men showed an increasing mortality trend from 2018 to 2020, whereas women showed no significant change over the same period (APC: 3.04; 95 % CI: −2.21 to 7.10, p-value = 0.19). Fig. 1 Overall and sex-stratified stroke-related age-adjusted mortality rates per 100,000 in adults with atrial fibrillation in the United States, 1999 to 2020. Significant variability in mortality rates was found among different racial/ethnic groups, with the highest mortality occurring in White patients (289,277 deaths; 87.4 %), followed by Black patients (20,835 deaths; 6.3 %), Hispanic patients (12,333 deaths; 3.7 %), Asian or Pacific Islander patients, and the lowest number in American Indian or Alaska Native patients (930 deaths; 0.3 %). AAMRs were highest among Whites, followed by Black or African Americans, American Indian or Alaska Natives, Asian or Pacific Islanders, and Hispanic or Latinos (overall AAMR: White: 7.4, 95 % CI: 7.4–7.4; Black or African American: 5.4, 95 % CI: 5.3–5.4; American Indian or Alaska Native: 4.6, 95 % CI: 4.3–4.9; Asian or Pacific Islander: 4.5, 95 % CI: 4.4–4.6; Hispanic or Latino: 4.1, 95 % CI: 4.1–4.2). The AAMR of the Asian and White populations exhibited a decreasing trend from 1999 to 2020. Specifically, the AAPC for Asians was −1.60 (95 % CI: −2.50 to −0.24, p-value = 0.02), and for Whites, it was −0.82 (95 % CI: −1.35 to −0.33, p-value = 0.01). However, no significant changes were observed in the AAMR of the Hispanic, American Indian, and Black populations during the same period. The AAPC for Hispanics was −0.67 (95 % CI: −1.33 to 0.52, p-value = 0.39); for American Indian or Alaska Natives, it was −0.46 (95 % CI: −1.55 to 1.14, p-value = 0.76); and for Blacks, it was 0.59 (95 % CI: −0.11 to 0.98, p-value = 0.12). Fig. 2 Stroke-related age-adjusted mortality rates per 100,000 stratified by race in adults with atrial fibrillation in the United States, 1999 to 2020. Variations in AAMRs were observed among different states, with AAMRs ranging from as low as 4.3 (95 % CI: 4.1–4.6) in Nevada up to 11.9 (95 % CI: 11.3–12.6) in Vermont. States falling within the top 90th percentile included Alaska, Oregon, Rhode Island, Vermont, Washington, and West Virginia, which had approximately 1.5 times higher AAMRs compared to states in the lower 10th percentile, which included Arizona, Florida, Georgia, Kansas, Louisiana, Nevada, New Mexico, and New York (Supplementary Table 6).
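The APC and AAPC estimates quoted above come from Joinpoint's log-linear segment fits. Conceptually, each segment is fitted as ln(rate) = a + b·year, the APC is 100·(exp(b) − 1), and the AAPC is a duration-weighted average of the segment slopes transformed back to a percentage. The sketch below shows only that arithmetic with placeholder slopes; it does not reproduce the program's model selection or the exact values reported here.

```python
import math

def apc(slope):
    # slope b from ln(rate) = a + b*year fitted within one joinpoint segment
    return 100.0 * (math.exp(slope) - 1.0)

def aapc(segment_slopes, segment_lengths):
    # Duration-weighted average of segment slopes, back-transformed to %.
    total = sum(segment_lengths)
    weighted = sum(b * w for b, w in zip(segment_slopes, segment_lengths))
    return 100.0 * (math.exp(weighted / total) - 1.0)

# Placeholder slopes for three segments (e.g. 1999-2015, 2015-2018, 2018-2020)
slopes = [-0.003, -0.075, 0.049]
lengths = [16, 3, 2]
print([round(apc(b), 2) for b in slopes], round(aapc(slopes, lengths), 2))
```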
On average, over the study period, the highest mortality was observed in the Western region (AAMR: 7.9; 95 % CI: 7.8 to 8.0), followed by the Midwestern (AAMR: 7.0; 95 % CI: 6.9 to 7.0), Northeastern (AAMR: 6.6; 95 % CI: 6.5 to 6.6), and Southern regions (AAMR: 6.6; 95 % CI: 6.5 to 6.6). Fig. 3 Stroke-related age-adjusted mortality rates per 100,000 stratified by region in adults (≥25 years) with atrial fibrillation in the United States, 1999 to 2020. Over the duration of the study, nonmetropolitan areas consistently displayed slightly higher age-adjusted mortality rates (AAMRs) than metropolitan areas, with overall AAMRs of 7.9 (95 % CI: 7.8 to 8.0) and 6.8 (95 % CI: 6.7 to 6.8), respectively. The AAMR of metropolitan areas declined from 1999 to 2020 [Metropolitan: Annual Percent Change (APC): −0.89 (95 % CI: −1.17 to −0.69)]. Conversely, nonmetropolitan regions did not exhibit a statistically significant trend during the same period [Nonmetropolitan: APC: −0.09 (95 % CI: −0.31 to 0.11) (p-value = 0.35)]. Fig. 4 Stroke-related age-adjusted mortality rates per 100,000 stratified by urbanization in adults (≥25 years) with atrial fibrillation in the United States, 1999 to 2020. Fig. 5 Central Illustration: Trends in demographics and disparities in stroke-related mortality in atrial fibrillation patients in the United States, 1999 to 2020. In this comprehensive analysis of 1999–2020 mortality data from the Centers for Disease Control and Prevention in the United States, we have uncovered several crucial findings regarding the impact of AF on stroke mortality: 1. The age-adjusted mortality rate for stroke-related deaths among adults with AF decreased from 1999 to 2020, with an overall annual reduction of 1.02 %. The decline was significant between 2015 and 2018 but stable from 1999 to 2015. From 2018 to 2020, there was a notable increase in mortality rates. A higher age-adjusted mortality rate was observed in adult women compared to men, with women demonstrating a decreasing trend from 1999 to 2020. Conversely, no significant difference in mortality rate was noted in men during the same period. 2. From 1999 to 2020, both Asian and White populations experienced a decrease in mortality rates, while no significant variance was observed in the mortality rates of Hispanic, American Indian, and Black populations during the same period. 3. Our analysis indicates elevated mortality rates in the Western states, followed by the Midwestern states. Nonmetropolitan areas demonstrated notably higher mortality rates than metropolitan areas. We observed declining mortality trends in metropolitan areas, whereas nonmetropolitan regions did not exhibit statistically significant trends. Our findings demonstrate a reduction in the age-adjusted mortality rate (AAMR) attributed to stroke in patients with AF from 1999 to 2018, along with a significant rise in mortality trends from 2018 to 2020. This decline is consistent with prior investigations, highlighting considerable progress and developments in medical interventions aimed at preventing strokes in AF patients, mainly through anticoagulants [ , , ]. These advancements have had a notable positive impact on patient outcomes and offer a promising prospect for the future.
However, from 2018 to 2020, we observed a reversal in this mortality trend, which was also statistically significant; this recent uptick can be attributed to various factors, including the impact of the COVID-19 pandemic and the growing prevalence of comorbidities such as obesity, diabetes, and chronic kidney disease in adults, all directly linked to AF and stroke mortality [ , , , ]. Numerous studies have examined sex differences in stroke, revealing inconsistent findings concerning stroke-related mortality in women. Some research has indicated a higher incidence of stroke and venous thromboembolism in women, coupled with an elevated mortality rate compared to men. For instance, Wang et al. analyzed patients from the Framingham Heart Study, revealing a 1.6-fold higher risk of mortality in females compared to males. Similarly, Dagres et al., who investigated gender-related differences in adult patients with AF in Europe in the Euro Heart Study, demonstrated that women had a 1.8–1.9-fold increased risk of stroke-related mortality and a higher burden of comorbidities compared to men. Another study by Friberg et al., based on a Swedish cohort, found that comorbidities, including prior myocardial infarction, vascular disease, and renal failure, predict ischemic stroke and composite thromboembolism endpoints in AF patients. Our study supported these results, indicating a higher overall mortality rate among women compared to men (age-adjusted mortality rate: 7.1 vs. 6.6 per 100,000). Women experience unique changes throughout their lifespans, such as pregnancy, hormonal changes, and exogenous hormone use, which may impact the vascular system. Also, women have increased odds of receiving nonoptimal anticoagulation, as demonstrated by Eckman and colleagues. Importantly, our research identified a substantial reduction in mortality rates for both men and women from 2015 to 2018, likely driven by the growing adoption of direct oral anticoagulants (DOACs). These medications have been pivotal in decreasing stroke risk among patients with AF, including women. Nevertheless, despite these advancements, women with AF face a notably higher risk of stroke compared to men, a disparity highlighted in recent studies. This underscores the urgent need for tailored management strategies to tackle these gender-specific challenges. It is crucial to consider these factors when assessing risk and developing prevention plans for women, and further research is essential to uncover the root causes of the elevated stroke risk in women, particularly post-menopause, and to investigate targeted interventions that could enhance vascular function and prevent strokes. Our research has uncovered significant disparities in stroke mortality across various racial and ethnic groups, highlighting the pressing need to confront persistent health inequities. The Caucasian/White population exhibited the highest AAMR at 7.4 per 100,000, followed by individuals of Black ethnicity at 5.4 per 100,000. The AAMR displayed a declining trend in the Asian and Caucasian populations from 1999 to 2020.
However, no significant variances in AAMR were observed among Hispanic, Black, and American Indian populations. Prior research suggests that individuals of Black or African American descent experience a rising trend in stroke-related mortality, possibly due to a higher prevalence of risk factors such as diabetes, hypertension, and renal diseases, which increase the risk of stroke . These differences in stroke incidence are the primary drivers of the disparities in stroke mortality rates. Significant variations in mortality rates were observed geographically, with the Western region recording the highest AAMR (7.9) and Nevada the lowest (4.3). Previous studies have highlighted that southeastern states exhibit 2–4 times higher risks than others and have been identified as a ‘stroke belt’ for several decades . Furthermore, nonmetropolitan areas demonstrated notably higher mortality rates than metropolitan areas. We observed declining mortality trends in metropolitan areas, whereas nonmetropolitan regions did not exhibit statistically significant trends. These geographical and regional disparities underscore the importance of localized factors and access to healthcare in influencing stroke mortality rates. This emphasizes the need for interventions and resource allocation tailored to specific regions to ensure the most effective and targeted approach to reducing disparities. While progress has been made in reducing the mortality trends for AF-associated stroke, persistent disparities and recent fluctuations underscore the need for continued research and targeted public health strategies. It is crucial to address gender, racial, and geographic disparities to improve outcomes further and ensure equitable healthcare for all populations. The study has limitations, mainly due to its retrospective design. Relying on death certificates in the CDC WONDER database introduces the potential for inaccurate diagnosis, leading to misclassification bias. Furthermore, the absence of laboratory values, medication lists, and clinical data about general health conditions, comorbidities, and treatment limits a comprehensive understanding of mortality patterns. Nevertheless, compared to current literature, this study includes adults aged 25 and older from all racial backgrounds, providing a thorough analysis of stroke-related mortality trends across a diverse population. Covering the period from 1999 to 2020, our study offers a long-term perspective that enhances our understanding of the evolution of stroke mortality over more than two decades. The analysis reveals notable demographic and geographic disparities in mortality rates linked to stroke and AF. While mortality rates have generally declined, recent data indicate a heightened necessity for extended monitoring to ascertain whether this trend will continue or decrease. Specific interventions and equitable healthcare access must be deployed to mitigate these disparities and enhance outcomes for this demographic. Muhammad Abdullah Naveed: Writing – original draft, Methodology, Formal analysis. Sivaram Neppala: Writing – review & editing, Writing – original draft, Supervision, Investigation. Himaja Dutt Chigurupati: Writing – original draft, Methodology. Muhammad Omer Rehan: Visualization, Validation. Ahila Ali: Writing – original draft, Formal analysis. Hamza Naveed: Resources, Methodology. Bazil Azeem: Writing – original draft, Formal analysis, Data curation. Rabia Iqbal: Writing – original draft, Data curation. 
Manahil Mubeen: Formal analysis, Writing – original draft. Mashood Ahmed: Visualization, Validation, Data curation. Ayman R. Fath: Writing – review & editing, Supervision. Timir Paul: Writing – review & editing, Supervision, Project administration. Bilal Munir: Writing – review & editing, Supervision, Resources, Project administration. Not Applicable. The authors received no extramural funding for the study. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article. | Review | biomedical | en | 0.999996 |
PMC11696633 | Problematic alcohol use is one of the leading risk factors for population health worldwide and it often co-occurs with anxiety and depression . The lifetime prevalence of Alcohol Use Disorder (AUD) is 8.6 %, with 27.6 % of individuals with AUD also experiencing an anxiety disorder and 19.5 % a depressive disorder. Conversely, around one-third of individuals experiencing either anxiety or depression have struggled with AUD . Co-occurring AUD and mental illness are coupled with increased symptom severity, somatic complications, and impaired social functioning . Evidence suggests a bidirectional linkage between problematic alcohol use and anxiety/depression as alcohol use increases the risk of depression and anxiety through neurophysiological and metabolic changes and individuals with existing anxiety or depression may use alcohol as self-medication . Few individuals with problematic alcohol use engage in alcohol treatment . In Denmark, treatment is mainly organized in outpatient clinics, posing several barriers: Geographical distance to a clinic, clinic opening hours, and attendance for face-to-face therapy, all of which are further challenged by the massive prejudice and stigmatization surrounding alcohol problems and treatment . If people also struggle with anxiety/depression, these barriers may be further complicated and relevant. Also, co-occurring anxiety and depression have been associated with poor treatment outcomes . The core component in evidence-based alcohol treatment is psychosocial therapy founded on a combination of motivational interviewing and cognitive behavioural therapy (face-to-face alcohol therapy) . E-alcohol therapy is psychosocial therapy delivered via video conference. It mirrors face-to-face alcohol therapy by allowing client and therapist to see and hear each other in real-time over the internet. E-alcohol therapy has the potential to overcome geographical and psychological barriers by offering a treatment option that does not require physical attendance at a clinic and provides privacy and anonymity . These conditions could make e-alcohol therapy particularly beneficial for individuals with concurrent problematic alcohol use and symptoms of anxiety or/and depression. In a randomized controlled trial, we compared a proactive e-alcohol therapy intervention with face-to-face alcohol therapy (standard care), targeting individuals with problematic alcohol use who were not engaging in alcohol treatment. The therapy was proactive as the therapist initiated the first therapy session. The trial found that proactive e-alcohol therapy, compared to standard care, was more effective in increasing treatment initiation and compliance, while being equally effective in reducing alcohol intake . Besides that study, the literature on e-alcohol therapy is scarce. A study by de Beurs et al. investigated alcohol treatment via video conference during COVID-19 social distancing and found it non-inferior to in-person treatment in clinical effectiveness. Furthermore, a feasibility study on group sessions conducted via video conference — with the therapist at a remote site — identified high levels of client satisfaction, good session attendance, and low attrition . 
Based on post hoc analyses of data from the randomized controlled trial of proactive e-alcohol therapy, the aim of the present investigation was twofold: First, to explore whether anxiety or/and depressive symptoms modify the effect of proactive e-alcohol therapy on treatment initiation, compliance, and alcohol intake. Second, to examine the impact of proactive e-alcohol therapy on anxiety or/and depressive symptoms compared to standard care. The present study is based on post hoc analyses from a randomized controlled trial, which tested whether proactive e-alcohol therapy improved treatment initiation, treatment compliance, and alcohol intake in comparison to standard care. Participants were individually assigned in equal ratio to receive either standard care or proactive e-alcohol therapy and were followed up at 3- and 12-months post randomization. Data were collected at the National Institute of Public Health in Denmark and pre-registered with the Danish Data Protection Agency . The trial was reviewed by the Capital Region’s Committee on Health Research Ethics in Denmark but did not require formal approval under Danish law since it did not involve invasive procedures, medical drugs, or equipment. Also, the trial was prospectively registered with ClinicalTrials.gov . A more detailed description of the trial is reported elsewhere . A total of 356 individuals with problematic alcohol use were enrolled in the trial, recruited through a project website from January 2018 to June 2020. The website referred to the treatment as counseling and emphasized the benefits of making changes in drinking habits. It also highlighted that the treatment was free and could be accessed anonymously. The trial design and its implications were described, and visitors to the website had the option to complete the Alcohol Use Disorders Identification Test (AUDIT) and receive standardized written feedback on the benefits of changing drinking habits. Information on age and sex was provided with this anonymous test. The site was promoted on the internet using ads on Google and Facebook, as well as on relevant alcohol-related websites. The study included individuals with an AUDIT score of 8 or higher, aged 18 years or older, who had access to a personal computer, smartphone, or tablet with a functional camera, audio equipment, and an internet connection. Exclusion criteria were refusal to provide information about municipality of residence, phone number, and email address. Those excluded were informed of exclusion reasons and provided with details on where to get help. Upon registration, participants received an email with links to detailed participant information and the baseline questionnaire in which informed consent was obtained. Randomization took place upon completion of the baseline questionnaire (allocation ratio 1:1) . Participants were not blinded, and data analyses in this study were conducted without blinding. If allocated to proactive e-alcohol therapy, an alcohol therapist contacted the participant within three to five weekdays to schedule the first session. The standard procedure involved online therapy through video conferencing using Skype for Business, but a pragmatic approach allowed for sessions over the phone or, in rare instances, in person. The therapeutic content of the sessions was based on motivational interviewing and cognitive behavioral therapy. The individual participant's needs determined the number, frequency, and duration of sessions. 
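As an illustration of the screening step described above, the sketch below combines the stated inclusion criteria (AUDIT score ≥ 8, age ≥ 18, and a device with camera, audio, and internet access). The assumption that the AUDIT total is the sum of ten items with a 0–40 range reflects the instrument's standard layout rather than anything specified in the trial; the function and field names are hypothetical.

```python
# Hypothetical eligibility check mirroring the inclusion criteria above:
# AUDIT total >= 8, age >= 18, and a device with camera, audio and internet.

def audit_total(item_scores):
    """item_scores: the 10 AUDIT item scores (total ranges 0-40)."""
    if len(item_scores) != 10:
        raise ValueError("AUDIT has 10 items")
    return sum(item_scores)

def eligible(item_scores, age, has_camera, has_audio, has_internet):
    return (audit_total(item_scores) >= 8
            and age >= 18
            and has_camera and has_audio and has_internet)

print(eligible([3, 2, 1, 0, 1, 0, 1, 1, 0, 0], age=46,
               has_camera=True, has_audio=True, has_internet=True))  # True
```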
Any need for pharmacological treatment was handled by the participant's general practitioner. Therapists were from Novaví, Denmark's largest provider of substance abuse treatment. Participants assigned to standard care were provided with contact details for their local alcohol treatment clinic and were prompted to initiate contact. Standard treatment consisted of face-to-face alcohol therapy and, when appropriate, pharmacological treatment. Sessions were delivered by an alcohol therapist at the individual's local municipal outpatient clinic, with the number, frequency, and duration of sessions tailored to each participant. In Denmark, alcohol treatment is, by law, free of charge and must be made available to all citizens within 14 days of inquiry, regardless of problem severity. Data for this study were self-reported by participants and collected through online questionnaires sent to their emails using the web-based software SurveyXact by Ramboll, Denmark. At baseline, participants provided information on alcohol use, previous alcohol treatment, motivation for change, anxiety and depressive symptoms, quality of life, self-rated health, social and demographic characteristics, and other health behaviours. At follow-up, participants primarily gave information on their alcohol use, their use of and experience with therapy. Information on anxiety and depressive symptoms was provided only after three months. Email reminders were dispatched after 48 hours and one week. For the follow-up questionnaires, three additional attempts were made to reach non-responders by telephone. At the 3- and 12-month follow-up, 87 and 80 participants, respectively, were reached by telephone and had the questionnaire resent via email. In total, three follow-up questionnaires were completed over the phone. Three modifiers were used in this study: anxiety symptoms, depressive symptoms, and combined anxiety and depressive symptoms. Anxiety symptoms were measured by the General Anxiety Disorder-2 Scale (GAD-2) and depressive symptoms by the Patient Health Questionnaire-2 (PHQ-2), which were combined into one questionnaire battery. Participants were asked how often, over the last 2 weeks, they experienced the following: 1. Feeling nervous, anxious or on edge; 2. Not being able to stop or control worrying (GAD-2); 3. Little interest or pleasure in doing things, 4. Feeling down, depressed, or hopeless (PHQ-2). The screening tools GAD-2 and PHQ-2 both range from a score of 0 to 6, with a score of ≥ 3 indicating moderate or severe symptoms. Anxiety and depressive symptoms were measured by combining GAD-2 and PHQ-2, with a total score of 12 and a score of ≥ 6 used as an indicator of moderate or severe symptoms. When used in combination, these screening tools are referred to as the Patient Health Questionnaire-4 (PHQ-4) . The Danish versions of the scales were used. The following primary outcomes were assessed 3 and 12 months after randomization: Initiation of treatment (defined as completion of one therapy session); treatment compliance (defined as completion of at least three therapy sessions); and total weekly alcohol intake in standard drinks, measured by Alcohol Timeline Followback (TLFB). Participants reported their daily consumption of standard drinks for the past week, starting with ‘yesterday’ and proceeding one day at a time. They specified their intake of beer, wine, and spirits. 
A standard drink was defined as 12 g of pure alcohol, equivalent to a 33 cl bottle of beer (4–5 % alcohol) or a 12 cl glass of wine (12 % alcohol). This definition and additional examples were provided with the TLFB in the questionnaire. Treatment initiation was derived from the question: ‘Have you currently had one or more sessions about your alcohol habits with an alcohol therapist?’. Treatment compliance was derived from questions on the number of completed sessions via video conference, telephone, and face-to-face. Anxiety symptoms, depressive symptoms, and anxiety and depressive symptoms were measured by GAD-2, PHQ-2, and PHQ-4, respectively, at 3 months post-randomization. Analyses of dichotomous outcomes were conducted using logistic regression, while continuous outcomes were analyzed with negative binomial regression, adjusting for baseline values, such as alcohol intake in standard drinks per week. Interaction analyses were performed to assess whether moderate-severe anxiety or/and depressive symptoms modified the differences between the intervention and control groups. Additional analyses of the modifying impact of anxiety or/and depressive symptoms as continuous scores were performed to ensure that the binary categorization of symptoms did not lose information. All analyses were carried out in Stata version 18. Results were computed for available cases. Sensitivity analyses employing the intention-to-treat principle were conducted for primary outcomes to verify result robustness. Following this principle, missing values were accounted for using multiple imputations by chained equations (m = 40 imputations) . Data were analyzed using the mi estimate command in Stata, which performs the estimation for each imputed dataset individually and then combines the results according to Rubin’s rules. Imputation was done separately for each arm. The imputation procedure included variables pre-hypothesized to potentially predict missing information (age, sex, education, baseline information: AUDIT score, alcohol intake, depression and anxiety, readiness to change, cohabitation status). Among the 502 individuals who filled out the baseline questionnaire and were assessed for eligibility, 379 (75 %) were randomly assigned to either proactive e-alcohol therapy (n = 187) or standard care (n = 192) between Jan 22, 2018, and Jun 29, 2020 . However, the final sample for analysis included a total of 356 participants: 179 in the proactive e-alcohol therapy group and 177 in the standard care group. During data cleaning, 23 participants were removed from the dataset because of duplicates/triplicates and a faulty randomization code, which included participants who did not meet the technical equipment criteria. Duplicates/triplicates refer to participants enrolled in the trial more than once, which occurred due to open enrollment and were filtered out based on email address. A more thorough description of the participant flow is reported elsewhere . Fig. 1 Participant flow in the trial. § Participants enrolled in the trial more than once. Demographic and behavioural characteristics were balanced in the two groups at baseline and is shown in more detail elsewhere . Of all participants, 170 (48 %) were female. The median age was 46 (interquartile range 36, 56). 260 (73 %) had ≥ 13 years of education and a corresponding proportion were employed. 
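To make the derived variables described above concrete, the following sketch scores the GAD-2, PHQ-2, and PHQ-4 items and sums the seven TLFB days into weekly standard drinks, using the cut-offs stated in the text (≥3 on GAD-2/PHQ-2, ≥6 on PHQ-4). The field names are hypothetical, and the snippet is illustrative rather than the study's actual data-management code.

```python
# Sketch of how the modifiers and the alcohol outcome described above can be
# derived from raw questionnaire fields (column names are hypothetical).

def gad2(nervous, cant_stop_worrying):
    return nervous + cant_stop_worrying          # items 1-2, range 0-6

def phq2(little_interest, feeling_down):
    return little_interest + feeling_down        # items 3-4, range 0-6

def phq4(nervous, cant_stop_worrying, little_interest, feeling_down):
    # PHQ-4 = GAD-2 + PHQ-2, range 0-12
    return gad2(nervous, cant_stop_worrying) + phq2(little_interest, feeling_down)

def moderate_severe(gad2_score=None, phq2_score=None, phq4_score=None):
    """Cut-offs used in the study: >=3 on GAD-2/PHQ-2, >=6 on PHQ-4."""
    flags = {}
    if gad2_score is not None:
        flags["anxiety"] = gad2_score >= 3
    if phq2_score is not None:
        flags["depression"] = phq2_score >= 3
    if phq4_score is not None:
        flags["anxiety_and_depression"] = phq4_score >= 6
    return flags

def weekly_drinks(tlfb_days):
    """tlfb_days: 7 daily totals of standard drinks (12 g alcohol each)."""
    return sum(tlfb_days)

print(moderate_severe(gad2_score=4, phq2_score=2, phq4_score=6))
print(weekly_drinks([4, 0, 2, 6, 5, 8, 3]))   # 28 drinks/week
```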
Among participants, the median weekly alcohol intake was 28 drinks, the median AUDIT score was 20, and 54 % had an AUDIT score of 20 or more which indicates a high risk of alcohol dependence. In total, 26 % had previous experience with municipal alcohol treatment or treatment provided by a general practitioner. Participants were highly motivated to change as reflected by a high readiness to change score. Table 1 shows the distribution of anxiety or/and depressive symptoms among participants in both groups at baseline. On the PHQ-4 scale, 37 % showed moderate-severe symptoms. In the proactive e-alcohol therapy group, the proportion with no symptoms of anxiety and depression and moderate-severe symptoms was slightly higher than in the standard care group ( Table 1 ). This was also the case for the GAD-2 scale measuring symptoms of anxiety and the PHQ-2 scale measuring symptoms of depression. Table 1 Anxiety and depressive symptoms among participants at baseline. Values are number (%) for categorical variables and median (interquartile range 25, 75) for continuous variables. Standard care (n = 179) E-alcohol therapy (n = 177) All (n = 356) Anxiety (GAD-2 score) No (0) 31 (17) 35 (20) 66 (19) Mild (1–2) 78 (44) 68 (38) 146 (41) Moderate or severe (3–6) 70 (39) 74 (42) 144 (40) Median 2 (1, 4) 2 (1, 4) 2 (1, 4) Depression (PHQ-2 score) No (0) 28 (16) 35 (20) 63 (18) Mild (1–2) 89 (50) 70 (40) 159 (45) Moderate or severe (3–6) 62 (34) 72 (41) 134 (38) Median 2 (1, 3) 2 (1, 4) 2 (1, 4) Anxiety and depression (PHQ-4 score) No (0) 17 (9) 25 (14) 42 (12) Very mild (1–2) 33 (18) 21 (12) 54 (15) Mild (3–5) 68 (38) 59 (33) 127 (36) Moderate or severe (6–12) 61 (34) 72 (41) 133 (37) Median 4 (2, 7) 4 (2, 7) 4 (2, 7) Table 2 shows the modifying impact of moderate-severe anxiety or/and depressive symptoms on the effect of proactive e-alcohol therapy vs. standard care on treatment initiation and treatment compliance at 3- and 12-month follow-up. Initiation of treatment at 3-month follow-up was higher in the proactive e-alcohol therapy group compared to standard care, both among participants with moderate-severe symptoms of anxiety and depression and those with no moderate-severe symptoms, with no significant interaction between intervention group and moderate-severe anxiety and depressive symptoms (p = 0.64). Also, for treatment compliance at 3-month follow-up, participants in the proactive e-alcohol therapy group were more likely to have completed at least three therapy sessions compared to participants in the standard care group, both among participants with moderate-severe symptoms of anxiety and depression and those with no moderate-severe symptoms, with no significant interaction between intervention group and moderate-severe anxiety and depressive symptoms (p = 0.40) ( Table 2 ). Results at 12-month follow-up were comparable. Similar results were observed in the intention to treat analyses, details are provided in Sup. Table 1 , Table 2 . Also, results were robust to the modifying impact of continuous symptom scores (Sup. Table S3 ). Table 2 Modifying impact of moderate-severe anxiety or/and depressive symptoms on the effect of proactive e-alcohol therapy vs. standard care on treatment initiation a and treatment compliance b at 3- and 12-month follow-up. 
Standard care E-alcohol therapy % (N/total N) % (N/total N) Odds ratio (95 % CI) Interaction p value c 3 months follow-up Treatment initiation Anxiety symptoms (GAD-2 score) 0.55 No (0–2) 46 (23/50) 87 (55/63) 8.1 (3.2 to 20.4) Yes (3–6) 71 (24/34) 92 (35/38) 4.9 (1.2 to 19.5) Depressive symptoms (PHQ-2 score) 0.97 No (0–2) 55 (29/53) 89 (55/62) 6.5 (2.5 to 16.9) Yes (3–6) 58 (18/31) 90 (35/39) 6.3 (1.8 to 22.2) Anxiety and depressive symptoms (PHQ-4 score) 0.64 No (0–5) 51 (28/55) 86 (55/64) 5.9 (2.4 to 14.2) Yes (6–12) 66 (19/29) 95 (35/37) 9.2 (1.8 to 46.4) Treatment compliance Anxiety symptoms (GAD-2 score) 0.88 No (0–2) 40 (20/50) 73 (45/62) 4.0 (1.8 to 8.8) Yes (3–6) 42 (14/33) 76 (29/38) 4.4 (1.6 to 12.1) Depressive symptoms (PHQ-2 score) 0.21 No (0–2) 44 (23/52) 71 (43/61) 3.0 (1.4 to 6.5) Yes (3–6) 35 (11/31) 79 (31/39) 7.0 (2.4 to 20.5) Anxiety and depressive symptoms (PHQ-4 score) 0.40 No (0–5) 43 (23/54) 71 (45/63) 3.4 (1.6 to 7.3) Yes (6–12) 38 (11/29) 78 (29/37) 5.9 (2.0 to 17.5) 12 months follow-up Treatment initiation Anxiety symptoms (GAD-2 score) 0.45 No (0–2) 58 (30/52) 85 (61/72) 4.1 (1.7 to 9.5) Yes (3–6) 78 (28/36) 89 (40/45) 2.3 (0.7 to 7.7) Depressive symptoms (PHQ-2 score) 0.55 No (0–2) 63 (37/59) 86 (63/73) 3.7 (1.6 to 8.8) Yes (3–6) 72 (21/29) 86 (38/44) 2.4 (0.7 to 7.9) Anxiety and depressive symptoms (PHQ-4 score) 0.97 No (0–5) 63 (36/57) 85 (62/73) 3.3 (1.4 to 7.6) Yes (6–12) 71 (22/31) 89 (39/44) 3.2 (0.9 to 10.7) Treatment compliance Anxiety symptoms (GAD-2 score) 0.83 No (0–2) 49 (25/51) 77 (51/66) 3.5 (1.6 to 7.8) Yes (3–6) 59 (19/32) 86 (36/42) 4.1 (1.3 to 12.5) Depressive symptoms (PHQ-2 score) 1.0 No (0–2) 51 (28/55) 79 (53/67) 3.7 (1.7 to 8.1) Yes (3–6) 57 (16/28) 83 (34/41) 3.6 (1.2 to 11.0) Anxiety and depressive symptoms (PHQ-4 score) 0.58 No (0–5) 52 (28/54) 78 (52/67) 3.2 (1.5 to 7.1) Yes (6–12) 55 (16/29) 85 (35/41) 4.7 (1.5 to 14.7) a Completion of one therapy session. b Completion of at least three therapy sessions. c Interaction between intervention group and symptoms. Table 3 shows the modifying impact of moderate-severe symptoms of anxiety or/and depressive symptoms on the effect of proactive e-alcohol therapy vs. standard care on alcohol intake (standard drinks/week) at 3- and 12-month follow-up. At 3-month follow-up, participants in the proactive e-alcohol therapy group had a lower weekly alcohol intake compared to those in standard care both among those with and without moderate-severe symptoms of anxiety or/and depression at baseline. There were no significant interactions between intervention group and moderate-severe anxiety or/and depressive symptoms. For example, at 3-month follow-up, participants with no moderate-severe anxiety and depressive symptoms in the proactive e-alcohol therapy group drank on average 15.6 drinks a week and those in standard care drank 20.9 drinks a week. Among those with moderate-severe anxiety and depressive symptoms, participants in the proactive e-alcohol therapy group drank on average 12.7 drinks a week, and those in the standard care group, 18.5 drinks a week. The p value for interaction was 0.86. At 12-month follow-up, there was no significant difference in alcohol intake among participants in the proactive e-alcohol therapy group and those in standard care, neither among participants with moderate-severe symptoms of anxiety or/and depression, nor among those without. Similar results for alcohol intake were observed in the intention to treat analyses, details are provided in Sup. Table 2 . 
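The interaction analyses summarized in Tables 2 and 3 follow the models described in the statistical-analysis section: logistic regression for treatment initiation and compliance, and negative binomial regression for weekly drinks adjusted for baseline intake, each with a group × symptom interaction term. A minimal Python/statsmodels sketch of those models is shown below; the data frame and column names are placeholders, and the original analyses were run in Stata, so this is only a conceptual translation.

```python
# Illustrative statsmodels versions of the analyses described above.
# 'df' and all column names are placeholders, not the trial's dataset.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "e_therapy":        [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],  # 1 = proactive e-alcohol therapy
    "mod_sev_phq4":     [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],  # PHQ-4 >= 6 at baseline
    "initiated":        [1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0],
    "drinks_week_3m":   [5, 8, 12, 6, 10, 9, 18, 25, 22, 14, 20, 16],
    "drinks_week_base": [28, 30, 26, 24, 27, 25, 29, 35, 31, 22, 26, 24],
})

# Dichotomous outcome with a group x symptom interaction (as in Table 2)
logit = smf.logit("initiated ~ e_therapy * mod_sev_phq4", data=df).fit(disp=0)

# Continuous outcome: negative binomial GLM adjusted for baseline intake (as in Table 3)
nb = smf.glm("drinks_week_3m ~ e_therapy * mod_sev_phq4 + drinks_week_base",
             data=df, family=sm.families.NegativeBinomial()).fit()

print(logit.params)
print(nb.params)
```

Multiple imputation of missing follow-up data (the chained-equations step combined with Rubin's rules) is deliberately omitted here; the sketch only expresses the form of the regression models.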
Also, results were robust to the modifying impact of continuous symptom scores . Table 3 Modifying impact of moderate-severe anxiety or/and depressive symptoms on the effect of proactive e-alcohol therapy vs. standard care on alcohol intake (standard drinks/week) at 3- and 12-month follow-up. Standard care E-alcohol therapy Mean Mean Difference in means (95 % CI) a Interaction p value b 3 months follow-up Anxiety symptoms (GAD-2 score) 0.95 No (0–2) 20.8 15.1 −5.7 (−13.4 to 2.0) Yes (3–6) 17.9 13.2 −4.7 (−15.0 to 5.6) Depressive symptoms (PHQ-2 score) 0.48 No (0–2) 19.1 15.5 −3.6 (−10.4 to 3.1) Yes (3–6) 20.8 11.7 −9.2 (−21.3 to 2.9) Anxiety and depressive symptoms (PHQ-4 score) 0.86 No (0–5) 20.9 15.6 −5.3 (−12.9 to 2.3) Yes (6–12) 18.5 12.7 −5.8 (−16.5 to 4.8) 12 months follow-up Anxiety symptoms (GAD-2 score) 0.85 No (0–2) 15.2 15.7 0.5 (−8.0 to 9.0) Yes (3–6) 13.8 14.1 0.3 (−8.8 to 9.5) Depressive symptoms (PHQ-2 score) 1.0 No (0–2) 13.6 14.6 1.1 (−6.5 to 8.7) Yes (3–6) 15.4 14.1 −1.3 (−11.1 to 8.5) Anxiety and depressive symptoms (PHQ-4 score) 0.90 No (0–5) 14.5 15.2 0.7 (−7.5 to 8.9) Yes (6–12) 14.7 14.6 −0.1 (−9.5 to 9.4) a Adjusted by weekly alcohol intake at baseline. b Interaction between intervention group and symptoms. At 3-month follow-up, no difference was observed between the proportion of participants with moderate-severe anxiety or/and depressive symptoms in the two intervention groups: the proportion of participants with moderate-severe symptoms was approximately halved ( Table 4 ). Repeating the analyses with anxiety or/and depressive symptoms modelled on a continuous scale did not change the findings (Sup. Table S4 ). Table 4 Impact of proactive e-alcohol therapy vs. standard care on moderate-severe anxiety or/and depressive symptoms at 3-month follow-up. Standard care E-alcohol therapy % (N/total N) % (N/total N) Difference in % (95 % CI) a P value Odds ratio (95 % CI) a P value Anxiety symptoms (GAD-2 score ≥ 3) Available cases (AC) 19 (16/84) 19 (19/100) 0 (−11 to 11) 0.97 1.0 (0.5 to 2.1) 0.97 Intention to treat (ITT) 23 (41/179) 20 (35/177) −3 (−15 to 8) 0.59 0.8 (0.4 to 1.7) 0.64 Depressive symptoms (PHQ-2 score ≥ 3) AC 19 (16/84) 16 (16/100) −3 (−14 to 7) 0.52 0.8 (0.3 to 1.7) 0.59 ITT 21 (38/179) 17 (30/177) −6 (−16 to 5) 0.30 0.7 (0.3 to 1.4) 0.29 Anxiety and depressive symptoms (PHQ-4 score ≥ 6) AC 18 (15/84) 12 (12/100) −10 (–22 to 2) 0.11 0.5 (0.2 to 1.2) 0.11 ITT 21 (38/179) 13 (23/177) −11 (–23 to 2) 0.09 0.5 (0.2 to 1.1) 0.09 a Adjusted by symptoms at baseline. In Table 5 it appears that the mean number of video and telephone sessions in the proactive e-alcohol therapy group were similar among participants with and without moderate-severe symptoms of anxiety and depression at both 3- and 12-month follow-up. Table 5 Number of therapy sessions completed by participants with and without moderate-severe anxiety and depressive symptoms (PHQ-4 score ≥ 6) in the proactive e-alcohol therapy group, split by type of communication channel. 
E-alcohol therapy 3 months follow-up Problematic alcohol use + anxiety and depressive symptoms Problematic alcohol use Sessions (N) Participants (N) Mean Sessions (N) Participants (N) Mean Video sessions 84 28 3.0 137 35 3.9 Telephone sessions 54 21 2.6 75 29 2.6 Face-to-face sessions 10 4 2.5 20 4 5 Therapy sessions, total 148 232 12 months follow-up Problematic alcohol use + anxiety and depressive symptoms Problematic alcohol use Sessions (N) Participants (N) Mean Sessions (N) Participants (N) Mean Video sessions 175 16 10.9 286 30 9.5 Telephone sessions 120 15 8 156 20 7.8 Face-to-face sessions 45 3 15 39 5 7.8 Therapy sessions, total 340 481 The aim of this study was twofold. Firstly, we explored whether anxiety or/and depressive symptoms modified the effect of proactive e-alcohol therapy on treatment initiation, compliance, and alcohol intake. No significant interaction was found between moderate-severe anxiety or/and depressive symptoms and therapy group regarding initiation of treatment, compliance to treatment, or alcohol intake at 3- and 12-month follow-up. These results suggest that proactive e-alcohol therapy is similarly effective for individuals with problematic alcohol use, regardless of co-occurring anxiety or/and depressive symptoms. Previous studies on traditional face-to-face addiction treatment have found anxiety and depressive symptoms to pose a barrier to successful treatment . Consequently, proactive e-alcohol therapy might be a suitable treatment alternative for this subgroup of individuals with problematic alcohol use and co-occurring symptoms of anxiety or/and depression. Secondly, we examined the impact of proactive e-alcohol therapy on anxiety or/and depressive symptoms compared to standard care at 3-month follow-up. No significant difference in the proportion of participants with moderate-severe anxiety or/and depressive symptoms was found between the two intervention groups. As the number of participants experiencing moderate-severe symptoms was halved, this finding indicates that psychosocial alcohol therapy itself might have a beneficial impact on anxiety and depressive symptoms, independent of the therapy’s communication channel. This adds to the existing literature highlighting the effectiveness of psychosocial therapy for anxiety and depression and suggests potential benefits of interventions that address overlapping symptoms of multiple disorders. Current recommendations advocate for specific treatment of co-occurring mental disorders alongside alcohol treatment, as this approach improves prognosis . At baseline, there was a slightly higher number of participants with moderate-severe symptoms in the proactive e-alcohol therapy group compared to standard care. This imbalance could introduce confounding, potentially influencing our findings. Also, it is important in the interpretation of our findings to note, that only a few studies have been conducted on which screening instruments are applicable for identifying anxiety and depression among people with problematic alcohol use , and the GAD-2, PHQ-2, and PHQ-4 scales are indicative of symptoms only and not diagnostic tools. For some individuals entering alcohol treatment while experiencing symptoms of anxiety or/and depression, these symptoms may resolve during treatment, while they may persist or worsen in others, which can increase the risk of negative outcomes. Rabinowitz et al. 
found that symptom trajectories of anxiety and depression were linked to treatment attrition among individuals in alcohol treatment, and these trajectories were associated with patient demographics and substance use. For example, they found that women were more prone to experience persistent anxiety and depressive symptoms. The group with concurrent problematic alcohol use and anxiety or/and depressive symptoms in this study is likely heterogeneous in terms of symptom trajectories, which is not captured by baseline measurements with GAD-2, PHQ-2, and PHQ-4, potentially masking subgroups that exhibit varying degrees of response to the intervention. The use of the screening tools may further be limiting, as the tools measure only a few core anxiety and depressive symptoms. Participants with other symptoms may not have been detected. However, GAD-2, PHQ-2, and PHQ-4 have been validated in several studies, all of which show they have good sensitivity and specificity for detecting anxiety and depression . Participants in this trial were highly motivated to change, as indicated by a high readiness to change score at baseline . The proactive phone call made by the therapist to arrange the first e-alcohol therapy session may have tapped into this motivation among participants in the intervention group. Additionally, this proactive component could have been particularly valuable for participants with co-occurring anxiety or/and depressive symptoms, who could have significantly benefited from having the therapist take the lead in initiating the first session. If this were the case, it would likely have been reflected in the initiation outcome. Similarly, the proactive email containing contact information for a local clinic in the standard care group might have also benefitted the participants with co-occurring anxiety or/and depressive symptoms, potentially impacting the initiation outcome. Typically, people would need to locate this information on their own. This study makes an important contribution to the existing literature by exploring the relationship between proactive e-alcohol therapy and symptoms of anxiety or/and depression among individuals with problematic alcohol use. It builds on a large trial with a 12-month long-term follow-up, effectively engaging a population distinct from those typically seen in alcohol treatment (for example, participants were more frequently female, employed, and had lower weekly alcohol consumption) . These characteristics are important for assessing the generalizability of the study's results. It is also important to note that this study population was self-motivated to seek treatment and highly motivated to change. The study has several limitations: Firstly, it was not adequately powered to detect interactions between the intervention group and anxiety/depressive symptoms. Building on this, the impact of proactive e-alcohol therapy on anxiety or/and depressive symptoms was also not further explored in subgroups defined by baseline symptoms. Secondly, the findings rely on self-reports. Thirdly, there was a substantial number of participants lost to follow-up in the trial, with varying rates between the two intervention groups. Result robustness was verified by supplementary analyses employing the intention-to-treat principle. In conclusion, this study suggests that proactive alcohol therapy is effective for individuals with problematic alcohol use, regardless of co-occurring anxiety or/and depressive symptoms. 
Moreover, it finds that proactive e-alcohol therapy and standard care have a similar impact on reducing symptoms of anxiety and depression. It is crucial to gain a better understanding of the severity of these concurrent problems and how severity may impact the effectiveness of proactive e-alcohol therapy. The study was funded by TrygFonden. The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of this paper. Due to data privacy regulations, data generated during this study are not publicly accessible. Access to anonymized data may be granted upon evaluation by the principal investigator and the trial management group. Additionally, project-related documents will be available upon request. All inquiries should be directed to the corresponding author. Kia Kejlskov Egan: Writing – original draft, Project administration, Investigation, Formal analysis, Data curation, Conceptualization. Veronica Pisinger: Writing – review & editing, Writing – original draft, Validation, Formal analysis, Conceptualization. Ulrik Becker: Writing – review & editing, Supervision, Funding acquisition. Janne Schurmann Tolstrup: Conceptualization, Formal analysis, writing - review and editing, Supervision, Funding Acquisition. During the study period, the National Institute of Public Health, University of Southern Denmark, received a grant from Novaví to evaluate their other treatment offers. This evaluation was conducted by KKE and UB. UB had travel expenses covered by Novaví for his participation in a meeting on the Faroe Islands about local alcohol treatment opportunities. UB also received an honorarium for giving a lecture on evidence-based alcohol treatment for Novaví. All authors declare no competing interests. | Review | biomedical | en | 0.999996 |
PMC11696641 | Endometriosis (EMS) is a non-malignant gynecological disease characterized by ectopic growth of endometrial tissue, affecting 5–10% of women of reproductive age worldwide. 1 Infertility and dysmenorrhea are common clinical manifestations. Because the pathogenesis is not clear, the early diagnosis and treatment of EMS are stuck in a bottleneck. The pathogenesis of EMS involves many aspects such as heredity, environment, infection, and immunity. 2 Although there are a variety of mechanism models to explain the occurrence of EMS, the exact pathogenesis has not been known so far. Among them, retrograde menstruation is widely accepted as one of the models, the main reason is that retrograde menstruation does occur frequently in EMS. 3 Studies have shown that abnormal hormonal changes in the body, the role of inflammatory factors, immune disorders, genetic and epigenetic factors, and environmental factors are important reasons for the development of EMS, and the endometrium is greatly affected by hormones, especially estrogen, during menstruation in primates. 4 , 5 Endometrial stromal cells have been widely used in the study of endometrial EMS, and the proliferation and ectopic growth of endometrial stromal cells is one of the main entry points for the pathogenesis of endometrial EMS. Current studies have proved that epigenetics plays a role in the occurrence and development of EMS. 6 , 7 Epigenetic modifications are chemical or physical modifications that affect gene function and thus regulate gene reading and expression without altering the nuclear DNA sequence. These changes are both heritable and reversible, and are a key factor in the progression of EMS. They act as a catalyst for the invasive spread of cells. MicroRNAs (miRNAs) are a class of non-coding RNAs that regulate gene expression post-transcriptionally by partially binding to target mRNAs and participate in various activities at the cell level. 8 , 9 , 10 miRNAs can inhibit or promote cancer by targeting oncogenes or suppressor genes. The study of miRNA has provided a new perspective for understanding how tumors develop and led to further ideas in terms of their diagnosis, treatment, and prognosis. The normal function of miRNAs being abnormally inhibited is one of the significant factors in the occurrence of EMS. Zhou et al. 11 found that miR-205-5p can directly target ANGPT2 and indirectly regulate the AKT/ERK signaling pathway, playing an inhibitory role in EMS. The expression of miR-370-3p tends to be downregulated in EMS, which can inhibit the proliferation, metastasis, and invasion of hEM15A cells and promote cell apoptosis. Li et al. 12 found that miR-92a inhibits the development of EMS by inhibiting the expression of PTEN. In vitro experiments confirmed that miR-92a, through antagomir inhibition, can enhance the therapeutic effect of progesterone, thereby inhibiting stromal cell proliferation and reducing the formation of ectopic lesions in mouse models of EMS. This suggests that miR-92a may be a key regulator of EMS proliferation and metastasis. In our research, we have focused on miR-450b-5p based on the results of chip analysis. According to the miRBase database, the total length of miR-450b-5p is 22 nucleotides. To date, studies of miR-450b-5p have been confined to rhabdomyosarcoma and corneal eye disease, with none on the role of miR-450b-5p in EMS. 
13, 14 Therefore, the aim of this study was to identify the mechanism via which HOXD10 influences the invasiveness of EMS, in the hope of establishing a basis for identifying new therapeutic targets. Our study discovered that miR-450b-5p is upregulated in ectopic endometrial tissue, while GABPA and HOXD10 are downregulated. We demonstrated that miR-450b-5p suppresses GABPA, leading to decreased HOXD10 expression, which in turn promotes cell proliferation and invasion and inhibits apoptosis, contributing to EMS. Bioinformatics and a luciferase reporter assay confirmed miR-450b-5p’s direct targeting of GABPA. Western blotting and RT-qPCR further validated this regulation at both the protein and mRNA levels. Our findings suggest that the miR-450b-5p/GABPA/HOXD10 axis could be a promising therapeutic target for EMS. We obtained the GEO EMS datasets using the Xiantao Academic bioinformatics tool, which synthesizes data from multiple transcriptome databases. The necessary datasets were retrieved using “endometriosis” as the keyword, “Homo sapiens” as the search condition, and “chip” as the GEO2R type to be analyzed. Finally, the following four GEO datasets were selected: GSE5108, GSE7305, GSE23339, and GSE58178. Upregulated and downregulated differentially expressed genes were extracted from these datasets and their intersections identified. The screening conditions were |log2 fold-change| > 1 and adjusted p < 0.05. Genes meeting these criteria in all four datasets were taken as the intersecting differentially expressed genes and visualized with a Venn diagram. The intersecting differentially expressed genes were then screened to identify those that had not previously been linked with EMS. Next, RT-qPCR was performed to confirm the differentially expressed genes to be included in our mouse model of EMS. Figure 1 HOXD10 is expressed at low levels in EMS. (A) Upregulated and downregulated differentially expressed genes were extracted from four GEO datasets (GSE5108, GSE7305, GSE23339, and GSE58178), and the intersecting differentially expressed genes were identified with a Venn diagram. (B) The expression of the differential genes in normal and ectopic endometrial tissues of mice was detected by RT-qPCR. The results showed that HOXD10 expression decreased in the EMS group compared with the control group (∗∗∗ p < 0.001). (C) Immunohistochemical results showed that HOXD10 expression was low in EMS patients (p < 0.05). We found that the average expression of HOXD10 mRNA was more than three times lower in ectopic endometrial tissues than in normal control tissues. Immunohistochemistry also showed that HOXD10 expression was significantly lower in ectopic lesions from patients with EMS than in normal human endometrial tissues (p < 0.05). Next, we investigated changes in proliferation, migration, invasion, and apoptosis in hEM15A cells after overexpression of HOXD10. RT-qPCR confirmed that HOXD10 expression was more than three times higher in hEM15A cells transfected with HOXD10-overexpression plasmids (the OE-HOXD10 group) than in control cells that did not overexpress HOXD10 (the OE-NC group); this difference was statistically significant. We selected the OE-2 (overexpression of HOXD10-2) group for the subsequent experiments.
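The screening rule described earlier in this section (|log2 fold-change| > 1, adjusted p < 0.05, intersection across the four GEO series) can be written as a short pandas sketch. The file and column names assume GEO2R-style export tables and are placeholders rather than the exact pipeline used in this study.

```python
# Sketch of the differential-expression screen described above.
# Assumes one GEO2R-style table per dataset with columns
# 'Gene.symbol', 'logFC' and 'adj.P.Val' (placeholder names).
import pandas as pd

datasets = ["GSE5108.tsv", "GSE7305.tsv", "GSE23339.tsv", "GSE58178.tsv"]

def deg_sets(path):
    t = pd.read_csv(path, sep="\t")
    sig = t[(t["adj.P.Val"] < 0.05) & (t["logFC"].abs() > 1)]
    up = set(sig.loc[sig["logFC"] > 1, "Gene.symbol"])
    down = set(sig.loc[sig["logFC"] < -1, "Gene.symbol"])
    return up, down

up_sets, down_sets = [], []
for path in datasets:
    up, down = deg_sets(path)
    up_sets.append(up)
    down_sets.append(down)

# Genes consistently up- or down-regulated in all four series
# (the intersection visualized with the Venn diagram in Figure 1A).
common_up = set.intersection(*up_sets)
common_down = set.intersection(*down_sets)
print(len(common_up), len(common_down))
```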
Cell Counting Kit-8 (CCK-8) assays revealed that optical density increased more slowly in the OE-HOXD10 group than in the OE-NC group, indicating that proliferation of hEM15A cells was significantly inhibited by HOXD10 overexpression. Transwell migration and invasion assays showed that transfection with OE-HOXD10 significantly inhibited migration and invasion of hEM15A cells in comparison with the control cells. Annexin V/propidium iodide (PI) double staining and flow cytometry showed a higher proportion of Annexin V-positive cells in the OE-HOXD10 group, indicating that HOXD10 overexpression promoted apoptosis. Collectively, these findings confirmed that HOXD10 alters cell behavior and has a role in the proliferation of endometrial cells. Figure 2 Effect of overexpression of HOXD10 on hEM15A cells. (A) RT-qPCR verified the overexpression efficiency of HOXD10; the results showed that HOXD10 expression in OE-1 (overexpression of HOXD10-1), OE-2 (overexpression of HOXD10-2), and OE-3 (overexpression of HOXD10-3) was more than three times that of the control group (∗∗ p < 0.01). (B) The proliferation ability of hEM15A cells after OE-HOXD10 was detected by CCK-8 assay; the proliferation ability of the OE-HOXD10 group was significantly decreased compared with the control group (∗ p < 0.05). (C and D) Transwell assays detected the migration and invasion ability of hEM15A cells after OE-HOXD10; the migration and invasion ability of the OE-HOXD10 group was significantly lower than that of the control group (∗ p < 0.05). (E) Annexin V/PI double staining and flow cytometry were used to detect apoptosis of hEM15A cells after OE-HOXD10; the apoptosis rate of the OE-HOXD10 group was increased compared with the control group (∗∗ p < 0.01). Having demonstrated that HOXD10 is a crucial inhibitory factor in EMS, we then explored its upstream regulation. We selected a region of approximately 2 kb around the HOXD10 transcription start site and used the JASPAR TFBS hub of the UCSC Genome Browser to locate candidate transcription factors. We identified GABPA as a transcription factor that could potentially regulate HOXD10. RT-qPCR showed low expression of GABPA in ectopic endometrial tissues. Subsequently, GABPA was overexpressed, and OE-1 (overexpression of GABPA-1), OE-2 (overexpression of GABPA-2), and OE-3 (overexpression of GABPA-3) all achieved significant overexpression. RT-qPCR showed that the HOXD10 expression level was significantly increased when GABPA was overexpressed. The effect of GABPA on transcriptional activation of HOXD10 was evaluated using a double luciferase reporter assay. The results showed that GABPA bound to the promoter region of HOXD10 and promoted transcription of the luciferase reporter gene, resulting in a corresponding increase in the luminescence emitted by the luciferase-catalyzed substrate; thus, GABPA binds the HOXD10 promoter and activates its transcription. Figure 3 HOXD10 is transcriptionally upregulated by GABPA. (A) The expression of GABPA in normal and ectopic endometrial tissues of mice was detected by RT-qPCR. The results showed that GABPA expression decreased in the EMS group compared with the control group (∗ p < 0.05). (B) RT-qPCR verified the overexpression efficiency of GABPA; the results showed that GABPA expression in OE-1 (overexpression of GABPA-1), OE-2 (overexpression of GABPA-2), and OE-3 (overexpression of GABPA-3) was more than three times that of the control group (∗∗ p < 0.01).
(C) RT-qPCR verified the overexpression efficiency of GABPA, and the results showed that the expression efficiency of OE-GABPA group was more than 3 times that of NC group(∗ p < 0.05). (D and E) Double luciferase reporter gene assay showed that GABPA regulates HOXD10 transcription (∗∗ p < 0.01). To explore the role of GABPA in regulation of expression of HOXD10 in hEM15A-related phenotypes, we overexpressed GABPA and knocked down HOXD10 and vice versa. Western blotting showed that after overexpression of GABPA in hEM15A cells, expression of HOXD10 was significantly increased; the cell immunofluorescence results showed that the fluorescence intensity was higher in cells that overexpressed GABPA and that the HOXD10 expression level was significantly lower in the group with GABPA knockdown in comparison with control cells. The fluorescence intensity decreased . The CCK-8 assay showed that cell proliferation was slower in the group with GABPA overexpression only and more rapid in the group with HOXD10 knockdown only in comparison with the control group. Cell proliferation was more rapid in the group with both GABA overexpression and HOXD10 knockdown than in the group with GABPA overexpression only and also more rapid in the group with GABPA knockdown only than in the control group. Cell proliferation was slower in the group with HOXD10 overexpression than in the control group and was also slower in the group with both HOXD10 overexpression and GABPA knockdown than in the group with GABPA knockdown only . Annexin V/PI double staining and flow sorting showed that, compared with those in the control group, Annexin V-positive cells were greater in number in the group with GABPA overexpression only and in the group with HOXD10 knockdown only. There were fewer Annexin V-positive cells in the group with both HOXD10 overexpression and GABPA knockdown than in the group with GABPA overexpression only. Compared with the findings in the control group, there were fewer Annexin V-positive cells in the group with GABPA knockdown only and more Annexin V-positive cells in the group with HOXD10 overexpression only. Moreover, there were more Annexin V-positive cells in the group with both HOXD10 overexpression and GABPA knockdown than in the group with GABPA knockdown only . The transwell assay results showed that fewer cells entered the lower compartment in the group with GABPA overexpression only than in the control group and that more cells entered the lower compartment in the group with HOXD10 knockdown only than in the control group and in the group with both GABPA overexpression and HOXD10 knockdown than in the group with GABPA overexpression only. Compared with that in the control group, the number of cells that entered the lower compartment was higher in the group with GABPA knockdown only and lower in the group with HOXD10 overexpression only. The number of cells that entered the lower compartment was also lower in the group with both HOXD10 overexpression and GABPA knockdown than in the group with GABPA knockdown only . The transwell assay with Matrigel showed that there were fewer transmembrane cells in the group with GABPA overexpression only than in the control group. There were more transmembrane cells in the group with HOXD10 knockdown only than in the control group and in the group with both HOXD10 overexpression and GABPA knockdown than in the group with GABPA overexpression only. 
Compared with the findings in the control group, there were more transmembrane cells in the group with GABPA knockdown only and fewer transmembrane cells in the group with HOXD10 overexpression only. There were also fewer transmembrane cells in the group with both GABPA knockdown and HOXD10 overexpression than in the group with GABPA knockdown only. These results suggested that knockdown of HOXD10 can reverse the changes in proliferation, migration, invasion, and apoptosis of hEM15A cells induced by overexpression of GABPA. Conversely, overexpression of HOXD10 can reverse the changes in proliferation, migration, invasion, and apoptosis induced by GABPA knockdown in these cells. These findings indicated that, via transcriptional activation of HOXD10, GABPA inhibits proliferation, migration, and invasion of hEM15A cells and promotes apoptosis. Figure 4 GABPA relies on HOXD10 to regulate cell behavior (A) Western blot and cellular immunofluorescence experiments showed that the expression level of HOXD10 increased after GABPA overexpression, whereas the expression level of HOXD10 decreased after GABPA knockdown (∗ p < 0.05). (B) CCK-8 assay: the proliferation of the Control+OE-GABPA+si-NC group was slower than that of the Control+OE-NC+si-NC group, and the proliferation of the Control+OE-NC+si-HOXD10 group was faster than that of the Control+OE-NC+si-NC group. The proliferation of the Control+OE-GABPA+si-HOXD10 group was faster than that of the Control+OE-GABPA+si-NC group. The proliferation of the Control+si-GABPA+OE-NC group was accelerated compared with the Control+si-NC+OE-NC group, and the proliferation of the Control+si-NC+OE-HOXD10 group was slowed compared with the Control+si-NC+OE-NC group. The proliferation of the Control+si-GABPA+OE-HOXD10 group was slower than that of the Control+si-GABPA+OE-NC group (∗∗ p < 0.01). (C) Annexin V-positive cells in the Control+OE-GABPA+si-NC group increased compared with the Control+OE-NC+si-NC group, and Annexin V-positive cells in the Control+OE-NC+si-HOXD10 group decreased compared with the Control+OE-NC+si-NC group. Annexin V-positive cells were decreased in the Control+OE-GABPA+si-HOXD10 group compared with the Control+OE-GABPA+si-NC group. Annexin V-positive cells decreased in the Control+si-GABPA+OE-NC group compared with the Control+si-NC+OE-NC group, and Annexin V-positive cells increased in the Control+si-NC+OE-HOXD10 group compared with the Control+si-NC+OE-NC group. Annexin V-positive cells were increased in the Control+si-GABPA+OE-HOXD10 group compared with the Control+si-GABPA+OE-NC group (∗∗ p < 0.01). (D) The number of cells entering the lower compartment in the Control+OE-GABPA+si-NC group was lower than that in the Control+OE-NC+si-NC group, and the number of cells entering the lower compartment in the Control+OE-NC+si-HOXD10 group was higher than that in the Control+OE-NC+si-NC group. More cells entered the lower compartment in the Control+OE-GABPA+si-HOXD10 group than in the Control+OE-GABPA+si-NC group. The number of cells entering the lower compartment was higher in the Control+si-GABPA+OE-NC group than in the Control+si-NC+OE-NC group, and the number of cells entering the lower compartment was lower in the Control+si-NC+OE-HOXD10 group than in the Control+si-NC+OE-NC group. The number of cells in the Control+si-GABPA+OE-HOXD10 group was lower than that in the Control+si-GABPA+OE-NC group (∗∗ p < 0.01). (E) The number of transmembrane cells in the Control+OE-GABPA+si-NC group was lower than that in the Control+OE-NC+si-NC group, and the number of transmembrane cells in the Control+OE-NC+si-HOXD10 group was higher than that in the Control+OE-NC+si-NC group.
Control+OE-GABPA+si-HOXD10 group had more transmembrane cells than Control+OE-GABPA+si-NC group. The number of transmembrane cells in Control+si-GABPA+OE-NC group was higher than that in Control+si-NC+OE-NC group, and the number of transmembrane cells in Control+si-NC+OE-HOXD10 group was lower than that in Control+si-NC+OE-NC group. The number of transmembrane cells in Control+si-GABPA+OE-HOXD10 group was lower than that in Control+si-GABPA+OE-NC group (∗∗ p < 0.01). Having demonstrated that GABPA is an important inhibitory factor in EMS, we explored its upstream regulation using the ENCORI_hg38 and miRDB databases to identify the miRNAs that complement and bind to the 3′-UTR sequence of GABPA and those that potentially regulate the transcription factor GABPA. miR-450b-5p was screened for higher expression in animal tissues than the control tissues . To examine the interaction between GABPA and miR-450b-5p, we first compared the expression of GABPA by adding an miR-450b-5p mimic and an miR-450b-5p inhibitor. Both western blotting and RT-qPCR showed that expression of GABPA was downregulated after addition of mimic-miR-450b-5p and upregulated after addition of inhibitor-miR-450b-5p . Next, we used double luciferase reporter assays to confirm the interaction and detect whether miR-450b-5p directly binds to GABPA. The results showed that miR-450b-5p could bind to the 3′-UTR of GABPA and that addition of the mimic led to degradation of luciferase mRNA, inhibition of translation, and reduction in luciferase protein, with a corresponding decrease in the biofluorescence emitted by the luciferase-catalyzed substrates. The relative fluorescence intensity was lower in the GABPA-3′-UTR-wt + mimic-miR-450b-5p group than in the GABPA-3′-UTR-wt + NC mimic group . These findings demonstrated that miR-450b-5p can inhibit GABPA and that GABPA is the direct target gene of miR-450b-5p. Figure 5 GABPA is downregulated by miR-450b-5p (A) The results of RT-qPCR showed that miR-450b-5p was highly expressed in ectopic endometrial tissues of EMS mice compared with normal control mice(∗∗ p < 0.01). (B) The knockdown and overexpression efficiency of miR-450b-5p was detected by RT-qPCR(∗∗ p < 0.01). (C) The effects of knockdown and overexpression of miR-450b-5p on GABPA expression were detected by western blot(∗ p < 0.05). (D) Double luciferase reporter gene assay in HEK293T cells showed that miR-450b-5p regulates GABPA(∗∗ p < 0.01). (E) Double luciferase reporter gene assay in hEM15A showed that miR-450b-5p regulates GABPA. (∗ p < 0.01). (F) Specific binding sites of miR-450b-5p and GABPA-3 'UTR. Next, we examined whether regulation of expression of GABPA by miR-450b-5p plays a role in promotion of EMS by miR-450b-5p. To this end, we overexpressed miR-450b-5p and knocked down GABPA and vice versa to examine whether GABPA is necessary for miR-450b-5p to be able to promote EMS. We observed the hEM15A cells for changes in proliferation, apoptosis, migration, and invasion after adding mimic-miR-450b-5p and inhibitor-miR-450b-5p and transfecting OE-GABPA and si-GABPA. CCK-8 assays indicated that proliferation was more rapid in the mimic-miR-450b-5p only group and slower in the GABPA overexpression only group when compared with that in the control group and also slower in the GABPA overexpression + mimic-miR-450b-5p group than in the mimic-miR-450b-5p only group. Compared with that in the control group, proliferation was slower in the inhibitor-miR-450b-5p only group and more rapid in the GABPA knockdown only group. 
Moreover, proliferation was more rapid in the inhibitor-miR-450b-5p + GABPA knockdown group than in the inhibitor-miR-450b-5p only group. Annexin V/PI double staining and flow cytometry showed that there were fewer Annexin V-positive cells in the mimic-miR-450b-5p only group than in the control group, more Annexin V-positive cells in the GABPA overexpression only group than in the control group, more Annexin V-positive cells in the mimic-miR-450b-5p + GABPA overexpression group than in the mimic-miR-450b-5p only group, and fewer Annexin V-positive cells in the GABPA knockdown only group than in the control group. There were also fewer Annexin V-positive cells in the group with inhibitor-miR-450b-5p + GABPA knockdown than in the group with inhibitor-miR-450b-5p only. Transwell assay results showed that more cells entered the lower chamber in the mimic-miR-450b-5p only group than in the control group and that fewer cells entered the lower chamber in the GABPA overexpression only group than in the control group. Fewer cells entered the inferior compartment in the mimic-miR-450b-5p + GABPA overexpression group than in the mimic-miR-450b-5p only group and in the inhibitor-miR-450b-5p only group when compared with the findings in the control group. More cells entered the inferior compartment in the GABPA knockdown only group than in the control group and in the inhibitor-miR-450b-5p + GABPA knockdown group than in the inhibitor-miR-450b-5p only group. The transwell assay with Matrigel showed that there were more transmembrane-penetrating cells in the mimic-miR-450b-5p only group than in the control group. There were fewer transmembrane cells in the GABPA overexpression only group than in the control group and in the mimic-miR-450b-5p + GABPA overexpression group than in the mimic-miR-450b-5p only group. Compared with the findings in the control group, there were fewer penetrating cells in the inhibitor-miR-450b-5p only group and more penetrating cells in the GABPA knockdown only group. There were also more penetrating cells in the inhibitor-miR-450b-5p + GABPA knockdown group than in the inhibitor-miR-450b-5p only group. These results suggested that miR-450b-5p can affect proliferation, apoptosis, migration, and invasion of hEM15A cells by targeting GABPA. Figure 6 miR-450b-5p relies on GABPA to regulate hEM15A cell behavior (A) CCK-8 assay: the proliferation of the mimics miR-450b-5p+OE-NC group was accelerated compared with the mimics-NC+OE-NC group, and the proliferation of the mimics-NC+OE-GABPA group was slowed compared with the mimics-NC+OE-NC group. The proliferation of the mimics miR-450b-5p+OE-GABPA group was slower than that of the mimics miR-450b-5p+OE-NC group. The inhibitor-miR-450b-5p+si-NC group had slower proliferation than the inhibitor-NC+si-NC group, and the inhibitor-NC+si-GABPA group had faster proliferation than the inhibitor-NC+si-NC group. The inhibitor-miR-450b-5p+si-GABPA group had faster proliferation than the inhibitor-miR-450b-5p+si-NC group (∗ p < 0.05). (B) Annexin V/PI double staining and flow sorting: the mimics miR-450b-5p+OE-NC group had fewer Annexin V-positive cells than the mimics-NC+OE-NC group, and the mimics-NC+OE-GABPA group had more Annexin V-positive cells than the mimics-NC+OE-NC group. The Annexin V-positive cells in the mimics miR-450b-5p+OE-GABPA group were higher than those in the mimics miR-450b-5p+OE-NC group. The Annexin V-positive cells in the inhibitor-miR-450b-5p+si-NC group were higher than those in the inhibitor-NC+si-NC group.
The Annexin V positive cells in inhibitor-NC+si-GABPA group were lower than those in inhibitor-miR-450b-5p+si-NC group. The Annexin V positive cells in the inhibitor-miR-450b-5p+si-GABPA group were lower than those in the inhibitor-miR-450b-5p+si-NC group(∗ p < 0.05). (C) Transwell migration test showed that the number of cells entering the lower chamber was higher in mimics-miR-450b-5p+OE-NC group than in mimics-NC+OE-NC group, while the number of cells entering the lower chamber was lower in mimics -NC+OE-GABPA group than in mimics-NC+OE-NC group. The number of cells entering the lower compartment in mimics miR-450b-5p+OE-GABPA group was lower than that in mimics miR-450b-5p+OE-NC group. The number of cells entering the inferior compartment was lower in inhibitor-miR-450b-5p+si-NC group than in inhibitor-NC+si-NC group group, and the number of cells entering the inferior compartment was higher in inhibitor-NC+si-GABPA group than in inhibitor-NC+si-NC group. The number of cells entering the lower compartment was more in the inhibitor-miR-450b-5p+si-GABPA group than in the inhibitor-miR-450b-5p+si-NC group(∗ p < 0.05). (D) Invasion test showed that the number of transmembrane cells in mimics -miR-450b-5p+OE-NC group was higher than that in mimics-NC+OE-NC group, and the number of transmembrane cells in mimics-NC+OE-GABPA group was lower than that in mimics-NC+OE-NC group. The number of transmembrane cells in mimics miR-450b-5p+ OE-GABPA group was lower than that in mimics miR-450b-5p+OE-NC group. The number of penetrating cells in inhibitor-miR-450b-5p+si-NC group was lower than that in inhibitor-NC +si-NC group, and the number of penetrating cells in inhibitor-NC +si-GABPA group was higher than that in inhibitor-NC +si-NC group. The number of penetrating cells in inhibitor-miR-450B-5p +si-GABPA group was higher than that in inhibitor-miR-450B-5p +si-NC group (∗ p < 0.05). To study the effect of GABPA on the ability of miR-450b-5p to regulate expression of HOXD10, we added a mimic and an inhibitor of miR-450b-5p to hEM15A cells and observed the expression of HOXD10 after transfection with OE-GABPA and si-GABPA. Our RT-qPCR results showed that the expression level was lower in the mimic-miR-450b-5p only group than in the control group. Conversely, expression of HOXD10 was higher in the GABPA overexpression only group than in the control group and was also higher in the mimic-miR-450b-5p + GABPA overexpression group than in the mimic-miR-450b-5p only group. The HOXD10 expression level was higher in the inhibitor-miR-450b-5p only group than in the control group and lower in the GABPA knockdown only group than in the control group. The HOXD10 expression level was also lower in the inhibitor-MiR-450B-5p + GABPA knockdown group than in the inhibitor-miR-450b-5p only group . Western blotting showed that the HOXD10 expression level was lower in the mimic-miR-450b-5p only group than in the control group, higher in the GABPA overexpression only group than in the control group, higher in the mimic-miR-450b-5p + GABPA overexpression group than in the mimic-miR-450b-5p only group , higher in the inhibitor-miR-450b-5p only group than in the control group, lower in the GABPA knockdown only group than in the control group, and lower in the inhibitor-miR-450B-5p + GABPA knockdown group than in the inhibitor-miR-450b-5p only group . 
Figure 7 miR-450b-5p relies on GABPA to inhibit HOXD10 expression (A–C) RT-qPCR and western blot results showed that overexpression of miR-450b-5p could restore the phenotype generated by overexpression of GABPA that promoted HOXD10 expression, and knockdown of miR-450b-5p could restore the phenotype generated by knockdown of GABPA that inhibited HOXD10 expression (∗ p < 0.05). We developed a mouse model of EMS to investigate whether HOXD10 plays a role in progression of the disease and to determine the role of HOXD10 in vivo. Lentivirus overexpressing HOXD10 (LV-oe-HOXD10) was purchased from the Gemma Company (Glastonbury, CT, USA) and transduced into hEM15A cells to detect the efficiency of lentiviral (LV) overexpression. Western blotting showed that the HOXD10 expression level was higher in the LV-oe-HOXD10 group than in the LV control group. Recipient mice underwent transplantation of uterine fragments from donor mice of the same strain. Twenty-four hours after transplantation, the mice were randomly divided into a negative control group (n = 10) that received intraperitoneal injections of control LV for 4 weeks and an OE-HOXD10 group (n = 10) that received intraperitoneal injections of LV-oe-HOXD10 for 4 weeks. At the end of treatment, the weight, size, and number of endometriotic lesions were recorded. Tissue morphology, expression of HOXD10, and proliferative markers were observed by immunohistochemistry and hematoxylin-eosin (HE) staining, and apoptosis markers were detected by western blotting to evaluate the extent of apoptosis of cells in the endometriotic lesions. Measurements of lesion weight, volume, and number were lower in the LV-oe-HOXD10 group than in the LV control group. The immunohistochemical and HE staining results showed lower expression levels of proliferative markers and higher HOXD10 expression levels in the LV-oe-HOXD10 group than in the LV control group, with a return of tissue morphology toward normal. Western blotting showed that the expression levels of pro-apoptotic molecules (i.e., caspase-3, caspase-9, and Bax) were higher and those of anti-apoptotic molecules (e.g., Bcl-2) were lower in the LV-oe-HOXD10 group than in the LV control group. Pathological analysis showed that stromal cells became smaller, with vacuolar degeneration of the glandular epithelium, a decrease in ectopic endometrial cells, and an increase in apoptosis in the LV-oe-HOXD10 group. These findings indicated that overexpression of HOXD10 had a therapeutic effect in a mouse model of EMS. Figure 8 Experimental results in animal models (A) Lentivirus overexpressing HOXD10 (LV-oe-HOXD10) was transduced into hEM15A cells, and the overexpression efficiency of the lentivirus was detected by western blot (∗ p < 0.05). (B) Recipient mice received uterine fragments transplanted from donor mice of the same strain, and the mice were randomly divided into two groups 24 h after transplantation: group 1 was a negative control group (n = 10) injected intraperitoneally with LV-control for 4 weeks; group 2, the HOXD10 OE group (n = 10), was injected intraperitoneally with LV-oe-HOXD10 for 4 weeks. Compared with the LV-control group, the weight, size, and number of lesions in the LV-oe-HOXD10 group decreased (∗ p < 0.05). (C) The white arrow indicates ectopic endometrial tissue in an EMS mouse model.
Immunohistochemistry and HE staining showed that the expression of proliferative markers in the LV-oe-HOXD10 group was lower than that in the LV-control group, the expression of HOXD10 was higher, and the tissue morphology returned toward normal. (D) Western blot showed that the expression of pro-apoptotic molecules in the LV-oe-HOXD10 group was higher than that in the LV-control group, while the expression of anti-apoptotic molecules was lower (∗ p < 0.05). To study the function of miR-450b-5p in human EMS, we analyzed the expression level of miR-450b-5p in ectopic endometrial tissues. RT-qPCR showed that the expression level of miR-450b-5p was higher in human ectopic endometrial tissue than in normal endometrial tissue. Moreover, western blotting revealed that expression levels of GABPA and HOXD10 proteins were lower in ectopic endometrial tissues than in normal endometrial tissues. Furthermore, analysis of 20 samples of ectopic endometrial tissue revealed a negative correlation of GABPA and HOXD10 expression levels with the miR-450b-5p expression level. The GABPA expression level was positively correlated with the HOXD10 level. Overall, these findings suggested that upregulation of miR-450b-5p and downregulation of GABPA and HOXD10 may contribute to the progression of human EMS. Figure 9 Role of miR-450b-5p in EMS (A) RT-qPCR showed that miR-450b-5p was highly expressed in the endometrial tissues of EMS patients, while GABPA was expressed at low levels in the endometrial tissues of EMS patients (∗ p < 0.05). (B) Western blot showed low expression of HOXD10 in endometrial tissues of EMS patients (∗ p < 0.05). (C) Correlation analysis showed that, at the tissue level, miR-450b-5p expression was negatively correlated with GABPA expression, GABPA protein expression was positively correlated with HOXD10 protein expression, and miR-450b-5p expression was negatively correlated with HOXD10 expression. EMS is characterized by ectopic growth of endometrial glands and stroma outside the uterine cavity, which gives rise to its clinical symptoms. It is a common disease among women of childbearing age, with an incidence of 10–15%. Approximately 50% of cases are associated with infertility, which has a significant impact on women's health and quality of life. The pathogenesis of EMS is multifaceted and remains largely unresolved. 15 There is no effective treatment targeting the underlying cause, and both surgical and pharmacological treatments are associated with a high recurrence rate. Recent studies have found that susceptibility to this disease arises from the interaction between multiple gene loci and the environment, in which changes in the pelvic immune microenvironment, obstruction of the apoptosis pathway, and abnormal expression of aromatase may play important roles. 2 MicroRNAs (miRNAs) are a class of endogenous short non-coding RNA molecules of 18–22 nucleotides that appear to be involved in the pathogenesis of EMS. Despite not encoding proteins themselves, miRNAs can inhibit or switch off the expression of target genes at the post-transcriptional level by binding to the 3′-UTR of their target mRNAs. MicroRNAs are important regulatory molecules across living organisms and have been found to be dysregulated in various types of malignancy, diabetes, and other metabolic diseases.
16 Functional experiments have confirmed that miRNAs are involved in processes underlying the pathogenesis of endometriosis, such as tissue hypoxia, inflammation, apoptosis, adhesion, and angiogenesis. 17 , 18 , 19 Homeobox (HOX) genes play an important role in the growth of tumors. 20 In a study of lung adenocarcinoma, Ma et al. identified miR-10b as a promoter of cancer. 21 They also found that miR-10b targeted HOXD10 and that HOXD10 was inhibited when bound by miR-10b, resulting in increased aggressiveness of tumor cells. However, when miR-10b antagonists were added, expression of HOXD10 increased and progression of lung adenocarcinoma to metastasis was inhibited. Myers et al. also found that sustained expression of HOXD10 can inhibit formation of new blood vessels. 22 Conversely, when expression of HOXD10 is knocked down, the inhibitory effect on angiogenesis is weakened, increasing the aggressiveness of tumors. Studies have found a close relationship between many diseases, including cancer and cardiovascular disease, and regulation of gene expression. 23 , 24 , 25 In-depth study of the molecular mechanisms via which gene expression is regulated could provide novel ideas and methods for early diagnosis, prevention, and treatment of disease. GA-binding protein A (GABPA) is a transcription factor with a key role in development and differentiation of cells and tumorigenesis. 26 Guo et al. demonstrated that GABPA can activate telomerase/TERT in bladder cancer and drive luminal differentiation of urinary tract epithelial cells by directly activating transcription of FOXA1 and GATA3, thereby inhibiting the aggressiveness of tumor cells and playing a tumor-suppressive role. 27 In a series of in vivo and in vitro experiments, Zhang et al. found that GABPA inhibited the invasion and metastasis of hepatocellular carcinoma cells by partially regulating E-cadherin, thereby acting as a suppressor gene in terms of metastasis and the prognosis of the disease. 28 However, the reasons for the different roles of GABPA in progression of different tumors and its specific mechanisms of action remain unclear. One possible explanation is that, as a transcription factor, GABPA may be involved in regulation of various downstream genes; therefore, the way GABPA acts may depend on the type of tumor and its specific cellular microenvironment and signaling pathways. 29 The involvement of miRNAs in human cancer was first reported by Anelli et al. in chronic lymphocytic leukemia. 30 They confirmed low expression of miR-15a and miR-16-1, thereby paving the way for investigation of miRNAs in other human tumors. Further studies confirmed that miRNAs are associated with many processes, including proliferation of tumors, apoptosis, and cell differentiation, and play an important role in the development of tumors. Jin et al. found that the transcription factor TWIST can induce the transcription of miR-10b in breast cancer and that, at the same time, miR-10b can affect the expression of a series of metastasis-related genes by regulating the translation of the target gene HOXD10, thereby promoting the invasion and metastasis of breast cancer. 31 In another example, miR-378 improves the survival of malignant glioma cells by reducing the activity of caspase-3, thereby promoting angiogenesis and tumor growth. 32 Epithelial-mesenchymal transition (EMT) enables tumor cells to adapt to changes in their microenvironment, allowing them to escape the primary tumor and enter the vascular system for metastasis and spread.
miRNAs are also involved in EMT of tumor cells, whereby polarized epithelial cells are transformed in vivo into mesenchymal-like cells with enhanced motility. 33 In previous studies, it was found that increased expression of miR-450b-5p is associated with the enhanced inflammatory response observed in liver ischemia/reperfusion injury (IRI), which includes the upregulation of pro-inflammatory cytokines such as tumor necrosis factor-alpha (TNF-α), interleukin-1 beta (IL-1β), and interleukin-6 (IL-6). miR-450b-5p exerts its effects by targeting alpha B-crystallin (CRYAB), and the suppression of CRYAB leads to the activation of the NF-κB signaling pathway. 34 This suggests that miR-450b-5p is involved in the body's inflammatory response. In our previous work, we found that miR-450b-5p was abnormally expressed in ectopic endometrial tissue. Meanwhile, bioinformatics analysis suggested that miR-450b-5p could act on GABPA, thereby altering downstream signaling activity and affecting the biological activity of endometrial cells, which provided the rationale for the present study. In this study, expression of miR-450b-5p in ectopic endometrial tissue samples was detected by RT-qPCR. Subsequently, the effect of miR-450b-5p on the biological activity of endometrial cells was examined by in vivo and in vitro experiments. Using a combination of bioinformatics analysis, luciferase reporter assays, western blot assays, and additional experiments, we confirmed the expression of its target genes, clarified the role of miR-450b-5p in development of EMS, and further explored the molecular mechanism of miR-450b-5p. Our results have laid a theoretical foundation for understanding the molecular mechanism of EMS and searching for further molecular therapeutic targets. GABPA is a transcription factor that plays a key role in development and differentiation of cells and tumorigenesis and is involved in a large number of physiological and pathological processes. Until now, there has been no research on GABPA in EMS. One study showed that GABPA can control expression of KIS, thereby inhibiting migration of vascular smooth muscle cells, which in turn affects the phosphorylation and activity of p27. 35 In a series of in vivo and in vitro experiments by Zhang et al., 28 GABPA was found to partially regulate E-cadherin, thereby inhibiting the invasion and metastasis of liver cancer cells. Our study found low expression of GABPA in human ectopic endometrial tissues. Bioinformatics analysis and software prediction suggested that miR-450b-5p could target GABPA, which was confirmed in hEM15A cells. A luciferase reporter assay confirmed that miR-450b-5p can bind to the 3′-UTR of GABPA and inhibit its expression. Western blotting and RT-qPCR confirmed that miR-450b-5p could act on the 3′-UTR of GABPA and interfere with its expression at both the mRNA and protein levels. Previous studies have shown that dysregulated expression of HOX genes is involved in development of lung, ovarian, breast, colon, bladder, and prostate cancers. 36 HOXD10 is a member of the HOX gene family. In renal clear cell carcinoma, HOXD10 acts as a tumor suppressor to inhibit invasion and migration of cancer cells by modulating E-cadherin and EMT.
24 In a study by Pan et al., HOXD10 was found to activate expression of miR-7 and IGFBP3 and to lead to a biologically inhibitory phenotype, suggesting a potential therapeutic role in colorectal cancer and demonstrating that HOXD10 is frequently methylated and silenced, and that this silencing contributes to development of this type of cancer. 25 In the present study, we explored the expression of HOXD10 in the hEM15A cell line and in vivo, changed the expression level of HOXD10 using plasmid transfection technology, and observed its effects on the proliferation, apoptosis, invasion, and migration of hEM15A cells. HOXD10 was predicted as a possible target of GABPA by software analysis and confirmed by dual luciferase reporter assay. To explore the mechanism via which HOXD10 influences the invasiveness of EMS and to provide a basis for selection of new therapeutic targets, we collected data on clinical cases to evaluate the relationship between HOXD10, GABPA, and miR-450b-5p. We used RT-qPCR and western blotting to detect the expression of HOXD10 mRNA and protein and its clinical significance in ectopic endometrial tissue samples from 20 patients with EMS and in normal endometrial samples. In summary, our results suggest increased expression of miR-450b-5p and decreased expression of GABPA and HOXD10 in ectopic endometrial tissues. This research has confirmed that GABPA is the direct target gene of miR-450b-5p and that miR-450b-5p binds directly to the 3′-UTR of GABPA mRNA. Overexpression of miR-450b-5p or knockdown of GABPA can promote proliferation, migration, and invasion of cells and inhibit apoptosis. Therefore, the miR-450b-5p/GABPA/HOXD10 signaling pathway may be a potential target in the treatment of EMS. Our study found that miR-450b-5p is significantly overexpressed in EMS lesion tissues, affecting the proliferation, migration, invasion, and apoptosis of hEM15A cells through modulation of the GABPA/HOXD10 axis, which plays a role in the pathogenesis and progression of EMS. However, the research was constrained by a small sample size, comprising only 12 normal endometrial tissue samples and 12 ectopic endometrial lesion samples. Therefore, increasing the sample size is essential to validate these findings. While combining datasets from different studies can enhance classification performance, it also introduces challenges such as batch effects and variations in technical and biological aspects. Additionally, integrating multiple microarray datasets from various platforms may lead to missing values due to differing gene coverage. Future research should aim to address these limitations by expanding the sample size to further investigate the pathogenesis of EMS. Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Yuan Yang. The study did not generate new unique materials or reagents. • This paper does not report nucleotide sequencing-associated datasets, proteomics, peptidomics, metabolomics, structures of biological macromolecules, or small-molecule crystallography. • Any additional information about the data reported in this paper is available from the lead contact upon request. This study was supported by the Regional Science Fund of the National Natural Science Foundation of China, the Lanzhou University 2023 Education and Teaching Reform Research Project, and the Lanzhou University Medical Graduate Training-Obstetrics and Gynecology Professional Degree Master Graduate Management Demonstration Project. Y.H. and Y.Y.
were mainly responsible for the conception and design of the study, acquisition of data, and drafting of the manuscript. Y.H. and Y.D.W. acquired, analyzed, and interpreted the data and drafted the manuscript. R.Y.L. and Y.M.L. contributed to the analysis and interpretation of the data and drafting of the manuscript. Y.Y. supervised the entire project, contributed to the conception and design of the study, drafted and critically revised the manuscript, approved the final version submitted for publication, and is considered the corresponding author. All the authors have read and agreed to the published version of the manuscript. The authors declare that they have no competing interests. REAGENT or RESOURCE | SOURCE | IDENTIFIER
10% fetal bovine serum | Gibco, USA | A5670701
TRIzol reagent | Invitrogen, USA | 15596026CN
PrimeScript RT Master Mix | Takara, Japan | RR036A
Lipofectamine 3000 | Invitrogen, USA | L3000015
Matrigel | Corning, USA | 354230
CCK-8 | Dojindo Laboratories, Japan | CK04
Annexin V-FITC kit | Life Technologies, USA | GMS10132
1% penicillin/streptomycin | Solarbio, China | P1400
Trypsin-EDTA | Solarbio, China | T1300
Anti-beta Actin | Abcam, UK | RRID: ab8226
Anti-HOXD10 | Abcam, UK | RRID: ab138508
Anti-GABPA | Abcam, UK | RRID: ab224325
Anti-cleaved-caspase-3 | Abcam, UK | RRID: ab214430
Anti-caspase-3 | Abcam, UK | RRID: ab184787
Anti-cleaved-caspase-9 | Cell Signaling Technology, USA | RRID: 9507S
Anti-caspase-9 | Abcam, UK | RRID: ab202068
Anti-Bax | Abcam, UK | RRID: ab3191
Anti-Bcl-2 | Abcam, UK | RRID: ab182858
PBS | Solarbio, China | P1020
Tris-Glycine Running Buffer | Solarbio, China | T1070
WB Transfer Buffer | Solarbio, China | D1060
Specific pathogen-free (SPF) female BALB/c mice aged 6–8 weeks were obtained from the Laboratory Animal Center of Lanzhou University Medical School and acclimatized for 1 week. All animal experiments were approved by the Research Ethics Committee of the First Hospital of Lanzhou University. Estradiol (100 μg/kg body weight) was injected subcutaneously into recipient and donor mice once a week until the end of the experiment. On the day of modeling, both uterine horns of the donor mouse were prepared, and PBS-containing medium was used to remove fat, mesentery, and other attached tissues. The uterus of the donor mouse was cut open and minced into small pieces of approximately 1–2 mm³. There were 10 recipient mice in each group, and the ratio of donor mice to recipient mice was 1:2. The uterine fragments were divided into several portions of the same weight and injected into the abdominal cavity of the recipient mice. The mice were killed 2 weeks later, and endometrial tissues of mice in the normal control group and the abdominal ectopic endometrial tissues of mice in the EMS group were collected for further experiments. The endometriosis mouse model was constructed again using the same method as above, and the mice were randomly divided into two groups after transplantation: the first was a negative control group that received intraperitoneal injection of lentivirus for 4 consecutive weeks (the LV control group), and the second was a group that received intraperitoneal injection of lentivirus overexpressing HOXD10 for 4 consecutive weeks (the LV-oe-HOXD10 group). The weight, quantity, and size of the lesions were measured. The lesion tissues were then stored at −80°C for further experiments. During laparoscopic surgery, ectopic endometrial tissue samples were collected from 12 patients with endometriosis.
Control samples were taken from normal endometrial tissue of 12 women with tubal-factor infertility. All procedures were approved by the Human Ethics Committee of Lanzhou University First Hospital. The hEM15A endometrial cell line was purchased from the Typical Culture Preservation Committee Cell Bank of the Chinese Academy of Sciences. The hEM15A cells were maintained in complete culture medium supplemented with 10% fetal bovine serum (Invitrogen, San Diego, CA, USA) and 1% penicillin/streptomycin (100 U/mL). The cells were cultured at 37°C in a humidified incubator with 5% CO2. Total RNA was extracted from hEM15A cells using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) and immediately reverse-transcribed into complementary DNA using PrimeScript RT Master Mix (Takara, Shiga, Japan) according to the manufacturer's instructions. RT-qPCR was then performed on a real-time PCR system using SYBR Premix Ex Taq II. Using GAPDH as a reference, the mRNA expression level of the target gene was calculated using the 2^(−ΔΔCt) method (see the computational sketch following the statistical analysis description). Total protein was extracted from hEM15A cells using radioimmunoprecipitation assay buffer and quantified using a bicinchoninic acid assay kit (Solarbio, Beijing, China). Protein lysate (50 μg) was loaded onto a sodium dodecyl sulfate-polyacrylamide gel for electrophoresis, after which the separated proteins were transferred to a nitrocellulose membrane. After blocking with 5% skim milk powder at ambient temperature for 1 h, the membrane was incubated with the primary antibody at 4°C for 24 h. After washing with Tween 20-containing Tris-buffered saline, the membrane was incubated with the secondary antibody at ambient temperature for 1 h. After a final wash, the membrane was exposed, and the blots were visualized using a gel imaging system (Fusion FX5, Vilber, Collégien, France). All antibodies were purchased from Abcam (Cambridge, UK). The overexpression plasmids and mimic (OE-HOXD10, 5′-TGAGGTCTCCGTGTCCAGTC-3′; OE-GABPA, 5′-AGCTTAGTGTACAGGTAATTT-3′; mimic-miR-450b-5p, 5′-GAGTCGTGCATTAAGATATTA-3′) were purchased from Shanghai Gima Pharmaceutical Technology Co. Ltd (Shanghai, China). After digestion, the cells were seeded into a 6-well plate, and when they reached 50%–60% confluence, the original medium was discarded for transfection. After 2–3 rinses in sterile phosphate-buffered saline (PBS), 1.5 mL of serum-free medium was added to each well. Next, 125 μL of serum-free medium and 5 μL of Lipofectamine 3000 reagent were added to a new sterile 1.5-mL EP tube and gently mixed, after which the tube was left to stand for 5 min at room temperature. We then added 125 μL of serum-free medium and 100 pmol of the sequence to be transfected to another sterile 1.5-mL EP tube. The two solutions were gently mixed and allowed to stand at room temperature for 10–15 min. The mixture was added to the 6-well plate, gently shaken and mixed horizontally, and cultured in an incubator at 37°C. After 8 h, the medium was changed to medium containing 10% fetal bovine serum, and transfection efficiency was assessed after 24–48 h. HOXD10-overexpressing lentivirus (LV-oe-HOXD10) and negative control lentivirus were sourced from Shanghai Gemma Gene Medical Technology Co., Ltd. (Shanghai, China). 293T cells in good growth condition were cultured in a 10-cm dish. When the cell density reached approximately 30%, packaging was started; the packaging plasmids were combined at a mass ratio of X:Y:Z = 4:3:1.
After preparation, the mixtures were gently pipetted to mix, solution B was transferred into solution A, gently mixed again by pipetting, and left to stand for 15 min. The mixture was then added to 293T cells that had been serum-starved in low-serum medium for 1 h; the medium was subsequently replaced with normal medium, and the cells were returned to the incubator. After 48 h, the viral supernatant was collected and concentrated, and the concentrated virus was used to infect hEM15A cells. Infection efficiency was assessed by observing green fluorescent protein (GFP) fluorescence under a fluorescence microscope and further verified by qPCR; in this way, the knockdown/overexpression cell lines were constructed for subsequent experiments. The promoter region of HOXD10 was inserted into the 5′ end of the luciferase reporter gene and denoted as HOXD10-pro-wt. Next, 1 × 10^5 293T cells were inoculated on 24-well plates, and 100 ng of plasmid were transfected with Lipofectamine 3000 reagent after culture for 24 h. Luciferase activity was measured 48 h after transfection using the dual luciferase reporter assay system (Promega, Madison, WI, USA) according to the manufacturer's instructions. Cell proliferation capacity was evaluated using a CCK-8 kit (Dojindo Laboratories, Kumamoto, Japan) under different conditions at 24, 48, and 72 h according to the manufacturer's instructions. An Annexin V-FITC kit (Life Technologies, Waltham, MA, USA) was used for detection of apoptosis by flow cytometry according to the manufacturer's instructions. The cell samples were analyzed using a FACScan system (BD Biosciences, Wokingham, UK). Annexin V(+)/PI(−) represents cells in early apoptosis and Annexin V(+)/PI(+) represents cells in late apoptosis. After being starved of serum for 24 h, the cells were digested with trypsin and then centrifuged, after which the culture medium was discarded. Next, the cell density was adjusted to 1 × 10^5/mL, 600 μL of complete medium containing 15% fetal bovine serum were added to each lower chamber of the 24-well plate, and 200 μL of cell suspension were added to each upper chamber. The 24-well plates were cultured in a cell incubator for 24 h. The next day, the culture medium was discarded, and the cells were fixed with 4% paraformaldehyde. The fixative was then removed and the chamber air-dried for 10 min, after which the crystal violet-stained chamber was washed with PBS and air-dried. The cells were then counted under a microscope and photographed, and the images were stored. Pipette tips, centrifuge tubes, and the 24-well transwell plate were pre-chilled, and Matrigel (Corning, Corning, NY, USA) was handled with pre-chilled pipette tips. Matrigel was diluted 1:8 with serum-free medium, spread evenly over the bottom of the upper transwell chamber, and placed in an incubator for 3 h. After incubation, the excess liquid in the upper chamber was removed by suction. Next, 100 μL of serum-free medium were added to each well, after which the plate was placed in the incubator for 30 min to hydrate the basement membrane. The fluid in the upper chamber was removed by suction, the cells were seeded, and the remaining steps were as described above for cell migration. The sections were successively immersed in xylene I for 20 min, xylene II for 20 min, anhydrous ethanol I for 5 min, anhydrous ethanol II for 5 min, and 75% alcohol for 5 min, and then washed with water.
The sections were then placed in hematoxylin staining solution for 5 min, washed, differentiated, washed, blued, and rinsed in running water. The sections were dehydrated in 85% and 95% graded alcohol for 5 min each and then stained with eosin for 5 min. The sections were successively placed into anhydrous ethanol I for 5 min, anhydrous ethanol II for 5 min, anhydrous ethanol III for 5 min, xylene I for 5 min, and xylene II for 5 min, followed by examination under a light microscope. The tissues were fixed with 4% paraformaldehyde solution, embedded in paraffin, cut into 4-μm sections, and immunostained. The slides were deparaffinized in xylene and rehydrated through ethanol, incubated with 3% H2O2 in the dark for 25 min, and then blocked with 3% bovine serum albumin at 25°C for 30 min. The slides were incubated with primary antibodies against HOXD10 (1:100) and Ki67 (1:100) at 37°C for 1 h. The slides were then washed and incubated with an enzyme-labeled secondary antibody at room temperature for 60 min, after which the nuclei were counterstained with hematoxylin for 5 min. Finally, the slides were examined under the light microscope. A circular coverslip was placed into each well of a 24-well plate, and an appropriate number of cells was seeded onto the coverslips and cultured in the incubator at 37°C. The plate was removed from the incubator, the medium was discarded, and the cells were washed in PBS. After fixation with 4% paraformaldehyde, the cells were washed again with PBS, 0.5% Triton X-100 was added to each well, and the plate was left at room temperature for 10 min. The Triton X-100 was then discarded and the plate was washed with PBS. Next, 3% bovine serum albumin blocking solution was added to each well. The plate was then left to stand at room temperature for 1 h, after which the blocking solution was discarded. The diluted primary antibody was added to the coverslips, which were then incubated overnight at 4°C in the dark. On the following day, after a wash in PBS, the secondary antibody diluted in blocking solution was added and incubated in the dark at room temperature. After incubation for 1 h, the plate was washed in PBS. DAPI working solution was added to each well and the plate was kept in the dark for 10 min, after which it was washed in PBS. The coverslip was carefully removed, excess liquid was blotted with absorbent paper, 5 μL of anti-fluorescence quenching mounting medium were added to the slide, and photographs were obtained using a confocal laser microscope. The experimental data in this study are summarized as the mean ± standard deviation and were plotted and analyzed using GraphPad Prism 7.0 software (GraphPad Software Inc., San Diego, CA, USA). Student's t test was used to compare data between groups. A p-value < 0.05 was considered statistically significant. | Review | biomedical | en | 0.999996
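The relative-quantification and group-comparison steps referenced in the methods above (the 2^(−ΔΔCt) calculation with GAPDH as the reference gene, and the two-group Student's t test) can be illustrated with a short computational sketch. The snippet below is only a minimal illustration, not the authors' analysis pipeline: the Ct values, replicate numbers, and group labels are hypothetical, and the original study used GraphPad Prism rather than Python.

```python
# Minimal sketch of 2^(-ddCt) relative quantification and a two-group comparison.
# All Ct values and sample labels below are hypothetical illustrations.
import numpy as np
from scipy import stats

# Hypothetical raw Ct values (target gene HOXD10, reference gene GAPDH)
control_ct = {"HOXD10": np.array([26.1, 25.8, 26.4]), "GAPDH": np.array([18.0, 17.9, 18.2])}
treated_ct = {"HOXD10": np.array([24.2, 24.5, 24.0]), "GAPDH": np.array([18.1, 18.0, 17.8])}

def relative_expression(sample_ct, calibrator_ct, target="HOXD10", ref="GAPDH"):
    """2^(-ddCt): normalize the target to the reference gene, then to the calibrator group."""
    d_ct_sample = sample_ct[target] - sample_ct[ref]                 # dCt per replicate
    d_ct_calibrator = (calibrator_ct[target] - calibrator_ct[ref]).mean()
    dd_ct = d_ct_sample - d_ct_calibrator                            # ddCt
    return 2.0 ** (-dd_ct)                                           # fold change vs. calibrator

fold_control = relative_expression(control_ct, control_ct)           # centered around 1
fold_treated = relative_expression(treated_ct, control_ct)

# Two-group comparison (Student's t test); p < 0.05 is taken as significant,
# mirroring the threshold stated in the statistical analysis description.
t_stat, p_value = stats.ttest_ind(fold_treated, fold_control)
print(f"mean fold change = {fold_treated.mean():.2f} ± {fold_treated.std(ddof=1):.2f} (SD)")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```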
PMC11696643 | The non-recombining region of the Y chromosome (NRY), which is uniquely inherited along male lines, offers significant potential for applications in forensic science and molecular anthropology. Analyses of the genetic structure and genomic diversity of ethno-linguistically different human populations, informed by databases such as the 1000 Genomes Project, 10K Chinese People Genomic Diversity Project, and the Human Genome Diversity Project, revealed that populations with diverse ethnolinguistic backgrounds possess distinct genetic architectures influencing human traits and diseases. 1 , 2 , 3 Initiatives such as the All of Us Research Program enhance the understanding of human genetic diversity by focusing on previously underrepresented populations, thus reducing European bias in genetic research. 4 , 5 , 6 Despite abundant genomic resources for mitochondrial DNA and autosomes, comprehensive resources for the Y chromosome remain scarce. 4 , 7 , 8 , 9 , 10 The complexity and high repetitiveness of the Y chromosome sequence have historically hindered detailed studies of its structural variations and the biological implications of its variants. However, recent advancements in capture sequencing and long-read sequencing technologies have facilitated more precise Y chromosome assembly, which is critical for various applications. 11 , 12 The completion of a telomere-to-telomere (T2T) assembly of the Y chromosome, coupled with population genetic analyses of 43 diverse human Y chromosomes, underscores the complexity and variability of their sequencing characteristics and population-specific variations. 11 , 12 These developments significantly advanced the ability to engage high-confidence NRY regions and measurable Y chromosome segments in forensic investigations, population genetic studies, and molecular anthropology, promising substantial impacts on multiple disciplines. China’s vast genetic, cultural, and ethnic diversity reflects a history shaped by complex movements and admixture events involving ancient Yellow River millet farmers, Yangtze River rice cultivators, diverse Paleolithic hunter-gatherers, and Western Eurasian pastoralists. 13 This intricate genetic history underpins the spatiotemporal diversity observed in ancient and modern East Asian populations. 8 , 13 The origins and dispersal of the Sino-Tibetan (ST) language family, which is predominant in eastern Eurasia and comprises the Tibeto-Burman (TB) and Sinitic languages, remain debated. Hypotheses suggest that the ST languages originated in North China, the Tibetan-Yi Corridor (TYC) in western Sichuan, and northeastern India on the southern Qinghai-Xizang Plateau. 14 Analyses of ancient DNA from the Yellow River Basin revealed connections between Neolithic millet farmers and early highland East Asians, including populations in the Qinghai-Xizang Plateau and Nepal. 15 , 16 , 17 Mitochondrial DNA and Y chromosomal data have highlighted the Paleolithic origins of the region’s initial settlers and their links to broader East Asian maternal and paternal lineages. 15 , 18 , 19 , 20 Population genetic studies suggest that the genetic composition of modern Tibetans was shaped by both Paleolithic colonization and Neolithic expansion events. 21 , 22 This is further corroborated by recent ancient DNA studies identifying a Holocene link between millet farmers and ancient Qinghai-Xizang Plateau populations, as well as a deep genetic connection between Tibetans and early Asians. 
16 Due to their varying natural environments and interactions with culturally diverse groups, geographically distinct TB-speaking populations show differentiated population structures. 16 , 23 While core Tibetan populations on the Plateau display unique genetic profiles, those in the surrounding lowlands have been influenced by gene flows from neighboring Indians, Central Asians, and other East Asian populations. 16 , 23 , 24 This complex genetic legacy underscores the need for further exploration into paternal genetic diversity and population evolutionary processes among geographically distinct TB groups, offering profound insights into the demographic processes that have shaped regional human history. Recent Chinese genomic cohorts, such as STROMICS, the China Kadoorie Biobank, ChinaMAP, the NyuWa Genome Resource, and the Born in Guangzhou Cohort Study, have documented the genomic diversity of the Chinese populations. 3 , 13 , 25 , 26 , 27 , 28 , 29 These studies have significantly contributed to filling the gaps in the genomic data of Chinese populations and advancing human health equity. 5 Despite these advancements, the genomic resources of the Y chromosome and their potential to elucidate the paternal genetic history of this group have not yet been explored. To address the missing diversity of Y chromosomes in China, we launched the YanHuang cohort, aimed at sequencing over 100K ethnolinguistically diverse Chinese males to delineate the complete genetic landscape of Y chromosome variations and investigate the paternal origins of ancient and modern Chinese populations. Our pilot work reported the paternal genetic background of diverse admixture models within the Han Chinese and ethnic minority groups. 13 , 30 Wang et al. constructed a phylogenetic tree from modern and ancient East Asian populations, revealing multiple founding lineages from ancient farmers, herders, and hunter-gatherers that shaped the paternal gene pool of contemporary East Asians. 13 Another study highlighted the diverse contributions to East Asian paternal lineages and introduced the "Weakly-Differentiated Multi-source Admixture model" to decode the complex demographic history of Han Chinese populations using extensive genomic data. 30 However, paternal genomic diversity, settlements on the Qinghai-Xizang Plateau, and potential geographical corridors facilitating population exchange between highland and lowland areas remain uncharacterized in the current era of sequencing. Y chromosome markers are pivotal in reconstructing paternal demographic history, enhancing forensic paternal biogeographic inferences, and refining pedigree searches. 7 , 9 , 23 Specifically, Y chromosome short tandem repeats (Y-STRs) are frequently utilized in genetic research due to their effectiveness. 31 , 32 , 33 , 34 Analyzing numerous Y-STRs enhances haplotype identification resolution within populations, improving the discriminative capacity of genetic analysis. However, the high mutation rates of Y-STRs, ranging from 1.0 × 10 −4 to 1.0 × 10 −3 per generation, introduce challenges by potentially altering haplotypes within the same lineage, complicating forensic familial searches. 35 Conversely, Y chromosome single nucleotide polymorphisms (Y-SNPs), which have lower mutation rates of approximately 1.0 × 10 −8 per generation, provide a stable method for preserving paternal lineage information over extensive periods. 
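To make the contrast between the two marker classes concrete, the back-of-the-envelope calculation below uses the per-generation mutation rates quoted above to estimate how often a multi-locus Y-STR haplotype is expected to change along a patriline. The panel sizes and number of generations are illustrative assumptions, not values taken from this study.

```python
# Rough expectation of lineage-marker changes along a patriline,
# using the per-locus, per-generation mutation rates quoted in the text.
# Panel sizes and generation counts are illustrative assumptions.

Y_STR_RATE_LOW, Y_STR_RATE_HIGH = 1.0e-4, 1.0e-3   # per STR locus per generation
Y_SNP_RATE = 1.0e-8                                  # per SNP site per generation

def expected_mutations(per_locus_rate, n_loci, generations):
    """Expected number of mutational changes across a marker panel over a pedigree depth."""
    return per_locus_rate * n_loci * generations

n_str_loci = 37        # e.g., a 37-locus Y-STR panel (illustrative)
n_generations = 10     # roughly 250-300 years of patrilineal transmission (illustrative)

low = expected_mutations(Y_STR_RATE_LOW, n_str_loci, n_generations)
high = expected_mutations(Y_STR_RATE_HIGH, n_str_loci, n_generations)
print(f"Y-STR panel: {low:.2f}-{high:.2f} expected changes over {n_generations} generations")
# => roughly 0.04-0.37 changes, so distant paternal relatives may no longer share an exact haplotype

snp = expected_mutations(Y_SNP_RATE, 200, n_generations)   # ~200 genotyped Y-SNPs (illustrative)
print(f"Y-SNP panel: {snp:.6f} expected changes over {n_generations} generations")
# => effectively zero, so haplogroup assignments remain stable across many generations
```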
36 Here, we reported large-scale paternal genomic data aimed to refine paternal lineage investigations by distinguishing lineages via the shared haplotypes or haplogroups, thereby providing a clearer picture of the genetic structure and forensic characteristics of geographically distinct TB-speaking populations. This comprehensive approach enhances the understanding of genetic diversity and supports forensic applications by providing more accurate lineage information. We present an integrated YanHuang Y chromosome genomic resource encompassing data from 9,901 ethnolinguistically diverse individuals across 38 ethnic groups and 34 provinces . This objective was to identify the founding lineages of TB people and reconstruct their paternal demographic history. The dataset comprises three distinct types of data. First, whole Y chromosome sequences from 994 modern and 303 ancient individuals 16 , 18 , 37 , 38 , 39 , 40 , 41 were used to reconstruct the phylogenetic relationships between modern and ancient Chinese populations and estimate the chronology of divergence, expansion, and migration events in modern and ancient TB populations. Second, we analyzed 4,298 genetic profiles featuring population-specific SNP and STR variations from Chinese populations to explore the genetic relationships and landscape of TB people and other reference Chinese populations. 42 , 43 , 44 Finally, we examined genetic data from 4,306 individuals with high-density Y-SNPs from 38 ethnic groups in the Chinese Paternal Genomic Diversity Project (CPGDP) to elucidate the origins and dispersal patterns of two TB-related founding lineages. Figure 1 Geographical position, phylogeny, and phylogenetic relationships between modern and ancient populations (A) Geographical distribution of the newly whole-genome sequenced and genotyped Tibeto-Burman (TB)-speaking populations and reference groups. (B) Detailed map of the Chinese regions encompassing the newly collected TB groups. (C) Time-calibrated TB-dominant D and O lineage phylogeny showing the main founding lineage highlighted in this work. (D) Maximum likelihood-based phylogenetic relationships showing a clustering pattern between modern and spatiotemporally different ancient populations. Y chromosome sequences provide insights into the common patrilineal ancestors of founding lineages. We sequenced genomes from 72 TB-speaking representative samples and integrated them with modern and ancient Eurasian data from the pilot work of the YanHuang cohort, 13 creating a comprehensive dataset of 1,297 Y chromosomes. The four B2b1a1b African lineages served as the basal branch in our time-stamped phylogenetic analysis. This analysis revealed a coalescence between the D and O founding lineages 65,339 to 74,810 years ago (ya). Divergence and admixture events, indicated by BEAST analyses, suggested prolonged population bottlenecks followed by recent expansions in these lineages. Specifically, the D1a1a and D1a1b lineages diverged between 43,354 and 51,057 ya after a 19,270-year bottleneck. The D1a1a lineage then split into the D1a1a1b and D1a1a1a1b sublineages after an 11,340-year period of stability, after which it expanded during the Neolithic transition . We identified two Neolithic TB-related lineages of the O2a2b1a1a1a4a-CTS4658 and D1a1a1a1b-Z31591 expanded in TB people. The Tibetan-dominant D1a1a1a1b lineage expanded between 4,692 and 6,663 ya, likely coinciding with the Proto-Tibetan adoption of millet or barley farming and adaptation to high-altitude environments. 
Similarly, a lineage associated with Sherpa and other Tibetan populations expanded between 5,403 and 7,040 ya, as observed in the Pumi and other groups. Further exploration of phylogenetic patterns among modern and ancient populations led to the construction of a unified paternal genealogy . The early population structure, associated with multiple typical East Asian lineages, revealed that at least two distinct ancestral founding lineages contributed to the genetic pool of the TB-speaking populations. Phylogenetic analyses among modern and ancient East Asians confirmed genetic continuity during the Neolithic period across major genetically differentiated regions: the northern Yellow River Basin, southern Yangtze River Basin, Amur River Basin, and Qinghai-Xizang Plateau. The D1a2 lineages found in the Jomon people represent an early divergence from the Qinghai-Xizang Plateau-related D1a1 lineages. Two primary sublineages of D1a1, D1a1a, and D1a1b diverged during the Upper Paleolithic period. The D1a1a lineage was identified in both modern Tibetan and Yi populations, as well as in ancient individuals from the Qinghai-Xizang Plateau , including those from archeological sites such as Samdzong, Gebusailu, Qulongsazha, Sangdalongguo, and Gebusailu. The D1a1b lineage was observed in the Tibetan and Yi populations, as well as in the Mosuo and Pumi populations and in Iron Age individuals from Nyingchi Kangyu (D1a1b1a). The O2a2b1a1a1a4a lineages were identified in the Yi, Lahu, Pumi, and previously documented Zhuang populations, which clustered with 39 individuals from the Bronze Age to historical periods in highland areas . Meanwhile, the N1b2 lineage has been observed in modern Yi and Tujia populations, as well as in several ancient individuals from the Qinghai-Xizang Plateau (Sangdalongguo, Laga, Gebusailu, Qulongsazha, and Zongri). Additionally, four ancient individuals from high-altitude regions belong to the O2a2b1a2a1a lineage. Overall, TB-related O-CTS4658 and D-Z31591 lineages were identified in both modern and ancient TB individuals, which clustered with a diverse range of ancient highland populations. Figure 2 Geographical position and pathPhynder placement of ancient East Asian samples belonging to two TB-founding lineages into this fully resolved Y chromosome phylogeny (A) Phylogenetic and clustering patterns of D lineages among modern TB people and ancient eastern Eurasian individuals. (B) O lineages carried by modern TB people, Tai-Kadai people, and ancient highland Qinghai-Xizang Plateau individuals. The geographical positions of key ancient individuals were labeled in the middle maps. Ancient individuals were denoted via the green background. The base map was officially approved with the number GS1674 ( http://bzdt.ch.mnr.gov.cn/ ). To comprehensively explore the genetic patterns of the TB-speaking population and their relationships with other reference groups, we genotyped paternal diversity data of large-scale populations via more cost-effective genotyping methods. We reported 4,298 Y chromosome haplotypes, including 37 Y-STRs and 215 Y-SNPs, with 519 newly genotyped TB individuals. These data were submitted to the YHRD database, revealing significant genetic diversity across ethnolinguistically distinct populations . 
Among the TB-speaking individuals, 495 unique Y-STR haplotypes were identified, distributed as follows: 85 Tibetans in Muli (TML), 93 Tibetans in Chengdu (TCD), 104 Yis in Liangshan (YLS), 137 Sherpas in Dingjie (SDJ), and 58 Tibetans in Qinghai (TQH), indicating considerable genetic heterogeneity. The shared haplotype between the two populations highlighted their genetic interconnectedness. The haplotype diversity (HD) ranged from 0.9978 to 1.0000, demonstrating the robust discrimination power of our genetic profiling, especially when using the AGCU Y37 kit ( Table S5 ). This kit outperformed Yfiler Plus and Yfiler in delineating genetic diversity due to its overall higher discrimination capacity (DC) and lower haplotype match probability (HMP) in the studied population ( Table S5 ), supported by the analysis of 234 alleles across 31 single-copy loci. ( Table S6 ). We identified a unique 20.3 microvariant allele at DYS627, traced to a 'G' deletion at the 19th repeat unit via Sanger sequencing . The analysis of three multicopy loci—DYS527, DYS385, and DYF387S1—revealed 133 allele combinations, indicating significant genetic diversity, with rapidly mutating Y-STRs (RM Y-STRs) showing greater diversity than slower-mutating loci such as DYS391, DYS437, and DYS645 ( Tables S6 and S7 ). Comprehensive Y-SNP-STR analysis associated all samples with microvariant alleles at DYS518 and the Q1a1-F746 haplogroup, enhancing our understanding of genetic structures and refining paternal lineage analysis for forensic and anthropological applications ( Table S8 ). Our findings revealed an increase in shared haplotypes and a decrease in discrimination capacity as the number of genotyping markers decreased. This underscores the need for a tailored Y-STR panel for Chinese populations to reduce the risk of false matches in forensic applications. Genetic diversity (GD) assessments indicated that multicopy loci exhibited the highest diversity, with RM Y-STRs showing significant diversity. However, due to their high mutation rates, RM Y-STRs are less suited for paternal kinship identification, whereas conventional Y-STRs, which mutate more slowly, are preferred for reliable lineage tracing. Our integrated analysis linked all samples with microvariant alleles at DYS518 to the Q1a1-F746 haplogroup. This finding emphasizes the necessity of considering both allele and haplogroup data to clarify genetic lineage and historical migratory events, thereby enhancing the accuracy and reliability of forensic and genealogical investigations. We identified a wide range of Y chromosome haplogroups across geographically distinct TB-speaking populations. Specifically, 44 haplogroups were observed in 254 Tibetan individuals: 25 in TML, 19 in TCD, and 19 in TQH. For YLS, 33 haplogroups were identified, while SDJ exhibited only seven, demonstrating reduced genetic diversity . The Haplogroup diversity (HGD) varied significantly, from a low of 0.6183 in SDJ to a high of 0.9376 in YLS. Prevalent haplogroups among Tibetans included D1∗-M174, which was found in more than half of the individuals across the three Tibetan subpopulations, and O2∗-M122, which was notably more frequent among YLS individuals. Additionally, O1b∗-P31, N∗-M231, and O1a∗-M119 were common in YLS, with N1b2-M1819 being the dominant subhaplogroup of N-M231. In the SDJ, O2∗-M122 dominated (98.14%). Subhaplogroup analysis revealed distinct distribution patterns. 
For example, in Tibetan populations, subhaplogroups D1-M174, such as D1a1a1a1b-SK541 and D1a1b1a2∼-PH97/Z34364/Z34365, showed variable frequencies across regions, with D1a1a1a1b-SK541 being the most frequent in YLS. In contrast, the subclade O2a2b1a1∗-M117 of O2-M122 was most prevalent across the Tibetan and Yi populations, highlighting different genetic legacies and geographical differences. This detailed haplogroup profiling underscores the complex genetic landscape of TB-speaking populations and provides crucial insights into their historical migrations and interactions. The distinct haplogroup compositions reflect the unique evolutionary histories and adaptive strategies of these populations at different altitudes, influenced by both their environment and their historical migration patterns. Figure 3 Fully-resolved Y chromosome phylogeny and paternal genetic history of TB people (A) High-resolution phylogenetic tree and haplogroup frequency heatmap for five TB-speaking populations. This figure presents a streamlined phylogenetic tree alongside a heatmap that illustrates the distribution frequencies of various haplogroups across five distinct TB-speaking populations. (B) Median-joining network topologies derived from Y-SNP-STR haplotypes. This series of networks elucidates the genetic relationships and evolutionary divergence within key paternal lineages among the studied populations, with each panel focusing on different levels of haplogroup resolution. Network topology for D1a1a-M15 subhaplogroups, which were denoted via different backgrounds. Network topology illustrates the diversity within D1a1a-M15 subclades, which is denoted by the different colors of the circle. (C) Network depicting the structure of the D1a1b-P99 subhaplogroups. Detailed topology of D1a1b-P99 subhaplogroups, highlighting specific lineage relationships. (D) Network topology for O2a2b1a1a1a4a-CTS4658 sublineages showing branching patterns. Detailed view of the haplogroup distribution within the O2a2b1a1a1a4a-CTS4658 sublineages. We extensively investigated the phylogeographic distribution of founding haplogroups among TB speakers. Our findings indicated the prominent presence of the D1a1a haplogroup in Southwest China, particularly among TB-speaking communities . The D1a1b haplogroup showed a broader regional presence in both Southwest and Northwest China, especially among geographically diverse Tibetan groups . The O2a1 haplogroup, predominantly found among East Asians (including many Han Chinese individuals), was notably absent in Northwest East Asians . Conversely, O2a2 exhibited a broad distribution across East and Southeast Asians . The N1a haplogroup was primarily found in northern European and northern East Asian populations, while N1b appeared predominantly in southwestern East Asia . Additional haplogroups, such as C2∗-M217, G∗-M201, and J∗-M304, were present in minor frequencies within these populations, underscoring a complex mosaic of paternal lineages in geographically diverse regions of China . Our data also revealed rich diversity within Tibetan populations, including rare haplogroups such as LT-P326 and L-M20 , suggesting varied historical interactions and migrations. This analysis provides a comprehensive overview of the genetic structure within TB-speaking populations, highlighting the significant variation and widespread distribution of specific haplogroups and enhancing our understanding of their historical and evolutionary backgrounds. 
Population genetic work suggested that STR markers with high mutation rates have a stronger power to illuminate recent dynamics of human genetic history. 45 We analyzed Y-STR haplotype data to investigate the paternal genetic structure of TB speakers, revealing significant genetic relationships within and across TB populations . The initial findings showed significant genetic proximity within regional subgroups . For instance, the TML and Tibetan in Nagqu (TNG) populations exhibited no measurable genetic distance, indicating strong genetic continuity. Similarly, Tibetan subpopulations such as Tibetan in Shigatse (TSG) demonstrated close genetic affiliations with TML, suggesting regional genetic coherence among highland Tibetan communities. Using 29 Y-STR markers, we confirmed that Tibetan populations share more genetic similarities with other highland groups than with lowland East Asian populations . The YLS population showed a close genetic affinity with the Qiang in Beichuan (QBC) population, underscoring shared genetic traits across geographically and culturally connected groups. In contrast, the SDJ population was markedly distinct from other TB-speaking groups, aligning more closely with certain lowland Han populations. This highlights the complex mosaic of genetic influences in this region due to historical migrations and interactions. These results underscore the importance of regional and cultural contexts in shaping genetic structures, contributing to a deeper understanding of genetic diversity within East Asian populations. Figure 4 Geographical distribution and genetic relationships of newly collected and reference populations (A) Multidimensional scaling analysis based on the Fst genetic distance matrix comparing newly collected TB populations with 28 Chinese reference groups. (B) Neighbor-joining phylogenetic tree derived from the 27-Y-STR-based Rst genetic distance matrix illustrating the genetic relationships among the populations studied. (C) Principal component analysis depicting clustering patterns among target TB-speaking populations and 87 global reference populations. The base map was officially approved with the numbers GS1760 and GS2761 ( http://bzdt.ch.mnr.gov.cn/ ). The analysis of 27 Y-STR haplotypes confirmed close genetic relationships among Tibetan groups, highlighting significant affinities, particularly between the YLS and Sichuan Hui populations. The SDJ population showed closer genetic ties with the Han populations of Henan and Shanxi, with Rst values of 0.2634 and 0.2663, respectively. MDS based on 29 Y-STRs revealed a distinct Tibetan-related cluster, underscoring strong genetic links across Tibetan populations in different geographic locations . This is notable among Tibetan groups in the TYC and Qinghai-Xizang Tibetan, and between geographically different Tibetan groups in Northwest China. Interestingly, the SDJ population appeared to be genetically isolated from other East Asian groups, corroborated by high Rst genetic distances greater than 0.21 ( Table S10 ). Additionally, MDS analysis using 29 Y-STRs indicated that YLS shows greater genetic similarity with the linguistically related Yi population in Guizhou than with the geographically close QBC population. Conversely, the 27-Y-STR-based MDS revealed a cluster predominantly associated with Sinitic languages, positioning the SDJ and Hainan Li populations as distinct from the other analyzed groups . 
In this context, the TML closely aligns with other highland Tibetans, while the TCD and TQH are distinct from typical high-altitude Tibetan populations. The clustering of the YLS and Hui populations from Shaanxi underscores their shared genetic makeup, emphasizing the complex interplay of geography, language, and genetics in shaping the population structure of East Asia. Y-SNP haplotype analysis revealed clear clustering patterns among Tibetan and other East Asian populations. The TML and TCD populations were closely related to TNG and TSG. Similarly, TQH showed significant genetic links to TSG and TCD. YLS exhibited notable genetic affinity with the Hui population in Xinjiang, reflecting shared paternal lineages and regional genetic influences. The SDJ group shared genetic closeness with the northern Han Chinese, possibly indicating historical migrations or genetic admixture . MDS analyses of 113 overlapping Y-SNPs suggested that the newly studied Tibetan populations (TML, TCD, and TQH) formed a distinct cluster; YLS was closely related to TSG, while SDJ was distinctly separate from other global populations . Additional MDS analysis using 157 Y-SNPs revealed that TML and TCD grouped with TSG, highlighting strong regional genetic coherence, while YLS aligned more closely with Mongolian reference populations . Phylogenetic analyses confirmed the genetic proximity of Tibetan populations across different regions and underscored the distinct genetic makeup of the SDJ population . This complex genetic landscape illustrates the diverse genetic heritage of TB-speaking populations and underscores the impact of geographic separation and historical migrations on genetic diversity. PCA patterns based on haplogroup frequencies revealed significant insights into the population structure of TB-speaking groups. We identified distinct clusters associated with geographic and ethnic origins . An extensive Asian-related gradient stretched from southern Han Chinese to Pathan populations in Afghanistan, while European and American populations aligned along the second principal component (PC2), showing diverse genetic backgrounds . Within East Asia, a pronounced north‒south genetic gradient encompassed Han-, Hui-, and Mongolian-related clusters. This gradient was particularly marked in a focused analysis of East Asian populations, where Hui/Mongolian and Tibetan-related clusters were distinct, and the Austronesian-speaking Gaoshan population of Taiwan and Han Chinese from Shanxi occupied the extremes . The target Tibetan populations aligned closely with other East Asian groups, reflecting shared regional heritage. Notably, the YLS population showed closer genetic affiliation with the Hui population from Henan, while the SDJ population was isolated from other East Asian groups, indicating complex historical interactions and migrations within these regions. To investigate the distribution patterns and evolutionary trajectories of major haplogroups among TB groups, we utilized MJ network topologies constructed from haplotypes derived from 27 Y-STRs and 157 Y-SNPs . We found that haplogroup D1a1a∗-M15, particularly its subhaplogroup D1a1a1a1b-SK541, was mainly present at low and middle altitudes in the TCD and YLS populations, with a lower prevalence among highland Tibetans . Another significant subhaplogroup, D1a1b∗-P99, especially D1a1b1a2∼-PH97/Z34364/Z34365, was distributed across various Tibetan populations, indicating broad geographical spread among high-altitude communities . 
Our analysis revealed that haplogroup O2∗-M122 was significantly prevalent across TB-speaking populations. Within this group, subhaplogroup O2a1∗-L467 was widespread among Han Chinese individuals, whereas O2a2a∗-M188 was predominant in southern Han Chinese individuals . Subhaplogroup O2a2b1a1∗-M117 exhibited high frequencies among the newly studied TB speakers and the Sinitic-speaking Hui and Han populations . O2a2b1a2a∗-F444 was notably prevalent among the Han and Hui populations . Particularly striking was the prominence of O2a2b1a1a1a4a-CTS4658 and its subhaplogroups in the SDJ, where it formed a star-like topology in the MJ network, suggesting a recent rapid expansion in this highland population . This subhaplogroup also showed significant differences between the Tibetan and Yi groups, although it was less common in the Han Chinese population. These findings enrich our understanding of the complex genetic makeup and historical migrations of TB groups across different regions of Asia. To investigate factors influencing genetic diversity among ethnolinguistically and geographically distinct populations, we conducted an AMOVA using 27 Y-STR and 157 Y-SNP markers across 33 Chinese populations categorized by ethnicity, linguistic affiliation, and altitude. Our analysis revealed that variations among groups and populations derived from 157 Y-SNPs were significantly greater than those from 27 Y-STRs ( Table S14 ). Specifically, among-group variations based on ethnic categorization (15.32% for 157 Y-SNPs and 5.41% for 27 Y-STRs) exceeded those based on linguistic (8.70% for 157 Y-SNPs and 2.34% for 27 Y-STRs) or altitude groupings (6.65% for 157 Y-SNPs and 2.70% for 27 Y-STRs). Within-group variations among populations sharing the same altitude or linguistic family were notably more pronounced than those between ethnically similar groups. Intrapopulation variations accounted for the majority of genetic differences among Chinese populations, exceeding 82% for 157 Y-SNPs and 93% for 27 Y-STRs. These findings underscore the enhanced discriminatory power of 157 Y-SNPs over 27 Y-STRs and their utility in tracing paternal lineages among diverse Chinese groups. We finally examined the phylogeographical distribution of key mutations within the CPGDP resource. From a cohort of 232,413 individuals, we screened 918 samples from the D-Z31591 lineage and 3,388 from the O-CTS4658 lineage. Among the D-Z31591 sublineages , we observed substantial population expansions. Notably, we collected 13 samples from Xizang, 8 from Qinghai, and 155 from Sichuan . Analysis of haplogroup frequency and Y-SNP/STR profiles indicated that the highest frequencies occurred on the Qinghai-Xizang Plateau, suggesting that this region was a potential origin and center of post-colonization expansion for these ancient highlanders . This pattern was further supported by optimized correlation analysis. Within the D-Z31591 lineage, 793 samples were from Hans, 33 from Tibetans, and 21 from Yis, representing the top three ethnic groups. For O-related TB founders, equivalent methodologies revealed the highest frequencies predominantly in the Qinghai-Xizang Plateau and Southwest China, as supported by optimized hotspot analysis . Pearson correlation analysis between geographical coordinates and prefecture-level frequencies indicated no significant correlation with latitude for the D-Z31591 lineage and marginal negative correlations for other parameters . 
These results suggest that the formation of these lineages in highland and lowland East Asians was not solely driven by isolation by distance. Finally, we analyzed the correlation between human migration patterns inferred from autosomal and Y chromosome evidence by comparing ADMIXTURE-based ancestral proportions and lineage frequencies. A positive correlation emerged between our identified lineages and Lubrak-related highland East Asian ancestry, underscoring a significant genetic link. Figure 5 Phylogeographical analysis and correlation results of two TB-related founding lineages (A and B) Haplogroup frequency and optimized hotspot analysis results for O2a2b1a1a1a4a-CTS4658 and D1a1a1a1b-Z31591. The red color in the left panel indicates a higher haplogroup frequency, and the yellow color indicates a low haplogroup frequency in the frequency spectrum. The red color in the right panel denotes the possible original center. (C) The correlation between the founding lineage frequency and the geographical coordinates and ADMIXTURE-based admixture proportion using Pearson correlation analysis. The blue color indicates a positive correlation, and the red color indicates a negative correlation. ∗ represents 0.01 ≤ p value < 0.05, ∗∗ represents 0.001 ≤ p value < 0.01, ∗∗∗ represents p value < 0.001. (D) The correlation between the frequency of two founding lineages and the latitude and longitude coordinates using Pearson correlation analysis. The base map was officially approved with the number GS2767 ( http://bzdt.ch.mnr.gov.cn/ ). Previous genetic studies on paternal genetic diversity have sought to elucidate the formation of East Asian populations through preglacial and postglacial migrations via southern and northern routes. 46 , 47 , 48 , 49 These studies also examined complex migrations and admixture within and between lowland and highland East Asia using low-density Y-SNP variations 22 , 31 , 32 , 50 , 51 and sex-biased adaptations shaping uniparental gene pools. 47 The Qinghai-Xizang Plateau, known for its harsh environmental conditions such as high altitude, low temperatures, severe aridity, and oxygen scarcity, has been home to human settlement since the Paleolithic era. 49 , 52 , 53 Despite these formidable challenges, modern humans established themselves in the region, with many Paleolithic sites across the plateau dating back to around 20,000 ya. 49 , 54 However, genetic research reveals that present-day Tibetan populations have their origins in Neolithic East Asia, specifically northern China. 14 , 55 Recent gene flow has also been strongly indicated by previous studies. 49 , 56 However, fine-scale paternal genetic history from eastern regions, including the northeastern Qinghai-Xizang Plateau and the TYC, remains largely unknown, particularly from large-scale high-density Y-SNP data or whole-genome sequencing data. We reported an integrated YanHuang Y chromosome genomic resource, focusing on the formation of modern highland East Asians through whole Y chromosome sequencing, Y-SNP/STR genotyping of TB-speaking individuals, and high-density Y-SNP data from ethnolinguistically diverse Chinese populations across 34 provinces. Our study identified prevalent paternal lineages within highland TB-speaking populations, highlighting haplogroups D1∗-M174 and O2∗-M122, especially O2a2b1a1∗-M117. The TB-speaking SDJ predominantly exhibited haplogroup O2a2b1a1a1a4a∗-CTS4658, indicating a unique paternal lineage.
Among the lowland TB-speaking YLS, the dominant haplogroup was O2∗-M122 (O2a2b1a1∗-M117 and O2a2b1a2a∗-F444), with significant occurrences of D1∗-M174 (D1a1a1a1b-SK541), N∗-M231 , O1b∗-P31 (O1b1a1∗-PK4), and O1a∗-M119 (O1a1a∗-P203.1), indicating diverse genetic backgrounds. The D1-M174 haplogroup, integral to the East Asian paternal lineage, is particularly frequent among Tibetan and some Japanese populations, illustrating its historical significance and geographical spread. 57 Variations within this haplogroup, such as D1a1a∗-M15 and D1a1b∗-P99, underscore their regional importance 22 and are often considered Tibetan-specific lineages. The presence of this lineage across different Tibetan groups from Muli to Qinghai suggests a deep-rooted and widespread historical influence. It is widely believed that haplogroup D-M174 represents the remnants of the earliest modern human settlers on the Qinghai-Xizang Plateau, who likely endured through the Last Glacial Maximum. 22 , 49 The migration patterns of D1a1a-M15, derived from D1-M174, highlight its evolution and expansion from western Sichuan northward into Qinghai and across the TYC into the Himalayas, reflecting significant migratory events and adaptations. 58 These genetic insights enrich our understanding of the paternal genetic structure among Tibetan-speaking populations and enhance our knowledge of their historical migrations and interactions across diverse ecological and geographical landscapes. The haplogroup O2∗-M122, predominant among the newly analyzed Tibetan-speaking populations, is widely distributed across East and Southeast Asia. 32 , 42 , 59 , 60 , 61 Studies, including those by Yan et al., indicate that approximately 40% of Han Chinese people trace their paternal lineage to late Neolithic progenitors, particularly from the Oα (O2a2b1a1∗-M117), Oβ (O2a2b1a2a1a∗-F46), and Oγ (O2a1b1a1a1a∗-F11) lineages. 60 These lineages significantly shaped the paternal genetic landscape of East Asian populations during the Neolithic period. Previous studies have confirmed that approximately 6,000 ya, farmers from the Yangshao culture in the middle Yellow River basin, carrying the O2a2b1a1a-F5 lineage, migrated to the Qinghai-Xizang Plateau. 49 , 62 Additionally, based on ancient DNA from the Banpo site, it is possible that the Yangshao culture also contributed to the spread of haplogroup O2a1b1a1a1a-F11. 63 The subhaplogroup O2a2b1a1a1a4a∗-CTS4658 was notably prevalent among the Sherpa population, with network analyses indicating recent rapid expansion. This high frequency in the SDJ population may be due to the localized population of Sherpas in China, primarily residing in Dingjie County within the Tibet Autonomous Region, highlighting the genetic distinctiveness of the Sherpa and Tibetan communities on the Qinghai-Xizang Plateau. Meantime, the phylogeographic analysis confirmed the highest frequency among highland Tibetans and their neighbors. Taken together, our time-labeled phylogeny of the O and D lineages, along with phylogenetic relationships among modern and ancient Chinese populations, confirmed that both Paleolithic and Neolithic genetic legacies contributed to the formation of proto-TB populations. The haplogroup O1a1a∗-P203.1, with its upstream haplogroup O1a-M119 observed in the remains from the Liangzhu site, 64 predominantly observed in the YLS, is widespread among southern Chinese and Southeast Asian populations and appears among eastern and northern Han Chinese populations. 
42 , 65 , 66 Subhaplogroups of O1b∗-P31, notably O1b1a1∗-PK4, frequently found in the TML and YLS, are prevalent across southern Chinese and South Asian populations, Southeast Asian tribal communities, and even among the Japanese population. 42 , 66 , 67 , 68 Additionally, ancient DNA sequences confirm that around 3,000 ya, the Wucheng people in Jiangxi Province carried the O1b1a1a-M95 lineage. 64 Conversely, the sublineage O1b2∗-M176 is common in Japanese, Korean, and some Manchu populations. 57 , 67 The strategic positioning of the TML and YLS along the TYC, a significant migratory route to the Qinghai-Xizang Plateau, highlights the influence of ancient southern East Asian migrations carrying O1b-related subhaplogroups on the genetic landscape of modern ST-speaking populations, 69 , 70 explaining the relatively high frequency of O1b∗-P31 observed in the TYC populations. To elucidate the genetic relationships and differences among geographically different TB groups and various East Asian reference populations, we conducted genetic analyses, including genetic distance estimations, MDS, PCA, AMOVA, and phylogenetic relationship construction. These analyses utilized data from haplogroup frequencies, Y-STR/Y-SNP haplotypes, high-density SNP profiles and whole-genome sequences. Notably, the results based on Y-SNP haplotypes and haplogroup frequencies provided a more precise reflection of genetic affinity and differentiation among the ethnolinguistically diverse groups compared to Y-STR haplotypes. This enhanced resolution underscores the value of using diverse genetic markers to capture the complex patterns of genetic affinity and differentiation within and between populations. Our analysis aimed to enhance the understanding of paternal demographic history among diverse TB-speaking populations. We found that the TML and TCD populations maintained close genetic ties with the Ü-Tsang Tibetans, notably the TSG and TNG groups. In contrast, the TQH population was more genetically aligned with the Kham Tibetan population, particularly the Tibetan_Chamdo population from the eastern Qinghai-Xizang Plateau. The YLS population showed a significant genetic affinity with Hui populations from Sichuan, Shaanxi, and Henan, suggesting considerable gene flow from these regions into the Yi population in the TYC. Conversely, the Sherpa population exhibited distinct genetic traits, supported by unique haplogroup distributions observed in the SDJ, indicating their relative genetic isolation from other groups. This study has certain limitations, such as limited sampling locations and high coverage of Y chromosome variations. Tibetan populations are widely distributed across the Qinghai-Xizang Plateau, Qinghai, and Sichuan, with smaller populations in Gansu and Yunnan. Expanding sampling to these regions would help provide a more comprehensive understanding of the population history of TB groups. Additionally, much of the data used in this study relies on genotyping and haplogroup frequency information. In the era of whole-genome sequencing, using whole Y chromosome sequences could capture more genetic information and offer deeper insights. Lastly, incorporating large-scale ancient DNA data from the Paleolithic and Neolithic periods in South Asia, surrounding areas of the Qinghai-Xizang Plateau, and the Yellow River basin and Yangtze River basin would further elucidate the complex and dynamic genetic landscape of Tibeto-Burman populations. 
This study utilized three kinds of advanced Y-SNP genotyping technologies to create a valuable genetic resource for forensic genetics and molecular anthropology. Our analysis highlights a strong correlation between specific allelic variations in Y-STRs and well-defined haplogroups, providing a theoretical framework for predicting haplogroups from Y-STR haplotypes. Despite variability within Y-STR haplotypes across similar haplogroups in Chinese populations, we found a consistent association of identical Y-STR haplotypes with specific haplogroups. This confirms a robust relationship between Y-STR haplotypes and haplogroup classifications. This study also revealed a distinct correlation between the complex paternal genetic structures of Chinese populations and their geographical and linguistic contexts. This finding underscores the utility of Y chromosomal markers in forensic pedigree analysis and paternal biogeographical ancestry assessments. Our findings suggest that geographically diverse TB groups exhibit distinct paternal genetic histories yet share close genetic ties with northern lowland East Asians, supporting a shared origin in North China for the ST people. Overall, this work deepens our understanding of genetic diversity and underscores the broader applicability of genetic markers in anthropological and forensic investigations. Further information and requests for genomic resources should be directed to the lead contact, Guanglin He ( [email protected] ). This study did not generate new unique reagents. • Data: The Y-STR and Y-SNP haplotype data for 519 TB-speaking individuals have been deposited in the YHRD database ( https://yhrd.org/ ) under accession numbers YA004726 (TCD), YA004729 (TML), YA004613 (TQH), YA004223 (YLS), and YA004730 (SDJ). The supplementary materials contain all additional data used in this study. The data collection and usage adhered to the guidelines stipulated by the People's Republic of China on the administration of human genetic resources. • Code: This article does not report the original code. • All other items: Requests for access to the raw data should be directed to Guanglin He at [email protected] or Mengge Wang at [email protected] . We express our gratitude to all the volunteers who contributed to this study. We acknowledge the financial support received from the National Natural Science Foundation of China for M.W. and for G.H., from the National Social Science Foundation of China (Major Project Grant No. 23&ZD203 ) for G.H., and from the Open Research Project of the Ministry of Public Security for M.W. Additional support for G.H. includes the Open Project of the Key Laboratory of Forensic Genetics of the Ministry of Public Security , the Center for Archaeological Science of Sichuan University ( 23SASA01 ), the 1‧3‧5 Project for Disciplines of Excellence at West China Hospital, Sichuan University , and the Sichuan Science and Technology Program . Chao Liu, Mengge Wang, Huijun Yuan, and Guanglin He conceived and designed the study. Mengge Wang and Guanglin He collected the samples. Mengge Wang and Guanglin He extracted the genomic DNA and performed the genotyping. Yunhui Liu, Lintao Luo, Yuhang Feng, Zhiyong Wang, and Ting Yang performed the population genetic analysis. Mengge Wang and Guanglin He drafted the article. Mengge Wang and Guanglin He revised the article. The authors declare no competing interests. Key resources table (REAGENT or RESOURCE; SOURCE; IDENTIFIER):
• Deposited data: Y-STR and Y-SNP haplotype data (this study); identifiers: YHRD, https://yhrd.org/ ;
National Genomics Data Center, https://ngdc.cncb.ac.cn/bioproject/browse/PRJCA028381 .
Software and algorithms:
• BWA v0.7.13 (Li and Durbin 71): http://bio-bwa.sourceforge.net ; RRID: SCR_010910
• Picard v3.0.0: http://broadinstitute.github.io/picard ; RRID: SCR_006525
• GATK v4.2.6.1 (McKenna et al. 72): https://gatk.broadinstitute.org/hc/en-us ; RRID: SCR_001876
• BCFtools v1.8 (Li 73): https://www.htslib.org ; RRID: SCR_005227
• VCFtools (Danecek et al. 74): https://vcftools.github.io/index.html ; RRID: SCR_001235
• GeneMapper ID v.1.5: https://www.thermofisher.com/order/catalog/product/4475073 ; RRID: SCR_014290
• Chromas Lite V2.6.6: https://technelysium.com.au/wp/chromas/ ; RRID: SCR_000598
• HaploGrouper (Jagadeesan et al. 75): https://gitlab.com/bio_anth_decode/haploGrouper
• STR Analysis for Forensics (STRAF) (Gouy et al. 76): https://straf-p7bdrhm3xq-ew.a.run.app/ ; https://github.com/agouy/straf
• YHRD website: https://yhrd.org/pages/tools/amova
• SPSS v.25.0: https://www.ibm.com/support/pages/downloading-ibm-spss-statistics-25 ; RRID: SCR_002865
• R v4.3.3 (R Core Team 77): https://cran.r-project.org/bin/windows/base/ ; RRID: SCR_001905
• Arlequin v.3.5 (Excoffier et al. 78): https://cmpg.unibe.ch/software/arlequin35/ ; RRID: SCR_009051
• MEGA v.7.0 (Kumar et al. 79): https://www.megasoftware.net/ ; RRID: SCR_000667
• Surfer v.19 (Relethford 80): https://www.goldensoftware.com/products/surfer/
• MVSP v.3.22: https://www.kovcomp.co.uk/downl2.html
• Network 10.1: https://www.fluxus-engineering.com/sharenet.htm
• Network Publisher: https://www.fluxus-engineering.com/sharenet.htm
• Y-LineageTracker (Chen et al. 81): https://github.com/Shuhua-Group/Y-LineageTracker
• ArcMap: https://www.esri.com/en-us/arcgis/products/arcgis-desktop/overview
• RAxML v8.0.0 (Stamatakis et al. 82): https://github.com/stamatak/standard-RAxML ; RRID: SCR_006086
• pathPhynder (Martiniano et al. 83): https://github.com/ruidlpm/pathPhynder
• BEAST v.1.10.4 (Suchard et al. 84): https://beast.community ; RRID: SCR_010228
• LogCombiner v1.10.4 (Drummond and Rambaut 85): https://beast.community/logcombiner
• Tracer v1.7 (Rambaut et al. 86): https://beast.community/tracer ; RRID: SCR_019121
• TreeAnnotator v1.10.4 (Drummond and Rambaut 85): https://beast.community/treeannotator
• FigTree v1.4.4: http://tree.bio.ed.ac.uk/software/figtree/ ; RRID: SCR_008515
This study followed ethical standards set by the Medical Ethics Committees of West China Hospital of Sichuan University and the principles of the International Declaration of Helsinki. We collected samples in three batches. First, we obtained peripheral venous blood from 519 unrelated TB-speaking individuals in various communities after they provided informed consent for the genotyping of STR and SNP profiles. This included 254 Tibetan individuals from multiple locations: 101 from Muli County, Liangshan Yi Autonomous Prefecture; 95 from Chengdu, Sichuan Province; and 58 from Qinghai Province. Additionally, we sampled 104 Yi participants from Liangshan Yi Autonomous Prefecture and 161 Sherpa participants from Dingjie County, Shigatse, Tibet Autonomous Region. We integrated these data with 3,779 previously reported genotypes from ethnolinguistically diverse Chinese populations to characterize general paternal profiles across China .
Second, we collected 72 representative samples from the D-Z31591 and O-CTS4658 lineages for whole-genome sequencing and merged them with 918 modern samples from the pilot work of the YanHuang cohort and 303 ancient Y chromosome sequences from published ancient autosome-based studies to elucidate the demographic dynamics of TB people further ( Tables S15 and S16 ). Finally, we collected additional samples to explore the evolutionary history of the founding TB lineages. This included 918 samples from 31 provinces covering 275 prefecture-level cities associated with the D-Z31591 lineage and 3,388 samples from 34 provinces covering 373 prefecture-level cities linked to the O-CTS4658 lineage for high-density Y-SNP genotyping. The resource encompassed 37 ethnic groups and over four thousand ST-speaking individuals, including 3,811 Han Chinese individuals, 89 Tibetan individuals, 84 Yi individuals, 39 Hui individuals, 38 Manchu individuals, 27 Bai individuals, and 168 individuals from 31 other minority groups. All participants provided informed consent, and the study procedures were approved by the Medical Ethics Committee of West China Hospital, Sichuan University . The study was conducted following the Human Genetic Resources Administration of China (HGRAC) guidelines and adhered to the principles of the 2013 revision of the Helsinki Declaration. The whole genomes of representative samples were sequenced using the DNBSEQ-T7 platform (MGI, Shenzhen, China) following an in-house protocol. 3 We used BWA v0.7.13 71 to map the raw sequencing reads to the GRCh37 human reference genome and Picard v3.0.0 to remove duplicate reads. Base quality score recalibration was performed using GATK v4.2.6.1. Y chromosome BAM files were extracted and combined with reference BAM files from targeted sequencing of the 20 Mb Y chromosome region. 72 The GATK HaplotypeCaller, CombineGVCFs, and GenotypeGVCFs modules were used for the joint calling of genome-wide variants. 72 We focused on high-quality Y chromosome regions, specifically the 10 Mb region used in Poznik's population evolution modeling. 87 Quality control was performed using BCFtools v1.8, filtering out variants with missing call rates greater than 5%, base quality less than 20, and heterozygosity rates greater than 15%. 73 Variants with missing call rates exceeding 5% were removed using VCFtools. 74 The raw sequencing reads of ancient Tibetans were downloaded from the Genome Sequence Archive of the National Genomics Data Center ( https://ngdc.cncb.ac.cn/gsa-human/ ) and aligned following standard ancient DNA research protocols. 41 Quality-controlled BAM files were used for integrative analysis between modern and ancient genomic data and haplogroup classification. As a quality control measure, we used male DNA standard 9948 (Promega Corporation, USA) as a positive control throughout the study. For Y-STR haplotype profiling, we employed the AGCU Y37 Kit for multiplex amplification of 37 Y-STR loci. 88 Ultrapure water served as the negative control. Each reaction mixture included 2 μL of reaction mixture, 1 μL of Y37 primers, 0.2 μL of DNA polymerase, and 1 μL of DNA template at 2 ng/μL, with the final volume adjusted to 5 μL using 0.8 μL of deionized water (ddH 2 O). Thermal cycling was conducted on a ProFlex 96-well PCR system (Thermo Fisher Scientific) under the following conditions: initial denaturation at 95°C for 2 min, 30 cycles of denaturation at 94°C for 30 s, annealing at 60°C for 1 min, extension at 72°C for 1 min, a final extension at 60°C for 20 min, and holding at 4°C.
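For illustration, the per-variant quality-control thresholds described above (missing call rate above 5%, base quality below 20, heterozygosity above 15%) can be expressed as a simple filter. The sketch below is a minimal Python illustration applied to a hypothetical table of per-variant summaries; it is not the BCFtools/VCFtools command line actually used in the study, and the field names are assumptions.

```python
# Minimal sketch of the per-variant quality-control thresholds described in the text.
# The field names ("missing_rate", "base_quality", "het_rate") are hypothetical;
# the original filtering was performed with BCFtools/VCFtools on VCF files.

def passes_qc(variant, max_missing=0.05, min_base_quality=20, max_het_rate=0.15):
    """Return True if a variant call summary meets the stated thresholds."""
    return (variant["missing_rate"] <= max_missing
            and variant["base_quality"] >= min_base_quality
            and variant["het_rate"] <= max_het_rate)

variants = [
    {"id": "chrY:2650000", "missing_rate": 0.02, "base_quality": 35, "het_rate": 0.01},
    {"id": "chrY:2651234", "missing_rate": 0.08, "base_quality": 40, "het_rate": 0.00},  # too much missingness
    {"id": "chrY:2659999", "missing_rate": 0.01, "base_quality": 18, "het_rate": 0.00},  # low base quality
]

kept = [v["id"] for v in variants if passes_qc(v)]
print(kept)  # ['chrY:2650000']
```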
We analyzed the amplified products using an ABI 3500XL Genetic Analyzer. The electrophoresis setup included 9.8 μL of deionized formamide, 0.2 μL of AGCU Marker SIZ-500 internal standard, and 1 μL of either the amplified product or the Y37 allelic ladder standard. The electrophoresis parameters were an injection time of 10 s at 1.2 kV, followed by a 3-min prerun and a 22-min electrophoresis at 15 kV. Data interpretation was performed using GeneMapper ID v.1.5 software. To identify microvariant alleles not cataloged in the standard Bin file, we used Sanger sequencing for validation. We used the DYS448 amplification primers from Hohoff et al., 89 the DYS570 and DYS627 primers from Ballantyne et al., 35 and the DYS527 primers from the NIST website ( Table S1 ). The PCR amplification mixture included 10 μL of QIAGEN Multiplex PCR Master Mix (2×), 1 μL each of forward and reverse primers (10 μM), 2 μL of DNA template (2 ng/μL), and 6 μL of ddH 2 O. The PCR conditions, which varied by primer-specific annealing temperature, are detailed in Table S2 . After amplification, we verified the specificity of the PCR products via polyacrylamide gel electrophoresis. We then sequenced the PCR products using the Sanger method to genotype the alleles accurately. Sequencing analyses were performed using Chromas Lite V2.6.6 software (Technelysium Pty Ltd., Australia), ensuring precise allele identification. We genotyped 215 Y-SNP loci using SNaPshot panels following protocols described by Wang et al. 90 Y-SNP profiles were analyzed with GeneMapper ID v.1.5 software. High-density Y-SNPs from 918 D-Z31591 and 3,388 O-CTS4658 samples were genotyped using the Thermo Fisher Scientific Illumina 23MF_v1 array, which includes 769,530 SNPs, 27,280 of which are phylogenetically informative Y chromosome SNPs. We manually classified haplogroups for 215 Y-SNP-based genotypes and used Haplogrouper 75 for haplogroup inference on high-density Y-SNP data and whole Y chromosome sequences, adhering to the Y-DNA Haplogroup Tree 2019–2020 standards. For our Y-STR data analysis, the allele frequencies and genetic diversity of each Y-STR locus were calculated using the STR Analysis for Forensics (STRAF) software. 76 To ensure data clarity, three multicopy loci (DYS527, DYS385a/b, and DYF387S1) were excluded from the analysis. Furthermore, the allele count for DYS389II was adjusted by subtracting DYS389I to derive DYS389b. Allele frequency was computed using the direct counting method for multicopy loci, copy number variations, and null alleles. The frequency of each Y-STR haplotype was calculated as $f = x/N$, where $x$ represents the number of times a haplotype was observed and $N$ is the total sample size. HD, GD, HMP, and DC were derived using the following formulas: $\mathrm{HD/GD} = N\left(1 - \sum_{i=1}^{k} p_i^{2}\right)/(N - 1)$, $\mathrm{HMP} = \sum_{i=1}^{k} p_i^{2}$, and $\mathrm{DC} = k/\sum_{i=1}^{k}\left(p_i \times N\right)$. Here, $p_i$ is the frequency of the i-th haplotype, $k$ is the number of haplotypes, and $N$ is the sample size of each studied population. To evaluate genetic distances among geographically diverse populations, we analyzed 29 Y-STRs cataloged in the Y Chromosome Haplotype Reference Database (YHRD), including subsets of 27 Y-STRs from the Yfiler Plus kit and 17 Y-STRs from the Yfiler kit. We estimated genetic distances (Rst) using the AMOVA&MDS tool on the YHRD website.
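As an illustration of the formulas above, the following minimal Python sketch computes HD/GD, HMP, and DC from a list of haplotypes. The toy allele calls are invented, and the function is hypothetical rather than part of the STRAF workflow used in the study.

```python
from collections import Counter

def forensic_stats(haplotypes):
    """Compute HD/GD, HMP and DC from a list of haplotypes (e.g., tuples of alleles),
    following the formulas given in the text."""
    N = len(haplotypes)
    counts = Counter(haplotypes)
    k = len(counts)                          # number of distinct haplotypes
    freqs = [c / N for c in counts.values()]
    hmp = sum(p * p for p in freqs)          # haplotype match probability
    hd = N * (1 - hmp) / (N - 1)             # haplotype (gene) diversity
    dc = k / N                               # discrimination capacity, k / sum(p_i * N)
    return hd, hmp, dc

# Toy example with five 3-locus haplotypes (hypothetical allele calls)
haps = [(13, 29, 24), (13, 29, 24), (14, 30, 24), (12, 28, 23), (15, 31, 25)]
hd, hmp, dc = forensic_stats(haps)
print(f"HD={hd:.4f}, HMP={hmp:.4f}, DC={dc:.2f}")
```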
The resulting Rst genetic distance matrix was subjected to multidimensional scaling (MDS) analysis in SPSS v.25.0 and visualized using R software 77 to enhance the interpretability of genetic relationships. Detailed descriptions of the reference populations used in this analysis are provided in Tables S3 and S4 . 42 , 43 , 44 , 91 We performed an analysis of molecular variance (AMOVA) with Arlequin v.3.5 to assess molecular variance within and between these populations. 78 Additionally, we constructed a neighbor-joining (NJ) phylogenetic tree based on the Rst matrix using MEGA v.7.0 79 to further elucidate phylogenetic relationships. We calculated haplogroup frequencies within the studied populations using the direct counting method. HGD was determined by the formula $\mathrm{HGD} = N\left(1 - \sum_{i=1}^{k} p_i^{2}\right)/(N - 1)$, where $p_i$ represents the frequency of the i-th haplogroup, $k$ is the total number of observed haplogroups, and $N$ is the sample size. To visualize the distribution of major haplogroups among the TB populations, we generated contour maps using Surfer v.19. 80 Genetic distances (Fst) between geographically distinct populations were calculated based on either 113 Y-SNPs common among worldwide populations or 157 Y-SNPs common among Chinese populations using Arlequin v.3.5. We conducted MDS analysis of the Fst genetic distance matrix with SPSS v.25.0 to determine spatial genetic relationships. Additionally, we performed principal component analysis (PCA) based on haplogroup frequencies using MVSP v.3.22. Phylogenetic relationships were further delineated through an NJ phylogenetic tree constructed with MEGA v.7.0. AMOVA based on Y-SNP haplotypes was conducted using Arlequin v.3.5 to determine variance components attributed to different levels of population grouping. To elucidate genetic relationships among populations, we constructed a median-joining (MJ) network using Network 10.1 and Network Publisher software, integrating Y-SNP-STR haplotypes. To enhance analysis accuracy, we excluded DYS385a/b due to its multicopy nature and treated DYS389 as two separate loci: DYS389I and DYS389b (calculated as DYS389II − DYS389I). For DYF387S1, we considered only the DYF387S1b allele. In this network analysis, we assigned Y-SNPs a high weight of 99 due to their lower mutation rates, providing stability to the network structure. Conversely, Y-STRs, with greater variability, were assigned weights ranging from 1 to 5, inversely proportional to their mutation rates. This weighting system balanced the contributions of SNPs and STRs, offering a detailed and nuanced view of the genetic landscape and historical population dynamics. 92 We conducted spatial correlation analysis using R software and Y-LineageTracker, 81 applying parameters such as –level and –freq to estimate haplogroup frequencies at both the provincial and prefectural levels. We examined the geographical distribution and potential phylogeographic origins of founding lineages through spatial autocorrelation analysis performed in ArcMap. We obtained high-quality variant calls using the sequence masks and filters mentioned above, restricted to the high-confidence 10 Mb targeted Y chromosome regions. The final dataset of 994 samples was used to construct a maximum-likelihood tree via RAxML v8.0.0, 82 with 200 rapid bootstrap inferences and a maximum-likelihood search. We then integrated 303 ancient Y chromosome sequences into the reconstructed reference phylogeny for combined analysis using pathPhynder. 83
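The MDS step described above was run in SPSS; as a hedged illustration of the same idea, the sketch below applies two-dimensional MDS to a precomputed pairwise Fst matrix in Python with scikit-learn. The population labels and Fst values are invented for demonstration only.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise Fst matrix for four populations (illustrative values only)
pops = ["TML", "TCD", "YLS", "SDJ"]
fst = np.array([
    [0.00, 0.02, 0.08, 0.21],
    [0.02, 0.00, 0.07, 0.20],
    [0.08, 0.07, 0.00, 0.25],
    [0.21, 0.20, 0.25, 0.00],
])

# Two-dimensional MDS on the precomputed distance matrix
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(fst)

for pop, (x, y) in zip(pops, coords):
    print(f"{pop}: dim1={x:.3f}, dim2={y:.3f}")
```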
Coalescent times for each node were estimated using Bayesian Markov Chain Monte Carlo (MCMC) methods with BEAST v.1.10.4 software. 84 To preserve the phylogenetic topology, we included additional samples from other haplogroups, specifically four samples from haplogroup B, to root the tree. 93 We conducted four parallel runs, each with a different seed number, and merged them using LogCombiner v1.10.4. 85 Each run consisted of a 60-million-step chain, logging every 3,000 steps. The results were manually inspected using Tracer v1.7 software, 86 and the initial 25% was discarded as burn-in using TreeAnnotator v1.10.4. 85 Consistent parameters were maintained across all runs, including the GTR substitution model with the Gamma and Invariant sites heterogeneity model, a strict clock with a uniform distribution prior on the mutation rate (7.4e-10; 95% CI: 6.7e-10 to 8.6e-10 mutations/nucleotide/year), and the Bayesian Skyline model with a group size of 10. The NO-M214 node served as the calibration point for estimating coalescence age, with an age of 41,900 years (95% CI: 40,175–43,591). 94 The maximum clade credibility tree was then visualized using FigTree. We conducted a Pearson correlation analysis in R between the founding lineage frequencies and the geographical coordinates and ADMIXTURE-based admixture proportions ( Figure 5 C), where ∗ represents 0.01 ≤ p value < 0.05, ∗∗ represents 0.001 ≤ p value < 0.01, and ∗∗∗ represents p value < 0.001. Meanwhile, we conducted another Pearson correlation analysis between the frequency of the two founding lineages and the latitude and longitude coordinates ( Figure 5 D), where the correlation was considered significant if the p-value was less than 0.05.
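The correlation analyses above were run in R; a minimal Python equivalent using scipy is sketched below, with the significance symbols mapped to the thresholds stated in the text. The frequency and latitude values are invented and only illustrate the calculation.

```python
from scipy.stats import pearsonr

def star(p):
    """Map a p-value to the significance symbols used in Figure 5."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"

# Hypothetical prefecture-level lineage frequencies and latitudes (illustrative only)
freq = [0.42, 0.35, 0.18, 0.12, 0.08, 0.05, 0.03, 0.02]
lat = [29.6, 30.7, 31.2, 33.0, 34.5, 36.1, 38.9, 40.2]

r, p = pearsonr(freq, lat)
print(f"r = {r:.3f}, p = {p:.4f} ({star(p)})")
```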
PMC11696645 | Insects have recently gained attention as an alternative nutrient source to meet the increasing demand for the world's growing population . Depending on the insect species, they can enter the food chain as animal feed or directly as human food. Black soldier fly larvae (BSFL, Hermetia illucens (L.), Diptera: Stratiomyidae) are considered one of the species with greatest potential for feed production and are, compared to other insect species, currently produced in the largest volume for this purpose . On top of that, they can be applied in the processing of organic waste. Due to the production of amylases, lipases and proteases, BSFL can convert a large variety of organic wastes, including vegetables, animal tissues and manure, into body tissue that serves as high quality nutrients for animals. In this way they can play a major role in recycling organic waste and circularizing the food chain [ , , , ]. Since 2017, following the adoption of Regulation (EU) 2017/893, the use of proteins from BSFL and six other insect species, has been allowed as feed for aquaculture animals, provided that they are not reared on substrates which contain ingredients from animal origin . From 2021 onwards, the list of authorized species has been expanded from seven to eight with the addition of silkworms. Moreover, insect proteins could also be used in feed for pigs and poultry . Consequently, BSFL enter the feed and food chain at an increasingly broader scale . As insects enter the feed and/or food chain, good hygiene and monitoring practices are needed to ensure a safe end product for both animal and human health. Specific attention needs to be paid to microbiological hazards, i.e. the growth and/or transfer of foodborne pathogenic microorganisms for animals and humans in or on the larval biomass . There are several factors that may affect the dynamics of the microbiota during rearing: 1) the transfer of microorganisms from parent to offspring, 2) microorganisms that are naturally present in the rearing substrate and 3) hygiene of the rearing/production conditions . When insects are exposed to foodborne pathogens during rearing, these could accumulate in the insect guts, resulting in a potential safety risk when the insects are consumed as feed or food . The bacterial species Bacillus (including the Bacillus cereus group), Clostridium perfringens , Salmonella enterica and Staphylococcus aureus are the most relevant risks regarding food safety of edible insects . Especially spore-formers, such as Clostridium and Bacillus species, are a major concern, because endospores are highly resistant to the processing stresses that are used to eliminate microbiological hazards, such as pressure sterilization . B. cereus is a common cause of food poisoning in humans. Cases of infection in other mammals are more rare, but recently a large feed-related outbreak of severe infection in pigs was reported . Species from the B. cereus group can cause two types of food-associated gastrointestinal diseases. The enteropathogenic strains cause diarrhea and abdominal pain, while the emetic types cause symptoms including nausea and vomiting. The pathogenicity of B. cereus is attributed to the secretion of toxins, such as hemolysin BL (Hbl), nonhemolytic enterotoxin (Nhe), cytotoxin K (CytK) and cereulide [ , , , ]. Because endospores are highly resistant to (thermal) processing stresses, they can survive in the feed chain and may thereby not only endanger animal, but eventually also human health. 
To explore the dynamics of microorganisms during rearing, a typical experimental design consists of inoculating the rearing substrate with the microorganism and subsequently testing for possible colonization of the substrate and insects via classical microbial counting and/or sequencing . These types of inoculation experiments have been performed for a variety of food pathogens, such as Escherichia coli , Staphylococcus aureus , Salmonella and Enterococcus species [ , , , , , ]. BSFL seem to exert some sort of antimicrobial activity, as they appear capable of reducing or eliminating different food pathogens . This could give the impression that the presence of food pathogens in the rearing substrate should not be a concern. There are, however, also studies that report the opposite result, namely that the number of food pathogens in the substrate stays the same or increases . Studies on the dynamics of B. cereus during rearing of BSFL were lacking. Therefore, the effect of BSFL on the growth of pathogenic microorganisms has to be studied more extensively . The aim of this study was to explore the interaction between BSFL and the foodborne pathogen B. cereus . The effect of BSFL on the survival and growth of B. cereus present in the rearing substrate was investigated and, vice versa, the effect of B. cereus on the performance of the BSFL in terms of survival and yield. Moreover, it was investigated whether the presence of B. cereus in the substrate causes bio-accumulation of the microorganism in BSFL. Therefore, a range of rearing trials with BSFL was conducted after inoculating the substrate with different levels of B. cereus (either vegetative cells or endospores). After seven days of rearing, the presence of B. cereus vegetative cells or endospores in the frass (i.e. a mixture of residual feed substrate, larval feces and cuticles) and the larvae was determined. Different rearing conditions were included: 1) substrate without B. cereus and without larvae (S), 2) uninoculated substrate with larvae (S + BSFL), 3) substrate inoculated with B. cereus (either vegetative cells or endospores), without larvae (S + BC), and 4) substrate inoculated with B. cereus (either vegetative cells or endospores) and with larvae (S + BC + BSFL) ( Supplementary Tables 1a and b ). Two different B. cereus strains were used in this study: reference strain DSM31 ( hblACD & nheABC positive) and strain B3465 ( nheB & ces positive) originating from food (isolated within the Dutch national monitoring plan of Microbiology). Strains were stored in Brain Heart Infusion broth (BHI; Biotrading, Mijdrecht, The Netherlands) supplemented with 15 % glycerol at −80 °C. Before use, the strains were cultivated on Tryptone Soya Agar (TSA; Biotrading, Mijdrecht, The Netherlands) for 24 ± 2 h at 30 ± 1 °C. After incubation, a single colony was suspended in 9 mL BHI broth, which was statically incubated for 24 ± 2 h at 30 ± 1 °C. Subsequently, the BHI culture was enumerated by plate counts on TSA, with plates incubated for 24 ± 2 h at 30 ± 1 °C. Spores were obtained by inoculating colony material from TSA on Hydrolysate of Casein Tryptone (HCT) agar medium at 30 ± 1 °C for 5 days, after which >90 % of the culture consisted of free spores (examined by phase contrast microscopy). Spores were harvested (based on the method described by Ceupens et al. ) by dissolving colony material in 20 mL sterile physiological salt solution (0.85 % NaCl).
After centrifugation at 10,000× g for 15 min, the spore pellet was washed with 10 mL sterile physiological salt solution. This washing step was repeated once, after which the pellet was dissolved in 10 mL of a 50 % ethanol solution. Then, the spore suspension was incubated at 5 ± 3 °C overnight (18 h) to eliminate vegetative cells. The washing procedure was repeated twice. Finally, the endospores were suspended in 10 mL sterile distilled water and kept at 5 ± 3 °C until further use. The exact spore concentration was determined by plating appropriate dilutions on TSA. Substrate was prepared by mixing finely ground wheat bran (Meelfabriek De Jongh, Steenwijk, The Netherlands) and tap water in a 35:65 ratio (w/w). 50 g of the substrate was inoculated with vegetative cells or endospores with target levels of 4, 6 and 8 log CFUs. The added inoculum volume was 1 mL, so a final ratio of 35:65 (w/w) was obtained. To the uninoculated rearing conditions (S and S + BSFL), 1 mL tap water was added to obtain the same ratio. Both the inoculated and uninoculated substrates were homogenized by stirring and left at room temperature. Added concentrations can be found in Supplementary Table 1a . The different rearing conditions S, S + BSFL and S + BC + BSFL were performed in triplicate. Rearing condition S + BC was performed in duplicate. As an additional control to test the inoculum, 50 mL of BHI was inoculated in duplicate with the lowest spike concentration (4 log CFUs). To exclude contamination of the BHI, a blank BHI control (50 mL) was included in duplicate ( Supplementary Table 1b ). BSFL originated from a colony maintained by InsectoCycle (InsectoCycle; Wageningen; The Netherlands). During the first 7 days, larvae were grown on standard feed, containing wheat, cornmeal, soy scrap Hipro, brewer's yeast and barley, at 27–29 °C with a relative humidity of 55 %. At day 7, an intended number of 50 larvae were added to each relevant cultivation tray containing 50 g of diet, i.e., 1 larva/gram of diet (S + BSFL and S + BC + BSFL). The cultivation trays were cylindrical (diameter 100 mm, height 40 mm) and were closed with a lid containing a circular area (diameter 40 mm) in the center, which was covered by a mesh (SPL Life Sciences Co., Ltd., Gyeonggi-do, South Korea) to allow air circulation while preventing the escape of larvae. Before the start of the experiment, it was verified that the larvae were not already naturally infected with B. cereus , as described in sections 2.6 , 2.7 . The larvae were reared for 7 days until day 14 post-hatching, in a climate chamber at 28 °C. The samples without larvae were incubated under the same conditions. Sampling of larvae and frass took place on day 7. Larvae were separated from the substrate using sterile tweezers and then washed twice in sterile, demineralized water to remove adhering residual material. Finally, the larvae were collected and placed in sterile plastic bags. The larvae were then killed by pulverizing them within the plastic bags, using mechanical pressure (a rolling pin) to ensure immediate death. Subsequently, frass and larvae samples were weighed, transferred into a sterile filter bag and diluted ten-fold in Peptone Physiological Salt Solution (PPS; Biotrading, Mijdrecht, The Netherlands). Larval and frass samples were homogenized for 60 s using a Smasher (bioMérieux). 1 mL of the homogeneous suspension in PPS (section 2.4 ) was transferred to a sterile 1.5 mL Eppendorf tube.
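As a rough illustration of how such spike levels can be prepared, the sketch below computes the fold-dilution of an overnight culture needed so that 1 mL delivers a target total CFU into a tray; the assumed culture density of 9 log CFU/mL is hypothetical and not taken from the study.

```python
def dilution_for_target(stock_log_cfu_per_ml, target_log_cfu_total, inoculum_volume_ml=1.0):
    """Return the fold-dilution of a culture needed so that the chosen
    inoculum volume delivers the target total CFU into the substrate."""
    target_cfu = 10 ** target_log_cfu_total
    stock_cfu_in_volume = 10 ** stock_log_cfu_per_ml * inoculum_volume_ml
    return stock_cfu_in_volume / target_cfu

# Hypothetical overnight culture of ~9 log CFU/mL, spiking 1 mL into 50 g substrate
for target in (8, 6, 4):  # target levels of 8, 6 and 4 log CFUs per tray
    fold = dilution_for_target(9.0, target)
    print(f"target {target} log CFUs: dilute the culture {fold:,.0f}-fold")
```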
The samples were heated at 100 °C for 1 min in a thermoblock (Eppendorf ThermoMixer F1.5, Hamburg, Germany) at 800 rpm, to simulate the post-harvesting processes (blanching). To check the effect of the heat-treatment on vegetative cells and to check whether spores were present in the vegetative inoculum, an additional experiment was conducted. For this, both strains were grown in triplicate in BHI for 24 ± 2 h at 30 ± 1 °C. One replicate was heated at 100 °C for 1 min, as used in the original experiment. The second replicate was heated at 80 °C for 10 min (pasteurization, to eliminate vegetative cells), and the last replicate was left untreated. The different samples were enumerated by plate counts on TSA, which were incubated for 24 ± 2 h at 30 ± 1 °C. Larvae and frass from both the untreated and heat-treated samples were analyzed for the presence of presumptive B. cereus by using 100 μL of a tenfold dilution series for plate counts on BACARA® agar (bioMérieux Benelux B.V.; Amersfoort; The Netherlands) at 37 ± 1 °C for 24 ± 2 h. Presumptive colonies were randomly selected and confirmed on the Microflex LT/SH™ mass spectrometer (Bruker Daltonics GmbH & Co. KG; Bremen; Germany) by the direct colony method . The presence of B. cereus in both the untreated and heat-treated samples was analyzed by real-time PCR for the presence of the enterotoxin gene nheB , as this gene is present in both strains used. For this purpose, the untreated and heat-treated samples were diluted tenfold in PPS and DNA was extracted by bead-beating followed by DNA purification. 500 μL of the sample was transferred to a new 1.5 mL Eppendorf tube. After centrifugation at 10,000× g for 15 min, the pellet was resuspended in 500 μL ZymoBIOMICS Lysis Solution (Zymo Research Europe; Freiburg im Breisgau; Germany) and transferred to a 2 mL tube containing 0.1 mm silica beads (MP Biomedicals; Eschwege; Germany). Bead-beating was performed in a Fastprep-24™ device (MP Biomedicals) in five cycles of 1 min (6.5 m/s), with a 5 min period of rest at room temperature. The tubes were centrifuged at 10,000× g for 1 min and 75 μL of the supernatant was used for further purification. This was performed with the KingFisher Flex Purification System (Thermo Fisher Scientific, Breda, The Netherlands) using the QuickPick Plant DNA kit (Bio-Nobile, Pargas, Finland) according to the manufacturer's instructions. Detection of the representative toxin gene nheB was based on real-time PCR using the following oligonucleotides: forward primer (nheB-FW1) 5′-GCAGCTGAAAGTACAGTGAAAC-3′, reverse primer (nheB-RV1) 5′-TCAAGCCTTCTGGTCCTAATG-3′ and probe (nheB-P1) 5′-HEX-CGCCAGTTCATGCGGTAGCAAA-BHQ1-3′. The primers were designed based on the nheABC gene sequence (NCBI accession number Y19005.2 ). As internal amplification control (IAC), the primers and probe described by Deer et al. were used. Each 25 μL reaction volume contained 12.5 μL 2x TaqMan Multiplex Master mix, 150 nM IAC-probe, 200 nM IAC-forward primer/IAC-reverse primer/nheB-forward primer, 300 nM nheB-probe, 400 nM nheB-reverse primer and 3 μL DNA template. The amplification program, carried out on a CFX96 Touch Real-Time PCR Detection System (Bio-Rad Laboratories B.V.; Veenendaal; The Netherlands), consisted of an initial denaturation at 95 °C for 15 min, followed by 40 cycles of 95 °C for 10 s for denaturation and 58.5 °C for 60 s for primer binding and extension. The increase in the fluorescence signal of the reporter dye was visualized with the CFX Maestro software v2.3 (Bio-Rad).
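As an illustration of how the 25 μL multiplex reaction described above could be assembled, the sketch below computes per-component pipetting volumes. The 10 μM working-stock concentration is an assumption (stock concentrations are not stated here); the final concentrations, master mix volume and template volume are those given for the assay.

```python
# Hypothetical working stocks of 10 uM (10,000 nM) for every primer and probe.
REACTION_UL = 25.0
STOCK_NM = 10_000
FIXED_UL = {"2x master mix": 12.5, "DNA template": 3.0}   # volumes from the protocol
FINAL_NM = {                                              # final concentrations from the protocol
    "IAC probe": 150,
    "IAC forward primer": 200,
    "IAC reverse primer": 200,
    "nheB forward primer": 200,
    "nheB probe": 300,
    "nheB reverse primer": 400,
}

def oligo_volume_ul(final_nm: float) -> float:
    # C1 * V1 = C2 * V2  ->  V1 = C2 * V2 / C1
    return final_nm * REACTION_UL / STOCK_NM

volumes = dict(FIXED_UL)
volumes.update({name: oligo_volume_ul(conc) for name, conc in FINAL_NM.items()})
volumes["nuclease-free water"] = REACTION_UL - sum(volumes.values())

for component, ul in volumes.items():
    print(f"{component:>22s}: {ul:5.2f} uL")
```

With these assumed stocks, roughly 5.9 μL of water would be needed to bring each reaction to 25 μL; the actual volumes depend on the stock concentrations used in the laboratory.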
Quantification cycle (Cq) values represent the PCR cycle in which a first increase in fluorescence over a defined threshold occurred for each amplification plot. For statistical analyses, the software SPSS Statistics for Microsoft Windows (version 25.0.0.2, IBM Corp., Armonk, NY, United States) was used. Because the treatments were performed only in triplicate, a Gaussian distribution could not be assumed. Therefore, non-parametric statistical tests were performed to determine the statistical significance of the results. Statistical analysis was performed on the S + BC + BSFL conditions: larvae vs frass and untreated vs heat-treated samples. The distributions of the treatments were compared using a Kruskal-Wallis test (α = 0.05). In the inoculated samples without larvae (BHI medium + BC and S + BC), both strains (vegetative cells and endospores) could be detected, confirming that the B. cereus cultures were viable and able to colonize the substrate ( Supplementary Tables 1b and 2 ). As expected, the uninoculated substrates without and with larvae (S and S + BSFL) were below the detection limit, which shows that the larvae were not naturally infected with B. cereus ( Supplementary Tables 1b and 2 ). In addition to analyzing the samples via enumeration on BACARA plates, their DNA was extracted. The isolated DNA was analyzed by real-time PCR for the presence of the B. cereus -specific enterotoxin gene nheB . The Cq-values are an indication of the amount of nheB (and thereby B. cereus ) present in the sample. The obtained Cq-values show that both the inoculum of strain B3465 and that of strain DSM31 contained nheB at a high level (average Cq-values of 20 and 18, respectively), confirming the results that were obtained by plate counting ( Supplementary Table 2 ). The Cq values from the BHI medium + BC and/or S + BC condition give an indication of the B. cereus levels that could be found in the larvae and frass. Dilution ranges of the B. cereus strains were prepared in order to inoculate the substrates with target levels of 8, 6 and 4 log CFUs/50 g substrate, which equals 7.3, 5.3 and 3.3 log CFUs/gram. The actual obtained concentrations were close to the target levels, namely: for B3465 vegetative cells 5.20, 3.20 and 1.20; for B3465 spores 6.05, 4.05 and 2.05; for DSM31 vegetative cells 7.19, 5.19 and 3.19; and for DSM31 spores 7.06, 5.06 and 3.06 log CFUs/gram substrate ( Supplementary Table 1a ). These 12 inoculated substrates were used for rearing experiments; all conditions were tested in triplicate. BSFL were reared on these substrates for seven days, after which the frass and the larvae were analyzed for the presence of B. cereus by plate counts on BACARA. In the conditions where the substrates were inoculated with vegetative cells, B. cereus could not be detected, or only barely, in the larvae after seven days of rearing. This could indicate that vegetative cells were not transferred to the larvae or that the vegetative cells were initially transferred to the larvae during the first days of rearing, but were eliminated in the larval body due to antimicrobial activity. Comparing the results of the frass with the larvae-free controls (S + BC condition), the detectable levels of B. cereus were lower. However, it must be pointed out here that the detectable levels in the larvae-free controls were already quite low, as shown by high Cq values ( Supplementary Table 2 ). Table 1 Enumeration of typical B.
cereus on BACARA plates (expressed in log CFU/gram; average of biological triplicates) and Cq values of real-time PCR for the detection of nheB – B3465 and DSM31 vegetative cells. Arithmetic mean and standard deviation. Light-grey shaded: heat-treated samples (1 min at 100 °C). Kruskal-Wallis test (α = 0.05), n.s. Table 1 B3465 vegetative cells Experimental condition Inoculation level (log CFU/gram) Frass Larvae Unprocessed Heated Unprocessed Heated Count (log CFU/gram) Cq (qPCR) Count (log CFU/gram) Cq (qPCR) Count (log CFU/gram) Cq (qPCR) Count (log CFU/gram) Cq (qPCR) S + BC + BSFL 5.20 0.23 ± 0.41 N/A N/A N/A 0.33 ± 0.58 N/A N/A 39.62 0.52 ± 0.91 N/A N/A N/A <0 N/A N/A N/A 3.20 0.21 ± 0.37 N/A 38.18 N/A <0 32.73 36.83 N/A <0 N/A 39.89 N/A 0.32 ± 0.56 N/A 38.15 N/A 1.20 <0 N/A N/A N/A <0 37.41 N/A N/A <0 N/A N/A N/A <0 N/A N/A N/A DSM31 vegetative cells S + BC + BSFL 7.19 <0 N/A N/A N/A <0 37.87 N/A N/A <0 N/A N/A N/A <0 N/A N/A N/A 5.19 <0 N/A N/A N/A <0 N/A N/A N/A <0 N/A N/A N/A <0 N/A N/A N/A 3.19 <0 N/A N/A N/A <0 N/A N/A N/A <0 N/A N/A N/A <0 N/A N/A N/A Fig. 1 Presence of B. cereus vegetative cells and endospores in larvae and residual substrate after 7 days of BSFL rearing. Graphs represent B. cereus counts in log CFU/gram sample for B3465 (upper graphs) and DSM31 (lower graphs), vegetative cells (left) and endospores (right), highest inoculation levels. Black bars indicate the inoculation levels, white bars the residual substrates and striped bars the larvae. Grey-shaded results indicate the heat-treated samples. Kruskal-Wallis test (α = 0.05), n.s. Fig. 1 In the conditions where the rearing substrates were inoculated with endospores, B. cereus was detected in the frass as well as in the larvae after seven days of rearing. The B. cereus load detected in the larvae was as high as in the substrate. Though the spike concentrations of strain B3465 were lower than those of strain DSM31, inoculation of the rearing substrate with B3465 endospores resulted in equal (or even trending towards higher) microbial loads in the frass and larvae compared with inoculation with strain DSM31 ( Table 2 ), suggesting that the survival properties of B. cereus are strain-dependent. The inoculated substrates were heat-treated and tested for the presence of B. cereus . Surprisingly, CFUs were found for substrates that were inoculated with vegetative cells ( Supplementary Table 2 ). This suggested that spores might also have been present in the vegetative inoculum. Therefore, overnight cultures from the two B. cereus strains were exposed to the heat-treatment used in this study to mimic blanching (1 min at 100 °C) and to a standard treatment used to eliminate vegetative cells (10 min at 80 °C) ( Supplementary Table 3 ). Since both heat-treatments reduced the cell count equally, the results indicated that spores were indeed present in the culture of vegetative cells. Hence, it is concluded that the CFUs that were detected in the substrate upon heat-treatment ( Supplementary Table 2 ) were formed by the spores that were present in the culture of vegetative cells, and not by vegetative cells. Remarkably, this reduction in CFU counts upon heating was more pronounced for the DSM31 culture than for the B3465 culture, indicating that the percentage of spores in the B3465 culture was considerably higher. In the inoculated substrates with larvae (S + BC + BSFL), the additional heating step did not make a difference when the substrate was inoculated with vegetative cells, since there were few B.
cereus cells present to kill in the first place ( Table 1 ). The small amount of cells that are detected must be the endospores that were present in the inoculum, because they are not killed by the heating step. In the conditions where the rearing substrates were inoculated with spores, an additional heating step did not reduce the amount of CFUs detected . The additional heating step considerably reduced the amount of background microbiota (while the effect on B. cereus CFUs is minimal as observed in the case of larvae), enabling counting of B. cereus CFUs. Data obtained by real-time PCR confirmed that there were approximately equal levels of nheB present in the frass as in the larvae ( Table 2 ). In the cases where the rearing substrate was inoculated with a lower spike concentration, the Cq-values were higher, indicating a lower level of the nheB gene. Table 2 Enumeration of typical B. cereus on BACARA plates (expressed in log CFU/gram; average of biological triplicates) and Cq values of real-time PCR for the detection of nheB – B3465 and DSM31 spores. Arithmetic mean and standard deviation Light-grey shaded: heat-treated samples (1 min at 100 °C). Kruskal-Wallis test (α = 0.05), n.s. Table 2 B3465 spores Experimental condition Inoculation level (log CFU/gram) Frass Larvae Unprocessed Heated Unprocessed Heated Count (log CFU/gram) Cq (qPCR) Count (log CFU/gram) Cq (qPCR) Count (log CFU/gram) Cq (qPCR) Count (log CFU/gram) Cq (qPCR) S + BC + BSFL 6.05 3.78 ± 0.08 31.53 32.02 31.57 4.25 ± 0.14 30.89 31.11 30.26 4.93 ± 0.04 30.39 30.70 30.30 4.83 ± 0.15 30.05 30.19 29.89 4.05 0.84 ± 0.78 N/A 39.54 N/A 1.56 ± 0.35 38.20 N/A N/A 2.26 ± 0.09 38.53 37.86 N/A 2.51 ± 0.10 38.29 37.02 38.17 2.05 <0 N/A N/A N/A <0 39.60 N/A N/A <0 N/A N/A N/A 0.65 ± 0.57 N/A N/A N/A DSM31 spores S + BC + BSFL 7.06 1.75 ± 1.57 31.72 32.39 30.63 2.71 ± 0.11 31.25 32.29 30.57 3.03 ± 0.18 32.87 33.34 30.53 2.78 ± 0.16 31.59 32.87 30.98 5.06 0.35 ± 0.61 38.32 39.62 35.15 1.11 ± 0.50 38.37 37.70 36.67 0.43 ± 0.75 36.31 38.22 N/A 0.33 ± 0.58 39.52 37.58 37.49 3.06 <0 N/A N/A 39.73 <0 N/A N/A N/A <0 N/A 39.25 N/A 0.33 ± 0.56 N/A N/A N/A To investigate the dynamics of the foodborne pathogen B. cereus during rearing BSFL, rearing substrate was inoculated with either vegetative cells or endospores. B. cereus endospores were found to survive in BSFL and in the frass, while vegetative cells did not survive, neither in BSFL nor in the substrate. Regulation (EC) No 142/2011 requires that for the production of animal feed, processed animal proteins (PAPs) of non-mammalian origin must have been processed in accordance with one of five standard methods to reduce microbiological contamination (Chapter 3 of Annex 4). Depending on the particle size; certain time, temperature, and pressure requirements are prescribed. Alternatively, ‘method 7’ involves authorization of a novel method by the respective competent national authority, which requires demonstration of reduction of Clostridium perfringens , Salmonella , and Enterobacteriaceae – but not B. cereus . Based on these observations, it is hypothesized that blanching may reduce the B. cereus vegetative cell count, but does not kill endospores and therefore does not completely sterilize the sample. In this way, endospores could end up in the feed chain and potentially endanger animal and eventually human health. Remarkably, the data suggests that the presence of BSFL reduces vegetative B. cereus growth, because in the S + BC + BSFL condition, B. 
cereus could not be detected in the larvae or in the frass, while the level in the larvae-free controls (S + BC) was higher. In the absence of vegetative B. cereus – or even a reduction, as observed in this study – formation of toxins is unlikely. However, the presence of toxins was not tested, so it cannot be ruled out that this could still pose a safety problem if toxins were formed prior to the insect rearing stage. A possible explanation for the observation that the presence of larvae reduced the level of detected B. cereus is the excretion of antimicrobial peptides by BSFL. Previous studies, in which these types of inoculation experiments have been performed for Escherichia coli , Staphylococcus aureus , Salmonella species and Enterococcus species [ , , , , , ], reported antimicrobial activity of BSFL, as they are capable of suppressing the growth of different microorganisms by the secretion of antimicrobial compounds . Another possible explanation is that spores germinate and that, subsequently, these vegetative bacteria do not survive. The finding that the detectable levels of B. cereus are almost equally high before and after heat-treatment indicates that the detected B. cereus are endospores, either formed during the rearing experiment or already present in the inoculum. To get a clearer view of the dynamics of B. cereus vegetative cells during insect rearing, more investigation is required, especially by measuring at different time points during the rearing experiment. If the presence of BSFL reduces B. cereus , that could give the impression that the presence of food pathogens in the rearing substrate should not be a concern. However, a recent study by Moyet et al. reported that the presence of BSFL in potato substrate increased the survival and growth of B. cereus , and a study by De Smet et al. reported an increase of Salmonella . These contradictory observations suggest that the antimicrobial activity may be bacterial species- and substrate-dependent . To what extent the findings of this study can be extrapolated to other substrates, such as those from waste-streams, should therefore be the subject of further research. In the current study, the endogenous microbiota present in the feed were also active during the experiments: the wetted substrate stored at 28 °C formed an ideal environment for the growth of background microorganisms, especially fungi. In initial pilot experiments, growth of background fungi hindered the counting of B. cereus colonies on Mannitol egg Yolk Polymyxin (MYP) plates. Previous studies also reported hindrance by background microbiota in the rearing substrates and advised the use of an extra selective or elective aid (such as the introduction of an antibiotic-resistance gene in the target microorganism) . Therefore, natamycin was added as an anti-fungal compound to the MYP plates. This indeed inhibited the growth of fungi, but did not hinder the growth of other bacteria (data not shown), which still prevented an accurate count of B. cereus CFUs. It was therefore decided to plate the samples on BACARA agar, which is a chromogenic medium that is selective for B. cereus species. In some of the conditions tested, considerable background microbiota still grew on the BACARA plates, indicating that they must be closely related to B. cereus . Further diluting the samples also diluted the background microbiota and made it possible to count the B. cereus CFUs.
Although the background microbiota thus did not influence the microbiological analysis, any inhibiting or competing effects of the background microbiota on the growth of B. cereus in the substrate cannot be ruled out. Results from these pilot experiments also suggested that vegetative B. cereus inoculated at low levels did not survive in the substrate; hence, comparatively high inoculation levels were selected to ensure analytical recovery and to mimic a commercial worst-case scenario. This study shows that B. cereus endospores present in the rearing substrate can be transmitted from substrate to black soldier fly larvae and that the heating step tested in this study did not reduce the endospore count in the larvae. As a consequence, endospores that pose a safety hazard to animals and humans may end up in the feed chain. Consequently, to prevent B. cereus endospores from entering the feed and food chain and to ensure a safe end product for both animals and humans, it is advised to test substrate ingredients intended for BSFL rearing for the presence of B. cereus . According to Dutch Regulation BWBR0005758, the level of B. cereus per gram/mL of foodstuff should be lower than 5 log CFUs. In France, the same threshold is applied . It is suggested that insect producers maintain this norm. In this study it was shown that B. cereus endospores present in the rearing substrate were transferred to the larvae of the black soldier fly. The endospores also survived in the frass and their counts were not reduced by potential antimicrobial activity of the larvae. B. cereus vegetative cells were below the detection limit, but it could not be proven that the pathogen was not initially ingested and eliminated during the rearing experiment, as the vegetative bacteria did not survive in the substrate without larvae either. An additional heat-treatment did not kill the endospores. Furthermore, the microbial load detected in the larvae and frass was strain-dependent. In conclusion, to ensure a safe end product for animal and human health, it is recommended to analyze substrate ingredients for the absence of B. cereus spores. K. van Kessel: Writing – original draft, Methodology, Investigation, Data curation. G. Castelijn: Writing – review & editing, Supervision, Conceptualization. M. van der Voort: Writing – review & editing, Supervision, Conceptualization. N. Meijer: Writing – review & editing, Methodology, Funding acquisition, Conceptualization. Relevant data from the experiments are provided in the manuscript or supplementary materials. Any other data can be made available upon request, depending on confidentiality. This project was funded by the Dutch Ministry of Economic Affairs through a Public-Private Partnership project (“Controlling the safety of insects for food and feed”) of the Topsector AgriFood . The Ministry had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The commercial entities within the project consortium were: Proti-Farm R&D, BV; Protix Biosystems; Bestico B.V., and ForFarmers. The commercial entities in the consortium had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Nathan Meijer reports financial support was provided by the Dutch Ministry of Economic Affairs.
If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Study | biomedical | en | 0.999995
PMC11696653 | The menisci are a pair of wedge-shaped fibrocartilaginous semilunar structures within the knee joint, located between the femoral condyles and the tibial plateau, improving the incongruence between the cartilaginous surfaces with their concave structure . The menisci have a role in load transmission, shock absorption, stability, nutrition, joint lubrication, and proprioception [ , , ]. Meniscal degeneration or injury can lead to significant functional impairment and contribute to the development of osteoarthritis (OA), highlighting the importance of understanding meniscal biology and pathology [ , , ]. The menisci are composed of a dense extracellular matrix (ECM), primarily comprising water (72 %) and collagen (22 %), with the remaining dry weight attributed to proteoglycans (PGs), non-collagenous proteins, and glycoproteins . Among the imaging techniques available for 3D visualization of tissues, X-ray micro-computed tomography (micro-CT) is a non-destructive method capable of exploring the internal microstructure of materials and generating three-dimensional (3D) models [ , , , ]. It is particularly valuable for imaging high-attenuating structures like bone. However, the visualization of soft tissues remains challenging due to their inherently low X-ray contrast. Therefore, contrast enhancement techniques, such as the use of contrast agents (CAs), are essential for differentiation of the various structures within such samples. Several recent studies have compared various CAs for the visualization of soft tissues [ , , ]. Various staining methods, such as iodine-based solutions, phosphotungstic acid (PTA), and osmium tetroxide, have been tested as CAs to observe the development of chicken embryos, distinguishing different organs, and to compare the penetration of the CAs . Iodine is a versatile CA that can be used to enhance the radiodensity of any tissue, as it is known to form complexes with the helical coil structure of the polysaccharide glycogen and its plant counterpart, starch, without bonding specifically to any other cellular or extracellular components [ 14 , , , , ]. For tissue penetration of CAs in large samples, only potassium iodide (KI) stained the whole tissue within a short time . Lugol (KI 3 ) solution is among the anionic iodine-based solutions frequently used as CAs. It is composed of potassium iodide (KI) and iodine (I 2 ) in a 2:1 ratio, dissolved in water or ethanol, KI serving to increase the solubility of I 2 . In solution, an equilibrium is established between I 2 and iodide ion (I − ), and the resulting triiodide ion (I 3 − ) : I 2 + I − ⇋ I 3 − Within cartilaginous and fibrocartilaginous tissues, iodine distribution occurs mainly by diffusion and is influenced by ECM biochemical constituents [ , , ]. PGs, being negatively charged, change the transport characteristics of anionic CAs into the tissue. Indeed, iodine distribution at equilibrium is significantly higher in the meniscus than in cartilage, a result that correlates with the lower content of PGs in the meniscus compared with that in cartilage . Additionally, polar interactions can occur between anionic CAs and polar functional groups, such as –OH and –NH, present in collagen and other ECM components . The aim of this work was to investigate iodine staining protocols for large samples, such as the meniscus, in different animal species. 
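As a side note on the Lugol formulation discussed above, the short sketch below converts the solution strength used in this work (1.25 % w/v I 2 and 2.5 % w/v KI, i.e. the 2:1 KI:I 2 weight ratio; see the staining protocol in the next section) into weigh-out masses and molar amounts for an arbitrary batch volume. The function name and the 100 mL example are illustrative only; the molar masses are standard values.

```python
# Minimal sketch of the Lugol (KI3) solution arithmetic; the concentrations are the
# ones reported in this work, everything else (names, example volume) is illustrative.
M_I2 = 253.81   # g/mol, molecular iodine
M_KI = 166.00   # g/mol, potassium iodide

def lugol_recipe(volume_ml: float, i2_pct: float = 1.25, ki_pct: float = 2.5) -> dict:
    """Grams and millimoles of I2 and KI needed for a given solution volume."""
    g_i2 = i2_pct / 100 * volume_ml   # % w/v means g per 100 mL
    g_ki = ki_pct / 100 * volume_ml
    return {
        "I2 (g)": g_i2,
        "KI (g)": g_ki,
        "I2 (mmol)": 1000 * g_i2 / M_I2,
        "KI (mmol)": 1000 * g_ki / M_KI,
    }

# Roughly 100 mL of contrast agent was used per meniscus in this study
for key, value in lugol_recipe(100).items():
    print(f"{key:>10s}: {value:6.2f}")
```

The roughly threefold molar excess of iodide over I 2 that results from the 2:1 weight ratio is what keeps the poorly soluble I 2 in solution as I 3 − , consistent with the equilibrium described above.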
The relevance of the study lies in its detailed evaluation of the CA diffusion mechanism within the meniscus, considering the width of the tissue and the time-dependent changes in radiodensity. Additionally, volume changes caused by the staining processes were evaluated and the iodine ions taken up by the tissue were identified. This comprehensive approach provides valuable insights into optimizing iodine staining protocols, enhancing the visualization of soft tissues, and ultimately improving the accuracy and efficacy of micro-CT imaging. Three Swiss alpine sheep ( Ovis aries ) and six mix breed pig ( Sus domestica ) hindlimbs were obtained from respectively two and three animals (Musculoskeletal Research Unit – MSRU -, University of Zurich, Switzerland). Review and/or approval by an ethics committee was not required for this study because the hindlimbs were collected from research animals after they were sacrificed at the study-specific endpoints of other research projects. The knee joints were freed from surrounding soft tissues. The patella as well as the collateral ligaments were excised to expose both menisci. Then, sheep menisci (n = 3 medial and n = 3 lateral) and pig menisci (n = 6 medial and n = 6 lateral) were then isolated from the knee joint capsule and cut at the level of the bony attachments of the anterior and posterior horns. The entire menisci were fixed for 24 h in 4 % paraformaldehyde (PFA) and then immersed for 48 h in either water or PBS according to the subsequent staining solution. All the sheep and six pig medial and lateral menisci were soaked in an aqueous solution of Lugol (KI 3 ) (i.e., 1.25 % w/v of I 2 and 2.5 % w/v of KI; total CA volume for each sample ca. 100 ml) for a total of 24 days. The remaining six pig medial and lateral menisci were stained for the same duration in a phosphate-buffered saline (PBS)-based KI 3 solution with the same concentration (total CA volume for each sample ca. 100 ml). Aliquots of 1 ml of the staining solutions were collected at 8 time points up to 24 days and stored for the subsequent analyses with micro-CT, Ultraviolet–visible (UV–vis) spectroscopy, and pH measurements . Fig. 1 Schematic illustration of the materials and methods applied in this study. ( A ) Sample processing steps. ( B ) Micro-CT imaging analysis. ( C ) Volume analyses and radiodensity measurements. Fig. 1 The menisci (wrapped in parafilm), as well as 1 ml samples from the staining solution were scanned with micro-CT (EasyTom XL Ultra 230–160 micro/nano-CT scanner, RX Solutions, Chavanod, France) at specific time points: before staining (day 0), and after 1, 4, 8, 12, 16, 20, and 24 days of immersion in staining solution . The images were acquired by setting a rotation step of 0.25°, a number of average frames of 3, and 5 images per frame. The scanner operated at 70 kV and 70 μA, with a nominal resolution set to 25 μm or 30 μm depending on sample size, and each acquisition took 14 min. For higher-resolution imaging, scans were conducted at 8.5 μm and 2.5 μm voxel sizes. The parameters for the 8.5 μm scans included a rotation step of 0.3°, a frame rate of 1.5, an average of 5 frames per scan, and acquisition time of 64 min. For the 2.5 μm scans, the scanner was operated at a voltage of 90 kV and a current of 60 μA, with a frame rate of 1, an average of 5 frames per scan, and a rotation step of 0.18°. In this case, the acquisition time was 168 min. 
All the CT datasets were reconstructed using the filtered back-projection algorithm, a small ring artifact reduction and a 75 % Sinus window function. The CT datasets were analyzed using the open-source image processing package Fiji and the software application Avizo (Thermo Fisher Scientific, MA, USA). The meniscal tissue was segmented using the IsoData algorithm and, for each sample, the segmented volumes were calculated and normalized at specific staining time points . The distribution of CA in the tissue was visualized by creating 3D volume renderings using the maximum intensity projection (MIP) technique and averaging 150 spatially consecutive radial slices of representative menisci for each time point. The original CT datasets were converted into an 8-bit format, and the linear attenuation coefficient (μ) was measured for each sample at specific staining times. To provide a quantitative standardized measure of the tissue radiodensity, the Hounsfield Unit (HU) values were calculated for each meniscal sample at different time points during the staining process. To determine the CA distribution along the radial direction from the outside of the meniscus to the inside, CT data were analyzed using MATLAB at different time points. CA uptakes were determined within a region of interest (ROI) extending from the outer periphery to the inner regions of the meniscus, with radial width dimensions varying depending on the species and sample and a fixed height of 0.25 mm. The values were averaged from 100 CT slices taken from anatomical radial slices of the center of the meniscus . The gray levels of the images were converted to HU, and the uptake of CA within the meniscus, expressed as a normalized percentage, was determined by subtracting the initial images without CA from the images with CA and normalizing with the mean of the maximum HU value in the images. The formula used to calculate the normalized uptake in % was: U% = (HU − HU_day0) / (HU_max − HU_day0) × 100, where U% is the percentage of contrast agent uptake, HU is the Hounsfield Unit value of the current image, HU_day0 is the mean HU value of the day 0 image (baseline without contrast agent), and HU_max is the maximum HU value observed across all images. The uptake along the width of the meniscus was then calculated by averaging the HU values across columns (i.e., the 0.25 mm height) for each image, resulting in a one-dimensional uptake profile parallel to the radial axis of the central meniscus. These profiles were combined across all time points and samples to calculate a mean uptake profile. The width of the menisci was normalized from 0 to 1 to facilitate comparison. The resulting profiles were visualized using a heatmap with a color scale from blue through green and yellow to red representing the uptake levels over time and depth. Fig. 2 Schematic illustration of the analyses of contrast agent uptake over the width of the menisci. Contrast agent uptakes were determined within a region of interest (ROI), averaged from 100 CT slices taken from anatomical radial slices of the center of the meniscus. Fig. 2 The staining solutions were segmented utilizing the software application Avizo (Thermo Fisher Scientific, MA, USA). Similarly to the radiodensity measurement of the menisci, the original CT datasets were converted into an 8-bit format, and μ was measured for each sample.
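A minimal NumPy re-implementation of the width-wise uptake calculation described above is sketched below for illustration; the original analysis was performed in MATLAB, and the array shapes, the gray-to-HU calibration constants and the variable names used here are placeholders rather than values from the study.

```python
import numpy as np

def to_hounsfield(gray: np.ndarray, slope: float, intercept: float) -> np.ndarray:
    """Linear rescale of reconstructed gray levels to HU (calibration assumed)."""
    return gray * slope + intercept

def uptake_percent(hu: np.ndarray, hu_day0: np.ndarray) -> np.ndarray:
    """U% = (HU - HU_day0) / (HU_max - HU_day0) * 100, as in the formula above."""
    baseline = hu_day0.mean()
    return (hu - baseline) / (hu.max() - baseline) * 100.0

def radial_profile(uptake_roi: np.ndarray) -> np.ndarray:
    """Average a (height x width) ROI over its height (the 0.25 mm direction)
    and map the meniscus width onto a normalized 0-1 axis."""
    profile = uptake_roi.mean(axis=0)            # 1D profile along the width
    width = np.linspace(0.0, 1.0, profile.size)  # outer (0) to inner (1) edge
    return np.stack([width, profile])

# Hypothetical 8-bit ROI stacks: 100 slices x 10 rows (0.25 mm) x 400 columns
rng = np.random.default_rng(0)
roi_day0 = to_hounsfield(rng.integers(0, 40, (100, 10, 400)).astype(float), 20.0, -1000.0)
roi_dayN = to_hounsfield(rng.integers(30, 200, (100, 10, 400)).astype(float), 20.0, -1000.0)

u = uptake_percent(roi_dayN, roi_day0)          # per-voxel uptake in %
mean_profile = radial_profile(u.mean(axis=0))   # averaged over the 100 slices
print(mean_profile.shape)                       # (2, 400): position, uptake %
```

Stacking the resulting profiles for all time points row-wise yields the kind of depth-versus-time matrix that is rendered as a heatmap (cf. Fig. 7).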
To provide a quantitative standardized measure of the solution radiodensity, the HU values were calculated for each iodine solution at specific time points during the staining process. The percentage change in radiodensity was then determined by comparing the HU values of the staining solutions from day 0 to day 1: R% = (HU_day0 − HU_day1) / HU_day0 × 100. Staining solutions were diluted by a factor of 500 and the absorbance spectra were acquired with 1.5 nm resolution and a scan rate of 300 nm/min in the range of 210–800 nm using a spectrophotometer (Varian Cary 50 UV–vis, Varian, Inc., California, USA). Three distinct peaks at 228, 288, and 351 nm, corresponding to the iodine ions, were identified in the Lugol solution. To investigate the uptake of iodine ions by meniscal tissue, the water- and PBS-based Lugol solutions were analyzed at days 0, 1, and 24. Before sample analysis, a solvent baseline measurement, either water or PBS, was recorded for use as a blank sample. The samples were diluted as follows: the water and PBS phases were diluted 3000 times to measure I − and 500 times to measure I 3 − . Regarding the pH analysis, pH values of both water- and PBS-based iodine solutions were measured at days 0, 1, and 24 of the staining period using a FiveEasy pH meter (Mettler-Toledo, Schweiz GmbH). Single-factor ANOVA and the Tukey-Kramer test were performed for multiple comparisons. Statistical significance was set at P < 0.05. Following the automatic segmentation approach described above, 3D volume renderings of the tissue were generated for each staining time point. The 3D renderings captured temporal changes in volume, in radiodensity and in staining of different structural features . Fig. 3 Micro-CT imaging of the menisci during the staining period. Axial and radial 3D volume renderings of a sheep meniscus during the staining period using the maximum intensity projection (MIP) technique. The yellow line highlights the triangular shape of the meniscus in the radial orientation. Scale bars in the axial views: 10 mm; scale bars in the radial views: 5 mm. Fig. 3 At day 24 of the staining period, the menisci were imaged at progressively higher resolutions, i.e. 25 μm, 8.5 μm and 2.5 μm voxel size . The higher-resolution scans allowed the visualization of distinct anatomical features of the meniscus, including collagen fiber orientation and blood vessels. This highlights the efficacy of the staining process in enhancing the visibility of specific anatomical and structural characteristics. Fig. 4 Cross-sectional 2D images of a stained meniscus scanned at different resolutions after 24 days of staining. The menisci were imaged at nominal resolutions of 25 μm ( A ), 8.5 μm ( B ), and at high resolution of 2.5 μm ( C ). Blood vessels are indicated with an asterisk (∗), and collagen fiber organization is highlighted with a hash symbol (#). Scale bars: 1 mm. Fig. 4 Volumetric analyses were performed on all samples. Immersion in the CA solution caused a reduction in tissue volume. Using the initial volume (day 0) as the reference, one day of staining resulted in a statistically significant average reduction in meniscal volumes: 15 % for sheep and 20 % for pigs when stained in water-based solutions. The volume shrinkage remained consistent over the remaining staining period. Specifically, by days 4 and 24, sheep samples showed volume reductions of 18 % and 21 %, respectively, while the volumes of pig samples decreased by 27 % on day 4 and 30 % on day 24 . Fig.
5 Volumetric measurement of the menisci during the staining period. Boxplots are coloured in beige for sheep menisci and in blue for pig menisci stained in water-based iodine solution, and in light green for pig menisci stained in PBS-based iodine solution. For each sample group: day 0 vs. remaining staining times ∗∗∗ P < 0.001, ∗ 0.01 < P < 0.05; day 1 vs. remaining staining times ### P < 0.001, ## 0.001 < P < 0.01, # 0.01 < P < 0.05; day X of pig menisci in PBS vs. corresponding day X of pig menisci in water &&& P < 0.001. Fig. 5 A decrease in volume was also observed in pig meniscal samples stained in the PBS-based iodine solution. However, this decrease was statistically lower than in those stained with the water-based solution and did not progress beyond day 4. Specifically, the volumetric decreases for days 1, 4, and 24 were 7 %, 10 %, and 10 %, respectively, compared to day 0 . Radial micro-CT images of sheep and pig menisci were captured throughout the staining period of 24 days. These images suggested that the water- and PBS-based iodine solutions primarily diffused through the peripheral region of the tissue, while the inner region exhibited limited uptake. The low attenuation at the exact surfaces is an artifact due to anatomical misalignment of the overlapping surfaces across the 150 slices. Radiodensity analyses of the menisci stained in the water-based iodine solution revealed an increase in HU over the first four days of the staining period . In sheep and pig samples, HU values from day 8 until day 24 were statistically significantly higher than those at day 0 (sheep average value: 49 ± 17.3 HU, pig average value: 28 ± 96.7 HU) and day 1 . From the radiodensity graph , an increasing trend can be observed between days 1 and 4 in both sheep and pig samples. After this initial staining period, no additional iodine uptake was observed. Average HU values on days 8 and 24 were 2838 ± 240.7 HU and 2820 ± 366.6 HU for sheep menisci, and 3231 ± 369.1 HU and 3384 ± 334.1 HU for pig menisci, respectively. Fig. 6 Radiodensity measurement of the menisci during the staining period. Radial micro-CT images (average of 150 consecutive slices) of sheep ( A ) and pig ( B ) samples in water and pig ( C ) samples in PBS for all time points. ( D ) HU values of each sample group for all time points. Boxplots are coloured in yellow and in orange for sheep and pig menisci stained in water-based iodine solution and in light blue for pig menisci stained in PBS-based iodine solution. For each sample: day 0 vs. remaining staining times ∗∗∗ P < 0.001; day 1 vs. remaining staining times ### P < 0.001, ## 0.001 < P < 0.01, # 0.01 < P < 0.05; day X of pig menisci in PBS vs. corresponding day X of pig menisci in water && 0.001 < P < 0.01. Fig. 6 Similarly to the samples stained in the water-based iodine solution, HU values of the menisci stained in the PBS solution increased significantly from day 0 (average value: 175 ± 20.5 HU) to all other staining times. After day 1 , the radiodensity values remained relatively stable, and from day 8 to day 24 they were statistically significantly lower than those of the corresponding samples stained in the water-based iodine solution . In all three sample groups, the CA diffused in only one direction, from the outside inwards . In sheep menisci stained in the aqueous iodine solution, the uptake of the CA increases significantly during the first 4 days, during which 70 % of the tissue width is stained.
However, iodine absorption continued throughout the entire staining period until day 24 . For pig menisci stained in the aqueous-based solution, 8 days of staining are necessary for iodine to diffuse into 70 % of the tissue width. However, less diffusion is observed into the remaining inner portion of the tissue . The iodine solution in PBS diffused more rapidly than the aqueous counterpart into pig menisci. Indeed, after one day of staining, the iodine uptake is higher in this group of samples compared to those immersed in the aqueous solution. Similarly to the other sample groups, iodine diffusion continues throughout the entire staining period . Fig. 7 Width-wise contrast agent diffusion within the meniscus during the staining period. Mean (n = 100 slices per sample) contrast agent diffusion of sheep menisci ( A ; n = 6 samples) and pig menisci ( B ; n = 6 samples) stained in water-based iodine solution. ( C ) Mean (n = 100 slices per sample) contrast agent diffusion of pig menisci (n = 6 samples) stained in PBS-based iodine solution. Fig. 7 As expected, the radiodensity analyses of the iodine solutions exhibited an opposite trend to that of the meniscal samples. Specifically, the greatest decrease in iodine concentration occurs during the first day of staining for both water- and PBS-based solutions . These data support the previously described radiodensity results of meniscal tissue, which showed a significant increase in iodine uptake after one day of staining . Fig. 8 Radiodensity measurement of the contrast solutions during the staining period. HU values of the water-based contrast solutions are outlined in light green for sheep menisci and in dark green for pig menisci, the PBS-based contrast solution is outlined in pink. For each sample group: day 0 vs. remaining staining times ∗∗∗ 0.001 < P , ∗∗ 0.001 < P < 0.01, ∗ 0.01 < P < 0.05; day 1 vs. remaining staining times ### 0.001 < P , ## 0.001 < P < 0.01, # 0.01 < P < 0.05; day X of pig menisci in PBS vs. corresponding day X of pig menisci in water &&& P < 0.001. Fig. 8 The percentage change in radiodensity, accounting for initial HU differences, showed similar decreases in CA uptake over time for both solutions. Specifically, the decrease from day 0 to day 1 was 6.75 % for sheep menisci in water, 7.29 % for pig menisci in water, and 7.02 % for pig menisci in PBS. These values indicate comparable diffusion behavior of the CA in both water-based and PBS-based solutions. Lugol's solutions were subsequently analyzed using UV–visible spectroscopy, identifying three peaks attributed to two different iodine ions: I − and I 3 − . The peak at 228 nm corresponding to I − showed no change in peak intensity after 1 and 24 days in the water-based solution, while a slight difference was observed in the PBS-based solution . Whereas, the peaks corresponding to I 3 − exhibited a temporal decrease in intensity across all three sample groups. Both peaks decreased in intensity during the staining period (from day 0 to days 1 and 24), revealing a selective uptake of the I 3 − ion by the meniscus . The behavior of iodine ions in different samples explains the radiodensity values of the staining solutions. Indeed, in sheep samples, a maximum concentration of I − and a minimum concentration of I 3 − resulted in radiodensity values similar to pig samples stained in the water-based solution. 
Conversely, in PBS samples, a higher concentration of I − on day 24 compared to day 1, and a lower concentration of I 3 − on day 24 compared to day 1, led to similar radiodensity values between days 1 and 24. Fig. 9 UV–visible spectroscopy analysis of the iodine solutions. ( A ) Three distinct peaks were identified at 228, 288, and 351 nm and assigned to I − and I 3 − . ( B ) I − was measured at days 0, 1, and 24 of the staining period for each sample group . ( C ) I 3 − was measured at days 0, 1, and 24 of the staining period for each sample group (diluted 500 times). Fig. 9 The pH values of the water- and PBS-based iodine solutions were measured at the same staining time points as the UV–visible spectroscopy analysis (days 0, 1, and 24). At day 0, the water-based iodine solutions were acidic with a pH of 4.37, while the PBS-based solution had a near-neutral pH of 7.70 ( Table 1 ). Over the staining period, the pH values of both iodine solutions decreased, with the water-based solutions consistently reaching a more acidic level ( Table 1 ). Table 1 Comparison of pH values of the iodine solutions. pH mean values were calculated at days 0, 1, and 24 for each sample group. Data are shown as mean ± SD. Table 1 Staining times Mean pH Values Sheep menisci in water Pig menisci in water Pig menisci in PBS Day 0 4.37 ± 0.04 4.37 ± 0.04 7.70 ± 0.16 Day 1 4.04 ± 0.03 3.94 ± 0.07 7.20 ± 0.10 Day 24 3.50 ± 0.17 3.30 ± 0.28 5.58 ± 0.19 In this study, the diffusion patterns of iodine-based CA solutions into the menisci of sheep and pigs were investigated using 3D imaging. Pig and sheep are the most frequently used animal models for meniscus repair and regeneration due to their anatomical and biochemical similarity to the human counterpart . Micro-CT was employed to visualize and quantify the diffusion of the CA within the tissues at various time points during the staining process. The non-destructive nature of micro-CT made it possible to analyze and subsequently visualize each sample at eight different time points (from day 0 to day 24). The unstained menisci (day 0) provided information on the volume and shape of the samples. However, specific anatomical and structural characteristics were not discernible due to the low inherent contrast of the meniscus, a typical characteristic of soft, low-density tissues. Therefore, the use of CAs is necessary for the visualization of these tissues by X-ray-based imaging . In 2009, Metscher's study was the first to use and compare different CAs for the visualization of soft tissues using micro-CT . Specifically, he demonstrated how simple staining methods based on iodine and phosphotungstic acid (PTA) enabled high-resolution visualization of embryonic chicken tissues, even permitting the distinction of individual cells . Considering the size of the meniscus and the ease of preparing the solution, Lugol solution (KI 3 ) was used to stain sheep and pig meniscal samples for a total period of 24 days. Previous studies have shown that potassium iodide solution is easy to prepare and allows staining of even large samples . The work of Pauwels et al. involved the study of various chemicals (n = 12) that could be used as CAs to stain mouse paws by immersion of the samples for contrast-enhanced micro-CT . After 24 h of immersion, only the iodine-based and sodium tungstate solutions were able to penetrate the samples entirely, proving their effectiveness as CAs for larger specimens .
However, additional days of staining were necessary in this study to permit the diffusion of the CA throughout the entire sample, probably due to the larger size of the sheep and pig menisci compared to mouse paws. This is in line with the work of Disney et al., where completely staining a quarter segment of a bovine intervertebral disc (IVD), a fibrocartilaginous tissue with a composition similar to that of the meniscus, required 14 days of incubation in KI 3 . The volumes of all samples were calculated on days 0, 1, 4, 8, 12, 16, 20, and 24 of staining and normalized to the initial volume, in order to allow a more reliable comparison between the various samples. After one day of staining, the sample volume had decreased in all groups, namely the sheep and pig menisci immersed in the water-based iodine solution as well as the pig menisci immersed in the PBS-based iodine solution, by 15 %, 20 %, and 7 %, respectively. For the menisci stained in the water-based solution, the volumes also decreased in the subsequent days of staining, albeit less substantially. For the samples in the PBS-based iodine solution, in contrast, the volume decrease was statistically lower than in those stained with the water-based solution and did not progress beyond day 4. This difference can be attributed to the buffered pH of the PBS-based iodine solution. Indeed, Dawood et al. demonstrated that tissue shrinkage is influenced by the pH of the iodine solutions, with more acidic pH levels resulting in greater shrinkage of the tissue samples . The shrinkage caused by the iodine solutions has also been described by other groups . Vickerton et al. demonstrated that the macroscopic changes in the tissue depend on the concentration of the KI 3 solution . For this reason, in our study, we used the minimum concentration required to achieve adequate contrast of the tissue, i.e. 3.75 %. In addition, radiodensity, a parameter directly related to the uptake of the CA, was calculated in terms of HU for each meniscal sample throughout the staining process. Traditional diffusion studies often rely on fluorescent or radioactive signals, properties that iodine does not possess. Despite this limitation, iodine-based CAs remain among the most commonly used CAs for micro-CT imaging of biological tissues. Although semi-quantitative, analyses of HU values in tissues provide valuable insights into iodine diffusion within the meniscus. In this study, the HU values of the samples stained with the water-based iodine solution increased during the first 4 days of staining, with the greatest increase on the first day. In the subsequent period, the tissue appeared to cease absorbing the CA. However, a qualitative analysis based on the visualization of micro-CT images was necessary to define a staining protocol and elucidate the diffusion of iodine in the tissue. In fact, as shown in Fig. 6 A and B, the contrast agent continued to diffuse throughout the staining period and its diffusion occurred mainly from the outer portion of the tissue . Similar results were observed in the images of the samples immersed in the PBS-based iodine solution , while the radiodensity of the tissue increased after the first day of staining and subsequently remained stable. To confirm the radiodensity uptake of the samples, the staining solutions were also analyzed with micro-CT at each time point.
Through this analysis, it was observed that an increase in the HU values of the meniscus corresponds to a decrease in the equivalent HU value of the contrast solution, which is explained by the diffusion of iodine from the solution to the sample. Degradation over time of the iodine in solution may thus be excluded. Further analyses involved the identification of the iodine ions present in the solution and of how their concentrations evolve during the staining period and in the different solutions. For this, UV–visible spectroscopy was used, allowing the identification of three peaks – assigned to I − and I 3 − – present in both water-based and PBS-based solutions . The absorbance of I − and I 3 − during the staining period, which is directly proportional to the concentration, highlights a variability in the uptake and retention of I − ions and a selective uptake of I 3 − ions by the meniscus tissue. Previous studies, such as that of Lakin et al., demonstrated through contrast-enhanced micro-CT and histological analysis that the diffusion of anionic CAs, such as iodine-based ones, is impaired by a high concentration of glycosaminoglycans (GAGs), while it is favored for cationic CAs . Honkanen et al. observed differential uptake of iodinated contrast agents between cartilage and meniscus, noting a higher uptake in the meniscus, which they attributed to its lower GAG content. In fact, GAGs are less concentrated in the meniscus than in hyaline cartilage, constituting approximately 10 % of the GAG content found in cartilage . These previous studies considered either large, iodine-based molecules (such as ioxaglate) or NaI solutions, resulting primarily in I⁻ ions. In our study, however, we found that the majority of the uptake was due to I₃⁻, whose formation requires the presence of I₂, which was absent in the previous experiments . We observed a significant increase in HU values, a finding not fully explained by factors related solely to the charge affinity between the CA and the fixed charge density (FCD) in tissues, since the negative charge of GAGs generally repels anionic contrast agents. We therefore hypothesize that the combination of the specific structure and composition of the meniscus with the properties of the triiodide ion (I₃⁻) influences the diffusion and binding behavior of the anionic iodinated CA, extending beyond the simple electrostatic repulsion associated with FCD. The linear structure of the triiodide ion makes it more likely to interact with regions that offer spatial stability, such as the cavities formed by biochemical components like collagen, proteoglycans, and matrix glycoproteins, which serve as temporary retention sites for I₃⁻ . In addition, the presence of polar functional groups, such as -OH and -NH groups from collagen and other extracellular matrix components, facilitates weak hydrogen bonding with I₃⁻. For example, collagen, a protein with amino (-NH₂) and carboxyl (-COOH) groups, exhibits internal polarization at physiological pH, which promotes weak interactions with I₃⁻. Similarly, proteoglycans, although negatively charged due to sulfate (-SO₃⁻) and carboxyl (-COO⁻) groups, contain polar groups capable of forming hydrogen bonds with I₃⁻. Matrix glycoproteins, although less polar and less negatively charged, also possess core protein structures linked to branched chains, providing potential sites for I₃⁻ interactions. Together, these weak interactions contribute to the unexpectedly high HU values observed in our study.
This suggests that the uptake of Lugol's solution into the meniscus is influenced not only by electrostatic factors, but also by the unique structure, size, and binding capabilities of the I₃⁻ ion, allowing for localized partitioning with attenuation levels exceeding those of the initial solution, as observed . The influence of the negative FCD induced by GAGs is nevertheless evident in the generally lower uptake in areas with higher GAG content. The inner zone of sheep and pig menisci is richer in GAGs, enhancing the ability of the tissue to withstand compressive loads, while the outer zone has a lower GAG content . This different spatial distribution of GAGs in meniscal tissue could explain the greater diffusion of iodine from the outer zone of the meniscus found in our study. The differential distribution of iodine in the tissue can also be explained by the higher vascularization of the external regions compared to the inner areas of the tissue . Blood vessels store glycogen, the molecule to which iodine has an affinity, in the vascular smooth muscle cells (VSMCs) of the artery and vein wall . Another important factor to consider is pH, which can have a significant effect on hydrogen bonding. pH influences the ionization state of functional groups involved in bonding (such as -OH, -NH₂, and -COOH), thereby affecting the strength and stability of hydrogen bonds. In our study, we measured a pH at day 0 of 4.37 ± 0.04 for the CA in water and 7.70 ± 0.16 for the CA in PBS. At a neutral pH, as in our CA solution in PBS, the amino (-NH₂) and carboxyl (-COOH) groups in collagen are in their more stable physiological forms, allowing collagen to form stronger and more durable hydrogen bonds. At an acidic pH (such as that observed for CA in water), carboxyl groups (-COO⁻) tend to become protonated, reducing the number of sites available for hydrogen bonding due to reduced partial charges and polar regions. We also observed a pH decrease of 1–2 units over time in all groups ( Table 1 ), likely due to the gradual acidification of Lugol's solution and the subsequent release of paraformaldehyde from tissue fixation . This additional acidification should, therefore, be taken into consideration when defining the optimal staining duration. This study is not without limitations. First, we did not perform a power analysis prior to data collection; however, we selected a sample size of 6 per group, which is consistent with standard practice for this type of analysis in similar studies . The comparative analysis of menisci was based on two animal species, but we took into account that sheep and pigs are the animals with the greatest similarities to the human equivalent and are therefore most commonly used for studies of meniscal regeneration and repair. Furthermore, the use of CAs can cause artifacts in the original structure of the tissue. In fact, in our study, the use of iodine-based solutions caused a volume shrinkage. However, the routine sample processing steps of other imaging techniques (e.g., histology processing protocols), can also cause tissue artifacts, severely damaging the sample [ , , ] or failing to achieve maximal resolution obtainable with micro-CT . This potential for shrinkage due to iodine staining may affect imaging accuracy by complicating volume assessment. In addition, variations in meniscal tissue composition may result in inconsistent staining responses. Future studies using multimodal imaging may provide insights to better account for these effects. 
Given the significant iodine uptake within the first day of staining, future studies would also benefit from including a 12-h time point to capture early diffusion dynamics. While our study focused on longer staining periods because of the extended time required for iodine penetration in large fibrocartilaginous samples , the lack of a time point before 24 h may be considered a limitation. Future research could also explore the impact of iodine uptake on the biomechanical properties of the meniscus, testing the same sample imaged with micro-CT, for example, through nanoindentation. However, in such cases, additional considerations should take into account the effects of fixatives, such as the formalin used in the present study, which is known to cause cross-linking and increase tissue stiffness . This highlights the potential need for contrast agent protocols without chemical fixation, with appropriate adjustments to account for altered diffusion dynamics and potential changes in tissue properties. In conclusion, this study demonstrated the utility of iodine-based CAs and advanced 3D imaging techniques for visualizing large soft-tissue samples and investigated the iodine diffusion patterns within the meniscal tissue of sheep and pigs, with a particular focus on the mechanistic significance of the presence of I₃⁻ ions in enhancing contrast. The non-destructive nature of micro-CT allowed a detailed spatial and temporal analysis, revealing preferential iodine diffusion through the peripheral region of the meniscus during the staining period. For the sheep samples in the aqueous solution, 4 days of staining were sufficient for iodine to diffuse through 70 % of the sample's width, whereas for the pig samples, 8 days of staining in either water- or PBS-based iodine solutions were necessary to reach the same level of diffusion. Therefore, we recommend an 8-day staining period, as by this point iodine has diffused through at least 70 % of the tissue width in all sample groups, and any additional shrinkage is statistically minimal after the first day of staining. Extending the staining time beyond 8 days does not significantly affect tissue shrinkage and radiodensity, but can increase the risk of iodine leakage, particularly in water-based solutions, as highlighted by Hildebrand et al. and Boix-Lemonche et al. . Therefore, an 8-day staining period effectively balances iodine diffusion and stability within the tissue. The UV–vis analysis of the iodine solutions highlighted the differential absorption of iodine ions by the tissue. The findings of this study have potentially important implications for the use of iodine-based CAs in imaging studies of the meniscus and offer valuable insights into the diffusion patterns of iodine solutions in the tissue. Moreover, the iodine staining method used in this study enabled detailed visualization of key structural components within the meniscal tissue, particularly collagen fibers and blood vessels, when scanned at high resolution. Identifying these elements is essential for advancing research on meniscal health and injury, with a focus on structural integrity and functionality. Federica Orellana: Writing – original draft, Visualization, Methodology, Investigation, Formal analysis, Data curation. Alberto Grassi: Writing – review & editing, Investigation. Katja M. Nuss: Writing – review & editing, Resources. Peter Wahl: Writing – review & editing. Antonia Neels: Writing – review & editing, Resources. Stefano Zaffagnini: Writing – review & editing.
Annapaola Parrilli: Writing – review & editing, Writing – original draft, Supervision, Methodology, Funding acquisition, Conceptualization. The data that support the findings of this study are available from the corresponding author upon reasonable request. The research was supported by the Swiss National Science Foundation. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Study | biomedical | en | 0.999996
PMC11696654 | Climate change shifts the environmental pattern due to natural phenomena or human intervention. Changes in temperature, precipitation, pressure, and humidity may indicate climate change and global warming. It is critical to comprehend the impact of climate on economic value to develop effective mitigation and adaptation policies. It is, however, difficult to quantify the costs of climate change to the economy. Recent studies focus on statistical approaches using historical data, whereas past studies focused on process models to estimate the financial losses brought on by climate change. Short-term weather variations alone cannot capture the impact of climate change, because its repercussions and the adaptation it requires differ between short-term and long-term changes. Climate change has welfare consequences in terms of weather realization and production choices. The net economic cost of climate change includes both adjustment and equilibrium costs. Because of the essentially inferior outcomes under the altered environment, and because agents are at first unprepared for, or even unaware of, the altered climate, climate change lowers welfare. The remainder of the well-being loss, including the cost of adaptation following adjustment to the new environment, is referred to as the balance or equilibrium cost of climate change. Climate change is a global issue and has become a critical cause of concern for nations. Due to the growing urgency of combating climate change, many advanced countries are also grappling with environmental deterioration, excessive utilization of energy, and greenhouse gas emissions. South Asian countries, especially developing countries like Pakistan, are more vulnerable to changes in climate, and awareness related to adaptation and mitigation is very low. Developing countries like Pakistan are more prone to the problems caused by climate change. The poverty and resource paucity of developing countries put them at a greater risk of victimization as compared to the advanced countries that have a stronger capacity to adapt. Pakistan's increased levels of poverty and food insecurity are partly due to climate change, which also impacts the country's largest economic sector. Although the country is not largely responsible for greenhouse gas emissions, it experiences significant effects of climate change. It seriously jeopardizes every aspect of environmental sustainability. Global warming and climate change also impact several socioeconomic factors. Natural calamities and soaring temperatures cause health problems among individuals, may act as an obstacle to education, and cause unemployment due to migration. Moreover, climate change adversely affects the agriculture sector, which is crucial to the economy. Adverse effects of climate change include raised temperatures, changing rainfall patterns, droughts, and water reserve shrinkage. It also increases water scarcity, food insecurity, and health issues in individuals. Climate hazards like floods and droughts may cause people to be displaced. People may lose their jobs while moving from one place to another. Food prices may rise due to scarcity of food, leaving many people vulnerable to food crises and hunger. These may result in issues like unemployment, inflation, and food poverty and eventually cause a decline in economic development.
The adverse effects of climate change, including food poverty, water shortages, energy crises, raised temperatures, and related issues, directly impact the nation's economic expansion. The negative outcome of climate change also encompasses forestry, animals, aquaculture, and farming, which are pivotal pillars of the Pakistani economy. Climate change will have a significant negative impact on Pakistan due to its reliance on agriculture. Any change in temperature and rainfall patterns impinges upon the food security of the population. Because Pakistan is an agro-based country, low agricultural productivity translates into lower economic output for the country. Moreover, our industries depend on the agriculture sector for raw materials, so lower agricultural production may disrupt the supply chain. Human progress, particularly economic activity, well-being, and access to energy, is impacted by the substantial effect that humans have on the natural environment. Therefore, to address increasingly pressing climatic and environmental concerns, both industrialized and less developed nations globally have implemented a variety of policies, including reducing greenhouse gas emissions, preserving the environment, and speeding up transitions to cleaner energy. The cost of these issues further stresses the country's economic position. Furthermore, the costs incurred on coping strategies and adaptation measures are to be considered as well for effective development plans. This paper contributes in several ways to a deeper analysis of the issues raised by climate change. The study's primary focus is on professional, in-depth analysis of experts' interviews, as opposed to observational and conceptually inductive reasoning, which produces more objective and significant conclusions. Additionally, the QCA methodology was used to evaluate the opinions of experts on different issues. This method integrated the examination of numerical data with a qualitative inquiry, making it more complex and believable than previous studies on the same subject. Extensive interviews were conducted to gather the essential data for the qualitative evaluation to better understand the interplay of factors and causes of climate change. Since this is a complex subject with several facets, this article focuses on how, among other environmental causes, climate change leads to abnormalities in various economic variables as a result of the country's falling economy. It begins with an explanation of the variables causing climate change. These components of climate change that affect the economy directly or indirectly are the main focus of this study. The study is organized as follows: Section 2 discusses the literature review, Section 3 explains the data and methodology, Section 4 presents the results and discussion, and the final section is the conclusion. Climate change has a momentous impact on populations, communities, and economic performance. Recently, structural changes have drawn interest from several nations as a crucial means of achieving greenhouse gas reduction objectives. Noteworthy structural changes in energy, commerce, and societal life have resulted from the emergence of sustainable energy, which mostly consists of energy from renewable sources, the ongoing promotion of globalization and trade openness, the flourishing of the service and manufacturing sectors, and the growth of urbanization.
Climate factors like temperature have a huge impact on societies: heat induces mortality and provokes aggression while reducing productivity. In addition to harming crops and raising electricity demand, high temperatures can lead to population shifts both inside and across international borders. Tropical cyclones diminish economic output for extended periods, cause property damage, and cause fatalities. Trading patterns may also be influenced by the climate. In this regard, typhoon strikes reduce government revenue and frequently decrease imports, while extreme heat leads to inefficiency and reduces the output of a region in both the agricultural and industrial sectors. The frequency of both natural and environmental catastrophes can vary greatly from season to season; some years may have very few fatalities until a major catastrophic event that takes multiple lives occurs. During the previous ten years, catastrophic weather events have claimed the lives of almost sixty thousand people annually on average throughout the world. Many plants and other species are projected to perish as a result of climate change brought on by a lack of natural assets, a rise in the melting of glaciers, and increasing water levels. There is a good chance that the current trends of rising temperatures, outbreaks of insect-borne diseases, health issues, and a shift in seasons and routine behaviors will continue in the years to come. Around the world, both human and natural environmental disasters have resulted in enormous losses, including reduced agricultural yields, system restoration, and the reconstruction of essential technology. Climate change has a substantial influence on macro-economic variables too. Financial indicators and macroeconomic variables play a critical part in a country's economic volatility. Earlier work discussed the financial and macroeconomic factors and their effects on the stock exchange. The fluctuation in stock prices can be the result of disparities in macroeconomic factors that might affect not only the stock market but also dividends and many other factors. Well-developed and liquid financial markets can likewise be highly beneficial to managing a country's economic performance. These fluctuations can affect cash flows, which constitute the foundation of almost any financial market. Other research analyzed the variations between capital flows and exchange rates. In established markets, where factors do not fluctuate significantly, it would be easy to manage the capital flows in that market. In markets with large volatility of these factors, there will be price/return uncertainty, and investors will be hesitant to commit their funds to the market. These variations can also be a cause of inflation in the country; related work discusses inflation persistence and exchange rate regimes. The variation of macroeconomic indicators can also have a significant influence on overall stock exchange market performance, and investment returns influence the 100 indexes. Prior studies described the influence of macroeconomic factors on aggregate returns of the stock exchange marketplace. There exists a significant connection between macroeconomic variables and the stock exchange marketplace; the impact can be positive as well as negative. Further studies have looked into the relationship between macroeconomic indicators and stock values.
Stock prices vary on the market as macroeconomic variables fluctuate, such as changes in oil prices and currency rates, which typically have an impact on any economy, particularly those of developing nations. According to recent research, climate change affects the economy at a macroeconomic level. Tropical cyclones are said to have a linear relationship with economic growth, and they may slow GDP growth depending upon the intensity of the storm. Moreover, temperature has a non-linear influence on productive capacity that is so significant that output is greatest at about 13 °C. Heavy precipitation hurts businesses and communities; this is more evident in agriculture-based settings. These impacts are frequently quantifiably substantial. It is predicted that future warming could slow development by 0.28 %. Climate change may also result in demographic distortions. Development in the economy and productivity in general are strongly influenced by the climate. As a result of its impact on economic development and its growing worldwide presence, climate change has emerged as a top priority for national and international ecological authorities. Thus, it is important to comprehend how climate change affects the agriculture sector's total factor productivity when developing regional adaptation plans and structuring effective climate policy agreements. The effects of global climate change on the agriculture industry were previously predicted by earlier research. According to research, different parts of the world will be affected by global climate change in the agriculture industry. Scientists' main focus now is on analyzing how climate change affects different agricultural activities in different geographic areas and developing appropriate responses to its consequences. According to recent empirical data, modern people are already under tremendous economic and social pressure as a result of the existing environment, and future climate change will only make these costs rise much more. Theoretically, present and future losses may be prevented if communities were to properly adapt to climate change. When these calamities occur, populations may adopt measures or make investments that will lessen their impact. To decrease the impacts of global warming and climate change, the Ministry of Climate Change of Pakistan has taken several actions. The main causes of heightened climate change consequences, however, are an absence of consciousness and understanding about operative actions, weaknesses in organizational capabilities, a lack of resources and their inefficient use, and poor economic conditions. Pakistan is making efforts to reduce carbon emissions and improve environmental quality by obtaining funds through the Asian Development Bank's (ADB) global climate finance. Also, the Green Pakistan Program is being conducted throughout Pakistan. The two most vulnerable industries are those related to water and agriculture. Rainwater harvesting, stormwater management, and groundwater recharge are the three technologies for the water industry. The agriculture industry's preferred technologies include effective irrigation systems (both drip and sprinkler), crops that can withstand drought, climate estimations and forecasts, as well as the presence of an early-warning system.
To tackle the ramifications of the changing climate, Pakistan needs to focus on drafting national development policies and plans that are effective in dealing with issues related to the economy and society, along with the issues in the priority sectors. Both the Kyoto Protocol and the United Nations Framework Convention on Climate Change include provisions for policy frameworks that should be used to drive the Climate Change Action Plan. This research differs from previous studies in that it uses a qualitative interview survey method and Qualitative Comparative Analysis to assess economic abnormalities and their climate change-related drivers. This study adds a new theoretical foundation of Qualitative Comparative Analysis combined with Qualitative Analysis to analyze the drivers and impacts of climate change, which previous studies have not used. The population for this study consists of climate change experts and economic analysts from Pakistan. Semi-structured discussions are well suited to abstract, wide-ranging material. For this study, thirty interviews were organized (Table 1), and the majority of candidates provided thorough and clear responses within the study's constraints. The interviews were conducted on thirty distinct days between February 2023 and April 2023. Thirty comprehensive semi-structured interviews were performed overall, with residents of Islamabad city participating in fifty percent of them and non-residents of Islamabad city participating in the remaining fifty percent. In all, there were 30 % women respondents and 70 % men respondents. Surveys with stakeholders, such as government workers, officials, members of the media, and staff members of non-governmental organizations (NGOs), were also undertaken. Table 1 Interview respondents. Table 1 Job role Pakistan Category Economic Analysts 16 Climate Change Expert 14 Total Participants 30 The job descriptions in Table 1 demonstrate that all of the candidates have a broad understanding of climate change and economic irregularities. Qualitative or mixed study designs frequently depend on participants who can communicate and reflect well enough to offer detailed accounts of what they've experienced. Interviews with uninterested participants that yield vague replies are not good for analysis. Smaller sample numbers and realistic observation and interviews are preferred by qualitative approaches. A survey interview research design was employed for both data collection and analysis. We employed a purposive sampling method because of the respondents' specific characteristics, which are significant for the group under study, as well as financial resources, time limits, travel expenses, and other logistical issues related to in-person interviews. A purposive sample consists of persons who happen to be the most relevant, approachable, and perhaps able to give the scholar the details they require. We conducted the survey interviews with economic and climate change experts in Pakistan using purposive sampling. The primary interview questions were derived from the body of work already published on economic irregularities and climate change. Table 2 includes the interview questions. The interviews covered a wide range of topics, including climate change and macroeconomic fluctuations, climate change and consumer responses, climate change and economic abnormalities, and controlling perspectives toward climate change abnormalities.
The interviews then delved deeply into the specific topics covered in the literature. The one-on-one conversations often lasted between 50 and 60 min and even up to 75 min for each person. Table 2 Interview details. Table 2 Interview protocol The interview processes Inform the interviewer(s) and applicant(s) Plan your research strategy. Plan the study's grit, taking the goalposts into account. In opposition to potential research worries, moral subjects, and reaching a consensus Prepare for the interview or focus group in advance. 1. To what extent, is an individual's thinking about (harsh weather conditions, greenhouse gasses, industrialization, energy consumption) towards climate change? 2. How to control the antecedents of abnormal climate changes? 1. What are the main types of climates that may present in Pakistan, and what are their effects on the macro-economy (poverty, fatalities, financials, businesses, unemployment)? 2. May you please share the different effects of climate change on the economic variables (transportation system, infrastructure, migration, crops, food insecurity, clean drinking water, decoupling)? 1. What to do towards hedging the abnormal conditions of the climate? 2. What role do you play in these sorts of conditions? 3. What are the effects of your part in definite situations? 1. What kind of information is vital for you in the direction of analysis/advice at a time of uncertainty concerning the economy? 2. What do you see as the key controls to do hedging with cognizance? Because of the multiple risk variables and circumstances in the climate change auditing report, this study used the common auditing qualitative description to ensure the logic and correctness of the QCA outcomes. Later, seven classifications and types of those discovery hazards were established by the Pakistan auditing common qualitative statements, relevant rules, and financial audit. Even more significant, every categorization, to certain degrees, describes the precise appearance and issue features. The comprehensive assessment, induction, and categorization that make up these seven classes are appropriate as prerequisites for QCA analysis, therefore all seven of them would be chosen as requirements for climate change reporting hazards via auditing shown in Table 3 . The results of the QCA investigation would determine the percentage of illegal spending because it offers a collection of instruments for analyzing the required and sufficient circumstances, demonstrating outcomes, and connecting parallels and discrepancies between different combinations of situations and scenarios. The threat associated with climate change increases with increasing values of indicators and decreases in spending efficiency. Table 3 Variables description. Table 3 Determines Name Contractions Events Infringement of policy implementation regarding climate change Pi Infringement of funds and resources Fr Infringement of town planning Top Infringement of laws for the preservation of historic sites and sustainability Hs Infringement of infrastructure in Riverland RL Infringement of regulation of deforestation Df Infringement of rules for quality control Qc Outcome Percentage of illicit spending Is A mixed research method strategy was employed since a single quantitative or qualitative approach was insufficient to comprehend and describe the study's concerns. The qualitative comparative analysis has been used for empirical analysis and the thematic analysis approach has been used for qualitative analysis. 
In this sense, a quantitative research approach was initially used to analyze the driving factors. Then, via the use of the interview method, the associations between the variables in question were further investigated. The versatility of thematic analysis makes it appropriate for analyzing a broad variety of data sources. For example, data from "standard" in-person data-gathering techniques like interviewing and conducting focus groups may be investigated using thematic analysis. This research project therefore exemplifies a descriptive mixed-technique approach. The application of qualitative comparative analysis (QCA) in management research demonstrates that despite the intricate nature of the field of management issues, there are seldom-explored growth avenues that may be revealed through study. As a result, QCA can enhance knowledge about increasingly complex management issues while maintaining their comprehensive character. Considering the aforementioned reasons, this study used the QCA approach, which can bring together the benefits of qualitative and quantitative analysis, to comprehend the influencing variables and creation mechanisms of climate change threats through expert opinions. Even though this study incorporates the opinions of various experts, the number of cases cannot meet the standards of a large sample, making it difficult to obtain trustworthy outcomes using statistical tools. QCA is well suited to small-sample assessment of between 10 and 40 cases for an in-depth comprehension of a real event, integrating quantitative statistical analysis and qualitative analysis. The purpose of the QCA technique is to determine the connections between a contingent configuration and its outcome using case comparison, determining which configurations produce the anticipated result and which do not, while taking the interrelationships of influencing factors into consideration. QCA techniques are potential instruments for bridging the divide between variable- and case-oriented investigation. There are three primary QCA analysis techniques: fuzzy-set QCA, crisp-set QCA, and multi-value QCA. Of these, fuzzy-set QCA is the most well-known and has been applied in several studies to date. We use the fuzzy-set QCA technique in the first analysis of this study. The second analysis of this study relied on a subjective method (qualitative methodology) to thoroughly elicit the candidates' thoughts, as also used in earlier work. Deep discussions (interviews) were performed to gather primary data for the thematic analysis to gain a greater understanding of how climate change and economic factors interact. Additionally, the qualitative thematic analysis was supplemented by the investigator's perspectives as well as information from relevant online platforms and firsthand witnesses. No prior information or suggested responses were given to the candidates. In this study, multiple interview data were utilized to examine the relationship between climate change and economic factors. Numerous examples make it possible to classify configurations and fundamental patterns using a practical inspection of the topics and signals. The key discussion points centered on the study's primary research questions: herding bias and irregularities in the economic factors.
This study was able to identify several open crow's nests, allowing a proportional downfall of maneuvers to the application of analysis (based on a high degree of awareness) immediately before dealing with the issue in which stockholders invested due to herding bias and a lack of market expertise. To create intricate databases utilizing rational and comprehensive methods, QCA is based on Boolean algebra, which permits the simplest formulae and in which the conditions and outcomes take values of either 0 or 1 across variable segments. The infringement of policy implementation regarding climate change is 76 percent and has a value of 1; on the other hand, no infringement of policy implementation is 24 percent and has a value of 0 (Table 4). Similarly, this table shows the occurrence and non-occurrence of all the variables in percentages and with the values of 1 and 0. Table 4 QCA variables and their segments. Table 4 Variables Determines Portions Value pi Occurred 76 % 1 Not Occurred 24 % 0 fr Occurred 56 % 1 Not Occurred 44 % 0 top Occurred 83 % 1 Not Occurred 17 % 0 hs Occurred 70 % 1 Not Occurred 30 % 0 RL Occurred 90 % 1 Not Occurred 10 % 0 df Occurred 73 % 1 Not Occurred 27 % 0 qc Occurred 53 % 1 Not Occurred 47 % 0 is Occurred 53 % 1 Not Occurred 47 % 0 Source: Author Calculated The fsQCA 3.0 software was used to conduct the analysis. The conditions and outcomes were calibrated in the first stage. When using fsQCA, calibration is required. Calibration requires the definition of three observation points: 0.05 for complete non-membership in the set, 0.5 for the point of greatest uncertainty, and 0.95 for complete membership in the set. It is necessary to calibrate the system before building the truth table, which will yield distributions of possible outcomes for each possible set of conditions. FsQCA allows researchers to find several paths to a solution. The fsQCA intermediate solution is displayed in Table 5. The approach logically reduces the configurations using the Quine-McCluskey algorithm. The membership in each configuration affects how much the configurations' means for the result are weighted. The mean, weighted by the highest value of the other configurations, is tested, and this value is published against it. The n consistency of each configuration (inclusion in not-y, or 1 − y) is compared to the y consistency of each configuration (inclusion in y). Results that are not significant (at the 0.1 threshold) are excluded. This approach necessitates deducing the predicted contributions of each causal set to the outcome. Table 5 fsQCA Results. Table 5 Paths QC DF RI HS PI FR TP Raw Coverage Consistency Qc∗Df∗Rl∗HS∗Pi ● ● ● ● ● 0.564329 0.842105 Df∗Rl∗HS∗Tp∗Pi ● ● ● ● ● 0.276515 0.863636 Qc∗Df∗HS∗Fr∗Pi ● ● ● ● ● 0.261021 0.833331 ∼Qc∗Df∗Rl∗HS∗Tp∗∼Fr ○ ● ● ● ○ ● 0.481402 0.772942 Qc∗Df∗Rl∗Tp∗∼Fr∗Pi ● ● ● ● ○ ● 0.521455 0.983644 Note: ● represents the presence of a condition and ○ the absence of a condition; a blank cell indicates that the condition does not matter for that path. The literature proposes a lower bound of 0.80 for an acceptable consistency score in the outcome. As a result, we eliminated any solution with a consistency of 0.80 or below. Assumptions made for the parsimonious solution may not be valid. Thus, we computed the intermediate solution. Counterfactuals are used in intermediate solutions to reduce the complexity without relying on erroneous assumptions. This process necessitates considering each causal set's predicted contributions to the result.
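To make the calibration and consistency machinery described above concrete, the following minimal Python sketch is offered; it is not the authors' fsQCA 3.0 workflow, and the condition names and raw scores are purely hypothetical. It applies a direct-calibration transformation anchored at raw values corresponding to memberships of roughly 0.05, 0.50, and 0.95, and then computes the standard fuzzy-set sufficiency consistency and raw coverage for a single condition against the outcome.

```python
import numpy as np

def calibrate(raw, thr_non, thr_cross, thr_full):
    """Direct calibration: map raw scores to fuzzy membership in [0, 1].
    thr_non, thr_cross, thr_full are the raw-score thresholds that receive
    memberships of about 0.05, 0.50, and 0.95 (log odds of -3, 0, and +3)."""
    raw = np.asarray(raw, dtype=float)
    log_odds = np.where(
        raw >= thr_cross,
        3.0 * (raw - thr_cross) / (thr_full - thr_cross),
        -3.0 * (thr_cross - raw) / (thr_cross - thr_non),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

def consistency(x, y):
    """Sufficiency consistency of X for outcome Y: sum(min(x, y)) / sum(x)."""
    return np.minimum(x, y).sum() / x.sum()

def raw_coverage(x, y):
    """Raw coverage of X for outcome Y: sum(min(x, y)) / sum(y)."""
    return np.minimum(x, y).sum() / y.sum()

# Hypothetical raw scores for one condition (e.g., 'pi') and the outcome 'is'
pi_raw = [0.10, 0.40, 0.70, 0.90, 0.80, 0.20]
is_raw = [0.20, 0.30, 0.80, 0.90, 0.70, 0.10]

pi = calibrate(pi_raw, thr_non=0.05, thr_cross=0.50, thr_full=0.95)
outcome = calibrate(is_raw, thr_non=0.05, thr_cross=0.50, thr_full=0.95)

print("consistency:", round(consistency(pi, outcome), 3))
print("raw coverage:", round(raw_coverage(pi, outcome), 3))
```

For a multi-condition path such as Qc∗Df∗Rl∗HS∗Pi, the configuration membership would be the pointwise minimum across the calibrated conditions (fuzzy AND) before applying the same two formulas.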
The truth table's solution term (Table 5) illustrates the connection between several sets of criteria and the result. The combination of criteria suggests a favorable association between climate change and the economic condition of Pakistan. The results show that all the variables are good and consistent because variable consistency is above 0.74. Raw coverage, by contrast, should fall between 0.25 and 0.65. The raw coverage results also lie in this standard range, so we say that all the variables have good raw coverage. In the analysis of necessary conditions (Table 6), all the variables show acceptable consistency and coverage. This approach is an intriguing exception to the general tendency, covering 3.8 % of the sampled firms. This route may be related to economic growth and involves government and policymakers in improving the climate condition of the country. Table 6 Analysis of necessary conditions. Table 6 Variables Consistency Coverage PI 0.860320 0.327586 FR 0.769393 0.479166 TP 0.880466 0.388024 HS 0.947248 0.285185 RI 0.960300 0.392856 DF 0.921034 0.289285 QC 0.760002 0.582608 The study found that climate change hazards in Pakistan were not triggered by one specific factor, but rather by a complex combination of factors (Pi, Fr, Top, Hs, RL, Df, Qc). Configuration assessment is a novel kind of research tool that examines the internal workings of climate change hazards and captures their micro-level operating mechanisms; as a result, this study examined the risks associated with megaprojects by evaluating and classifying eight criteria in conjunction with the pertinent specifications. Seven configurations were then compiled using QCA. This statistical approach can help the industry strengthen its risk-control measures. Thematic analysis was conducted on transcriptions of the interviews following the themes identified using qualitative and quantitative data analysis. The same themes were identified through simple manual observation of the interview transcriptions. Numerous things might be referred to as "thematic analysis," including but not restricted to social sciences data assessment methods. Thematic analysis is a technique for discovering patterns in qualitative research that is often used today. It has also been asserted that thematic analysis evolved from the study of content analysis, and the terms "thematic analysis" and "content analysis" are frequently used interchangeably to describe both qualitative as well as quantitative analysis. We have used five phases of thematic analysis in our interview analysis. Familiarizing oneself with the information is the initial step in the thematic analysis procedure, which we had started at the time of gathering the information. To involve the researcher more deeply in the information and provide the foundation for analysis, the next step is creating codes. As the coding process went on, we began to identify commonalities and trends within the information being analyzed. Before transitioning from coding to theme construction in the third stage, it is critical to maintain emphasis on processing the complete body of data. At this stage, the themes we generated were fluid and subject to change, much like the first draft of an original piece of work. The fourth step was to evaluate prospective themes. Afterward, the fifth and last stage was creating the report after completing the full analysis. The interview excerpts and the thematic assessment of the information are presented below.
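Before turning to the interview excerpts, a second brief sketch illustrates the necessary-condition figures reported in Table 6. It is not the authors' code and uses hypothetical calibrated memberships; it only shows how a consistency/coverage pair such as those listed for PI or RL could be computed, with the roles of the two fuzzy-set formulas swapped relative to the sufficiency case.

```python
import numpy as np

def necessity_consistency(x, y):
    """How far the outcome Y is a subset of condition X: sum(min(x, y)) / sum(y)."""
    return np.minimum(x, y).sum() / y.sum()

def necessity_coverage(x, y):
    """Relevance of X as a necessary condition: sum(min(x, y)) / sum(x)."""
    return np.minimum(x, y).sum() / x.sum()

# Hypothetical calibrated memberships for two conditions and the outcome 'is'
conditions = {
    "PI": np.array([0.90, 0.80, 0.70, 0.95, 0.60, 0.30]),
    "RL": np.array([0.95, 0.90, 0.85, 0.90, 0.70, 0.40]),
}
outcome = np.array([0.80, 0.70, 0.90, 0.85, 0.30, 0.20])

for name, x in conditions.items():
    cons = necessity_consistency(x, outcome)
    cov = necessity_coverage(x, outcome)
    # A necessity consistency around 0.9 or above is a commonly used benchmark
    # in the QCA literature (an assumption here; the article does not state one).
    print(f"{name}: consistency={cons:.3f}, coverage={cov:.3f}")
```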
“I have never seen climatic carnage on the scale of the floods here in Pakistan”, remarked a climate change expert during the interview. All nations will experience losses and damage from the climate as our world continues to warm beyond their capacity to adapt. This is an international crisis. It needs an international response. Floods in 2022 and an exceptional stretch of torrential rains in recent months, as well as the loss of some species and the extraordinary melting of glaciers, are all warning signals of what is to come. (Interview) The concentration of greenhouse gases (GHGs) is constantly exceeding new thresholds, according to climate change advocates. The primary factor causing the global climate change is GHGs. Devastating natural tragedies are one way that this change manifests. A subsequent climate change excerpt who is working on GHGs in Pakistan makes the idea clear: “Pakistan is responsible for the majority of the repercussions while producing only point three percent of the world's GHGs by volume”. A large portion of GHG emissions is produced by the US, China, and India. (Interview) The quote that follows is particularly revealing with facts given by an expert. The conflict between financial expansion and carbon dioxide gas releases is another one of the century's biggest problems. The emergence of the industrial revolution from the nineteenth to twenty-first century has resulted in increased GHG emissions including atmospheric CO2 by almost thirty-five percent. “If the GHG emissions are not controlled, a rise of about 1.4–5.8 °C is expected in the global warming”. (Interview) Pakistan is among the region's most negatively impacted by global warming, and climatic changes and is ranked 16th in the vulnerability index. As a result, the country faces a lack of clean drinking water & food insecurity. An expert explained that: “The significant increase in the GHG emissions results in Climatic changes which carriage a great danger to Asian countries like Pakistan, India, and China”. The population and energy intensity are the main factors influencing ecological excellence in Pacific Island Countries (PIC), according to the beliefs and knowledge of the experts. The respondents also show that in PIC countries, population growth and prosperity both worsen environmental quality by raising CO2 emissions. However, relative to Pakistan, India, and China, China is more negatively impacted by affluence. Energy structure and carbon intensity had a conflicting impact in PIC countries; that is, in some years. They increase ecological excellence while in the remaining study years, they degrade it. However, the most important element that has a direct impact on how much CO2 is emitted in PIC countries is energy intensity. (Interview) China is the biggest emitter of CO2 and the biggest energy user on the planet. The emissions have grown at an exponential rate. A viable choice to reduce CO2 emissions is to limit energy consumption. However, China uses over sixty-nine percent of the total of the world's energy, so any efforts to cut back there would have a similar effect on the country's economic growth and global trade. India is industrializing at a fast pace but not without a cost. The country faces many environmental issues including degradation of air quality, disruption of coastal ecosystem, natural disasters like floods, and unexpected weather events. Malnutrition, disease exposure, lost revenue, and destroyed livelihoods are a few ways that such calamities hurt the region's economy. 
(Interview) Moreover, the expert's analysis revealed that Pakistan underwent expensive coupling, weak economic decoupling, and strong decoupling throughout the investigation. In General, Pakistan witnessed exclusive negative decoupling, meaning that increases in CO2 emissions reflect the country's economic growth. “In addition, significant decoupling also took place during 1991, 1995, and 2009. India went through tetrad decoupling phases: exclusive coupling, expensive decoupling, expensive negative decoupling, and expensive robust decoupling. Generally, India shows feeble decoupling, which means that its rate of financial performance is more advanced than its rate of increase in CO2 releases. Additionally, India also showed exclusive coupling, which indicates that these years did not see any decoupling, whereas robust decoupling solitary happened from 2010 to 2020”. The results of the expert's analysis show that China exhibits feeble decoupling over the majority of the research period, as well as exclusive coupling and exclusive negative decoupling. The level of energy increases the decoupling progress in PIC nations, according to the expert's analysis. Although the energy structure and CO2 emissions have conflicting effects on the evolution of PIC's decoupling, they occasionally promote decoupling while occasionally impeding it. Similar to this, population growth and wealth both contribute to the slowing of the decoupling process in PIC nations. (Interview) Despite its small contribution to global carbon emissions, Pakistan is experiencing the effects of climate change. Due to its geographical location and diverse tropical continental climate, the nation experiences severe climate-related natural hazards. Residents of the impacted areas have been devastated by the recent floods. Take note of the information an expert member stated: “By September 19, the floods had impacted about 2 million homes”. (Interview). Fig. 1 shows the massive scale of floods in the country recently has flooded 25 % area of the country. Fig. 1 A photograph of Pakistan taken from space on September 30, 2022, shows the country submerged in floodwaters. By Sentinel-1, a European Earth observation satellite. Fig. 1 According to the writers' research, expert assessments, and investigations, environmental occurrences that contribute to macro-level economic losses are projected by quantified numbers. Interviewees explained macro-economic losses by sharing some facts. Climate change has contributed to an increase in poverty and unemployment. The economic experts projected Pakistan's poverty level to be 39.3 % by using $3.2 per day lower middle-income poverty criteria. For the fiscal years 2020–22, the rate for the upper middle class is set at $5.5 per day. This demonstrates how severe the economic crisis is in Pakistan. Pakistan is ranked sixth globally in terms of the Global Climate Risk Index. “Between 1999 and 2019, Pakistan had 91,089 fatalities, $81 billion in economic losses, and 152 instances of extreme weather”. (Interview). The main area of climate change that affects Pakistan is its water cycle according to interviewees. One of the key industries most likely to suffer from climate change is agriculture. Quality of food, accessibility, and supplies are all impacted by climate change. The performance of agriculture may be impacted by projected rises in temperature, modifications to rainfall patterns, modifications to severe weather events, and reductions in water supply. 
“Climate change and pollution are also the causes of seasonal smog”. (Interview) Several respondents stated people currently must deal with new problems such as food insecurity and clean drinking water on top of the two difficulties of unemployment and poverty brought on by climate change. Additionally, certain respondents mentioned that despite that they had already come into contact with food and drinking water, they had noticed a huge rise in both of these problems in recent years to the point that they are now among the top concerns for the public and government. (Interview). Numerous areas have lost significant crops and sources of income. The nation's food security is now seriously threatened as a result of this. Pakistan needs to import food even though it is an agricultural nation. A number of the poll participants made the observation that increased rain and flooding can disrupt the systems for distributing and transmitting electricity. This frequently results in a daytime interruption in electricity, which negatively affects daily living, and businesses and eventually raises general disappointment. Foreign exchange reserves and businesses decrease as a result. (Interview) Participants in the interview discussed how the environmental catastrophe brought on by climate change has affected towns as well as rural infrastructures, notably damages to roads and schools. “Roads totaling 12,700 km were damaged, and 7.6 million people were directly impacted. Around $30 billion has been calculated as the total loss. In Baluchistan, Sindh, and the Punjab, more than 80 districts were submerged. The school system has been badly impacted, along with other industries. The exceptional rains damaged or destroyed 17,566 schools, including 1584 in Baluchistan, 1180 in the Punjab, and 15,842 in Sindh”. (Interview). According to some respondents, environmental issues including infrastructure deterioration and climate change-related factors like rising temperatures have put the homes of the people in danger, which has forced them to move. As a result, a sizable portion of the populace has crossed the danger boundary and has reacted to the shift by migrating. (Interview). The extent of the devastation to dwellings around Pakistan is depicted in Fig. 2 . We can see the infrastructure damage including houses all over the country particularly in the provinces of Sindh and Baluchistan. Fig. 2 A United Nations (UN) visual shows the magnitude of the destruction of properties throughout Pakistan. Fig. 2 Several respondents recommended that a bad economic catastrophe can be avoided by implementing a few wise and useful policies. To stabilize the economy, measures should first be implemented to boost exports from the nation. A specific focus should be given to political and economic issues including reducing obstacles to foreign direct investment. To improve domestic output and employment, special measures should be taken to win over foreign investors. It is important to start implementing specific plans as soon as feasible to capture renewable energy sources. “The use of solar and wind energy can help meet the world's energy needs”. (Interview) The participants emphasized that Infrastructure that is climate resilient needs to be prioritized. The nation urgently needs to invest in its human resources. “One of the youngest populations in the world is found in Pakistan”. It is past time to involve Pakistan's youth in the construction of infrastructure that is climatically resilient. 
Controlling unauthorized building construction beside rivers and streams should be the responsibility of the competent authorities. It is important to educate the public on how to deal with natural disasters to reduce losses. Budgets should be set aside by the government to deal with unforeseen disasters to minimize the loss of life and property. (Interview) Experts incorporated that as well, people's incomes, health, housing, infrastructure, and food security are all at stake in response to climate change. To lessen the impacts of climatic risks, the government should follow worldwide best practices. To lessen the effects of climate change, cooperation with international organizations should be improved and significant action should be taken. Fluctuations, earthquakes, and storms can all be lessened by taking precautions. “Long-term precautions against natural disasters include the construction of dams and water reservoirs”. The regional development strategy ought to include public input regularly. One of the senior interviewees expressed, that Pakistan should insist that the UN and its related organizations give the development of fundamental infrastructure significant consideration. The public should be made aware of the risks posed by climate change through awareness campaigns so that the average person may contribute to mitigation measures. (Interview) Several economists stated Pakistan must increase foreign direct investment (FDI) to fund resilient and sustainable development initiatives and to boost the businesses in the country. To combat the poverty brought on by climate change, the government should take some unique measures to strengthen small farmers, women, and laborers. Loans and small- and medium-business initiatives may be among them. Delinking economic expansion and environmental output is of immense importance to achieve sustainable economic development. PIC countries use around twenty-nine percent of the total energy produced worldwide. “The fast-paced economic growth of China impacts its neighboring countries”. Pakistan should demand that China reduce its omissions and also pay the cost of economic damages in the region. (Interview) Additionally, during the discussions, fresh factors for an environment that were rarely mentioned in earlier studies were revealed. Persons who are impoverished and disadvantaged are advocating for even more significant climate change action. Climate change is not only a disaster for the ecosystem; it is also a societal issue that compels us to tackle injustice on numerous other levels, such as that between men and women, decades, and rich and poor nations. “For more efficient development outcomes, the International Panel on Climate Change (IPCC) has underlined the requirement for reducing carbon emission that adheres to the principles of environmental justice (i.e., recognition, procedural, and distributive justice)”. (Interview) Furthermore, a number of those interviewed held the belief that communities provide a variety of perspectives, and expertise, to the problem of boosting resistance and battling global warming. They must be viewed as partners in resistance development instead of recipients. Climate change is primarily caused by human activity. Climatic change is the primary cause. Burning of fossil fuels such as oil and coal has led to a rise in the quantity of carbon dioxide released into the atmosphere. There has been an increase in global heating as a consequence of the greenhouse effect's spread. 
“This phenomenon can be attributed to the fact that some chemicals in our atmosphere, including water vapors, carbon dioxide, methane, nitrous oxide, and chlorofluorocarbons, block heat from leaving the planet's surface, thinning the ozone layer and raising temperatures”. (Interview) Another significant problem in Pakistan's industrialized eastern Punjab region is smog, which causes Lahore, the province capital, to become heavily polluted throughout the winter mentioned by interviewees. Authorities claimed that they are attempting to address the issue, which affects a large number of brick kilns. “Furthermore, heat stroke, starvation, the rise of vector-borne diseases like dengue virus, a rise in a load of water diseases, and other factors will affect people's capacity to work and make a living”. Moreover, multiple individuals referred to deforestation and a rise in the usage of pesticides in home and agricultural settings are two additional climate change-related factors. The second largest contributing factor to global warming. Deforestation is responsible for about twenty-four percentage points of all emissions of greenhouse gases. (Interview) We have been the first to discover climate change's multiple drivers and various economic abnormalities associated with it by qualitative comparative analysis and the last ones to have such an opportunity to prevent it from happening, making it the most important issue of our time. In this study, we used qualitative analysis to identify several elements of climate change and their influence on various economic variables, and we used QCA analysis to analyze certain variations. Hazards associated with the causes and effects of climate change are rising along with its multifaceted nature. Authorities and a significant number of experts have realized that conventional investigation methodologies for identifying climate change risks have difficulty reflecting the scale of the issues, particularly routine evaluations of specific threats. Therefore, the related recommendations would have minimal impact on reducing the hazards associated with climate change. Accordingly, this study analyzes the hazards associated with climate change by evaluating and separating seven scenarios in conjunction with the pertinent expectations. QCA analysis provides an innovative form of study instrument that explores the inner nature of climate change difficulties and grasps their micro and macro action procedures. Seven variants were then compiled by QCA. The findings showed that complex and variable mixture conditions, rather than one specific factor, were to blame for Pakistan's climate change risks, which represented a significant advance in the field of quantitative and qualitative analysis as well as an organized strategy for the community to reduce climate change risk to a manageable level. Climate change's reverberating effect on various additional economic factors is more profound and multifaceted than its obvious effect. Furthermore, as the qualitative research shows, economic forces have a significant causal influence on other elements, particularly structural infrastructures. The changing climate has put many countries in danger, and rising economies are particularly vulnerable. Living in a bubble of ignorance won't get us very far because the world is witnessing a melting glacier problem, rising floods, animal extinctions, extreme weather events, and much more. 
It is imperative to spread knowledge of climate change in every manner possible, even though seemingly worthless tasks like completing school assignments. This crucial issue, which is exerting a severe impact on the region, has made South Asia more susceptible to calamities. Pakistan is generally experiencing severe effects from climate change and global warming. The changing climate puts Pakistan's economy, real estate market, food production, and stability in danger. Given the stark realities, the Pakistani government should move swiftly to fight the harmful consequences of climate change. There is little doubt that the officials are paying attention to this matter since they view it as sensitive and significant. As the climate changes, millions of poor people will face serious issues like extreme weather, health effects, risks to heritage and culture, financial stability, transportation, water management, and social welfare. As a result, productivity in the agricultural, manufacturing, and service sectors all exhibit negative and substantial relationships with temperature. If climate change is not handled, it will severely hinder economic progress. To deal with the influence of climatic changes on many sectors, adaptation and mitigation strategies are required at the micro level. Climate change, however, is a global problem. Pakistan, in comparison to affluent nations, contributes very little to GHG emissions, making it very difficult for Pakistan to mitigate climate change. Alternative energy sources are more effective and help solve the global warming issue. Power generation from sunlight, winds, tides, and biofuels is more environmentally friendly and sustainable. If we generate electricity using other energy sources, the effects are minimal. Nuclear power produces a small amount of greenhouse gas emissions; increasing its inclusion in the energy mix might aid in reducing global climate change. Emerging economies must receive financial support from richer countries to switch to low-carbon development pathways and support them to become ready for the consequences of climate change in order so that there can be a sustainable global climate change accord. The main source of global warming is the energy required to operate, heat, and cool our homes, enterprises, and factories. Energy-efficient solutions are an immediate necessity. Essentially, we need to implement a double strategic plan: firstly, we should cut emissions and stabilize the levels of greenhouse gases in our atmosphere; second, we should adopt climate-friendly habits and uphold the principles of sustainable economic development. A new type of analysis tool called qualitative comparative analysis explores the internal workings of climate change risks and grasp their economic impacts. For this reason, this research analyzed the risks associated with climate change by auditing and classifying seven conditions (variables) along with the pertinent specifications. More circumstances (variables) for examination may be included in subsequent studies. The findings suggested that complex and variable combination conditions rather than a single factor contributed to Pakistan's climate change risks. Future studies can incorporate numerous identified factors into the study and analyze them. This would open up new avenues for assessing climate change risks using quantitative analysis methods and systematic thinking to help policymakers raise the risk-controlling threshold. 
We have collected information from 30 economic and climate change experts for survey interview analysis and QCA due to the limited availability of experts during our research. Future studies can draw on information from more experts for qualitative and QCA analysis to obtain more comprehensive results. Usama Usman: Writing – review & editing, Writing – original draft. Xueyan Yang: Supervision. Muhammad Ismail Nasir: Data curation. Data can be accessed with the permission of the author. Foundation University Islamabad, Pakistan approved all experimental protocols (interview surveys). Every source is included in the reference list and cited in the text. This paper has no funding. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Other | other | en | 0.999999
PMC11696657 | Langerhans cell histiocytosis (LCH) is a rare proliferative disorder characterized by the clonal accumulation and infiltration of dendritic histiocytes within various tissues, causing a single- or multi-system disease. The exact cause of LCH remains unknown. The incidence of LCH in adults is notably lower than in children, with estimates ranging from 1 to 2 cases per million compared to 5–10 cases per million children per year. While bone involvement is the most frequent, occurring in about 50% of cases, spinal manifestations are less common. LCH spine involvement often clinically manifests as neck or back pain, restricted motion of the spine, and/or neurologic deficits. LCH central nervous system involvement, particularly pituitary gland enlargement and stalk thickening, occurs in approximately 25% of adult cases and may lead to diabetes insipidus or panhypopituitarism. Salivary gland involvement is another rare manifestation of LCH, with only a few reported cases of parotid gland involvement. A 46-year-old female, with no significant medical or surgical past medical history, presented to the emergency department (ED) with a 2-week history of severe progressive back pain radiating to the lower limbs, along with bilateral lower limb weakness and numbness. Physical examination revealed thoracolumbar tenderness, bilateral paraplegia, and bilateral positive Babinski signs. Her blood pressure was elevated at 200/110 mmHg with normal body temperature, and results of initial blood tests revealed slightly raised inflammatory markers: white blood cell count (WBC) 10.6 × 10^3/uL [normal range: 4.0-10 x 10^3/uL] and C-reactive protein (CRP) 27.6 mg/L [normal range: 0-5 mg/L]. A computed tomography (CT) head scan ruled out acute hemorrhagic stroke, and a CT angiography of the aorta excluded aortic dissection but revealed a T6 vertebral body compression fracture. Fig. 1 Selected thoracic region sagittal (A) and axial (B) computed tomography images (bone window) revealing T6 vertebral body partial collapse with suspicion of an underlying bone lesion (red arrow). Fig 1: Magnetic resonance imaging (MRI) of the thoracolumbar spine revealed an intraspinal ventral epidural enhancing soft tissue lesion within the pathologic fracture of the T6 vertebral body, along with moderate cord compression at the same level. Fig. 2 Selected images of multiplanar multisequence MRI of the thoracolumbar spine with intravenous contrast. Sagittal planes: (A) T2, (B) STIR, (C) T1, (D) T1 post contrast. There is T6 vertebral body collapse with underlying abnormal marrow signal of low T1 and bright T2/STIR signal and heterogeneous postcontrast enhancement. There is an element of posterior retropulsion as well as an associated prevertebral, paravertebral and retrovertebral intraspinal ventral epidural enhancing soft tissue component at the same T5-T6 level (green dotted line) with moderate cord compression. Fig 2: Due to the patient's neurological deficits and radiologic findings, the patient underwent T4-T8 pedicle screw fixation, T6 decompressive laminectomy, and biopsy of the epidural lesion. Histopathological analysis revealed findings consistent with LCH, with positive immunohistochemical staining for CD1a, langerin, S100, and CD68. BRAF mutation was negative. A screening whole-body positron emission tomography (PET)-CT scan for metabolic assessment showed increased uptake at the area of the T6 vertebra, suggestive of residual/active disease, and high right parotid gland uptake. MRI of the head showed a thickened pituitary stalk.
Accordingly, an endocrine assessment was made, which was unremarkable. Fig. 3 Histopathology examination. (A) CD1a, (B) CD68, (C) Langerin, (D) S100. Fig. 4 Whole body positron emission tomography. (A) Coronal plane showing asymmetric right-sided parotid gland high uptake (red arrow). (B) Sagittal plane showing mildly increased uptake at the area of the T6 vertebra consistent with postoperative changes. Note the T4-T8 interpedicular screw fixation artefact at the mid-thoracic spine. Fig. 5 Selected T1 sagittal (A) and coronal (B) images of multiplanar multisequence MRI of the pituitary gland without intravenous contrast (avoided due to acute kidney injury), showing mild thickening of the pituitary stalk (red arrows). The width of the pituitary stalk measured 4 mm. LCH is more prevalent in the pediatric population, with the highest incidence between 1 and 15 years of age. Males are more likely to manifest the disease, with a male-to-female ratio of 2:1. Clinical and radiological findings of LCH are nonspecific. The diagnosis of LCH is based solely on histopathological examination. The histopathological pattern demonstrates a diffuse infiltration of pale-staining mononuclear cells that resemble histiocytes, with indistinct cytoplasmic borders and rounded or indented vesicular nuclei. Given its rarity among adults, it is unlikely to be included in the differential diagnosis at the initial encounter. In spinal LCH, single or multiple well-defined osteolytic lesions accompanied by soft tissue lesions leading to vertebral collapse/flattening (vertebra plana) are the most common radiologic finding. The neurological manifestations of LCH are variable. Diabetes insipidus (DI), which can cause frequent urination, is the most common manifestation of CNS involvement. Co-existing anterior pituitary endocrinopathies are common among LCH-related DI patients. Radiologically, pituitary gland enlargement and stalk thickening are nonspecific but well-established findings. A pituitary stalk width of 4 mm or greater is commonly used as a cutoff to point towards pathological thickening. Even though the reported patient did not complain of parotid gland symptoms and no palpable mass was observed at the expected anatomical location, the high fluorodeoxyglucose (FDG) uptake of the parotid gland is likely explained by early systemic involvement of LCH. There have been several case reports of LCH with parotid gland involvement of variable presentation: isolated, multisystem, unilateral and bilateral [ 8 , , , ]. Isolated LCH of the sublingual gland has also been reported. In short, this is a very rare case of an adult woman who presented with back pain along with sensory and motor deficits in the lower limbs. MRI of the spine showed T6 vertebral body pathological collapse with an epidural soft tissue component compressing the spinal cord. She underwent T4-T8 pedicle screw fixation with T6 decompressive laminectomy and biopsy of the epidural lesion, which revealed histopathological features of LCH. Other imaging modalities showed findings suggestive of multi-system involvement. Written informed consent for publication was obtained from the patient. | Review | biomedical | en | 0.999996
PMC11696661 | Although it is extremely difficult to state the exact date humans ventured into gold exploration, since its discovery the aura around gold has captivated humans of all cultures globally. Gold's everlasting gleam, unchanging uniqueness, unfading reflective sunlit lustre, and its comparison to the sun's perpetuity have until contemporary times been the epithet of its symbolic and revered status, which humans have always craved. To satisfy this insatiable desire for gold, humans used rudimentary tools in one of the most important industries of early civilization. Present-day Ghana, a country located in West Africa, was among the countries that saw gold mining as a viable economic venture, for which reason she was mining gold long before the trans-Saharan trade. The abundance of gold earned present-day Ghana its colonial name, the Gold Coast. Before the European explorers arrived on the shores of present-day Ghana in 1471, artisanal gold mining was a major industry that supported the Akan-speaking states. Traditional gold exploitation techniques, which were rudimentary in nature with minimal environmental impacts, constituted a highly respected traditional vocation in the then Gold Coast. Groups, families and individuals were the main custodians and owners of parcels of land enriched with gold ore in their communities. The arrival of the European explorers in the then Gold Coast, who introduced large-scale gold mining after colonization, marked the beginning of the process by which groups, families and individuals who had hitherto been custodians of gold-rich land were muscled out. To restore normalcy, appease traditional authorities, create jobs, and propel the economy, the ''Small-scale Gold Mining Law, 1989 (PNDC Law 218)'', which officially legalised artisanal and small-scale mining in Ghana, was passed with much hope and economic expectation three decades after Ghana gained independence from British rule. This joy was, however, short-lived due to the bureaucracy associated with the acquisition of mining concessions and licences under the very law that gave the groups, families, individuals and traditional authorities so much hope. Driven by an inherent will to survive, many inhabitants of rural communities in Ghana took to 'galamsey', a Ghanaian term derived from the phrase "gather and sell" and coined from observation of how gold was mined with simple tools by natives and sold afterwards, to circumvent the unending bureaucracy associated with the acquisition of mining concessions and licences under ''The Small-scale Gold Mining Law, 1989, PNDC Law 218''. This drive towards economic survival saw the artisanal and small-scale mining (ASM) sector (both legal and galamsey) in Ghana contribute a significant 43 % of the total gold produced in 2018. However, this significantly high contribution by the ASM sector comes at a huge environmental and public health cost. The use of heavy earth-moving equipment for excavation, which has replaced rudimentary techniques at some galamsey sites, has resulted in massive destruction of forest reserves, fertile agricultural lands, and vegetation cover in Ghana. Galamsey activities in the Ashanti, Eastern and Western Regions of Ghana have led to massive loss of forest cover, resulting in the loss of aquatic and terrestrial habitats, affecting the distribution of flora and fauna and ultimately distorting the ecological structures of local communities.
In freshwater aquatic environments such as the Pra, Offin, Ankobra and Birim Rivers, mercury, which is predominantly used by artisanal miners during the amalgamation of gold, is transformed into methylmercury, an extremely toxic species of mercury with a huge potential for bioaccumulating in aquatic organisms and biomagnifying along aquatic food chains. Additionally, colliery effluents high in fine and coarse particles associated with galamsey activities all over Ghana have turned pristine clear water bodies muddy, impacting negatively on aquatic ecosystems. Fig. 1 Impacts of galamsey on River Offin; Photo Credit: Kenneth Bedu-Addo. The muddy slurry effluent from galamsey sites has reduced sunlight penetrability in water bodies, seriously affecting primary productivity, the habitats of indigenous traditional fish species, and the functionality and integrity of freshwater ecosystems. The United Nations' assertion of poor water quality in African countries is a signal that aquatic ecosystem restoration as a tool for improving access to ecosystem services will continue to be a daunting task. The assertion by Ref. corroborates the paucity of research on the impacts of 'galamsey' on ecosystem services, viz. provision, regulatory and cultural ecosystem services, in Ghana. This article seeks to fill that gap by using the Drivers-Pressures-State-Impact-Response (DPSIR) framework proposed by UNEP in tandem with the quantitative, defensible impact characterization approach to find answers to two key questions, namely: (i) what is happening to the aquatic environment because of galamsey and why is it happening? (compilation and analysis of status and trends of key environmental indicators); and (ii) what are the consequences for the aquatic environment in terms of provision, regulatory and cultural ecosystem services? (analysis of impacts of environmental change on human health and ecosystem services). Ghana is located on the west coast of Africa, with its southernmost point five degrees north of the Equator and the Greenwich meridian passing through its industrial city of Tema. With geographic coordinates of 8°00′ N, 2°00′ W, Ghana has a 539 km coastline along the Gulf of Guinea and shares borders to the east with Togo, to the north with Burkina Faso, to the west with Côte d’Ivoire and to the south with the Gulf of Guinea. Ghana's climate is generally tropical, with the northern enclave being hot and dry, the southwestern enclave hot and humid, and the southeastern enclave dry. The southern enclaves are characterised by two rainy seasons, a minor season between September and November and a major season between April and July attaining a maximum in June. Mean annual rainfall figures in these areas range between 1250 and 2150 mm. The climate in the northern enclave is predominantly semi-arid and is influenced by the movement of the intertropical convergence zone (ITCZ). The ITCZ brings cool, dry northeast trade winds (Harmattan) in the dry season between November and March, and moist southwest monsoon winds in the wet season between April and September. The rainy season there is unimodal, with an average annual rainfall of 980 mm/year. Currently, Ghana hosts a number of the world's major gold mining companies, including Xtra Gold, Newmont Gold Ltd, Perseus Mining, Gold Fields Ghana, AngloGold Ashanti and Golden Star, among others.
The gold deposits are hosted in two main geological formations: metasedimentary and metavolcanic rocks associated with mesothermal quartz-vein gold deposits, and conglomerate rocks associated with paleoplacer gold deposits. Alluvial gold deposits can also be found in the metasediments along some rivers in Ghana. Fig. 2 Mineral distribution and prospects in Ghana. An analysis of the impacts of galamsey on ecosystems and ecosystem services in Ghana was undertaken based on a three (3) step analytical approach adapted from UNEP's human environment analytical approach. The first of the three steps entailed a pre-assessment stage, during which an indicator impact pathway diagram was developed through four (4) expert interviews, site visits, a review of documents and a comprehensive desk study based on five (5) thematic areas, namely the impacts of galamsey on aquatic ecosystems, on aquatic ecosystem provision services, on aquatic ecosystem regulatory services, on aquatic ecosystem cultural services, and on human health. Secondary data sources on the five thematic areas used in the study included peer-reviewed and non-peer-reviewed journals, books, institutional publications, online databases including Google Scholar, online blogs, case studies, and articles from credible sources. Inclusion criteria taking into cognizance the reputation of the source of information, bias of information due to sponsor interest, corroboration of information from other sources, the built-in credibility of information, and credible academic journals and publishers were used to gather data for answering the questions: what is happening to the aquatic environments in Ghana and why is it happening? (compilation and analysis of status and trends of key environmental indicators) and what are the consequences for the environment and people? (analysis of impacts of environmental change on ecosystem services). Fig. 3 Impact pathway assessment indicators (adapted from Ref. ). The desk study then formed the basis for the use of the impact pathway diagram, with indicators namely vegetation loss, hydromorphological alteration, flow modification, discolouration of water, arsenic, mercury, suspended solids, and turbidity levels, for the initial impact assessment of galamsey activities on aquatic ecosystems and the ecosystem services they provide. Arsenic was selected as an indicator for the study because the gold ore in the Dunkwa-On-Offin area is embedded in arsenopyrite, which leads to the release of arsenic during galamsey activities. Mercury was selected as an indicator by virtue of its use by the galamsey miners for amalgamation purposes, with water bodies serving as receptacles for unrecovered mercury. The activities of galamsey miners in the study area, which is mainly alluvial, generate huge quantities of mud during the mining and recovery processes, thereby altering the suspended solids loads, colour and turbidity of water bodies in Ghana. The huge quantities of mud generated during galamsey activities, in tandem with the diversion of rivers, are precursors for the flooding of arable land, river flow modification, hydromorphological alteration as well as vegetation loss. The indicators were further used in an adapted version of the DPSIR analytical framework, namely DPSI, in combination with the quantitative, defensible impact characterization approach (Table 1). Fig. 4 Adapted D-P-S-I framework based on the UNEP human environment approach.
Fig. 4 Source: Adapted. Table 1 Quantitative, defensible impact characterization for the assessment of the significance of impacts attributable to galamsey. The impact characterization approach was selected based on its credibility as a recommended Training Resource Manual on Environmental Impact Assessment by the United Nations Environment Programme and on the authors' experience in impact assessment, to find answers to two key questions: (i) what is happening to the aquatic environments in Ghana and why is it happening? (compilation and analysis of status and trends of key environmental indicators) and (ii) what are the consequences for the environment and people? (analysis of impacts of environmental change on human health and ecosystem services). In answering the question of what is happening to the aquatic environments in Ghana and why it is happening, the levels of mercury and arsenic in the River Offin, a galamsey-impacted water body, were determined by taking 500 ml snap water samples over a 1000 m distance in clean, labelled sampling bottles thoroughly rinsed with distilled water and pre-conditioned with nitric acid (HNO3) to preserve the integrity of the samples for arsenic and mercury analysis. Five snap samples, taken 200 m apart and taking into cognizance variation in sampling depth, sampling site history, field observations and uniformity of the sampling points, were thoroughly mixed to give a representative composite sample of the River Offin over a 1000 m stretch. The composite sample generated from the five snap samples was placed in a cool box with ice packs and sent to the laboratory for the determination of mercury and arsenic concentrations. The 1000 ml composite water sample was filtered through a 0.45 μm filter, after which 200 ml was analysed with a Varian A220 Flame Atomic Absorption Spectrometer at wavelengths of 253.7 nm and 193.7 nm for Hg and As, respectively. Turbidity readings were taken on site using the HI97727 colour of water photometer, which has an advanced optical system with a narrow-band interference filter to ensure accurate readings. A certified reference material for Hg, As and turbidity was prepared and analysed for quality assurance purposes. The readings obtained from the laboratory analysis using the HI97727 colour of water photometer and the Varian A220 Flame Atomic Absorption Spectrometer were subjected to Student's t-test analysis in GraphPad Prism 7 to ascertain the significance of differences between the permissible levels of As, Hg and turbidity and the measured values in the analysed water samples (a minimal computational sketch of this comparison is given at the end of this subsection). To answer the question of what the consequences of galamsey are for the aquatic environments and the people of Ghana (analysis of impacts of environmental change on human health and ecosystem services), the indicator impact pathway was used. Each indicator was rated based on severity of impact + spatial scope of impact + duration of impact (consequence of impact, with an upper limit value of 15) and frequency of impact + frequency of activity (likelihood of impact, with a maximum value of 10). Impact significance attributable to galamsey activities was finally derived as shown in the rating matrix (Table 1).
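The statistical comparison referred to above can be reproduced with a one-sample t-test. The sketch below is illustrative only: the triplicate readings are hypothetical stand-ins (the study reports only mean values), scipy is used in place of GraphPad Prism 7, and the permissible levels are the drinking-water values from Table 2.

```python
# Illustrative sketch only: replicate values are hypothetical stand-ins for the
# triplicate readings; the study itself used GraphPad Prism 7.
from scipy import stats

# Hypothetical triplicate readings for the composite River Offin sample
measurements = {
    "arsenic_mg_L":  [35.1, 35.4, 35.6],   # reported mean ~35.36 mg/L
    "mercury_mg_L":  [87.2, 87.5, 87.8],   # reported mean ~87.5 mg/L
    "turbidity_NTU": [1580, 1600, 1620],   # reported mean ~1600 NTU
}

# Ghana EPA permissible levels for drinking (provision) use, taken from Table 2
permissible = {"arsenic_mg_L": 0.025, "mercury_mg_L": 0.001, "turbidity_NTU": 1}

for parameter, values in measurements.items():
    # One-sample t-test of the measured replicates against the permissible level
    t_stat, p_value = stats.ttest_1samp(values, popmean=permissible[parameter])
    print(f"{parameter}: t = {t_stat:.1f}, p = {p_value:.2e}")
```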
The various colours in the rating matrix were interpreted as follows: a very high impact significance ranged between 126 and 150 (+++++ red), a high impact significance from 101 to 125 (++++ orange), a medium-high impact significance from 76 to 100 (+++ yellow), a low-medium impact significance from 51 to 75 (++ green), a low impact significance from 26 to 50 (+ teal), and an extremely low or no impact significance from 1 to 25 (0 blue); a small computational sketch of this scoring scheme is given below. A cycle of cause-effect-outlook relationship for responsible artisanal mining in Ghana, which had artisanal miners as part of the ecosystem, was developed as a recommendation towards responsible mining at the artisanal and small-scale level in Ghana. Fig. 5 Cycle of cause-effect-outlook relationship for responsible artisanal mining in Ghana. Table 2 presents a comparison between the mean levels of arsenic, mercury, and turbidity in galamsey-polluted water bodies and the Ghana Environmental Protection Agency (EPA) permissible levels of arsenic, mercury, and turbidity for surface water. There were highly significant differences between the mean Hg, As and turbidity levels and the permissible levels, with the potential of affecting ecosystem services. Table 2 Comparison of levels of arsenic, mercury and turbidity with permissible discharge levels in galamsey-impacted rivers in the context of ecosystem services.
Drinking (Provision): arsenic mean 35.36 mg/L vs permissible 0.025 mg/L, P < 0.0001; mercury mean 87.5 mg/L vs permissible 0.001 mg/L, P < 0.0001; turbidity mean 1600 NTU vs permissible 1 NTU, P < 0.0001.
Recreation (Cultural): arsenic mean 35.36 mg/L vs permissible 0.05 mg/L, P < 0.0001; mercury mean 87.5 mg/L vs permissible 0.001 mg/L, P < 0.0001; turbidity mean 1600 NTU vs permissible 50 NTU, P < 0.0001.
Irrigation (Provision): arsenic mean 35.36 mg/L vs permissible 0.01 mg/L, P < 0.0001; mercury mean 87.5 mg/L, permissible level not specified; turbidity mean 1600 NTU, permissible level not specified.
Aqua culture (Provision): arsenic mean 35.36 mg/L vs permissible 0.005 mg/L, P < 0.0001; mercury mean 87.5 mg/L vs permissible 0.001 mg/L, P < 0.0001; turbidity mean 1600 NTU vs permissible 25 NTU, P < 0.0001.
P is significant at p < 0.05; permissible = permissible discharge level; NTU = nephelometric turbidity unit. Table 3 shows the significance of impacts attributable to gold mining operations, assessed by rating each parameter under consequence of impact (maximum value of fifteen, 15) and likelihood of impact (maximum value of ten, 10). The products of consequence and likelihood of 140 and 42 are interpreted as very high significance and low significance, respectively; mercury and loss of vegetation thus had the highest and lowest significance ratings. Table 3 Significance rating for galamsey-related indicators based on severity of impact, spatial scope of impact and duration of impact.
Arsenic: severity 5, spatial scope 3, duration 4, consequence (C) 12; frequency of impact 4, frequency of activity 5, likelihood (L) 9; significance High (12 x 9 = 108).
Suspended solids: severity 4, spatial scope 4, duration 4, consequence (C) 12; frequency of impact 5, frequency of activity 5, likelihood (L) 10; significance High (12 x 10 = 120).
Vegetation loss: severity 2, spatial scope 2, duration 2, consequence (C) 6; frequency of impact 2, frequency of activity 5, likelihood (L) 6; significance Low (6 x 7 = 42).
Flow modification: severity 3, spatial scope 2, duration 5, consequence (C) 10; frequency of impact 3, frequency of activity 5, likelihood (L) 8; significance Medium-high (10 x 8 = 80).
Hydromorphological alteration: severity 3, spatial scope 2, duration 5, consequence (C) 10; frequency of impact 3, frequency of activity 5, likelihood (L) 8; significance Medium-high (10 x 8 = 80).
Discolouration: severity 2, spatial scope 4, duration 4, consequence (C) 10; frequency of impact 4, frequency of activity 5, likelihood (L) 9; significance Medium-high (10 x 9 = 90).
Erosion: severity 3, spatial scope 3, duration 3, consequence (C) 9; frequency of impact 3, frequency of activity 5, likelihood (L) 8; significance Low-medium (9 x 8 = 72).
Mercury: severity 5, spatial scope 5, duration 4, consequence (C) 14; frequency of impact 5, frequency of activity 5, likelihood (L) 10; significance Very high (14 x 10 = 140).
Table 4 presents the results of the impact significance matrix of galamsey on provision ecosystem services using the severity of impact, spatial scope of impact, duration of impact, frequency of impact and frequency of activity, with arsenic, mercury, suspended solids, discolouration of water, erosion, flow modification and hydromorphological alteration as indicators.
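A minimal computational sketch of the scoring scheme described above is given here: consequence is the sum of severity, spatial scope and duration; likelihood is the sum of frequency of impact and frequency of activity; and their product is mapped to the colour bands defined above. The function and variable names are illustrative assumptions rather than the authors' implementation; the example criterion scores for mercury are taken from Table 3.

```python
# Minimal sketch of the quantitative impact-characterization scoring used above.
# Function and variable names are illustrative assumptions, not from the paper.

def consequence(severity: int, spatial_scope: int, duration: int) -> int:
    """Consequence of impact, upper limit 15 (each criterion scored 1-5)."""
    return severity + spatial_scope + duration

def likelihood(freq_of_impact: int, freq_of_activity: int) -> int:
    """Likelihood of impact, upper limit 10 (each criterion scored 1-5)."""
    return freq_of_impact + freq_of_activity

def significance_band(score: int) -> str:
    """Map a consequence x likelihood product to the bands used in the text."""
    if score >= 126:
        return "Very high (red)"
    if score >= 101:
        return "High (orange)"
    if score >= 76:
        return "Medium-high (yellow)"
    if score >= 51:
        return "Low-medium (green)"
    if score >= 26:
        return "Low (teal)"
    return "Extremely low / none (blue)"

# Example: mercury, using the criterion scores reported in Table 3
c = consequence(severity=5, spatial_scope=5, duration=4)      # 14
l = likelihood(freq_of_impact=5, freq_of_activity=5)          # 10
score = c * l                                                 # 140
print(score, significance_band(score))                        # 140 Very high (red)
```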
A very high impact significance ranges between 126 and 150 (+++++ red), a high impact significance from 101 to 125 (++++ orange), a medium-high impact significance from 76 to 100 (+++ yellow), a low-medium impact significance from 51 to 75 (++ green), a low impact significance from 26 to 50 (+ teal), and an extremely low or no impact significance from 1 to 25 (0 blue). Table 4 Impacts of galamsey activities on provision ecosystem services in Ghana. Table 5 presents the results of the impact significance matrix of galamsey on cultural ecosystem services using the severity of impact, spatial scope of impact, duration of impact, frequency of impact and frequency of activity, with arsenic, mercury, suspended solids, discolouration of water, erosion, flow modification and hydromorphological alteration as indicators; the colour coding follows the significance ranges given above. Table 5 Impacts of galamsey activities on cultural ecosystem services in Ghana. Table 6 presents the results of the impact significance matrix of galamsey on regulatory ecosystem services using the same criteria and indicators, with the same colour coding. Table 6 Impacts of galamsey activities on regulatory ecosystem services in Ghana. Prior to the advent of galamsey, Ghanaian residents in towns and villages had mined gold at the artisanal level for economic reasons in a responsible and sustainable manner using rudimentary tools for centuries. Current drivers, identified as increasing youth unemployment, poverty, non-enforcement of ACT 703, rising gold prices and a slump in agriculture, have led to an escalation of galamsey activities over the last few years across the length and breadth of Ghana [ 9 , 24 , , , ]. Galamsey has thus largely replaced subsistence agriculture, which hitherto was the principal income-earning activity in most Ghanaian communities. The galamsey miners operate without valid licenses from the Minerals Commission of Ghana and undertake their mining activities in disregard of the Water Resources Commission of Ghana's vision of ‘Sustainable water management by all for all’, resulting in debilitating impacts on aquatic ecosystems. The impacts caused by galamsey activities in Ghana include the altering of predation behaviours, the altering of light penetration in water bodies, the destruction of aquatic habitats, the fragmentation/loss of aquatic habitats and the destruction of aquatic ecosystems, which may lead to the loss of aquatic biodiversity, as is evident at Tontokrom, Kyekyewere and Tarkwa Nsuam in the Ashanti, Central and Western Regions of Ghana.
Galamsey activities across the length and breadth of Ghana have led, and could further lead, to the diversion of river flow, mercury pollution and siltation. The mercury used during the amalgamation stage of gold recovery by the galamseyers could be deposited in the sediments of aquatic systems, where it could be converted to methylmercury by microorganisms and absorbed by phytoplankton, making it available for accumulation in consumers along the food chains. The mercury could also bioaccumulate in fish, snails and crabs and biomagnify along food chains and food webs, thus making galamsey one of the leading contributors to mercury pollution in Ghana. The colliery effluents from galamsey activities, which are normally high in fine and coarse particles, have turned once pristine clear water bodies, including the Oda, Ankobra, Pra, Birim and Offin rivers, into various shades of brown across Ghana. The slurry and mud, coupled with the unrecovered mercury used by galamseyers for amalgamation during gold recovery, have the potential to reduce the penetrability of light in water bodies in Ghana, reducing the rate of photosynthesis, inhibiting the growth of aquatic plants, and bioaccumulating in several species of fish and other hydrobionts that are crucial to aquatic ecosystem function. The silt, sand and clay, which constitute the main suspended solids generated during galamsey, could also darken water bodies, clog the gills of fish, reduce the spawning sites of fish by settling in cracks and crevices, and reduce the hunting ability of predatory aquatic organisms that rely heavily on vision, ultimately affecting food chains. The huge volumes of suspended solids generated during galamsey activities could affect the habitats of traditional and indigenous fish species, with the functionality and integrity of aquatic ecosystems in Ghana ultimately being threatened. The impacts of galamsey on ecosystem services in Ghana are so massive that one begins to wonder whether the economic benefits from galamsey are worth the massive siltation, sedimentation and discolouration of several rivers across Ghana. The negative impacts of galamsey on ecosystem services in Ghana are evident in arsenic pollution, mercury pollution, suspended solids pollution, discolouration of water bodies, flow modification, and hydromorphological alteration of several water bodies. The extremely high levels of total suspended solids in galamsey-related water bodies, with the resultant significant p-values between monitored data and permissible levels, are an indication that livelihoods derived from ecosystem services could be severely affected by galamsey. The excessive suspended solids loads could impact negatively on key provision services, not limited to water used for navigation, raw water for aquaculture, fish as a food source and raw water for drinking purposes.
High suspended solids loads generated by galamsey activities can block the sun's rays from reaching submerged aquatic plants, which serve as producers in aquatic ecosystems, thereby reducing primary productivity. Additionally, the colliery effluent emanating from galamsey, with its associated high levels of coarse and fine suspended solids, has the potential to darken water bodies in Ghana. The darkening of the water bodies could make them absorb more heat, thereby increasing their temperature. An increase in temperature will decrease dissolved oxygen concentrations owing to the higher affinity of suspended solids for sunlight compared with water molecules. As the heat dissipates to the surrounding water by conduction, dissolved oxygen levels will drop considerably, causing stratification and the destruction of hydrobionts, some of which are important protein sources for humans [ 23 , , , , ]. Provision services, the most obvious of the services provided by ecosystems, are the most impacted by galamsey activities in Ghana. Among the aquatic ecosystem provision services negatively affected by gold mining in Ghana are fresh water for domestic use, fish and other aquatic resources. Several rivers, including the Ankobra, Offin, Anikoko, Tano, Bodwire, Asesree, Assaman, Birim, Pra and Oda, some of which are very important intake points for raw water for treatment and distribution to consumers, have all seen an escalation in turbidity in recent times. The Kibi, Daboase and Odaso treatment plants of the Ghana Water Company Limited had to suspend operations due to extremely high turbidity values of 1600 NTU, 1261 NTU, up to 2000 NTU and 3842 NTU because of sprawling galamsey activities. This could negatively impact raw water sources, an important service humans obtain from aquatic ecosystems. A vast range of aquatic foods, including snails, fish and crustaceans (a provision service), obtained from aquatic ecosystems are being lost to the overuse of water bodies and their conversion into mining hotspots by galamseyers. Additionally, genes and genetic information very critical for aquatic flora and fauna breeding and biotechnology purposes are being lost due to the destruction of aquatic ecosystems by galamsey activities. Among the cultural ecosystem services that could be affected by galamsey in Ghana are spiritual values, religious values, educational values, cultural heritage values, recreation, and aesthetic experience. Water bodies desecrated by suspended solids pollution, discolouration by silt, and hydromorphological alteration from galamsey deprive communities of religious values, including the use of water bodies for Christian programmes such as baptism, and of spiritual values, including visitations by traditionalists to deities in water bodies to reverse curses, among others. Educational values, not limited to traditional knowledge systems such as visiting river bodies only on specific days to help water bodies self-cleanse and replenish fish stocks to meet the protein requirements of inhabitants of communities in Ghana, are fast disappearing due to galamsey activities. The aesthetic and recreation benefits humans obtain from aquatic ecosystems could be totally lost if suspended solids pollution, discolouration of water bodies with silt, and hydromorphological alteration of water bodies attributable to galamsey go on unabated.
A summary of the ecosystem services likely to be negatively affected in Ghana as a result of the pollution of water bodies by galamsey activities is presented in Table 7. Table 7 Summary of ecosystem services likely to be affected in Ghana because of galamsey.
Provisioning - Food: reduced fish, oyster and snail catch; reduced quantities of fish as a source of protein; reduced availability of raw water for aquaculture.
Provisioning - Raw water: reduced raw water availability; reduced raw water quality; increased raw water treatment cost.
Provisioning - Genetic and ornamental resources: reduction and/or loss of critical genetic material for aquatic animal and plant breeding; loss of shells used for making ornaments; loss of exotic and ornamental fish used in aquaria.
Provisioning - Natural chemicals and pharmaceuticals: loss of biocides, traditional medicines and food additives.
Regulation - Water regulation and purification: increased incidence and magnitude of flooding; reduced aquifer recharge; reduced ability of water bodies to self-cleanse.
Regulation - Erosion control: reduced soil retention ability of terrestrial environments due to clearing of vegetation.
Regulation - Regulation of human diseases: increased abundance of disease-causing vectors as a result of the destruction of the habitat of fishes that feed on the larvae of vectors, such as mosquitoes.
Cultural - Spiritual, religious and cultural heritage values: reduced use of water bodies for religious and spiritual activities, not limited to baptism and the revoking of curses; loss of identity because of the importance rural communities place on deities and culturally significant species, all of whom dwell in water bodies.
Cultural - Aesthetics, recreation and ecotourism: decreased use of water bodies for recreation, aesthetic and ecotourism purposes.
The significantly high concentration of arsenic in the water sampled in comparison to the permissible levels, P < 0.0001 (Table 2), is expected to have a high significance rating based on the quantitative, defensible impact characterization (Table 3, Table 4). This is because the arsenic concentration exceeded the permissible levels in water bodies in Ghana by over 1400 times (Table 2). Additionally, arsenic, a highly hazardous priority inorganic micropollutant, can cause toxicity in hydrobionts and affect fish growth, behaviour and/or reproduction. The high significance rating of arsenic is also attributed to arsenic's ability to destroy the habitats of fish, aquatic mammals, birds, and invertebrates. Arsenic's high significance rating could further be attributed to its ability to go into soil solution and subsequently leach into groundwater aquifers. Arsenic could have a high impact (++++) on aquaculture and fish harvest owing to its ability to destroy fish habitat and affect fish growth, behaviour and/or reproduction. Another reason for the high significance rating of arsenic is the unavailability of potable water in several rural communities in Ghana, for which reason residents of these communities depend on raw water for drinking purposes. These residents could end up ingesting arsenic, which is abundantly present in the form of arsenopyrite in several gold mining communities in Ghana, making arsenic available for ingestion through drinking water.
The significantly high concentration of arsenic in sampled water bodies in Ghana could have dire consequences for some provision and cultural ecosystem services, including the availability of raw water to be treated for drinking purposes and of water for recreational purposes, respectively (Table 4, Table 5). Arsenic was, however, given a low impact score (Table 4) for galamsey-contaminated water used for irrigation because plants take up arsenic from soil in two main forms, namely arsenate and arsenous acid. On entering the root cells, arsenate is quickly reduced to arsenite and channelled into media external to the plants or transported to the shoots. A considerable number of plant species are known to be arsenic excluders and are unresponsive to arsenic over a wide range of concentrations in soils, for which reason the concentration of arsenic in plants is usually very low [ , , , ]. The significantly high concentration of mercury in water bodies in comparison to the permissible levels, P < 0.0001 (Table 3), has a very high significance rating based on the quantitative, defensible impact characterization (Table 3). This is because the mercury concentration exceeded the permissible levels in water bodies affected by galamsey activities in Ghana by over 87,000 times (Table 2). Elemental mercury in effluent generated by galamsey activities could be converted to methylmercury by microorganisms and absorbed by phytoplankton, making it available for accumulation in consumers in the food chains linked to filter feeders and the sediments of aquatic systems in Ghana. Mercury, a known toxic heavy metal, will have a very high significance rating because its methylmercury species is among the micropollutants that are most bioaccumulated and environmentally persistent. The bioaccumulation of methylmercury in fish and its subsequent biomagnification are known risks to humans. The transport of mercury vapour released during amalgamation and gold recovery by galamsey miners, and its subsequent inhalation, has the potential of seriously impairing the excretory functions of the kidneys, the transfer of impulses by the nervous system and the cognitive functions of the human brain. This is yet another reason why the emission of mercury due to galamsey activities received a very high significance rating (Table 4, Table 5, Table 6). Turbidity levels of galamsey-impacted rivers in Ghana can exceed the permissible turbidity levels for surface water by as much as 1600 times (Table 2). These significantly high turbidity levels, P < 0.0001 (Table 2), could reduce the sunlight intensity reaching aquatic plants, affecting photosynthesis with subsequent impacts on dissolved oxygen levels and the growth of aquatic plants crucial for primary production in aquatic ecosystems, for which reason their impact is rated high. Another reason for the high impact significance rating of suspended solids is the huge volumes of slurry and mud, which constitute the main suspended solids generated during alluvial galamsey gold mining activities, including abstraction and sluicing, which significantly increase turbidity in water bodies in Ghana. The slurry and mud, which are predominantly made of silt, sand and clay, darken water bodies, disrupting the migration of aquatic organisms that rely on vision for courtship and spawning.
Additionally, high turbidity due to suspended solids could lead to the clogging of fish gills, the settling of solids in cracks and crevices, and impairment of the hunting ability of predatory aquatic organisms, affecting food chains and ultimately the protein sources of human beings. Huge volumes of suspended solids, with their associated high turbidity, in galamsey-affected water bodies in Ghana could affect the habitats of traditional and indigenous fish species, with the functionality and integrity of ecosystems and their associated provision ecosystem services ultimately being threatened. High turbidity in surface water bodies in Ghana could also impact negatively on cultural ecosystem services (Table 5), not limited to recreation, tourism, aesthetic experience, and spiritual, religious, educational and cultural heritage values. The significantly high levels of suspended solids (Table 3) could lead to the browning of water bodies in Ghana, which could stimulate a bloom of the alga Gonyostomum semen with a resultant decrease in the primary production associated with the food chains and food webs necessary for the fish biomass much sought after by anglers. The Gonyostomum semen bloom could also cause allergic reactions and skin irritation in prospective swimmers, who will subsequently avoid such water bodies, depriving them of the recreational value that can be obtained from water bodies [ 47 , , , ]. The significant quantities of arsenic, mercury and suspended solids generated during galamsey activities in Ghana have diverted the flow of several rivers, altered light penetration, destroyed and fragmented aquatic habitats, and destroyed aquatic ecosystems in Tontokrom in the Ashanti Region, Kyekyewere/Akropong in the Central Region and Tarkwa Nsuam in the Western Region, leading to loss of aquatic biodiversity in Ghana. The silt, sand, and clay, the main suspended solids generated during galamsey activities, have darkened water bodies in Ghana, clogged the gills of fish, reduced the spawning sites of fish, and reduced the hunting ability of predatory aquatic organisms. Provision services, not limited to raw water to be treated for domestic purposes, raw water for irrigation, raw water for aquaculture, fish harvest, raw biotic material (algae for fertilizer), and ornamental resources, have seen a significant reduction due to an upsurge in silt, sand, and clay, the main suspended solids generated during galamsey activities in Ghana. Cultural services, including the knowledge associated with folklore and the aesthetic, religious and spiritual values derived from the rivers Pra, Desu, Oti and Offin in Ghana, are being lost to the huge quantities of silt, sand, and clay generated during galamsey activities. The self-cleansing ability, flood regulation ability and habitat structure maintenance of several rivers in Ghana have been significantly impacted by silt, sand, and clay, the main suspended solids generated during galamsey activities. The metalloid arsenic released during gold recovery by galamsey activities was rated high in significance because most gold deposits in Ghana occur mainly within mineralized Precambrian greenstones with the gold trapped as arsenopyrite. The high significance rating of arsenic is attributed to hydrobiont toxicity in the rivers Pra, Desu, Oti and Offin caused by the highly hazardous, persistent inorganic form of arsenic released during galamsey, which affects fish growth, behaviour and reproduction and depletes fish stocks, an important provision service of these rivers.
The heavy metal mercury, used by the galamseyers for amalgamation, had a very high significance rating based on the quantitative, defensible impact characterization owing to the persistence of mercury in aquatic environments. The mercury could be converted to methylmercury by microorganisms and absorbed by phytoplankton, bioaccumulating and biomagnifying in fish, snails and crabs along aquatic food chains and food webs across the length and breadth of Ghana. Kenneth Bedu-Addo: Writing – review & editing, Writing – original draft, Supervision, Software, Methodology, Investigation, Formal analysis, Conceptualization. Louis Boansi Okofo: Writing – review & editing, Software. Augustine Ntiamoah: Writing – review & editing, Software, Formal analysis. Henry Mensah: Writing – review & editing, Formal analysis. To actualize Ghana's push towards achieving SDG targets 6.1 and 6.6 of Goal 6, which seek to ensure 'safe and affordable drinking water' and to 'protect and restore water-related ecosystems' respectively, galamsey activities should be carried out under a licensed regime devoid of aquatic pollution and destruction of ecosystems and their services, through regular capacity-building programmes on sustainable mining techniques, viz. the use of mercury-free mineral processing equipment, and through the implementation of the cycle of cause-effect-outlook relationship for responsible artisanal mining in Ghana shown in Fig. 5. Data sets generated during the current study are available from the corresponding author on reasonable request. The data sets were obtained mostly from the water sources from which Ghana's water company abstracts its raw water, which are under siege by artisanal miners and hence mostly demarcated as security zones. I declare that the results/data/figures in this manuscript have not been published elsewhere, nor are they under consideration (from you or one of your Contributing Authors) by another publisher. The corresponding author has read the Springer journal policies on author responsibilities and submits this manuscript in accordance with those policies. I declare that all the material is owned by the authors and/or no permissions are required. "All authors have read, understood, and have complied as applicable with the statement on "Ethical responsibilities of Authors" as found in the Instructions for Authors and are aware that with minor exceptions, no changes can be made to authorship once the paper is submitted." None. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Other | other | en | 0.999999
PMC11696662 | Type 2 diabetes mellitus (T2DM) is the most prevalent form of diabetes, accounting for about 80 % of diagnosed cases. Diabetes is one of the top five causes of death worldwide. The global public health danger of type 2 diabetes poses a threat to the economies of all countries, especially developing nations. Driven by rapid development, dietary changes, and progressively inactive lifestyles, the epidemic has expanded simultaneously with the global surge in obesity. The majority of patients with type 2 diabetes and obesity have insulin resistance in muscle, liver, and fat, which reduces these tissues' sensitivity to insulin. Given the rising prevalence of type 2 diabetes and its association with glucocorticoid use, understanding the underlying mechanisms and potential interventions is crucial. The action of glucocorticoids in target tissues is influenced by the density of nuclear receptors and by the intracellular metabolism facilitated by two isozymes of 11β-hydroxysteroid dehydrogenase (11β-HSD). These isozymes catalyze the reversible conversion between the active glucocorticoids cortisol and corticosterone and their inactive forms cortisone and 11-dehydrocorticosterone. 11β-HSD1 typically functions as an oxidase (dehydrogenase) in vitro, converting active cortisol (in humans) or corticosterone (in rodents) into the inactive forms cortisone and 11-dehydrocorticosterone, respectively. However, in intact cells, especially under specific physiological conditions, 11β-HSD1 acts predominantly as a reductase, regenerating active glucocorticoids from the inactive forms. This reversal to reductase activity is central to enhancing glucocorticoid action at the cellular level and is often influenced by factors such as cofactor availability (NADPH) and cellular context. 11β-HSD1 is widely expressed in the gonads, adult brain, inflammatory cells, liver, adipose tissue, muscle, and pancreatic islets. In obesity, 11β-HSD1 is primarily elevated in adipose tissue, which causes metabolic problems. Patients with subclinical Cushing's syndrome have been reported to have a higher frequency of hypertension, central obesity, impaired glucose tolerance, diabetes, and hyperlipoproteinemia. Since the 1940s, when glucocorticoid therapy for autoimmune disease was first introduced, the widespread use of these drugs has resulted in the concurrent discovery of numerous harmful metabolic side effects, which has limited therapy. Unexpected hyperglycemia caused by the start of glucocorticoids frequently results in avoidable hospital admissions, extended hospital stays, higher infection risks, and worsened graft function in recipients of solid organ transplants. The management of steroid-induced diabetes is complicated by the wide ranges in post-prandial hyperglycemia and the absence of well-defined treatment guidelines. Insulin resistance and hepatic steatosis are strongly correlated, and numerous reports discuss the connection between glucocorticoids and hepatic steatosis. In mouse models of non-alcoholic fatty liver disease (NAFLD) and non-alcoholic steatohepatitis disease (NASHD), metformin reverses steatosis, inflammation, and aminotransferase abnormalities. Metformin has been shown in numerous clinical trials to be beneficial for patients with NAFLD and NASHD. Nevertheless, a meta-analysis found that metformin did not improve steatosis, lobular inflammation, hepatocellular ballooning, or fibrosis in non-alcoholic steatohepatitis (NASH) patients.
According to recent guidelines, metformin is not advised as a specific treatment for liver disease in adults with NASHD because it does not significantly affect liver histology. Numerous supplements derived from medicinal plants have been investigated for their possible advantages in treating hepatic steatosis. Curcumin, a phenolic compound derived from the turmeric root (Curcuma longa), may lower the hepatic fat content (HFC) in NAFLD, according to systematic reviews of the literature. Furthermore, meta-analyses have demonstrated that curcumin can enhance several liver-related parameters in people with NAFLD, such as circulating levels of triglyceride, total and low-density lipoprotein cholesterol, alanine transaminase, and HbA1c, as well as fasting plasma glucose, hyperinsulinemia, insulin resistance (as measured by the homeostatic model assessment [HOMA-IR]), body weight, body mass index (BMI), and waist circumference [ , , , ]. Moreover, curcumin appears to possess anti-inflammatory characteristics. Curcumin influences multiple aspects of metabolic syndrome, including insulin sensitivity, blood pressure, inflammation, and adipogenesis suppression. However, curcumin's low absorption from the gastrointestinal tract, poor bioavailability, and low water solubility limit its beneficial effects; some curcumin nanoparticles have a significantly higher bioavailability than the simple powder form. Nanotechnology-based pharmaceutical formulations, particularly those incorporating curcumin NPs, have emerged as promising solutions to enhance the bioavailability of curcumin and amplify its anti-diabetic properties. Conventional curcumin is hindered by its low solubility and rapid metabolic degradation, which limit its therapeutic effectiveness. However, by utilizing nanocarrier systems, researchers have significantly improved curcumin's absorption and stability, facilitating more efficient delivery to target tissues. Previous studies have demonstrated that curcumin NPs can effectively improve key metabolic parameters in diabetic animal models, including substantial reductions in blood glucose levels and enhanced insulin sensitivity. Moreover, nano-curcumin possesses strong antioxidant and anti-inflammatory capabilities, addressing the oxidative stress and chronic inflammation commonly associated with type 2 diabetes mellitus; these properties not only contribute to improved glycemic control but also help mitigate the long-term complications of diabetes, such as cardiovascular diseases and metabolic syndrome. Curcumin and curcumin NPs have the same chemical structure, so theoretically curcumin NPs could be just as effective as curcumin at reducing the risk factors for cardiovascular disease. For example, some earlier in vitro and in vivo research suggested that curcumin NPs might have some therapeutic benefits over native curcumin. Ashtary-Larky et al. reported that curcumin NPs supplementation was associated with an improved glycemic profile, decreasing fasting blood glucose, fasting insulin, and HOMA-IR. Moreover, curcumin NPs supplementation resulted in a rise in HDL. These researchers also found decreases in C-reactive protein and interleukin-6, which show the favorable anti-inflammatory and hypotensive effects of curcumin NPs supplementation. Turmeric's bioactive ingredient curcumin has attracted attention as a possible treatment for type 2 diabetes mellitus (T2DM), mainly because of its insulin-sensitizing, anti-inflammatory, and antioxidant qualities.
Examining curcumin's efficacy and mode of action is crucial when comparing it with better-established therapies such as metformin and sulfonylureas, and more recent ones such as glucagon-like peptide-1 (GLP-1) receptor agonists. Glycemic management in type 2 diabetes is built on agents like metformin, which lowers hepatic glucose production and improves insulin sensitivity. Sulfonylureas enhance the release of insulin from pancreatic beta-cells, while GLP-1 receptor agonists enhance insulin production in response to meals, slow gastric emptying, and promote appetite and weight control. Curcumin has a different mechanism of action: it acts on oxidative stress reduction and inflammatory pathways (including the NF-kB pathway), both of which are linked to insulin resistance. It has also been discovered that curcumin directly alters insulin signaling pathways, but the exact mechanisms are not yet fully known. Studies on curcumin show small, variable effects on fasting glucose and HbA1c levels, whereas metformin and sulfonylureas consistently show considerable decreases in these parameters. Curcumin usually reduces fasting glucose and HbA1c by a smaller amount in trials, and the outcomes vary across groups and study types. At larger dosages, gastrointestinal problems are the main side effect of curcumin, which is generally considered safe. In contrast to sulfonylureas, which may result in hypoglycemia, and GLP-1 agonists, which may cause nausea and pancreatitis, curcumin has a comparatively low incidence of side effects, which makes it a desirable therapy adjunct. Most research on curcumin in T2DM involves comparisons with placebos rather than established antidiabetic drugs like metformin or GLP-1 receptor agonists. This makes it difficult to assess curcumin's relative effectiveness and to understand whether it could serve as a substitute for or supplement to existing therapies. Direct comparisons with these medications are essential for positioning curcumin within the spectrum of T2DM treatments, yet few studies have tackled this. Addressing this gap could reveal curcumin's value in T2DM management, whether as a complementary therapy or a viable alternative in certain cases [ , , , ]. These gaps illustrate the limitations of current research on curcumin for T2DM, underscoring the need for more rigorous, standardized, and comprehensive studies. Until such research is conducted, curcumin's role in T2DM management remains tentative. Addressing these gaps through well-designed trials and mechanistic studies would provide a clearer, evidence-based foundation for integrating curcumin into T2DM treatment, whether as a primary option or as a beneficial adjunct. Therefore, the primary objective of this study is to evaluate the effects of curcumin and its nanoformulation on insulin resistance and metabolic disorders in Wistar rats with dexamethasone-induced hyperglycemia and dyslipidemia. The research will compare the therapeutic efficacy of curcumin, its nanoformulation, and metformin while exploring their mechanisms for improving metabolic health. Chitosan (low molecular weight), sodium tripolyphosphate, curcumin, metformin, and dexamethasone of pharmaceutical grade were purchased from Glentham (UK). Levels of glucose, total cholesterol, triglycerides, high-density lipoproteins (HDL), low-density lipoproteins (LDL), albumin, and aminotransferases (AST and ALT) were measured using commercially available diagnostic kits from Bio-diagnostic (Cairo, Egypt).
Serum insulin levels were estimated using an ultra-sensitive rat insulin ELISA kit from Gen X Bio Health Sciences Private Limited, New Delhi. Glutathione (GSH), superoxide dismutase (SOD), and malondialdehyde (MDA) detection kits were obtained from Nanjing Jiancheng Bio-Technology Co. Ltd (Nanjing, China). GLUT4 was obtained from Abcam (USA). The measurement techniques followed the instructions in the kit manufacturers' manuals. These kits have been rigorously validated for experimental use and typically demonstrate high sensitivity, often exceeding 90 %. This high sensitivity ensures reliable detection of metabolic markers, which is essential for accurate diagnosis and monitoring of conditions like diabetes and dyslipidemia. Curcumin-loaded chitosan NPs (CUR NPs) were prepared with a slight modification of the previously reported ionic gelation method of Duse et al. CUR NPs were prepared at the Faculty of Science, Sohag University, Egypt. Dimethylsulfoxide (DMSO) was used to dissolve curcumin under continuous stirring. Chitosan was dissolved in glacial acetic acid at room temperature, diluted with distilled water, and combined with the curcumin solution using a magnetic stirrer (Thermolyne, USA) running at 500 rpm. To create nanoparticles, the mixture was then dripped with tripolyphosphate (TPP) at a rate of one drop every 3 s using a burette and a 500-rpm magnetic stirrer. The mixture was then left on the magnetic stirrer for half an hour to produce a stable curcumin nanoparticle solution. After being separated by ultracentrifugation (Hanil Micro 17 TR centrifuge - HE5) for 30 min at 4 °C and 17000 rpm, the nanoparticles were lyophilized (freeze-dried) and kept at 4 °C for use in subsequent studies. The stability of the curcumin nanoparticles was then monitored for five days, along with their color, turbidity, and sedimentation. Fourier transform infrared spectroscopy (FT-IR), drug loading capacity (DLC), entrapment efficiency (EE), average particle size and size distribution, and transmission electron microscope images were used to characterize the CUR NPs. Using the JEOL JEM 100 CXII (100 kV), the surface morphology, microscopic structure, and particle size and distribution of the curcumin nanoparticles were examined. FTIR spectroscopy is a popular method for analyzing the structure of molecules, identifying the chemical bonds between them, and defining their structure; across the FTIR absorption bands, specific functional groups present in the molecular chemical structure are resolved. Interactions between the various substances and drugs were noted in the NPs, and the FTIR spectra of the chitosan NP sample were determined. After curcumin loading, nanoparticles were separated from the suspension by ultracentrifugation (Hanil Micro 17 TR centrifuge - HE5) at 17000 rpm and 4 °C for 30 min. The amount of free curcumin in the supernatant was measured by UV spectrophotometer at a wavelength of 422.5 nm. The encapsulation efficiency (EE) and drug loading capacity (DLC) of the nanoparticles were calculated using the following equations: Encapsulation efficiency (%) = [(T − F) / T] × 100; Drug loading capacity (%) = [(T − F) / W] × 100, where F is the amount of free (non-encapsulated) curcumin in the supernatant (mg), T is the total amount of curcumin added into the chitosan solution (mg), and W is the weight of the nanoparticles (mg); a worked numerical sketch of these calculations is given below. Healthy male Wistar rats weighing around 155–184 g were used in the present study.
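The following is a worked numerical sketch of the encapsulation efficiency and drug loading capacity equations given above; the function names and the input masses are hypothetical examples for illustration, not values reported in this study.

```python
# Illustrative sketch of the EE% and DLC% calculations; the masses below are
# hypothetical examples and are not values reported in the study.

def encapsulation_efficiency(total_curcumin_mg: float, free_curcumin_mg: float) -> float:
    """EE (%) = [(T - F) / T] x 100, with T total curcumin added and F free curcumin."""
    return (total_curcumin_mg - free_curcumin_mg) / total_curcumin_mg * 100

def drug_loading_capacity(total_curcumin_mg: float, free_curcumin_mg: float,
                          nanoparticle_mass_mg: float) -> float:
    """DLC (%) = [(T - F) / W] x 100, with W the weight of the recovered nanoparticles."""
    return (total_curcumin_mg - free_curcumin_mg) / nanoparticle_mass_mg * 100

# Hypothetical example: 50 mg curcumin added, 8 mg left free in the supernatant,
# 120 mg of lyophilized nanoparticles recovered.
ee = encapsulation_efficiency(total_curcumin_mg=50, free_curcumin_mg=8)       # 84.0 %
dlc = drug_loading_capacity(total_curcumin_mg=50, free_curcumin_mg=8,
                            nanoparticle_mass_mg=120)                         # 35.0 %
print(f"EE = {ee:.1f} %, DLC = {dlc:.1f} %")
```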
The Sohag Institutional Animal Care and Use Committee (Sohag-IACUC) of the Faculty of Medicine at Sohag University provided ethical approval for this research, which was carried out under protocol no. Sohag-5.November 5, 2023.03. The experimental animals' discomfort and suffering were kept to a minimum during every step of the process. The rats were acclimated for two weeks to the following conditions: 25 °C ± 2 °C, 65 % ± 10 % humidity, 11–13 air ventilation cycles per hour, and 12 h of light per day. Standard pellets were supplied to the rats, and they also received unlimited access to water. A total of 30 rats were divided into 5 groups with 6 rats in each group. The rats were divided according to weight (160–167, 180–184, 170–180, 150–162, and 158–169 g for groups 1, 2, 3, 4, and 5, respectively). Body weight was checked for all groups on day 1, day 14, and day 28. Fasting blood glucose was checked for all groups on days 1, 10, 14, and 28. Insulin resistance was induced by intraperitoneal injection of dexamethasone (1 mg/kg) for 14 days in all treated groups. Animals were fasted overnight (14 h) before dexamethasone treatment as described by Mahendran and Devi. Group 1 served as the normal control, and these rats were given the vehicle (DMSO: Tween 80: water) in a volume ratio of 1:1:8. Group 2 served as the dexamethasone (DEXA) control and received dexamethasone alone. Group 3 served as the positive control and received an oral reference drug (metformin, MET; 40 mg/kg) for 14 days after dexamethasone injection. Groups 4 and 5 were treated orally with curcumin (CUR) and curcumin NPs (CUR NPs), respectively, at a dose of 100 mg/kg for 14 days after dexamethasone injection. Rats were anesthetized and then sacrificed by cervical dislocation. The liver and pancreas were dissected out. Selected organs were stored in 10 % formalin and sent for histopathological analysis. After obtaining clear serum by centrifuging blood samples for 10 min at 3000 rpm, the samples were kept at −20 °C for biochemical analysis. Tissue samples from the pancreas and liver were sliced and prepared for histology and biochemical analysis. All rats had their fasting blood glucose (FBG) levels tested with an Accu-Chek meter (Roche Diagnostics GmbH, Mannheim, Germany) from the tail vein. For each time point, three measurements were made. Body weight was recorded on day 1, day 14, and day 28. After the rats were sacrificed by cervical dislocation, blood was collected from the heart. After the blood was centrifuged, the lipid profile was assessed in the serum by measuring the levels of triglycerides, high-density lipoproteins (HDL), low-density lipoproteins (LDL), and total cholesterol. A rat insulin enzyme-linked immunosorbent assay kit was used to measure fasting serum insulin (FSI) levels. The insulin resistance index (IRI) was calculated as IRI = FBG × FSI/22.5. A Hitachi Analyzer Model 911 (Hitachi) was used to measure serum albumin and the activities of aspartate aminotransferase (AST) and alanine aminotransferase (ALT) in rats. The pancreas and liver of every rat were removed immediately and weighed. Portions of each rat's pancreas and liver were homogenized in a glass homogenizer using cold phosphate-buffered saline (1:4) (pH 7, 0.01 mol/L) containing a protease/phosphatase inhibitor cocktail. The resulting homogenate was filtered, centrifuged at 5000× g for 5 min, and then used to assess oxidative stress markers such as GSH, SOD, and MDA. Three repeats of the trials were conducted.
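The insulin-resistance index defined above (IRI = FBG × FSI/22.5) is a single-line calculation. The sketch below is a minimal example; note that the 22.5 divisor corresponds to the conventional HOMA-IR formulation, which expects glucose in mmol/L and insulin in µU/mL, and the input numbers are hypothetical, not values from this study.

```python
# Minimal sketch of the insulin-resistance index described above: IRI = FBG x FSI / 22.5.
# Units follow the conventional HOMA-IR convention (glucose in mmol/L, insulin in microU/mL);
# this is an assumption for the example, and the numbers are hypothetical placeholders.

def insulin_resistance_index(fbg_mmol_per_l: float, fsi_micro_u_per_ml: float) -> float:
    """HOMA-IR-style index: IRI = FBG * FSI / 22.5."""
    return fbg_mmol_per_l * fsi_micro_u_per_ml / 22.5

if __name__ == "__main__":
    fbg = 7.8    # mmol/L (hypothetical fasting glucose)
    fsi = 14.0   # microU/mL (hypothetical fasting insulin)
    print(f"IRI = {insulin_resistance_index(fbg, fsi):.2f}")
```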
Muscles were separated from connective tissue, weighed, quickly frozen in liquid nitrogen, and then kept at −70 °C for additional analysis. A glass homogenizer was used to homogenize the muscle component of each rat, using cold phosphate-buffered saline (1:4) (pH 7, 0.01 mol/L). After filtering and centrifuging at 5000× g for 5 min, the homogenate was used to measure GLUT4. The trials were carried out three times. For the preparation of histological sections of the different investigated groups, tissue samples of the liver and pancreas were fixed in 10 % neutral buffered formalin for 24–36 h at room temperature. The tissue samples underwent the following steps: dehydration at room temperature in ascending grades of ethyl alcohol, clearing at room temperature in xylene, and embedding in paraffin at 70 °C. After the construction of paraffin blocks, tissue sections of 5 μm thickness were de-paraffinized in xylene, rehydrated through descending grades of ethyl alcohol, and washed in running tap water. For hematoxylin and eosin staining, the sections were incubated in hematoxylin for 7 min at room temperature, washed in running water, and incubated in eosin for 2 min. The sections were washed in running tap water, dehydrated in ascending grades of ethyl alcohol, cleared in xylene, and mounted using Dibutylphthalate Polystyrene Xylene (DPX). Four-micrometer sections of formalin-fixed and paraffin-embedded tissue blocks of the liver were de-paraffinized in xylene, followed by rehydration in descending grades of alcohol and washing in running water. To block endogenous peroxidase activity, tissue sections were incubated in 3 % H2O2 for 10 min at room temperature. Antigen retrieval was performed by incubating the tissue sections in 0.01 mmol/L citrate buffer solution (pH 6) at 92 °C for 20 min. After washing with PBS buffer, tissue sections were incubated with either anti-tumor necrosis factor (TNF) mouse monoclonal antibody or anti-proliferating cell nuclear antigen (PCNA) mouse monoclonal antibody (Clone PC 10, Catalog # NB 500, Novus Biologicals) for 1 h at room temperature. Tissue sections were washed twice with PBS before incubation with goat anti-mouse secondary antibody, incubated in streptavidin-biotin for 10 min, and washed in PBS for 5 min after each step. The reaction products were visualized by immersing the sections in diaminobenzidine (DAB) for 15 min at room temperature. Nuclear counterstaining was done by immersion in Harris' hematoxylin for 2 min, followed by rapid washing in tap water to remove extra dye. Sections were dehydrated in ascending grades of alcohol, cleared in xylene, and mounted using DPX. TNF and PCNA expression were evaluated based on the percentage of positive cells and the intensity of immunostaining. Evaluation of histological and immunostained sections was performed using a binocular Olympus microscope CX40 RF200 (Olympus Optical Co., LTD). Results were analyzed by one-way ANOVA followed by Tukey multiple comparison tests using SPSS software (version 27). Where the values in each group were characterized by a normal distribution and equal variance, the data were presented as mean ± SD. For the non-parametric data (body weight and fasting blood glucose), the equality of group means was additionally checked by the Kruskal-Wallis one-way ANOVA by ranks and multiple comparison tests, and these data were presented as mean ± SE. Statistical significance was assumed if P < 0.05.
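For readers who want to reproduce the analysis pattern just described (one-way ANOVA with Tukey post hoc tests for the parametric endpoints, Kruskal-Wallis for body weight and fasting glucose), the sketch below shows an equivalent workflow in Python. The original analysis was performed in SPSS v27, so this scipy/statsmodels version is only an analogue, and the group arrays are placeholder values, not the study's raw data.

```python
# Minimal sketch of the statistical workflow described above: one-way ANOVA followed by
# Tukey's multiple-comparison test for normally distributed endpoints, and the
# Kruskal-Wallis test for the non-parametric endpoints. Data below are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-group measurements (n = 6 per group, as in the study design)
groups = {
    "control": np.array([92.0, 95.1, 90.3, 93.8, 91.2, 94.0]),
    "dexa":    np.array([139.2, 142.5, 138.0, 141.1, 140.3, 139.9]),
    "cur_nps": np.array([101.5, 104.2, 102.8, 100.9, 103.3, 105.0]),
}

# One-way ANOVA across the groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Tukey HSD post hoc comparisons
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Kruskal-Wallis test for a non-parametric endpoint
h_stat, p_kw = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4g}")
```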
Color, turbidity, and sedimentation of the curcumin nanoparticles were observed for five days following formulation to assess their stability. We found that the turbidity remained constant, there was no color shift, and there were no deposits of curcumin at the vial's bottom. • Particle size distribution: The particle size histogram of the curcumin-loaded chitosan nanoparticles reveals that the median particle size is 68.75 nm, with a range of 44.3–94.1 nm. The size distribution of the histogram shows that about 80 % of the particles fall between 74.6 and 83.4 nm. Fig. 1 Histogram of the particle size distribution of CUR NPs from TEM. • TEM morphology: To determine the morphology and shape of the nanoparticles, TEM measurements were performed. The TEM micrographs at the 100 and 500 nm scales showed that the chitosan nanoparticles loaded with curcumin have a homogeneous distribution, spherical shape, and homogeneous structure. Fig. 2 TEM images of CUR NPs (scale bars 100 and 500 nm) showing microspheres. Fig. 3 shows that the curcumin spectrum (purple) had two characteristic peaks (1117 cm−1 for C-O-C and 1509 cm−1 for OH), whereas the chitosan spectrum (black) showed three characteristic peaks (1079 cm−1 for C-O-C, 1424 cm−1, and 1383 cm−1 for NH2). When the curcumin-loaded chitosan-TPP nanoparticles (blue) were compared to curcumin, a distinct spectrum was seen, with new, strong peaks emerging at 3387 cm−1 and 1026 cm−1. Additionally, the peak at 1509 cm−1 shifted to 1511 cm−1. It is possible that, in the nanoparticles, the hydroxyl groups of curcumin and the ammonium groups of chitosan were connected. A previous study on curcumin loading into chitosan nanoparticles revealed similar findings. Fig. 3 FTIR analysis of chitosan (Chi), curcumin, and curcumin nanoparticles (CUR NPs). After the curcumin-loaded chitosan-TPP nanoparticles were prepared, the nano-formulation was collected and centrifuged. The remaining curcumin in the supernatant was then quantified using a spectrophotometer at a wavelength of 422.5 nm. The results showed that the encapsulation efficiency (EE%) and drug loading capacity (DLC%) for curcumin were 97.67 % and 52.87 %, respectively. Table 1 displays the five groups' body weights. The dexamethasone group showed a decrease in weight by day 28. When comparing the curcumin- and curcumin NP-treated groups to the dexamethasone control group, a significantly greater increase in body weight was noted (P < 0.001). The similarity in weight gain between the curcumin NPs and metformin groups (P = 0.865) suggests that nano-curcumin may offer metabolic benefits comparable to metformin in countering diabetes-induced catabolic effects. Including an effect size, such as Cohen's d, for weight gain differences between the treatment groups and the dexamethasone control could quantify this outcome more precisely.

Table 1 Effect of curcumin and curcumin NPs on body weight in dexamethasone-induced rats (n = 6 per group).

Groups                  Body weight (g)
                        Day 1           Day 14          Day 28
Normal Control          164.8 ± 1.0     195.6 ± 3.0     230.6 ± 3.9
Dexamethasone Control   181.5 ± 0.7 a   132.6 ± 3.0 a   133.5 ± 3.2 a
Metformin               177.0 ± 1.5     129.8 ± 1.9     200.6 ± 2.1
Curcumin                157.3 ± 1.7     131.3 ± 2.0     204.0 ± 1.0 bc
Curcumin NPs            162.8 ± 2.1     129.8 ± 1.8     204.0 ± 0.8 bc

Results were analyzed by Kruskal-Wallis one-way ANOVA; values are mean ± SE. a P < 0.001 vs. normal control; b P < 0.001 vs. Dexa control group; c P < 0.001 vs. metformin group.
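The effect-size reporting suggested above (Cohen's d, and Hedges' g in the next section) is a simple calculation from group means and standard deviations. The sketch below is a minimal, hypothetical illustration; the means and SDs are placeholders (the table reports mean ± SE, so SDs would first have to be recovered as SD = SE × √n before applying these formulas).

```python
# Minimal sketch of the effect-size calculations suggested in the text: Cohen's d with a
# Hedges' g small-sample correction, comparing a treatment group against the dexamethasone
# control. All numbers below are hypothetical placeholders, not the study's data.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(d, n1, n2):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    correction = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
    return d * correction

if __name__ == "__main__":
    # Hypothetical day-28 body weights: treatment vs. dexamethasone control, n = 6 each
    d = cohens_d(204.0, 2.4, 6, 133.5, 7.8, 6)
    print(f"Cohen's d = {d:.2f}, Hedges' g = {hedges_g(d, 6, 6):.2f}")
```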
Table 2 shows the fasting blood glucose levels of the five groups. Day 28 showed an elevation in fasting blood glucose levels in the dexamethasone group relative to the control group. Compared to the dexamethasone control group, there was a highly significant drop in fasting blood glucose levels in the groups treated with metformin, curcumin, and curcumin NPs (P < 0.001). The groups treated with curcumin and curcumin NPs had fasting blood glucose levels similar to those of the metformin group (P = 0.250 and 0.949, respectively). This effect highlights the potential of curcumin, particularly in its nanoform, to act as a glucose-lowering agent. Reporting the percent reduction in fasting blood glucose and calculating effect sizes like Hedges' g could provide a clearer picture of nano-curcumin's relative efficacy.

Table 2 Effect of curcumin and curcumin NPs on fasting blood glucose levels (mg/dl) in dexamethasone-induced rats (n = 6 per group).

Groups                  Day 1           Day 10          Day 14          Day 28
Normal Control          102.6 ± 2.0     101.8 ± 2.0     100.8 ± 1.3     92.2 ± 4.8
Dexamethasone Control   106.8 ± 3.4     128.0 ± 0.8 a   145.1 ± 1.1 a   140.1 ± 3.1 a
Metformin               103.1 ± 2.2     128.6 ± 0.8     144.5 ± 1.2     101.3 ± 6.9
Curcumin                101.5 ± 1.5     128.0 ± 0.5     143.5 ± 1.7     102.0 ± 1.2 b
Curcumin NPs            104.3 ± 2.4     128.3 ± 0.8     146.3 ± 0.5     103.0 ± 3.6 b

Results were analyzed by Kruskal-Wallis one-way ANOVA; values are mean ± SE. a P < 0.001 vs. normal control; b P < 0.001 vs. Dexa control group; c P < 0.001 vs. metformin group.

Table 3 summarizes the lipid profiles of the five groups. The dexamethasone group exhibited elevated levels of cholesterol, triglycerides, and LDL, and decreased HDL, in contrast to the normal group. Compared to the dexamethasone control group, curcumin and curcumin NPs markedly improved lipid profiles, reducing cholesterol, triglycerides, and LDL while increasing HDL levels (P < 0.001). Interestingly, the curcumin NPs group showed greater lipid-lowering effects than metformin (P < 0.001), indicating superior efficacy in managing dyslipidemia. Presenting confidence intervals for the changes in lipid levels and calculating effect sizes would reinforce the clinical importance of these improvements, especially since lipid control is crucial in diabetes management to prevent cardiovascular complications.

Table 3 Effect of curcumin and curcumin NPs on lipids in dexamethasone-induced rats (n = 6 per group).

Groups                  CH (mg/dl)       TGS (mg/dl)     HDL (mg/dl)      LDL (mg/dl)
Normal Control          93.5 ± 1.03      55.9 ± 0.7      27.8 ± 0.6       54.5 ± 0.9
Dexamethasone Control   206.3 ± 1.01 a   173.3 ± 2.6 a   5.3 ± 0.3 a      166.2 ± 1.1 a
Metformin               121.6 ± 1.15     81.2 ± 0.9      22.5 ± 0.5       82.7 ± 0.6
Curcumin                139.6 ± 0.2 bc   90.8 ± 0.4 bc   21.9 ± 0.9 b     99.4 ± 0.9 bc
Curcumin NPs            101.3 ± 2.4 bc   93.2 ± 0.3 bc   25.7 ± 0.08 bc   57.0 ± 2.4 bc

Values are mean ± SD. CH = cholesterol, TGS = triglycerides, HDL = high-density lipoproteins, LDL = low-density lipoproteins. a P < 0.001 vs. normal control; b P < 0.001 vs. Dexa control group; c P < 0.001 vs. metformin group.

The liver function parameters of the five groups are shown in Fig. 4. Compared to the normal group, the dexamethasone group had higher levels of AST and ALT and lower levels of albumin.
The AST and ALT levels were significantly lower, and the albumin level significantly higher, in the metformin, curcumin, and curcumin NPs treatment groups than in the dexamethasone control group (P < 0.001). The AST, ALT, and albumin levels in the curcumin/curcumin NPs treated groups were compared with those in the metformin group (P < 0.001/P < 0.001, P < 0.001/P < 0.001, and P < 0.001/P = 0.932, respectively). This underscores nano-curcumin's role in supporting liver health. Adding effect sizes for AST, ALT, and albumin changes relative to the dexamethasone group could illustrate the magnitude of liver protection provided by curcumin NPs. Fig. 4 Effect of Dexa, metformin, CUR, and CUR NPs on liver function parameters (AST, ALT, albumin). Results were analyzed by one-way ANOVA and Tukey's post hoc tests and are shown as mean ± SD (n = 6); a, b, c: p < 0.001 compared to the normal control, Dexa control, and metformin groups, respectively. In Table 4, which illustrates oxidative stress, the administration of dexamethasone resulted in a significant (P < 0.001) reduction in the levels of the antioxidant enzyme SOD and the antioxidant marker GSH, as well as an increase in the lipid peroxidation marker MDA, in the liver and pancreatic homogenates of the dexamethasone control animals. Compared to the dexamethasone control rats, treatment with metformin, curcumin, and curcumin NPs restored the GSH level and SOD enzyme activity and lowered the MDA level, with nano-curcumin showing superior effects. This indicates that nano-curcumin effectively counters oxidative stress in liver and pancreatic tissues, which is a critical component of diabetes complications. Including a comparison of mean percent changes and confidence intervals for these markers would emphasize nano-curcumin's protective effects on tissue health.

Table 4 Effect of curcumin and curcumin NPs on liver and pancreas tissue antioxidant parameters in dexamethasone-induced rats (n = 6 per group).

                        MDA (nmol/g tissue)              GSH (Pg/g tissue)                SOD (U/g tissue)
Groups                  Liver           Pancreas         Liver           Pancreas         Liver           Pancreas
Normal Control          8.1 ± 0.07      6.7 ± 0.05       48.1 ± 0.08     38.2 ± 0.05      41.2 ± 0.09     31.6 ± 0.00
Dexamethasone Control   36.1 ± 0.05 a   25.8 ± 0.05 a    17.6 ± 0.1 a    14.4 ± 0.04 a    15.1 ± 0.07 a   11.5 ± 0.06 a
Metformin               28.7 ± 0.08     20.7 ± 0.05      20.3 ± 0.08     17.1 ± 0.05      17.5 ± 0.1      15.5 ± 0.08
Curcumin                17.4 ± 0.1 bc   15.2 ± 0.05 bc   31.7 ± 0.1 bc   26.8 ± 0.06 bc   30.2 ± 0.05 bc  23.4 ± 0.04 bc
Curcumin NPs            10.5 ± 0.05 bc  10.9 ± 0.1 bc    40.1 ± 0.1 bc   32.7 ± 0.00 bc   34.7 ± 0.05 bc  28.06 ± 0.08 bc

Values are mean ± SD. MDA = malondialdehyde, GSH = glutathione, SOD = superoxide dismutase. a P < 0.001 vs. normal control; b P < 0.001 vs. Dexa control group; c P < 0.001 vs. metformin group.

The glycemic parameters and GLUT4 levels of the five groups are summarized in Fig. 5 and Fig. 6. Dexamethasone significantly increased blood insulin and HOMA-IR and decreased insulin-stimulated skeletal muscle glucose transport (p < 0.001). Compared with the dexamethasone group, these changes were significantly reversed by metformin, curcumin, and curcumin NPs. Curcumin NPs appeared to be more effective than curcumin and metformin in improving glycemic control (insulin, HOMA-IR, and GLUT4). This suggests that nano-curcumin enhances insulin sensitivity and glucose transport, crucial for diabetes management.
Adding effect sizes for HOMA-IR and GLUT4 expression changes could quantify the impact of curcumin NPs on glycemic parameters more precisely. Fig. 5 Effect of Dexa, metformin, CUR, and CUR NPs on glycemic parameters (insulin and HOMA-IR). Results were analyzed by one-way ANOVA and Tukey's post hoc tests and are shown as mean ± SD (n = 6); a, b, c: p < 0.001 compared to the normal control, Dexa control, and metformin groups, respectively. Fig. 6 Effect of Dexa, metformin, CUR, and CUR NPs on muscle GLUT4 concentration. Results were analyzed by one-way ANOVA and Tukey's post hoc tests and are shown as mean ± SD (n = 6); a, b, c: p < 0.001 compared to the normal control, Dexa control, and metformin groups, respectively. For the control group, the liver showed preserved hepatic architecture with identifiable lobulation, central venules, and portal areas. Hepatocytes are arranged in two-cell-thick plates with patent hepatic sinusoids. The liver cells have uniform size and shape with central nuclei. There is no evidence of hepatic degenerative changes, steatosis, necrosis, or inflammation. Fig. 7 Histological sections of liver tissue from the different study groups: A- Control group showed normal hepatic lobulation with preserved central vein (CV) and uniform hepatocyte cording (black arrow), separated by patent sinusoids (blue arrow). B and C- Liver tissue of dexamethasone-treated rats showed large areas of geographic necrosis (B) with dense inflammation (C, thick red arrows). Hepatocytes showed frequent micro-vesicular steatosis (thin red arrows). D- Rats treated with metformin showed mild degenerative changes (granular cytoplasm and cloudy swelling; black arrows), residual inflammation (thick red arrow), congested central vein (white arrow), and micro-vesicular steatosis (red arrow). E- Rats treated with curcumin showed residual portal inflammation (thick red arrow) and cloudy swelling of hepatocytes (black arrow). F- Treatment with nano-curcumin induced remarkable improvement of hepatic morphology with only residual focal mild cloudy swelling of hepatocytes (black arrow). H&E stained sections; magnification is ×400 for all. Administration of dexamethasone induced widespread damage of the liver tissue. There are multiple foci of large geographic necrosis separated by zones of inflammation and degenerated liver tissue. Viable hepatocytes showed cloudy swelling, granular and glassy cytoplasm, and micro-vesicular and macro-vesicular steatosis. The central venules and hepatic sinusoids showed focal congestion. Hepatic lobules and portal areas are sites of patchy moderate inflammatory reactions, mainly neutrophils and lymphocytes. Treatment of rats with metformin induced improvement of the hepatic tissue damage compared to dexamethasone-treated rats. The necrotic effect of dexamethasone was very minimal. However, there was residual moderate venous and sinusoidal congestion, and hepatocytes showed patchy degenerative changes, namely micro-vesicular steatosis and cloudy swelling. Portal areas and hepatic lobules showed mild to moderate inflammation rich in lymphocytes. Treatment of rats with either curcumin or nano-curcumin induced prominent improvement of the damaging effect of dexamethasone on liver tissue. The liver tissue of both curcumin- and nano-curcumin-treated rats showed retained normal lobular architecture with normally appearing hepatic cording and almost absent steatosis.
Residual histological changes include mild focal cloudy swelling and mild portal inflammation in the liver tissue of curcumin-treated rats and mild cloudy swelling in the liver tissue of nano-curcumin-treated rats. The main histopathological changes of the liver tissue in the different groups are summarized in Table 5.

Table 5 Main histopathological findings of liver tissues in the different study groups.

Parameter                               Control   Dexamethasone   Metformin   Curcumin   Curcumin NPs
Necrosis                                –         ++++            +           –          –
Central veins and sinusoid congestion   –         +++             +           –          –
Steatosis                               –         +++             +           –          –
Cloudy swelling                         –         +++             +           +          +
Portal/lobular inflammation             –         ++++            +           +          –

Absent (−), minimal (+), mild (++), moderate (+++), severe (++++).

Histological sections of pancreatic tissue obtained from the control group showed normal lobulation of the pancreas with preserved exocrine and endocrine components. Pancreatic acini look uniform in size and shape. They are lined by a single layer of cuboidal cells with eosinophilic cytoplasm and have uniform nuclei. Multiple small aggregates of islet cells were identified. They have a round to oval uniform shape with pale granular cytoplasm and uniform central nuclei. The stroma between the islet cells contains a thin capillary network. Fig. 8 Histological sections of pancreatic tissue from the different study groups: A and B- Control group: pancreatic tissue with preserved lobules (thick black arrows) and identified islet cells (thin black arrow). Pancreatic acini have uniform size and shape with a uniform cuboidal cell lining (red arrows). C and D- Dexamethasone-treated rats: pancreatic tissue showed extensive necrosis (thick red arrows) with residual ghosts of pancreatic acini (thin red arrows). E- Metformin-treated rats: pancreatic tissue showed focal degeneration of pancreatic acini (white arrow). F and G- Curcumin-treated rats: there is residual cytoplasmic vacuolation of pancreatic acini (arrowhead) and multiple congested vessels (thick black arrows). H- Nano-curcumin-treated rats: the pancreas showed normal architecture of both the exocrine component (red arrows) and islet cells (thin black arrow), with few congested vessels (thick black arrow). H&E stained sections; magnification is ×100 for A and C and ×400 for the others. Pancreatic tissue of dexamethasone-treated rats showed widespread necrosis and degeneration. Necrotic areas appeared as zones of structureless eosinophilic tissue alternating with ghosts of acini and ghosts of islet cells. No cellular or nuclear details could be detected. Treatment with metformin showed remarkable improvement of the pancreatic tissue with markedly reduced necrosis. Pancreatic tissue showed focal mild degeneration of exocrine pancreatic acini with almost entirely viable islet cells. Residual congested vessels and patchy mild stromal inflammatory reactions were observed. Treatment of rats with both curcumin and nano-curcumin induced prominent improvement of the pancreatic tissue with absent necrosis. Pancreatic tissue retained the normal architecture of both exocrine and endocrine components. In curcumin-treated rats, pancreatic tissue showed focal mild degeneration of exocrine pancreatic acini in the form of cytoplasmic cloudy swelling and cytoplasmic vacuolation. Islet cells look viable with no recorded necrosis. Multiple residual congested vessels were identified. In nano-curcumin-treated rats, only a few congested vessels were seen, with no evidence of residual necrosis, degeneration, or inflammatory reaction.
The main histopathological changes of pancreatic tissue in the different groups are summarized in Table 6.

Table 6 Histopathological findings of pancreatic tissues in the different groups.

Parameter                                         Control   Dexamethasone   Metformin   Curcumin   Curcumin NPs
Necrosis/degeneration of the exocrine pancreas    –         ++++            +           –          –
Necrosis/degeneration of the endocrine pancreas   –         ++++            –           –          –
Inflammation                                      –         ++              +           +          –

Absent (−), minimal (+), mild (++), moderate (+++), severe (++++).

Expression of TNF was detected as brown cytoplasmic staining. In general, the expression of TNF was faint, or weak to moderate, in most positive cells of the different investigated groups. In addition, the expression was higher in islet cells compared to the exocrine pancreas. Expression of TNF was faint in the pancreatic tissue of the control group, with sporadic positive cytoplasmic TNF expression by scattered cells in the other study groups. The average proportion of TNF-positive cells was 1 % and 12 % in the control group and the dexamethasone-treated group, respectively, and 8 %, 6 %, and 1 % in the metformin-treated, curcumin-treated, and nano-curcumin-treated groups, respectively. Fig. 9 Expression of TNF in pancreatic tissue of the different study groups: faint to moderate cytoplasmic expression of TNF (red arrows) in pancreatic tissue of the control group (A), dexamethasone-treated group (B), metformin-treated group (C), curcumin-treated group (D), and nano-curcumin-treated group (E). Immune-stained sections; magnification is ×400 for all. Expression of PCNA was detected as brown nuclear staining in positive cells. In general, expression of PCNA was moderate to strong. Additionally, expression of PCNA was relatively higher in exocrine pancreatic acini compared to islet cells. Among the investigated groups, expression of PCNA was negative in the pancreatic tissue of dexamethasone-treated rats due to the extensive necrosis of the pancreatic tissue. The average percentage of PCNA-positive cells was 30 %, 75 %, 55 %, and 45 % in the pancreatic tissue of the control group, metformin-treated group, curcumin-treated group, and nano-curcumin-treated group, respectively. Fig. 10 Expression of PCNA in pancreatic tissue of the different study groups: nuclear expression of PCNA (red arrows) in pancreatic tissue of the control group (A), dexamethasone-treated group (B), metformin-treated group (C), curcumin-treated group (D), and nano-curcumin-treated group (E). Immune-stained sections; magnification is ×400 for all. The histopathological results demonstrated that both curcumin and curcumin NPs significantly reduced liver and pancreatic tissue damage, but curcumin NPs showed fewer residual inflammatory changes and less necrosis. The reduced TNF expression (an inflammation marker) and enhanced PCNA expression (a cellular repair marker) further support the regenerative potential of curcumin NPs. Calculating effect sizes for TNF and PCNA expression differences among groups could illustrate the degree of tissue recovery provided by curcumin NPs. Incorporating effect sizes and confidence intervals into the statistical analysis would not only strengthen the clinical significance of these findings but also clarify the magnitude of the benefits of curcumin NPs relative to both untreated diabetic states and metformin. Insulin resistance is one of the potential pathways for the development of adult-onset diabetes, according to several theories.
Insulin resistance induced by dexamethasone causes hyperinsulinemia, hyperglycemia, dyslipidemia, hepatic steatosis, muscle weakness, and body weight loss. Insulin resistance occurs prior to the manifestation of symptoms. Higher doses of dexamethasone are used in several therapeutic conditions. Insulin resistance can be treated early to stop the emergence of further issues. In general, glucocorticoids (GCs) raise blood sugar levels through a variety of processes, including enhanced hepatic glucose synthesis (gluconeogenesis), decreased peripheral glucose uptake into muscle and adipose tissue, and breakdown of muscle and fat to supply extra substrates for glucose synthesis; long-term exposure to GCs is linked to the development of severe insulin resistance and metabolic dysfunction, although the exact biochemical mechanism underlying this association remains unclear. The current investigation supports these results by showing that dexamethasone markedly raised insulin and blood glucose levels in insulin-resistant rats. Due to its high metabolism, quick excretion from the body, and poor absorption, curcumin has a low bioavailability. A nanosized particle is helpful for delivering drugs to their target organs. In general, nanoparticle technology is employed for drugs that have poor oral solubility and bioavailability. Nanosized particles can improve a drug's stability, bioavailability, and absorption. In the current work, curcumin was loaded onto chitosan-TPP nanoparticles using the ionotropic gelation process. The TEM analysis of the CUR NPs showed a median diameter of 68.75 nm and a spherical shape. The loading capacity and encapsulation efficiency were found to be 52.87 % and 96.67 %, respectively. According to the FTIR studies, the hydroxyl groups of curcumin and the ammonium groups of chitosan bind CUR and chitosan more closely together. In the current investigation, treatment with dexamethasone caused insulin resistance, which in turn caused diabetes, dyslipidemia, hepatic steatosis, muscle weakness, and body weight loss. The administration of curcumin and curcumin nanoparticles (100 mg/kg, orally) resulted in a considerable reduction in the high serum glucose, insulin, and lipid levels and an elevation in muscle mass and body weight. Additionally, the liver and pancreas pathological abnormalities caused by dexamethasone were ameliorated. Here, dexamethasone administration efficiently produced insulin resistance in normal rats, as evidenced by an elevated HOMA-IR index, hypertriglyceridemia, hyperglycemia, and hyperinsulinemia. Prior research demonstrated that rats could develop IR when exposed to dexamethasone at varying dose levels and for varying lengths of time. Suppression of hepatic hexokinase activity, suppression of hepatic glucose oxidation, and promotion of hepatic gluconeogenesis are among the proposed mechanisms of dexamethasone-induced insulin resistance. Insulin secretion and action are biologically related. Insulin-related indicators, such as HOMA-IR, have been related to insulin function in type 2 diabetes and to the health of insulin-producing cells. The current study's findings suggest that, by enhancing beta-cell activity and lowering the insulin resistance index, CUR and CUR NPs can improve type 2 diabetes. Mantzorou et al. showed that CUR reduced plasma glucose levels and ameliorated insulin resistance in diabetic rats. Additionally, CUR NPs supplementation significantly decreased FBS in comparison to the placebo in the Rahimi et al. research.
In a different investigation, CUR NPs (doses of 10 and 50 mg) decreased FBS by 32 % and 37 %, respectively, in rats with type 1 diabetes. These outcomes are consistent with those of Ahmed et al., who reported that treatment with Lut/ZnO NPs significantly decreased the levels of FBG, insulin, and HOMA-IR. These findings indicated that Lut/ZnO NPs successfully improved insulin sensitivity and glucose tolerance in rats with type 2 diabetes. Another impact of dexamethasone that contributes to the development of insulin resistance in rats is the downregulation of GLUT4. In insulin-mediated induction, insulin-regulated GLUT4 translocation proceeds through a signaling mechanism that involves the lipid kinase phosphatidylinositol 3-kinase (PI3K). When insulin attaches to the insulin receptor on the surface of the target cell, the receptor changes shape, activating its tyrosine-kinase domain inside the cell. The proto-oncoprotein c-Cbl and insulin receptor substrates (IRS) are subsequently phosphorylated. The essential substrates in muscle and fat cells are IRS-1 and IRS-2. These substrates are found adjacent to the plasma membrane and attract effector molecules, including PI3K, which is involved in the translocation of GLUT4 to the plasma membrane. In non-insulin-mediated stimulation, by contrast, GLUT4 translocation to the plasma membrane in skeletal muscle is stimulated by physical exercise. A mechanism other than PI3K, which is required for the insulin-stimulated pathway, is responsible for this activation. To fulfill the increased energy demands of skeletal muscle during exercise, skeletal muscle contraction triggers 5′-AMP-activated protein kinase (AMPK), which is thought to translocate exercise-responsive GLUT4-containing shuttles to the cell surface to mediate glucose transport. According to earlier research, dexamethasone raised the levels of free fatty acids in rats, which may decrease the expression of the GLUT4 transporter in cell membranes, reducing glucose absorption and glucose metabolism in regions responsible for storing glucose. In our study, CUR and CUR NPs reduced hyperglycemia and improved insulin sensitivity by increasing GLUT4. According to Zhang et al., CUR has been demonstrated to improve cellular glucose uptake by promoting GLUT4 translocation from intracellular compartments to the plasma membrane, hence improving insulin sensitivity in the muscle tissue of insulin-resistant rats. The results of our study are consistent with this report. The rats' body weight was significantly lower after receiving dexamethasone (1 mg/kg i.p. daily) for 14 days than that of the normal control group (p < 0.001). Dexamethasone treatment has been demonstrated to cause skeletal muscle atrophy, which may explain the weight loss, through the breakdown of muscle proteins as well as the inhibition of muscle protein synthesis. As shown in Table 1 and Fig. 4, the groups treated with CUR and CUR NPs showed a much greater gain in body weight than the Dexa-treated group (p < 0.001); the weight gain in these rats is attributed to an increase in muscle mass. This observation aligns with several earlier research works that used different experimental strategies. In particular, mice given dexamethasone showed reduced weight gain compared to those fed a normal or high-fat diet.
The current investigation demonstrated that daily administration of dexamethasone resulted in a statistically significant increase in fasting blood glucose levels compared to the normal control group. When metformin, curcumin, and curcumin NPs were given, the fasting glucose levels decreased. Days 10 and 14 of the dexamethasone therapy showed an increase in fasting glucose levels. Surprisingly, blood glucose levels in the dexamethasone-treated groups increased from around 128 mg/dL to about 144 mg/dL within a few days, as shown in Table 2. This finding agreed with a prior study that examined the impact of curcumin on serum FBG levels in rats with diabetes. For 14 days, a daily dose of 100 mg/kg of CUR and CUR NPs was administered. By the end of the trial, the fasting blood glucose levels of both curcumin groups had significantly decreased compared to the insulin resistance group (Dexa group). The administration of glucocorticoids causes lipid disturbances; it raises triglyceride, total cholesterol, and LDL cholesterol levels and lowers HDL levels, which may be a secondary cause of dyslipidemia. The mechanisms behind glucocorticoid-induced dyslipidemia may include impaired LDL catabolism, increased lipoprotein lipase activity, and a subsequent elevation of LDL level due to increased plasma insulin. The present study found that treatment with CUR and CUR NPs significantly improved the altered lipid profile. Increased HDL levels and significantly reduced high CH, TG, and LDL were observed with CUR and CUR NPs (100 mg/kg, oral). The elevation of HDL by CUR and CUR NPs in the current study is consistent with previous research. These elevated HDL levels should help reduce a variety of dyslipidemia-related cardiovascular problems. According to other studies that support our findings, CH, TG, and LDL were all markedly decreased by CUR and CUR NPs [ , , ]. In the Rahimi et al. study, patients with T2DM received CUR NPs supplementation (80 mg/day) versus a placebo for three months. The findings demonstrated that CUR NPs supplementation substantially reduced triglycerides, cholesterol, and LDL when compared to placebo. In a different study, hepatic glucose 6-phosphatase and phosphoenolpyruvate carboxykinase, as well as regulation of the SREBP (sterol regulatory element-binding proteins) cycle, were found to mediate the effects of CUR in diabetic rats treated with doses of 40 and 80 mg/kg of the drug. The results indicated a decrease in the serum levels of FBS, insulin, cholesterol, triglycerides, and LDL, and in insulin resistance, in the rats treated with CUR. The outcomes of these investigations align with the findings of our investigation. According to RM El-Gharbawy et al., in type 2 diabetes, zinc oxide nanoparticles correct abnormalities in lipid metabolism that generally lead to elevated serum lipid levels. These outcomes concur with the current study's conclusions. Through a combination of increased fatty acid synthesis and impaired fatty acid β-oxidation in the liver, glucocorticoids can contribute to the formation of fatty liver. Elevated amounts of free fatty acids have been linked to the emergence of skeletal muscle insulin resistance, hypertension, and fatty liver, which is thought to represent the hepatic outcome of the metabolic syndrome because of hepatic insulin resistance. The current investigation confirms the previous finding that the injection of dexamethasone (1 mg/kg, i.p.) caused the development of insulin resistance-related hepatic steatosis.
The pathological alterations in the liver were significantly ameliorated by treatment with CUR and CUR NPs (100 mg/kg, orally). This could be the result of a reduction in circulating fatty acid levels, which would reduce dexamethasone-induced liver fat accumulation. The present study demonstrates that Dexa-induced hepatotoxicity is characterized by a significant increase in the serum activities of ALT and AST and a decrease in albumin. These results are consistent with a recent study we conducted, in which rats given Dexa showed higher serum levels of liver marker enzymes. The activities of these enzymes are sensitive markers of liver damage and are correlated with the severity of the damage. There were notable variations in the levels of albumin, ALT, and AST between the CUR and CUR NPs treatment groups, according to the statistical results. Because curcumin is absorbed slowly when taken orally, CUR NPs at a dose of 100 mg/kg BW were more effective than curcumin at the same dose in lowering AST and ALT levels and increasing albumin levels. As a result, curcumin that has been converted into nanoparticle form may be more effective at lowering and preventing the production of free radicals, resulting in decreased levels of AST and ALT and increased levels of albumin. Significant histopathological changes further confirmed the liver tissue damage caused by Dexa, including degenerative alterations such as inflammation, necrosis, cloudy swelling, granular and glassy cytoplasm, micro-vesicular and macro-vesicular steatosis, and focal congestion. Our findings are consistent with those of Safaei et al., who demonstrated that Dexa induced extreme hepatocyte degeneration, necrosis, and inflammatory cell infiltration. Gutiérrez et al. revealed that selenium nanoparticle intake had a beneficial effect on liver function in diabetic rats, as measured by a reduction in ALT, AST, and ALP. These data are in agreement with the results of the current study. Consuming CUR-loaded PLA–PEG NPs improved liver enzymes in diabetic rats, as indicated by a decrease in ALT and AST, according to El-Naggar et al.; these findings are consistent with those of the present study. These findings suggested that the prepared nanoparticles could be utilized to prevent or ameliorate diabetes. It has long been believed that oxidative stress is one of the primary damaging factors that cause the development of insulin resistance. Although the exact mechanism of oxidative stress production is still up for discussion, it has been suggested that several mechanisms are involved. One of the primary causes of oxidative stress is reactive oxygen species (ROS), which are an unavoidable consequence of metabolism. The primary process that produces ROS is the passage of electrons along the respiratory chain of the mitochondria and their subsequent transfer to molecular oxygen, which forms the superoxide anion (O2−). ROS are also produced when the enzyme NADPH oxidase is activated. Increased auto-oxidation and non-enzymatic glycosylation are among the potential mechanisms that significantly trigger the formation of free radicals and radical-induced lipid peroxidation. Pro-oxidative conditions are caused by increased ROS production, which throws off the equilibrium between oxidant and antioxidant status. In the current study, the average tissue MDA level in the pancreas and liver increased in the dexamethasone group.
Elevations in MDA levels are indicative of increased lipid peroxidation, which is evidence that glucocorticoid medication causes oxidative stress. Compared to the group that received CUR, the treatment group that received CUR NPs exhibited the lowest levels of MDA. This is because curcumin that has been transformed into nanoparticles has a higher bioavailability, which enables it to be better absorbed by the body, reach its intended organs, and lower liver and pancreas tissue MDA levels. By elevating GPx activity and lowering elevated liver MDA levels, nanoparticles help to promote curcumin's absorption in intestinal epithelial cells and enhance its hepatoprotective effects in rats. A prior study that estimated the antioxidant and antihyperglycemic effects of Allium boonei extract in rats with dexamethasone-induced hyperglycemia revealed similar results. Daily injection of dexamethasone (0.4 mg/kg) for 30 days to produce hyperglycemia resulted in a notable increase in the MDA level. The lipids that accumulate in the liver may oxidize, releasing free radicals such as reactive oxygen species (ROS). By destroying unsaturated fatty acids in cell membranes, ROS cause lipid peroxidation and decrease endogenous antioxidants, which damages the liver. Glutathione (GSH), which is a substrate for glutathione peroxidase (GPx) and glutathione S-transferases (GST), is the first line of defense against free radicals. It is responsible for replenishing GPxs, which detoxify lipid hydroperoxides and H2O2. In this study, the liver and pancreatic GSH levels were decreased in the Dexa-treated group compared to the normal control group. In contrast, CUR NPs therapy led to a greater elevation in pancreatic and liver GSH levels than did CUR. In accordance with these results, Lv et al. (2018) reported that Dexa treatment reduced glutathione peroxidase levels in broiler liver. SOD activity in hepatic and pancreatic tissues increased in animals treated with metformin and CUR, and was higher still with CUR NPs, compared to the diabetic control group. This increase was evident in both the liver and the pancreas. Karihtala and Soini suggest that SOD readily converts damaging superoxide radicals into hydrogen peroxide by disproportionation. By increasing the synthesis of SOD, an enzyme essential to the body's antioxidant protection, CUR and CUR NPs performed better than the reference drug (metformin), with CUR NPs being especially effective. Our results in diabetic rats demonstrated that the examined treatment options, particularly 100 mg/kg CUR NPs, effectively restored the activities of antioxidant enzymes in the pancreas and liver, reducing the oxidative stress marker MDA and increasing GSH and SOD in the hepatic and pancreatic tissues. Additionally, curcumin can increase overall antioxidant activity, improve islet viability, and reduce the generation of reactive oxygen species (ROS) in islets, hence restoring pancreatic islets. Curcumin's ability to scavenge free radicals by interacting with the oxidative cascade to reduce oxidative enzymes, restore the antioxidant status, and chelate metal ions has been linked to its ameliorative effect on hepatic GSH levels, which prevents the Fenton reaction. Histological investigations of the liver and pancreas conclude that dexamethasone induces significant liver and pancreas damage in rats, characterized by necrosis, inflammation, and steatosis.
Treatment with metformin mitigates these effects to a moderate extent, while curcumin and nano-curcumin offer substantial protection, preserving hepatic architecture and significantly reducing pathological changes. These findings highlight the potential of curcumin and nano-curcumin as effective therapeutic agents against dexamethasone-induced liver and pancreas damage. Limitations of the current study include the following: 1) The use of animal models to study type 2 diabetes (T2DM) has limitations, as the physiology and metabolic responses in animals may not fully replicate those in humans. This can affect the generalizability of the results to human disease mechanisms and treatment responses. 2) A small sample size reduces the statistical power of the study, making it difficult to detect significant effects or accurately estimate variation within groups. It also limits the ability to generalize the results to a broader population. 3) A short study duration does not fully capture the long-term effects of interventions or assess the progression of T2DM. Chronic conditions such as T2DM require longer observation periods to understand the sustained effect of treatments. 4) Lack of sample diversity: a homogeneous sample (e.g., age, gender, genetic background) limits the applicability of the results to diverse populations and may not reflect varying responses due to genetic, lifestyle, or environmental factors. 5) When experiments are conducted in vitro, the results may not fully mimic in vivo conditions, where complex interactions and regulatory systems influence cellular responses. These limitations highlight the need for further research using larger, more diverse samples, human models, and extended study periods to confirm and extend these findings. This study makes a significant novel contribution to the field of diabetes research by highlighting the potential of nanoparticle-based formulations, particularly nano-curcumin, as an advanced treatment strategy for T2DM. By enhancing the bioavailability and effectiveness of curcumin, the nano-formulation addresses the limitations of conventional curcumin supplementation, which is often limited by poor absorption. Based on the study's findings, nano-curcumin offers significant potential for clinical application in human diabetes treatment. The nano-formulated curcumin demonstrated enhanced efficacy in lowering fasting blood glucose, improving lipid profiles, and reducing liver and pancreas tissue damage. Its antioxidant properties, particularly the increased SOD and GSH and reduced MDA levels, suggest a protective effect against oxidative stress. These results support nano-curcumin's viability as an adjunct therapy for T2DM, potentially complementing traditional treatments by improving metabolic control and reducing diabetic complications. Amany M. Hamed: Writing – review & editing, Writing – original draft, Validation, Software, Resources, Methodology, Formal analysis, Data curation, Conceptualization. Dalia A. Elbahy: Writing – review & editing, Software, Resources, Data curation. Ahmed RH. Ahmed: Writing – review & editing, Writing – original draft, Validation, Resources, Methodology, Data curation. Shymaa A. Thabet: Writing – original draft, Methodology, Data curation. Rasha Abdeen Refaei: Writing – original draft, Validation, Software, Resources, Methodology, Data curation. Islam Ragab: Writing – review & editing, Validation, Software.
Safaa Mohammed Elmahdy: Writing – review & editing, Visualization, Validation, Resources, Methodology, Investigation, Formal analysis, Data curation. Ahmed S. Osman: Writing – review & editing, Visualization, Software, Formal analysis, Data curation. Azza MA. Abouelella: Writing – review & editing, Writing – original draft, Methodology, Data curation, Conceptualization. Not applicable. All data generated or analyzed during this study are included in this published article. The manuscript is original. It has not been published previously by any of the authors and is not under consideration by any other journal at the time of submission. The research did not receive external funding. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Review | biomedical | en | 0.999997
PMC11696671 | Growing environmental concerns about fossil fuel resources have led to a significant focus on developing environmentally favourable power generation methods [ , ]. Solid oxide fuel cells (SOFCs) have gained significant attention due to their ability to efficiently convert chemical energy into electrical power through electrochemical reactions. The SOFCs' high operational temperature range provides several advantages in their applications [ , ]. These benefits include a high rate of electrochemical reactions, high efficiency, adaptability to different fuels such as pure hydrogen [ , ], biogas, natural gas, methanol, and ethanol, minimal pollutant emissions, the ability to function as a hybrid energy system, and various geometrical configurations [ , , ]. However, the main challenges for SOFC technology are reducing cost and start-up time, improving durability, limiting the degradation that results from high temperatures, and enhancing SOFC system efficiency. Direct internal reforming (DIR) has the potential to substantially decrease the overall cost and complexity of SOFC systems while simultaneously increasing their overall efficiency by utilizing the heat produced by the SOFC for the endothermic reforming reactions in the anode. Menon et al. investigated H-SOFC systems with DIR numerically. The effect of different operational conditions on species transport, temperature distribution, and electrochemistry was illustrated. The H-SOFC performance was analysed under various operating conditions, including the influence of partitioning the anode into multiple regions with various catalytic areas. Kumuk et al. created a computational model of an electrolyte-supported SOFC powered by hydrogen and coal gases with different electrolytes. The impact of temperature changes on the efficacy of proton-conducting and oxygen ion-conducting electrolyte SOFCs was simulated numerically. It was demonstrated that the O-SOFC was more efficient than the H-SOFC at higher temperatures, while the H-SOFC performed better at medium temperatures. Hydrocarbons and biomass are particularly suitable for SOFCs because of their low cost and wide availability. However, the complex composition of these fuels results in multiple electrochemical and chemical reactions. Gholaminezhad et al. modified Fick's model to develop a 1D channel-level model of a SOFC fuelled by methane. They simulated the electrochemistry and mass transfer phenomena of a SOFC to predict current density limitations. Min et al. developed a 1D model for investigating the thermal and electrochemical characteristics of a SOFC stack. A parametric analysis was performed to determine the optimal operating conditions of SOFCs by varying current density, fuel ratio, and pressure. The results indicated that high efficiency was achieved using low current density, high fuel consumption, and low air usage. Tu et al. looked into how the fuel composition, thermal efficiency, and electrical efficiency of SOFCs were affected by different ways of processing methane. They showed that steam reforming of methane produces more H2 and CO per mole of methane, resulting in high efficiency but low thermal efficiency. They also showed that SOFCs can achieve high efficiency and low carbon deposition if the right O/C ratio is chosen during the pretreatment of methane. This leads to the complexity of heat generation processes and complicates performance prediction and optimization. Takino et al. experimentally developed a modified equation for the exchange current density of a SOFC anode using methane fuel.
The combination of their equation with numerical simulation was used to investigate the efficiency factors of an electrolyte-supported SOFC. The modified equation reproduced the V-I characteristics and temperature distribution. Although computational fluid dynamics (CFD) has demonstrated high precision for evaluating performance, its complexity prevents online prediction and optimization. In contrast to the commonly used 2D or 3D multi-physics simulation (MPS) approach, artificial intelligence (AI) models build a black-box model from a set of parameters rather than solving the non-linear equation systems directly [ , ]. Peksen et al. investigated the effectiveness of the pre-reforming procedure for various syngases used as fuel by combining experimental data with numerical simulation methods. The thermochemistry of the syngas fuel was analysed using a CFD model. The developed model was then used to generate the data needed to train a machine learning model. Additionally, studies have looked at the combination of MPS and AI. As an example, a hybrid model for the investigation of SOFCs, addressing the challenge of long-term operation with difficult-to-use fuels, was developed by Xu et al. The model combined MPS and deep learning, allowing for precise prediction with an error of less than 1 %. They also used a genetic algorithm for optimization, resulting in maximum power density while staying within the temperature gradient and operating condition limits. Song et al. conducted experimental tests on 30 SOFC stack segments at varying furnace temperatures. Multiple evaluation criteria were used along with ANN models to predict the stack's efficiency. The results indicated that the fitting errors of the three algorithms were within 5 %, while the neural network offered the best prediction accuracy with respect to generalizability and testing time. Yan et al. presented a modelling framework to optimize the microstructures of SOFC electrodes using sequential simulations and multi-objective optimization assisted by artificial intelligence. They analysed the influence of various initial powder parameters, such as particle size distribution, on the SOFC's degradation rate and cathodic overpotential. They found that a lower pore size and a fine particle size result in a lower cathodic overpotential but a higher degradation rate. Xu et al. developed a framework to enhance the performance of SOFCs using CFD modelling, ANN, and genetic algorithms. Initially, a 3D CFD model was developed that considered multiple parameters, including geometry, microscopic features, and operating conditions, and data were collected. Their results indicated that the ANN provided the most accurate predictions of SOFC performance, with an R-score value of 0.99889. Mahmood et al. conducted a sensitivity analysis to explore the influence of key operational and design parameters, such as operating temperature, material porosity, flow configurations, air-fuel ratios, and electrolyte thickness, on the performance and thermal stresses within the SOFC's porous electrodes and solid electrolyte. Mütter et al. optimized SOFC performance using ANN and genetic algorithms (GA). The ANN was trained with data from a multi-physics model, with molar fraction, temperature, and current density as the input data. The GA was then applied to optimize power output, yielding near-global-optimum solutions with alternative gas compositions. Gnatowski et al.
used an ANN model that dynamically updates the charge transfer coefficients based on operational conditions, trained on experimental data from SOFC anode polarization curves. The ANN predictions improved the accuracy of overpotential estimates, demonstrating its effectiveness in enhancing electrochemical modelling in SOFC applications. Artificial intelligence therefore provides a powerful prediction method for fuel cell applications. However, the performance of these applications depends on the appropriate choice of machine learning and deep learning technology. AI technologies, specifically artificial neural networks (ANNs), are being utilized to enhance the design and operational parameters of these fuel cells. The complexity of the governing equations in H-SOFCs demands a robust and efficient method for predicting performance under varying conditions. While traditional numerical simulations are accurate, they are often time-consuming and computationally intensive. This work aims to address this challenge by combining numerical modelling with AI techniques, namely K-nearest neighbours (KNN) and artificial neural network (ANN) algorithms. This integration of AI offers an innovative approach to streamline the prediction of H-SOFC parameters such as current density and power density, making it a valuable tool for rapid optimization and design in H-SOFC technology. This hybrid approach represents a step forward in leveraging AI to complement multiphysics simulations, providing more efficient and accurate performance predictions. From the review of the literature, it was found that, to date, few studies have been conducted on the impact of the air-to-fuel (A/F) ratio on the efficiency of proton-conducting solid oxide fuel cells (H-SOFCs). Therefore, the purpose of the current study is to conduct a comprehensive numerical investigation and analysis of how the A/F ratio affects H-SOFC performance. In the model, various parameters such as the A/F ratio, temperature, voltage, and fuel flow velocity were considered for training the AI models. The model has been set up to solve the coupled non-linear governing equations, which include continuity, momentum, mass transfer, chemical and electrochemical reactions, and energy equations, by means of an in-house multiphysics simulation method. A multiphysics numerical simulation of a simplified micro-planar proton-conducting H-SOFC was developed in the current study. The simplified H-SOFC model is configured as shown in Fig. 1. It consists of a porous anode electrode, a porous cathode electrode, a solid electrolyte, and channels for air and fuel. The geometric characteristics of the computational domain are given in Table 1. Fig. 1 Representation of an anode-supported H-SOFC.

Table 1 Geometric characteristics of the present study.

Parameter            Value
Length of the cell   2 × 10⁻² m
Height of channels   1 × 10⁻³ m
Anode height         5 × 10⁻⁴ m
Electrolyte height   1 × 10⁻⁴ m
Cathode height       1 × 10⁻⁴ m

The numerical model solves the governing mathematical equations for the H-SOFC, including continuity, momentum, mass transfer, and chemical and electrochemical reactions. The H-SOFC operates with the DIR process, where a mixture of hydrogen, methane, steam, carbon dioxide, and carbon monoxide is provided to the fuel channel. Hydrogen is produced in the anode through chemical reactions, e.g. through the DIR process or the water-gas shift reaction (WGSR).
The DIR process converts methane into a mixture of hydrogen and carbon monoxide (H2 and CO) on the anode surface, while the WGSR is a reversible chemical reaction that converts carbon monoxide and water into carbon dioxide and hydrogen. The chemical formulas of the DIR and WGSR reactions are given in Eq. (1) and Eq. (2), respectively: DIR: (1) $\mathrm{CH_4 + H_2O \rightarrow 3H_2 + CO}$; WGSR: (2) $\mathrm{CO + H_2O \rightarrow H_2 + CO_2}$. The generated hydrogen is oxidized, as shown in Eq. (3): (3) $\mathrm{H_2 \leftrightarrow 2H^+ + 2e^-}$. Protons flow from the anode to the cathode through the proton-conducting electrolyte. At the cathode-electrolyte interface, the protons react with electrons received from the anode via an external circuit, as shown in Eq. (4): (4) $\mathrm{O_2 + 4H^+ + 4e^- \leftrightarrow 2H_2O}$. The overall reaction of the SOFC is represented in Eq. (5): (5) $\mathrm{2H_2 + O_2 \leftrightarrow 2H_2O}$. It has been assumed that the H-SOFC numerical model operates under steady-state conditions. The fluid flow is laminar and compressible, and all properties of the fluid change with temperature. The fluid behaves like an ideal gas. The electrolyte is considered dense and non-porous; therefore, there is no mass or momentum transfer through it. Ohmic heating in the porous electrodes is not considered, since the ionic conductivity is negligible compared to the electrical conductivity. It is assumed that the electrodes have perfect selectivity for the electrochemical reactions: the fuel undergoes electrochemical oxidation within the anode's porous electrode, and oxygen reduction occurs in the cathode's porous electrode. The governing mathematical equations used in the H-SOFC model are expressed as follows. The velocity field, u, and pressure, P, in the porous electrodes and gas channels are determined by applying the continuity and momentum equations. The continuity equation is expressed in Eq. (6) [ , ]: (6) $\nabla\cdot(\rho\mathbf{u}) = Q_{br}$. Here $\rho$ represents the mixture's density, and $Q_{br}$ represents the mass generated per unit volume. Since reactions occur only in the electrode layers, $Q_{br}$ is equal to zero in the gas channels. The momentum equations for the channels and electrodes are given in Eq. (7) and Eq. (8), respectively: (7) $\rho(\mathbf{u}\cdot\nabla)\mathbf{u} = \nabla\cdot\left[\mu\left(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}\right)-\tfrac{2}{3}\mu(\nabla\cdot\mathbf{u})\mathbf{I}\right]-\nabla p$, (8) $\frac{\rho}{\varepsilon}(\mathbf{u}\cdot\nabla)\frac{\mathbf{u}}{\varepsilon} = \nabla\cdot\left[\frac{\mu}{\varepsilon}\left(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}\right)-\tfrac{2}{3}\frac{\mu}{\varepsilon}(\nabla\cdot\mathbf{u})\mathbf{I}\right]-\nabla p-\left(\frac{\mu}{\kappa}+\frac{Q_{br}}{\varepsilon^{2}}\right)\mathbf{u}$. Here $\mu$ is the dynamic viscosity of the gas mixture, and $\kappa$ and $\varepsilon$ refer to the permeability and porosity of the electrodes, respectively. The production and consumption of gas species during the chemical and electrochemical reactions lead to momentum sources in both electrodes. The operating voltage at a specific current density is determined by Eq. (9): (9) $V = E_{OCV} - (\eta_{act} + \eta_{conc})$, in which $E_{OCV}$ is the cell's reversible open-circuit voltage. The interface between the anode and the air channel is defined as ground; therefore, the anode open-circuit voltage, $E_{an}^{OCV}$, is zero. The cathode open-circuit voltage, $E_{ca}^{OCV}$, is obtained by applying Nernst's equation, Eq. (10): (10) $E_{ca}^{OCV} = 1.253 - 0.00024516\,T - \frac{RT}{2F}\ln\!\left(\frac{p^{I}_{H_2O(ca)}}{p^{I}_{H_2(an)}\,\bigl(p^{I}_{O_2(ca)}\bigr)^{0.5}}\right)$. The electrode-electrolyte interface partial pressures, $p^{I}$, are computed using the transport model. Here F is the Faraday constant, and $\eta_{act}$ and $\eta_{conc}$ represent the activation and concentration overpotentials, respectively. The activation overpotential is calculated using Eq. (11): (11) $\eta_{act} = \phi_{e} - \phi_{i} - E_{OCV}$. Here $\phi_{e}$ is the electronic potential and $\phi_{i}$ is the ionic potential.
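To make Eqs. (9) and (10) concrete, the short Python sketch below evaluates the Nernst open-circuit voltage of the cathode and the resulting operating voltage. It is only an illustrative example, not the authors' in-house multiphysics solver: the partial pressures and overpotential values used in the call are placeholder numbers, not simulation outputs.

```python
import numpy as np

R = 8.314    # universal gas constant, J/(mol K)
F = 96485.0  # Faraday constant, C/mol

def cathode_ocv(T, p_h2o_ca, p_h2_an, p_o2_ca):
    """Cathode open-circuit voltage from the Nernst relation, Eq. (10).

    Partial pressures are the electrode-electrolyte interface values (atm)."""
    e_ideal = 1.253 - 0.00024516 * T  # temperature-dependent ideal potential
    nernst = (R * T / (2.0 * F)) * np.log(p_h2o_ca / (p_h2_an * p_o2_ca ** 0.5))
    return e_ideal - nernst

def cell_voltage(e_ocv, eta_act, eta_conc):
    """Operating voltage at a given current density, Eq. (9)."""
    return e_ocv - (eta_act + eta_conc)

# Placeholder numbers for illustration only (not values reported in the paper).
T = 973.0
e_ocv = cathode_ocv(T, p_h2o_ca=0.03, p_h2_an=0.66, p_o2_ca=0.21)
print(f"E_OCV = {e_ocv:.3f} V, V = {cell_voltage(e_ocv, 0.08, 0.02):.3f} V")
```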
The concentration overpotentials for the anode, $\eta_{conc,an}$, and cathode, $\eta_{conc,ca}$, are obtained from Eq. (12) and Eq. (13), respectively: (12) $\eta_{conc,an} = \frac{RT}{2F}\ln\!\left(\frac{p_{H_2(an)}}{p^{I}_{H_2(an)}}\right)$, (13) $\eta_{conc,ca} = \frac{RT}{2F}\ln\!\left[\left(\frac{p_{O_2(ca)}}{p^{I}_{O_2(ca)}}\right)^{0.5}\left(\frac{p^{I}_{H_2O(ca)}}{p_{H_2O(ca)}}\right)\right]$. The potential distributions of the ionic, $\sigma_{i}$, and electronic, $\sigma_{e}$, charges in the electrolyte, cathode, and anode are expressed in Eqs. (14), (15), and (16): (14) $\nabla\cdot(-\sigma_{i}^{el}\nabla\phi_{i}^{el}) = 0$, (15) $\nabla\cdot(-\sigma_{i}^{an}\nabla\phi_{i}^{an}) = \nabla\cdot(-\sigma_{e}^{an}\nabla\phi_{e}^{an}) = +i_{v,an}$, (16) $\nabla\cdot(-\sigma_{i}^{ca}\nabla\phi_{i}^{ca}) = \nabla\cdot(-\sigma_{e}^{ca}\nabla\phi_{e}^{ca}) = -i_{v,ca}$. The charge source term, $i_{v}$, is determined by the Butler-Volmer equation, as expressed in Eq. (17): (17) $i_{v} = i_{0,electrode}\left[\exp\!\left(\frac{2\alpha_{an}F}{RT}\eta_{act}\right) - \exp\!\left(-\frac{2\alpha_{ca}F}{RT}\eta_{act}\right)\right]$. Here $\alpha_{an}$ and $\alpha_{ca}$ are the anode and cathode charge transfer coefficients. The mass fraction of each species, $\omega_{i}$, in the electrodes and gas channels is determined by Eq. (18) [ , ]: (18) $\frac{\partial}{\partial t}(\rho\omega_{i}) + \nabla\cdot(\rho\omega_{i}\mathbf{u}) = -\nabla\cdot\mathbf{j}_{i} + R_{i}$. The diffusive mass-flux vector, $\mathbf{j}_{i}$, is calculated using the modified Fick's equation, as represented in Eq. (19) [ , ]: (19) $\mathbf{j}_{i} = -\left(\rho D_{i}^{e}\nabla\omega_{i} + \rho\omega_{i}D_{i}^{e}\frac{\nabla M_{n}}{M_{n}} - \mathbf{j}_{c,i} + D_{i}^{T}\frac{\nabla T}{T}\right)$. The species mass source term, $R_{i}$, is calculated from the values of the DIR rate, $R_{DIR}$, and the WGSR rate, $R_{WGSR}$, in the electrodes. The values of $R_{i}$ for the chemical and electrochemical reactions are obtained from Eq. (20) and Eq. (21), respectively [ , ]: (20) $R_{i} = \omega_{i}M_{i}\left(a_{i}R_{DIR} + b_{i}R_{WGSR}\right)$, (21) $R_{i} = \omega_{i}M_{i}c_{i}\frac{i_{v}}{nF}$. As a result, the overall mass generation term is computed using Eq. (22): (22) $Q_{br} = \sum_{i} R_{i}$. In porous electrodes, the Knudsen diffusion coefficient, $D_{i}^{Kn}$, should be added to the average diffusion coefficient, $D_{i}^{m}$, because of the considerable collisions of species with the pore walls. Therefore, the effective diffusion coefficient, $D_{i}^{e}$, is calculated using the Bosanquet formula, as shown in Eq. (23): (23) $\frac{1}{D_{i}^{e}} = \frac{1}{D_{i}^{m}} + \frac{1}{D_{i}^{Kn}}$, where $D_{i}^{Kn}$ and $D_{i}^{m}$ are calculated using Eq. (24) and Eq. (25), respectively [ , ]: (24) $D_{i}^{Kn} = \frac{2}{3}\frac{\varepsilon}{\tau}r_{p}\sqrt{\frac{8RT}{\pi W_{k}}}$, (25) $D_{i}^{m} = \frac{1-\omega_{i}}{\sum_{j\neq i}^{K_{g}}\omega_{j}/(\gamma D_{ij})}$. The binary diffusion coefficient, $D_{ij}$, is determined by the Maxwell-Stefan equation, and $\gamma$ equals one. Here, $\tau$ is the tortuosity of the porous electrodes, and $r_{p}$ is the average pore radius. The temperature profile across the entire domain is determined as shown in Eq. (26): (26) $\rho c_{p}\mathbf{u}\cdot\nabla T + \nabla\cdot(-k_{eff}\nabla T) = Q_{tot}$. Here, $c_{p}$ is the specific heat and $k_{eff}$ is the thermal conductivity coefficient. The source term of the energy equation, $Q_{tot}$, is given in Eqs. (27), (28), and (29) [ , ]. In the electrolyte: (27) $Q_{tot} = \sigma_{i}^{el}(\nabla\phi_{e}^{el})^{2} + Q_{elec}$. In the cathode: (28) $Q_{tot} = \sigma_{i}^{ca}(\nabla\phi_{e}^{ca})^{2} + \sigma_{e}^{ca}(\nabla\phi_{e}^{ca})^{2} + i\eta$. In the anode: (29) $Q_{tot} = \sigma_{i}^{an}(\nabla\phi_{e}^{an})^{2} + \sigma_{e}^{an}(\nabla\phi_{e}^{an})^{2} + i\eta + Q_{chem}$. Here i represents the electrode current density, and $i\eta$ is the heat generated from irreversible overpotential losses. The term $\sigma(\nabla\phi)^{2}$ represents Ohmic heating, and $Q_{chem}$ is the energy source term related to the chemical reactions. $Q_{elec}$ and $Q_{chem}$ are the energy sources for the electrochemical and chemical reactions, respectively [ , ]. Table 2 presents the operational conditions and material properties used in the current study [ , ].
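The Knudsen and effective diffusivities of Eqs. (23)-(24) can be evaluated directly from the porous-electrode properties listed in Table 2. The sketch below is illustrative only: the hydrogen molar mass and the use of the tabulated anode diffusivity in place of the molecular diffusivity $D_i^m$ are assumptions made for the example, not values stated by the authors.

```python
import numpy as np

R = 8.314  # J/(mol K)

def knudsen_diffusivity(T, r_p, M_k, eps, tau):
    """Knudsen diffusion coefficient D_i^Kn, Eq. (24).

    r_p: mean pore radius (m); M_k: molar mass of species k (kg/mol);
    eps, tau: porosity and tortuosity of the porous electrode."""
    return (2.0 / 3.0) * (eps / tau) * r_p * np.sqrt(8.0 * R * T / (np.pi * M_k))

def effective_diffusivity(D_m, D_kn):
    """Bosanquet combination of molecular and Knudsen diffusivities, Eq. (23)."""
    return 1.0 / (1.0 / D_m + 1.0 / D_kn)

# Illustrative H2 values using the pore radius, porosity and tortuosity of Table 2.
D_kn = knudsen_diffusivity(T=973.0, r_p=0.5e-6, M_k=2.016e-3, eps=0.4, tau=3.0)
D_eff = effective_diffusivity(D_m=8.984e-5, D_kn=D_kn)
print(f"D_Kn = {D_kn:.2e} m^2/s, D_eff = {D_eff:.2e} m^2/s")
```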
To solve the governing equations, the following boundary conditions were considered. At the inlet of the gas channels, the velocity field, temperature, and gas mixture composition are specified. At the outlet, atmospheric pressure and zero diffusive mass flux are assumed. The fluid regime is continuous, and the outer walls have no-slip boundary conditions and are thermally insulated. Table 2. Operational conditions and material properties used in the current study [ , ]. Operational conditions: T 973 K; P_in 1 atm; P_out 1 atm; V_fuel 1-3 m/s; V_air 3 m/s; mole fractions of the input fuel components: H2 0.661, CH4 0.116, H2O 0.003, CO 0.218, CO2 0.002; mole fractions of the input air components: H2O 0.001, N2 0.789, O2 0.21. Material properties: porosity of the electrodes 0.4; permeability 10⁻¹²; electrode tortuosity 3; thermal conductivity of the electrolyte 2.16 W/m·K; anode thermal conductivity 1.86 W/m·K; cathode thermal conductivity 5.84 W/m·K; anode exchange current density 5300 A/m²; cathode exchange current density 2000 A/m²; electrolyte conductivity 0.009T − 6.157 S/m; density of the SOFC components 452.63 kg/m³; specific heat capacity of the SOFC components 3515.75 J/kg·K; pore radius 0.5 μm; D_an,eff 8.984 × 10⁻⁵ m²/s; D_ca,eff 4.748 × 10⁻⁶ m²/s; σ_ele 225.92 exp(−6.3 × 10³/T) Ω⁻¹ m⁻¹. The H-SOFC model uses a discretised geometry to apply the previously introduced nonlinear equations to the discretised nodes and mesh elements. Initially, the input parameters are defined within the numerical model to develop the electrochemical equations and obtain initial solutions for the operating voltage and the cell's current density. In the next step, the mass and momentum conservation equations are solved to obtain the velocity field. In the final stage, all models are used to solve the coupled partial differential equations simultaneously. The model then updates the initial solutions and calculates all outputs. This approach involves solving the independent nonlinear partial differential equations individually and using their results as initial values for all the governing equations. Iterations are repeated in each step until convergence is achieved. Fig. 2 provides an overview of the H-SOFC modelling process, including all the essential steps. Fig. 2. Diagram of the H-SOFC modelling process. A grid independence test was conducted to determine the influence of the mesh size on the output current density and to select the optimal grid size for the present study. Four computational grids with different element sizes were analysed, as shown in Fig. 3(a). The results reveal no notable difference (less than 3 %) in the current density values between a computational mesh of 84,656 elements and one of 121,806 elements. Consequently, a mesh size of 84,656 elements was chosen for all simulations. To validate the numerical simulation, a comparison was made between the polarization curves of the numerical results and the results of an experiment conducted by Taherparvar et al., as depicted in Fig. 3(b). The geometric parameters, operating conditions, and cell materials were kept consistent. Fig. 3. (a) Comparison of the average current density along the electrodes for different grid sizes; (b) comparison of the multiphysics simulation polarization curves with experimental data. Due to the non-linear and complex nature of the governing equations within the H-SOFC numerical model, running the model for different conditions would be costly.
However, a trained AI tool may be able to analyse the performance of the model under different conditions. This study therefore combines multiphysics simulation with AI techniques. Initially, the data obtained from the numerical simulations were used to train the AI models: an artificial neural network (ANN) and a k-nearest neighbours (KNN) algorithm, which involves preprocessing the data, splitting it into training and testing sets, and normalizing it. We ran 364 simulations with different values of the H-SOFC parameters (temperature, air-to-fuel ratio, fuel gas velocity, and voltage). Before constructing the AI models, the input parameters obtained from the H-SOFC numerical simulations, including the air-to-fuel ratio, voltage, temperature, and input fuel velocity, were normalised to a range of zero to one. The outputs considered in this study are the H-SOFC current density and power density. For network training purposes, 364 data sets were used, which were randomly split into two groups: a training set (composed of 70 % of the data) and a testing set (consisting of 30 % of the data). The input parameters and their ranges of values are shown in Table 3. It is worth noting that the data analyses in this study were performed using Python, an open-source high-level programming language widely used in scientific computing, and the machine learning models were implemented using the Keras and Scikit-learn libraries. Table 3. Variations in the input parameters for the KNN and ANN models: air-to-fuel ratio 0.5-4; voltage 0.1-1.1 V; temperature 800-973 K; inlet fuel velocity 1-3 m/s. The K-nearest neighbours algorithm is a machine learning method used to classify new data points by comparing them to the nearest data points in the training dataset. The algorithm allows an arbitrary number of neighbours, K, to be considered, where the value of K represents the number of neighbours taken into account. To determine the class of each data point, the algorithm considers the classes of its neighbouring data points, and the predicted class is assigned based on the class with the highest count among the neighbours. In this study, the value of K is determined based on the minimum error obtained for each K value. The artificial neural network is a supervised learning method consisting of interconnected neurons with adjustable weights that process data through three or more layers. The components of an ANN include an input layer, one or more hidden layers, an output layer, a set of neurons, weights, biases, and activation functions. The structure of an ANN with two hidden layers is shown in Fig. 4. The model selection procedure is the most crucial aspect of a neural network, as it directly influences the model's output. Various architectural and hyperparameter configurations must be explored and optimized to determine the optimal model, such as the number of input parameters, number of neurons, number of hidden layers, activation functions, and loss functions. For the ANN model, various architectures and hyperparameters (such as the number of hidden layers, number of neurons, activation functions, etc.) need to be optimized to ensure high accuracy. We used the grid search method to find the optimal values for our model; a minimal computational sketch of this tuning step is given below.
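The following Python sketch illustrates such a grid search using the Keras API named above. It is not the authors' code: the candidate values follow Table 4, but the width tuple used for deeper networks and the random placeholder data (standing in for the 364 normalized simulation samples) are assumptions made for the example.

```python
import itertools
import numpy as np
from tensorflow import keras

def build_ann(n_hidden, n_neurons, activation, lr):
    """Fully connected network over the four normalized inputs, one regression output."""
    model = keras.Sequential()
    for units in n_neurons[:n_hidden]:
        model.add(keras.layers.Dense(units, activation=activation))
    model.add(keras.layers.Dense(1, activation="sigmoid"))  # targets scaled to [0, 1]
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="mse", metrics=["mae"])
    return model

# Placeholder random data standing in for the normalized simulation samples
# (inputs: A/F ratio, voltage, temperature, inlet fuel velocity; output: power density).
rng = np.random.default_rng(0)
X_train, y_train = rng.random((255, 4)), rng.random(255)
X_val, y_val = rng.random((109, 4)), rng.random(109)

# Candidate values follow Table 4; the (32, 64, 32) width tuple is an assumption.
grid = {"lr": [0.1, 0.01, 0.003], "n_hidden": [1, 2, 3],
        "activation": ["relu", "sigmoid", "softmax"], "batch_size": [4, 16, 32, 64]}
best_loss, best_cfg = np.inf, None
for lr, n_hidden, act, bs in itertools.product(*grid.values()):
    model = build_ann(n_hidden, n_neurons=(32, 64, 32), activation=act, lr=lr)
    hist = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                     epochs=200, batch_size=bs, verbose=0)
    val_loss = min(hist.history["val_loss"])  # select on validation MSE
    if val_loss < best_loss:
        best_loss, best_cfg = val_loss, dict(lr=lr, hidden=n_hidden, activation=act, batch=bs)
print(best_loss, best_cfg)
```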
Table 4 lists the different hyperparameter values that were evaluated to find the final settings. Table 4. Hyperparameter tuning with grid search (hyperparameter: tested values → optimal value): learning rate: 0.1, 0.01, 0.003 → 0.01; number of hidden layers: 1, 2, 3 → 2; number of neurons: 16, 32, 64, 128 → (32, 64); batch size: 4, 16, 32, 64 → 16; epochs: 100, 200, 300 → 200; activation function: ReLU, Sigmoid, Softmax → ReLU. Fig. 4 illustrates the structure of the ANN used in this study, including an input layer, two hidden layers, and an output layer, along with the number of neurons in each layer; the input and output data are also depicted in Fig. 4. The final hyperparameter values of the optimized ANN models are presented in Table 5. Fig. 4. Structure of the ANN with two hidden layers used in the current study. Fig. 5. (a) Current density distribution at various A/F ratios; (b) power density distribution at various A/F ratios at a temperature of 973 K; (c) V-I and P-I curves with a fuel-to-air ratio of one at different temperatures; (d) H2 mole fraction variations at the anode-electrolyte interface for various temperatures. Table 5. Hyperparameters for training the ANN models (output; model; input parameters; number of neurons in the hidden layers; output activation function; batch size): prediction of power density — ANN (first model); T, v (m/s), S/V, V (V); (32, 64); Sigmoid; 16. Prediction of power density — ANN (second model); T, v (m/s), S/V, V (V); (32, 64, 32); Sigmoid; 32. Prediction of current density — ANN (second model); T, v (m/s), S/V, V (V); (32, 64); Sigmoid; 32. To evaluate the accuracy of the trained models, the following standard criteria are used. Mean Absolute Error (MAE): the mean absolute value of the prediction errors, regardless of their direction; the smaller (closer to 0) the value, the better the trained model performs. The MAE is expressed in Eq. (30): (30) $MAE = \frac{\sum_{i=1}^{n}|y_i - x_i|}{n}$, where x is the predicted value and y is the actual value. Mean Squared Error (MSE): similar to the MAE, but the errors are squared; it is typically harder to interpret because of the magnitude of the values and their dissimilarity to the data. The MSE is calculated using Eq. (31): (31) $MSE = \frac{\sum_{i=1}^{n}(x_i - y_i)^2}{n}$. Root Mean Squared Error (RMSE): this addresses the interpretation problem of the MSE by taking the square root of the final value, so that the resulting error has the same scale as the original data. The RMSE is calculated using Eq. (32): (32) $RMSE = \sqrt{\frac{\sum_{i=1}^{n}(x_i - y_i)^2}{n}}$. R-squared ($R^2$): this measure describes the correlation between the model outputs and the predicted values; it is important when a statistical model is used for prediction or for evaluating test data. The closer the value is to one, the higher the model's accuracy. The R-squared value is calculated using Eq. (33): (33) $R^2 = 1 - \sum_{i=1}^{n}(x_i - y_i)^2 / \sum_{i=1}^{n}(x_i - \bar{x}_i)^2$. Different operating conditions are the main factors affecting the electrochemical performance of SOFCs. In this part of the study, the effects of various operating parameters (e.g., operating temperature and air/fuel ratio), the effect of variations in the inlet fuel velocity, and the prediction of the fuel cell current and power density by the ANN model are investigated. The study's results are categorized into two main groups: numerical simulation results and AI results. The multiphysics simulation results are presented first. The effect of different air-to-fuel (A/F) ratios on cell performance was studied by simulating the model at a temperature of 973 K with A/F ratios ranging from 0.5 to 4.
This ratio was obtained by changing the fuel flow value. Fig. 5(a) displays the current density versus the air-to-fuel ratio for different voltages: 0.1 V, 0.4 V, 0.7 V, and 1 V. The fuel cell's current density decreases with an increasing A/F ratio. The reduction in current density is more significant at lower voltages, particularly at V = 0.1 V and higher A/F ratios. The highest current density of 33.6 mA/cm² was achieved at A/F = 0.5 and V = 0.1 V. This decrease in current density is attributed to fuel depletion as the fuel moves along the fuel channel, leading to a decline in the reaction rate and the current density. Fig. 5(b) shows the power density versus the air-to-fuel ratio for the same voltages, 0.1 V, 0.4 V, 0.7 V, and 1 V. The fuel cell's power density decreases as the A/F ratio increases. For instance, at 0.4 V, increasing the A/F ratio from A/F = 1 to A/F = 4 reduces the cell's maximum power by about 20 %. At higher A/F ratios, the decrease in power density becomes more significant, as the fuel entering the fuel channel is diluted, which affects both the reforming and electrochemical reactions. Consequently, a higher A/F ratio decreases the rate of both reactions. Fig. 5(c) shows the variations in the voltage-current and power-current density curves for an A/F ratio of 1 at different temperatures. Increasing the temperature has a significant impact on the output power and current density, resulting in an overall increase in cell efficiency. The findings show that when the temperature decreases from 1000 K to 800 K, the output power and current density decrease by 48 % and 41 %, respectively. In Fig. 5(d), the hydrogen mole fraction variation at the anode-electrolyte interface is shown as a function of temperature. As the temperature rises, the variation in the H2 mole fraction increases. For example, the variation in the H2 mole fraction at 1000 K is approximately 3.5 percent higher than at T = 800 K. As the temperature rises, the rate of the electrochemical processes increases, leading to greater fuel consumption. Furthermore, the variation in the H2 mole fraction along the cell length at T = 1000 K is 7 percent greater than the corresponding value at T = 800 K. To confirm the accuracy of the numerical modelling, the simulation results were compared with the literature. The findings align with [ , ], which highlight that higher operating temperatures enhance the current density and power density and reduce ohmic losses. This is also corroborated by Refs. [ , ], who showed that the air-fuel ratio affects overall SOFC performance, although its effect is smaller than that of temperature. Fig. 6(a) shows the hydrogen mole fraction distribution at the anode-electrolyte interface during operation at a voltage of 0.5 V with an inlet fuel velocity of 1 m/s. The results demonstrate that a higher A/F ratio leads to more significant variations in the H2 mole fraction. An A/F ratio of 4 shows the largest variation in H2 concentration, from a maximum of 0.125 at the inlet to a minimum of 0.029 at the outlet. In Fig. 6(b) and (c), the distribution profiles of the hydrogen concentration in the anode and fuel flow channels are presented for different A/F ratios at a temperature of 973 K and a voltage of 0.5 V. As the A/F ratio doubles, triples, and quadruples, the hydrogen concentration at the anode outlet drops to 36 %, 15 %, and 5.5 % of the value obtained with an A/F ratio of 1, as shown in Fig. 6(c).
For an A/F ratio of 0.5, the maximum H2 mole fraction at the outlet is 0.8291, and it decreases to 0.029 for an A/F ratio of 4. Fig. 6. (a) H2 mole fraction variation at the anode-electrolyte interface with varying A/F ratio at a temperature of 973 K; (b), (c) H2 mole fraction distribution in the anode and fuel flow channel at a temperature of 973 K as a function of different A/F ratios. As mentioned, the fuel cell power and current density predictions were made using the ANN and KNN methods, and the results obtained from these methods are presented below. For the power prediction, 364 data points were used. Fig. 7 compares the predicted power with the actual power determined by the numerical results for both the training and test sets. In part (a), the ANN model with three hidden layers achieves an MAE of 0.031200 and an R² coefficient of 0.98 on the test data. As shown in Fig. 7(b), the ANN model with two hidden layers achieves an MAE of 0.01612 and an R² coefficient of 0.99 on the test data, indicating improved accuracy compared to the first model (the ANN with three hidden layers). Fig. 7. (a) ANN model with three hidden layers and the output parameter P; (b) ANN model with two hidden layers and the output parameter P; (c) KNN model with K = 3 and the output parameter P; (d) ANN model with two hidden layers and the output parameter I; (e) KNN model with K = 3 and the output parameter I. The results of using the KNN model to predict the cell's power density are shown in Fig. 7(c). The optimal value of K, which determines the number of nearest neighbours considered, is found by calculating the MAE for K values ranging from 1 to 20. Among the K values evaluated, K = 3 gives the lowest MAE of 0.036127 and was therefore selected for training the present model. The accuracies obtained for the training and testing datasets are 97 % and 95 %, respectively. However, the KNN model exhibits lower accuracy compared to the ANN model. Choosing the optimal model to predict a target quantity is a procedure that requires attention. In Table 6, the performance of the trained ANN and KNN models for predicting the power density is examined following hyperparameter tuning. According to the results, the ANN model with two hidden layers achieves the best accuracy. Table 6. Evaluation of the trained models' performance for predicting power density (method; MAE; MSE; RMSE; R²): ANN (first model) 0.016129, 0.0006539, 0.0255730, 0.990; ANN (second model) 0.031200, 0.0017748, 0.042128, 0.980; KNN 0.036127, 0.0032140, 0.056692, 0.950. Fig. 7(d) and (e) compare the current density values predicted by the ANN and KNN models with the actual values (obtained from the numerical data). The outcomes demonstrate a similar result to the power prediction: the ANN has the best accuracy, and the distribution of the training and testing data points is more uniform around the y = x line. Table 7 presents the MAE, MSE, RMSE, and R² coefficients for both the ANN and KNN models. Table 7. Evaluation of the trained models for predicting current density (method; MAE; MSE; RMSE; R²): ANN 0.016599, 0.0005871, 0.024231, 0.99; KNN 0.026136, 0.0015718, 0.039646, 0.97. The results show that the ANN model is more accurate and has fewer errors than the KNN model. Three new data points have been used to evaluate the accuracy of the selected trained model (the ANN with two hidden layers) in predicting the target values. The input values for these three test data sets are presented in Table 8. Table 8. Input parameter values for the three new data points.
Table 8 (test data number; temperature (°C); velocity of the inlet fuel (m/s); air-to-fuel ratio; voltage (V)): data point 1 — 850, 2.5, 1.5, 0.1-1.1; data point 2 — 950, 2.5, 1.75, 0.1-1.1; data point 3 — 910, 2, 2, 0.1-1.1. Fig. 8 provides a comparison between the results obtained from the multi-physics model simulation and the outcomes predicted by the first ANN model, which demonstrated higher accuracy. This trained ANN model effectively predicts the variations in the current density (Fig. 8(a)) and power density (Fig. 8(b)) values based on the voltage changes. In comparison with other research, the ANN method used in this paper demonstrated excellent predictive accuracy for the current and power density, with an error rate below 1 % and an R-score of approximately 99 %. Similar results were found by Xu et al. and Wang et al., who combined multi-physics simulations with deep learning and achieved a prediction error of less than 1 %. These studies confirm the reliability of AI models for SOFC prediction and highlight the potential for further optimization through advanced algorithms. Fig. 8. The values predicted by the ANN model compared with the actual values obtained from the simulation results for (a) current density and (b) power density. This paper presents a comprehensive investigation combining numerical analysis and artificial intelligence (AI) techniques to study and predict the performance of a micro proton-conducting solid oxide fuel cell (H-SOFC) fuelled with methane. Using a detailed numerical approach, we solved the complex governing equations, including the electrochemical, mass transfer, heat transfer, continuity, and momentum equations, to understand the behaviour of H-SOFCs under varying operational conditions, changing the values of the air-to-fuel ratio (A/F), temperature, fuel gas velocity, and voltage. The numerical simulation results were used to train both an artificial neural network (ANN) and a K-nearest neighbours (KNN) model, enabling accurate predictions of the cell's output power and current density. The main findings of this study are summarised as follows. • Impact of temperature: The performance of the H-SOFC using DIR of methane fuel improves significantly as the temperature increases. The simulation results show that as the operating temperature increases from 800 K to 900 K and 1000 K, the maximum output power density increases from 74.4 mW/cm² to 678.8 mW/cm² and 932.6 mW/cm², respectively, indicating a substantial enhancement in cell performance. • Effect of the air-to-fuel (A/F) ratio: The numerical model reveals that the current density and power density of the H-SOFC decrease as the A/F ratio increases. The optimal performance was achieved at A/F = 0.5, where the power density increases by 2 % and the current density by 7 % compared with the state at A/F = 1. Conversely, at A/F = 4, the power and current density decrease by approximately 25 % compared with A/F = 1. • AI model accuracy: The ANN model demonstrated remarkable accuracy in predicting the power density and current density of the H-SOFC, with average absolute errors of less than 1.6 % and an R-score of about 99 %. This confirms the ANN model's potential as an effective tool for performance prediction, reducing the reliance on time-consuming numerical simulations. Overall, increasing the temperature and decreasing the electrochemical conversion voltage enhances the hydrogen conversion rate, leading to a faster conversion of methane to hydrogen and resulting in improved fuel cell performance.
The combination of numerical modelling and AI-based prediction represents a significant advancement in the study of H-SOFCs. This hybrid approach provides a deeper understanding of H-SOFC operation and offers an efficient and accurate method for predicting performance parameters, significantly reducing the computational cost. The results of this work have the potential to influence future research, promoting the development of more efficient, AI-assisted fuel cell technologies that are practical for a wide range of applications. Future research could explore the integration of H-SOFCs into hybrid energy systems, where the fuel cell works in conjunction with other energy technologies (such as gas turbines or renewable energy sources). Parastoo Taleghani: Writing – original draft, Validation, Conceptualization, Writing – review & editing. Majid Ghassemi: Writing – review & editing, Supervision, Conceptualization. Mahmoud Chizari: Writing – review & editing, Supervision, Conceptualization. Not applicable. Not applicable. Data is available upon request from the corresponding author. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
The Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)-Cas9 system is a two-key-component system consisting of the target-specific CRISPR guide RNA (gRNA) and Cas9 endonucleases, where the CRISPR single-guide RNA (sgRNA) identifies the target site to be cleaved by the Cas9 endonuclease to achieve subsequent insertion or deletion of a fragment of DNA. In the CRISPR-Cas9 system, the gRNA (spacer) sequence needs to be complementary with its targeting DNA sequence, containing 20 nucleotides, which is followed by a three-nucleotide sequence called the protospacer adjacent motif. The CRISPR-Cas9 system has been widely implemented in various species and cell types and has great potential for human therapeutics. Although the CRISPR-Cas9 system has become a powerful gene-editing tool, a major challenge for its effective application is to design/choose the optimal sgRNA, which has high on-target cleavage efficacy and low off-target effect (OTS). Indeed, not all sgRNAs would cut a target DNA with equal efficacy, i.e. different sgRNAs have different on-target efficiencies. Meanwhile, the Cas9 system scans the whole genome and possibly cuts unintended DNA sequences (off-targets). Thus, the off-target activity has been a major concern since the invention of the CRISPR-Cas9 system, especially for therapeutic and clinical applications. To detect the off-target activities of sgRNAs genome-wide in a sensitive and unbiased way, several experimental techniques have been developed, such as GUIDE-Seq, Digenome-Seq, SITE-Seq, CIRCLE-Seq, HTGTS, BLISS, and CHANGE-Seq. Among these techniques, the CRISPR-Cas9 system induces double-strand break (DSB) cleavage sites either in the purified genomic DNA or in living cells. In spite of their respective advantages and limitations, which have been widely reviewed and comprehensively benchmarked regarding sensitivities, resource requirements, etc., methods for detecting off-target activities of the Cas9 system are still labor-intensive and high-cost, and some are even difficult to operate. In silico models provide relatively rapid, low-cost alternatives to predict the off-target activities of sgRNAs, thus facilitating the optimized design of sgRNAs beforehand. Up to now, various computational tools have been developed to facilitate the optimal sgRNA design. These works can be classified into three categories: (i) alignment-based scoring; (ii) hypothesis-driven testing; and (iii) machine learning model-based predictors. With increased data generated from the CRISPR community, machine learning, especially deep-learning-based models, has become the mainstream. More recently, deep learning principles-based prediction systems have surpassed their competitors. Particularly, DeepCRISPR employed a deep convolutional denoising neural network-based autoencoder architecture to learn the deep representation of each sgRNA sequence and their associated epigenetic features, further followed by a fully convolutional neural network (CNN) model for building the classifier. The autoencoder-based pre-training on massive unlabeled sgRNA sequences in the whole genome helps to capture sgRNA representations efficiently. AttnToMismatch_CNN applied a transformer architecture with multi-head attention modules to perform the encoding and decoding of each sgRNA and DNA sequence pair. CRISPR_Net proposed a new sequence encoding scheme, which considered both mismatch and indels (i.e.
insertions and deletions), and then connected by a recurrent convolutional network combining Inception-based CNN and bidirectional long short-term memory (BiLSTM) for learning the network classifier. These are three representations of off-target prediction studies using various advanced deep learning models. Several other studies were published during the preparation of our study, and they were more or less based on different combinations of CNN and recurrent neural networks (RNN) with different sequencing embedding approaches . In addition, for an overview of machine learning model or deep learning-based CRISPR sgRNA design tools, readers are referred to recent benchmarking studies . Most of the current available deep learning models for CRISPR-Cas9 off-target predictions were trained on small sets of sgRNAs in various cell-lines and not evaluated on a large set of sgRNAs in human primary cells. In this study, we aim to develop a new deep learning model for CRISPR-Cas9 off-target predictions by exploring the performance of large-scale human primary cells. Specifically, we proposed a new stack encoding to encode the sgRNA–DNA pairs and adopted the Bidirectional Encoder Representations from Transformers (BERT) architecture for contextualized embedding followed by a conventional BiLSTM architecture for the deep learning model training. Our experiments demonstrated that the proposed new model outperformed existing deep learning models (including DeepCRISPR, CRISPR-Net, and AttnToMismatch_CNN) through single split and leave-one-sgRNA-out cross-validations as well as independent testing. For comparison purposes, we collected the cell-line CRISPR-Cas9 off-target datasets from previous studies: DeepCRISPR and AttnToMismatch_CNN . The positive pairs were generated in multiple studies with different genome-wide off-target screening protocols across two cell lines: the HEK 293-related cell lines (18 sgRNAs) and K562T (12 sgRNAs). The positive pairs were the same between the DeepCRISPR and AttnToMismatch_CNN studies. However, the negative pairs were slightly differently generated by either Bowtie or Cas-OFFinder . Here, the negative set generated by Cas-OFFinder with up to six mismatched bases in each pair was selected. In total, 656 positive off-target sites and 169 557 negative off-target sites were collected. We collected the human primary T-cell data generated by CHANGE-Seq , a recently developed in vitro genome-wide off-target cleavage site technique. In this dataset, 110 sgRNA targets across 13 therapeutically relevant loci were screened and 202 043 sgRNA–DNA pairs were measured. Among these pairs, 191 528 pairs contain only mismatches, i.e. no indels. In addition, when comparing the detailed pairs, 66 109 sequence-based redundant pairs (i.e. completely duplicate pairs regardless of the genomic positions) were removed. Thus, in total, 125 419 unique sgRNA–DNA pairs containing 27 410 positive and 98 009 negative off-target pairs were used. In this dataset, a set of 55 high-confidence sgRNA cleavage sites from three different sgRNA targeting HEK293 genomic DNA were obtained by two amplification-free long-read sequencing techniques including Pacific Biosciences’ single molecular read-time sequencing (SMRT-OTS) and Oxford Nanopore Technologies’ nanopore sequencing (Nano-OTS) . To collect negative pairs corresponding to these positive pairs, we used Cas-OFFinder to find potential sgRNA–DNA mismatch pairs with mismatched bases ≤6. Finally, 480 negative pairs were identified. 
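As a minimal illustration of the kind of sequence-level filtering described above (counting mismatches with a cut-off of six and collapsing duplicate pairs), a short Python sketch is given below. It is a toy example, not the Cas-OFFinder or CHANGE-Seq tooling itself, and the example guide sequence is only illustrative.

```python
def count_mismatches(sgrna: str, site: str) -> int:
    """Number of mismatched bases between a 23-bp sgRNA (with PAM) and a genomic site."""
    assert len(sgrna) == len(site) == 23
    return sum(a != b for a, b in zip(sgrna.upper(), site.upper()))

def filter_candidate_pairs(pairs, max_mismatches=6):
    """Keep unique sgRNA-DNA pairs with at most `max_mismatches` mismatches.

    `pairs` is an iterable of (sgRNA, off-target site) sequence tuples; duplicates
    are collapsed regardless of genomic position, as described for the CHANGE-Seq data."""
    kept = set()
    for sgrna, site in pairs:
        if count_mismatches(sgrna, site) <= max_mismatches:
            kept.add((sgrna.upper(), site.upper()))
    return sorted(kept)

example = [("GAGTCCGAGCAGAAGAAGAAGGG", "GAGTCCGAGCAGAAGAAGAATGG")]
print(filter_candidate_pairs(example))  # one retained pair with a single mismatch
```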
Inspired by the energy-based model of Alkan et al. and to mimic the energy configuration of the sgRNA and DNA sequence pairs, which is important for forming the sgRNA-DNA double strand, we proposed to use stack encoding to represent the sgRNA–DNA pair. Specifically, a two-base sliding window was adopted to extract the dimer pairs (or doublets) from the 23-base pair (bp) sgRNA–DNA double strand with a step of one base from the 5' end to the 3' end of the sgRNA. Since the sgRNA sequence, or more broadly the DNA sequence, consists of four nucleotides (A, C, G, T), there are 256 (4⁴) different types of dimers ("vocabulary sets") that can be formed. Thus, the 23-bp sgRNA–DNA sequence pair was converted to a vector of 22 dimers. The next step is to encode the discrete vector of 22 dimers into dense numeric features. We applied word embedding to map each dimer into a d-dimensional vector of floating point values. Word embedding, also known as distributed word representation, is an unsupervised learning algorithm that can capture both the semantic and syntactic information of words from a large unlabeled corpus. It transforms each dimer into low-dimensional numeric features, and similar dimers obtain similar embedding features. The embedding dimension is a hyper-parameter, which can be trained together with the deep learning architecture. We set the embedding dimension to 64 in the final model. This function was implemented using the Keras embedding function in TensorFlow. The BERT model is a special type of transformer model, in which many encoders are stacked on top of each other. Encoder architectures are used for understanding the semantic meaning of tokens in a given vocabulary within natural language processing (NLP) tasks. These stacked encoder structures are proven to be effective in solving NLP tasks, but they are usually hard to interpret. We adopted the BERT model to perform a deep feature representation of the sgRNA–DNA pairs, as a contextualized embedding layer. For this purpose, we used a relatively small architecture, limiting the number of layers and attention heads to six and eight, respectively. This architecture is smaller than the conventional BERT architectures, where the BERT BASE model has 12 layers and 12 heads. The BERT model takes the same input as the word embedding model, i.e., a discrete vector of 22 dimers/doublets of the paired sgRNA and DNA sequences. The embedding dimension was set to 64; thus, the output of the BERT layer is a 64-dimensional continuous embedding vector. In this study, the BERT model was trained from scratch, and therefore a pre-trained model was not used. The BERT architecture was implemented and maintained in the "Transformers" Python package, which is managed by the HuggingFace company. An LSTM architecture is composed of many memory blocks. These memory blocks are able to integrate information from previous blocks and retain the important parts, while taking direct inputs from the data. To achieve that, each memory block has an input, an output, and a forget gate. These gates determine which parts of the input information should be stored and output, and for how long they should be stored, respectively. The LSTM layers triumph over traditional RNN layers on issues such as better information management and avoiding exploding and/or vanishing gradients. Particularly, we used a BiLSTM network to extract the forward and backward information of the sgRNA–DNA sequence pairs.
In the BiLSTM layer, a forward LSTM computes a representation ht→ of the sequence from left to right at every word t, and a backward LSTM computes a representation ht← of the same sequence in reverse. These two distinct networks use different parameters, and then the representation of a word ht = (ht→; ht←) is obtained by concatenating its left and right context representations . At the output of the BiLSTM layer, the forward and backward outputs of both LSTMs are combined together and concatenated. The proposed new model in this study is illustrated in Fig. 1 . For comparison purposes, the model that used BERT as the embedding layer was named “CrisprBERT,” and the one that used conventional word embedding was simply called “BiLSTM.” First, the encoding (Input layer) output is fed into the BERT embedding layer. The output of the BERT layer is 64-dimensional representation vectors, giving a 64×22 matrix. This is fed into the BiLSTM module (Recurrent layer). The concatenated output of the BiLSTM layer then goes into a series of dense layers. The final dense layer has a sigmoid activation for binary classification purposes. The architecture is implemented using “TensorFlow” . Overall, the proposed new model consists of the stack encoding input layer, BERT embedding layer, BiLSTM recurrent layer, and two dense layers as well as a sigmoid output layer. We adopted k-fold cross-validation, a single-split cross-validation and leave-one-sgRNA-out methods to evaluate the models. The conventional k-fold cross-validation was used for parameter tuning of the models. The single-split cross-validation was then used to compare this model with others, by leaving 10% of the data as a validation set. Finally, for the leave-one-sgRNA-out strategy, one sgRNA and their associated sgRNA–DNA pairs were put aside for validation, and the remaining sgRNAs were used for training. In addition, a completely independent testing was performed to evaluate the generalization ability of the models. The training datasets and testing datasets were from different experiments—mainly from different cell types or different protocols. The metrics for evaluating the performance of the model are Receiver Operating Characteristic-Area Under Curve (ROC-AUC) and Precision-Recall-Area Under Curve (PR-AUC), which are both widely used in classification problems. The ROC curve is plotted as the true-positive rate [TP/(TP + FN)] against the false-positive rate [FP/(FP + TN)] under a series of thresholds where TP is true positive, FN is false negative, FP is false positive, and TN is true negative. The precision–recall curve is plotted as precision [TP/(TP + FP)] versus recall [TP/(TP + FN)] under a series of thresholds. PR-AUC score is particularly suitable for assessing the performance of models on an imbalanced dataset. The higher the value of PR-AUC, the better the performance of the model in class imbalance problems. The value of ROC-AUC and PR-AUC is in (0, 1), where 1 indicates a perfect performance. The proposed CrisprBERT and BiLSTM models were implemented using Python 3.7 with TensorFlow (2.5.0) as the backend. All experiments were carried out on a computer with Intel (R) Core (TM) i9-12900H CPU @ 3.50 GHz, Ubuntu 24.04.1 LTS and 32 GB RAM, as well as one NVIDIA GeForce GTX 3080 Ti Laptop GPU with 16 GB of memory. The top 50 doublets from both CHANGE-Seq and DeepCRISPR datasets were extracted, and their frequency distribution across different positions of the sgRNA–DNA pairs was calculated. 
Doublet distribution in both datasets showed the specificity of certain doublets in given positions . Interestingly, 41 of the top 50 doublets were shared in both datasets, although different enrichments at different positions were observed. The top five observed doublets in CHANGE-Seq dataset are GG to TC, GG to CT, GG to AC, TG to TC, and AG to CC, whereas the top five doublets in DeepCRISPR dataset are AG to AC, TG to CC, TG to TC, GA to CC, and GG to AC. High frequency of TG to TC, TG to CC, GG to AC, and GG to TC doublets were consistently observed in the 21st position (i.e. last second positions) of both datasets. Besides the above enrichments, high frequencies of GG to TC and CT were also observed in the first and the middle of the sgRNA sequences in the CHANGE-Seq dataset, while GG to other nucleotides mismatches (e.g. GG to AC, TC, CT, and CG) were mainly observed at the first position in the DeepCRISPR dataset. Meanwhile, more diverse doublet mismatches were observed in the DeepCRISPR dataset than in the CHANGE-Seq dataset at other positions such as the AG to AC mismatches at positions 5 and 9, the GC to CA, CT, and CC at position 16. We first explored the influences of different encoding and embedding dimensions on the performance of the proposed CrisprBERT model. We compared singlet, doublet, and triplet encoding, as well as different embedding dimensions in the BERT layer. A cross-validation was performed on both DeepCRISPR and CHANGE-Seq datasets to evaluate the performance. The singlet and doublet encoding demonstrated comparable performance, while the triplet encoding exhibited reduced performance . Similarly, results across various embedding dimensions indicated that the model with an embedding dimension of 64 performed slightly better than those with other parameters on both datasets . Therefore, we opted for doublet encoding and an embedding dimension of 64 as the default setting for the CrisprBERT model. After obtaining the optimal architecture and hyperparameters, the CrisprBERT and the simple BiLSTM models were compared with three different deep learning strategies previously published: DeepCRISPR, Attention_to_mismatch network, and CRISPR-Net. The cross-validation performances for the CHANGE-Seq dataset were measured on three of the models: CrisprBERT, BiLSTM, and Attention_to_mismatch. However, we were unable to train the DeepCRISPR and CRISPR-Net models on this dataset as the source codes are not available. The validation for all three models was achieved using the same 10% of the dataset. Both BiLSTM and CrisprBERT outperformed the Attention_to_mismatch model . Specifically, the cross-validation ROC-AUC scores for Attention_to_mismatch, BiLSTM, and CrisprBERT were 0.85, 0.919, and 0.935, and the PR-AUC score for the models are 0.760, 0.854, and 0.887, respectively. These results imply that the simple sequence-based BiLSTM model with a proper doublet encoding can achieve similar results for off-target prediction compared to a denser and advanced neural network such as the Attention model. Meanwhile, the CrisprBERT model outperformed the BiLSTM model in both ROC-AUC and PR-AUC tests, indicating the BERT embedding has an advantage over the conventional word embedding. The same cross-validation test was repeated using the DeepCRISPR dataset for both models. The PR- and ROC-AUC values for DeepCRISPR, Attention_to_mismatch, and CRISPR-Net models were taken from their respective studies. The comparison results are shown in Fig. 3B . 
Although all the models researched quite high ROC-AUC scores (i.e. ∼0.99), the PR-AUC scores are relatively small (i.e. around 0.5). Since this dataset is heavily imbalanced with much larger negative pairs than positive pairs, the PR-AUC is believed to be a more suitable metric. For this metric, the CrisprBERT again outperformed all other models, with a PR-AUC score of 0.544 (∼10% marginal increase over the other models). Meanwhile, the BiLSTM model remains comparable to DeepCRISPR and Attention_to_mismatch models. Additionally, we explored the influences of sizes and data imbalance ratios in training data on the model performance. We therefore conducted cross-validation testing on subsets of the CHANGE-Seq dataset with different data sizes as well as subsets of the DeepCRISPR dataset with different imbalance ratios. As shown in the Supplementary Figs S2 and S3 , the results demonstrated increased performances when the dataset size increased while a decreased performance when the data imbalance ratio between positive off-target and negative ones increased. To evaluate the generalization ability of the CrisprBERT model on predicting the off-targets of new (unseen) sgRNA, a leave-one-sgRNA-out experiment was performed to mimic the prediction performance of the model on new sgRNAs. In this particular test, a single sgRNA along with its corresponding off-target pairs were used for cross-validation and were left out of training. However, some sgRNAs have very few positive off-targets in both datasets (as low as one), which leads to statistical discrepancies, such as PR-AUC scores of 1 . Hence, some sgRNAs with very few positive off-target sequences were combined together to yield at least 30 positive off-target sequences. Specifically, we first sorted the sgRNAs based on the number of positive pairs. We then combined subset of the sgRNAs in a heuristic way (following the increased order of the number of positive pairs) to form the combined sgRNA sets, ensuring that each combined set includes at least 30 positive pairs. This leave-one-sgRNA-out validation was then repeated for all these combined sgRNAs, and the performance was measured over all the sgRNA–DNA pairs. We compared the performance of these two models with the other three models on the DeepCRISPR dataset only, where the ROC-AUC and PR-AUC of the DeepCRISPR, Attention_to_mismatch and CRISPR-Net models were extracted from the CRISPR-Net study. Figure 4 shows that CrisprBERT performed the best over other models regarding the PR-AUC metric, with a PR-AUC score of 0.486, which is more than a 10% marginal increase compared with other models. The BiLSTM model also showed a slight improvement in PR-AUC when compared with the other three models. In addition, it also achieved comparable ROC-AUC scores with other models. To qualify the generalization capability of the models, independent tests were further conducted. It is particularly important to show the prediction performance on completely unseen data that are obtained from different experimental protocols, different sgRNAs, and different cell types. We trained BiLSTM and CrisprBERT on the DeepCRISPR dataset. For other comparison models, we used the released models from each study. We first tested them on the CHANGE-Seq dataset. As before, all indel sequences were removed from the CHANGE-Seq dataset. As the DeepCRISPR model required associated epigenomic features, we downloaded four epigenomic tables of the HepG2 cell line from ENCODE and annotated the pairs in the CHANGE-Seq data. 
The results are shown in Fig. 5A . All models have comparable ROC-AUC or PR-AUC scores except the DeepCRISPR model, which showed lower scores. Specifically, CrisprBERT and Attention_to_mismatch performed similarly with CrisprBERT having a slightly higher PR-AUC score (i.e. 0.629, compared to 0.620 of Attention_to_mismatch). Both performed better than the BiLSTM and CRISPR-Net models. The CRISPR-Net model scored slightly less than the BiLSTM model. An almost identical pattern was observed with ROC-AUC scores. Furthermore, all models were tested on another independent dataset, the long-read OTS dataset. For this dataset, when using the DeepCRISPR model, the corresponding epigenomic tables of the HEK293 cell line were extracted from ENCODE. A similar trend was observed in this test compared to the test on the CHANGE-Seq dataset . The CrisprBERT, BiLSTM, and CRISPR-Net models performed very similarly with respect to their PR-AUC and ROC-AUC scores, around 0.54 and 0.89, respectively. Attention_to_mismatch and the DeepCRISPR models performed a bit worse, achieving PR-AUC scores of <0.4. Finally, we explored whether training the model on integrated datasets would improve the performance of the CrisprBERT model through simply increasing the amount of training data. We performed the leave-one-sgRNA-out test on both DeepCRISPR and CHANGE-Seq datasets as well as the pooled dataset from these two for the BiLSTM and CrisprBERT models. Regarding the pooled dataset, the models followed the same protocol to produce the validation set. However, the training set size was increased by combining both datasets. We reported the global ROC-AUC and PR-AUC scores by merging all the individual datasets as well as the average ROC-AUC and PR-AUC scores of the individual leave-one-out sgRNAs. To reduce statistical variability, we made sure that every leave-one-out validation group had at least 30 positive off-targets. This implied merging some sgRNAs, which had very few positive off-targets. We observed the pooling strategy did not increase the performance of the models on this dataset. For the pooled dataset, the ROC-AUC and PR-AUC scores were 0.871 and 0.637 for the CrisprBERT model, compared with 0.821 and 0.401 for the BiLSTM model. For the individual dataset, the ROC-AUC and PR-AUC results were 0.881 and 0.653 for the CrisprBERT model and 0.876 and 0.412 for the BiLSTM model . When checking the performance of each individual sgRNA, models with both strategies performed similarly except the BiLSTM reduced performance slightly when trained on pooled datasets, particularly for the ROC-AUC performance . The PR-AUC for the pooled dataset does not differ much from the individual dataset in this case. For CrisprBERT, most sgRNAs had expected precision accuracy scores between 0.6 and 0.7. The results for the DeepCRISPR dataset were slightly different from the CHANGE-Seq dataset. Although the effect of pooling does not seem to improve the overall performance, we observed drastically increased ROC-AUCs with the pooling strategy for both models but not the PR-AUCs, where RP-AUC scores decreased slightly. Meanwhile, we observed that the average ROC-AUCs and PR-AUCs per sgRNA showed higher values than the global ROC-AUCs and PR-AUCs, largely because the number of positive off-target pairs is relatively small for many sgRNAs. Specifically, the global PR-AUC scores for the CrisprBERT model were 0.486 and 0.379 for the individual and pooled datasets, respectively. 
The global PR-AUC scores for the BiLSTM model were 0.385 and 0.355, again for the individual and pooled datasets, respectively. Accordingly, the global ROC-AUC scores for the CrisprBERT model were 0.960 and 0.972 for the individual and pooled datasets, respectively. Comparatively, the ROC-AUC scores for the BiLSTM model were 0.860 and 0.889 for the individual and pooled datasets, respectively. In addition, both models performed similarly with respect to the average performance on each sgRNA, with CrisprBERT doing slightly better than the BiLSTM model for both the individual and pooled datasets. Accumulated experimental data have demonstrated that the CRISPR-Cas9 system-induced DSB repair outcome is non-random. The off-target effects primarily depend on the properties of the endonuclease and the sgRNA sequences, as well as the functional state (e.g. open chromatin regions) of the target genome. These features provide an opportunity for building in silico models to predict the outcomes of designed sgRNAs, thus facilitating the optimized design of sgRNAs beforehand. In this study, a new sequence-based doublet stack encoding for sgRNA–DNA pairs was proposed to mimic the local energy configuration of Cas9 binding. Previous studies have highlighted the significance of mutations at specific positions within sgRNA–DNA pairs in influencing the specificity of the CRISPR-Cas9 system. In this study, we conducted a similar analysis but focused on the distribution of stack doublets. Our results demonstrated a high degree of consistency in doublet occurrence across two independent datasets (i.e. the CHANGE-Seq and DeepCRISPR datasets). Moreover, these doublets tend to co-localize with regions identified in earlier studies, indicating that doublet encoding may effectively capture biologically relevant information. Compared to traditional single-nucleotide-based encoding, the doublet stack encoding provides more potential vocabulary for downstream deep feature embedding and offers more flexibility for training a deep learning architecture-based model. Meanwhile, although triplet encoding expands the vocabulary from 256 to 4096, offering greater flexibility for model training, it also increases the challenge of training the model with a limited dataset. Therefore, doublet encoding provides a balance between the size of the training dataset and the model's flexibility. In the CrisprBERT model, the BERT embedding approach was used to learn the deep representation of the doublets. Unlike the conventional word-to-vector embedding method used in the BiLSTM model, which generates fixed embeddings for each doublet regardless of its context, the embedding approach used in CrisprBERT is a contextualized doublet embedding model. It takes into account the surrounding doublets or sequences and their order when generating the doublet representations. Given that the same doublet can be observed at different positions of the sgRNA–DNA pair and might present different preferences in positive off-target sgRNA–DNA pairs, this contextual understanding allows CrisprBERT to capture the meaning of a doublet at different positions, which can further be beneficial for predicting the off-target effects of sgRNAs by considering the entire sgRNA–DNA sequence. Although this study mainly focused on CRISPR-Cas9 off-target prediction, the stack encoding and the BERT embedding, as well as the BiLSTM architecture, could be applicable to CRISPR-Cas9 on-target activity prediction.
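To make the doublet stack encoding concrete, a minimal Python sketch is given below. It follows the description in the Methods (a two-base window sliding over the 23-bp pair, yielding 22 doublet stacks drawn from a 256-token vocabulary), but the token ordering and helper names are illustrative assumptions rather than the authors' exact implementation.

```python
from itertools import product

# Vocabulary of all 256 possible (sgRNA doublet, DNA doublet) stacks: 16 x 16.
BASES = "ACGT"
DOUBLETS = ["".join(p) for p in product(BASES, repeat=2)]                     # 16 doublets
VOCAB = {pair: idx for idx, pair in enumerate(product(DOUBLETS, DOUBLETS))}   # 256 tokens

def stack_encode(sgrna: str, dna: str):
    """Encode a 23-bp sgRNA-DNA pair as 22 doublet-stack token indices.

    A window of length two slides from the 5' to the 3' end with a step of one base;
    each position pairs the sgRNA doublet with the aligned DNA doublet."""
    assert len(sgrna) == len(dna) == 23
    sgrna, dna = sgrna.upper(), dna.upper()
    return [VOCAB[(sgrna[i:i + 2], dna[i:i + 2])] for i in range(22)]

tokens = stack_encode("GAGTCCGAGCAGAAGAAGAAGGG",
                      "GAGTCCGAGCAGAAGAAGAATGG")
print(len(tokens), tokens[:5])  # 22 integer tokens feeding the embedding/BERT layer
```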
Sequence-based models remain in demand, although it has been reported that additional epigenomic features or gene expression network features, which reflect the contexts of the editing sites, could further improve prediction. Sequence-only models are particularly important when epigenomic or gene expression network data for the specific cell lines or primary cells of interest (e.g. the primary cells profiled in the CHANGE-Seq data) are not available. Although the original Attention_to_mismatch was trained on sequence information and cell-type-specific gene properties derived from biological networks and gene expression profiles, we were able to train the Attention_to_mismatch model with sequence information only. CRISPR-Net is an innovative approach for quantifying CRISPR off-target activities and is, in principle, a sequence-based approach. These two models achieved comparable performance in the independent tests. The DeepCRISPR model integrates epigenetic and sequence features and applies an autoencoder to obtain a pre-trained feature representation. This may help capture the potential sgRNA–DNA binding contexts from massive numbers of unlabeled pairs, which in turn benefits the prediction of the on-target and off-target effects of unseen sgRNAs. However, it performed worse in our independent tests when predicting sgRNA off-target effects in cell lines not used for training. One potential reason is that the epigenomic features we extracted from the ENCODE HepG2 cell line were the closest available but did not perfectly match the profiles of the primary CD4+/CD8+ T cells from a healthy adult donor in the CHANGE-Seq dataset. Nevertheless, cell-type-specific chromatin contexts, including epigenomic and gene expression data, do provide additional information for distinguishing different off-target activities and would be beneficial for building predictive models. Moreover, the physicochemical properties of nucleotides and structure- or energy-based features might further benefit classifier construction. Incorporating context-based and structure-based features together with sequence features into a deep learning architecture would be a direction worth exploring. Similar to the situation described by Xiang et al., advances in CRISPR sgRNA off-target prediction are mostly data-driven rather than model-driven. This is partially due to the limited training datasets currently available: most advanced deep learning models require very large numbers of parameters (on the order of billions) to show their advantages. One limitation we acknowledge is the modest performance improvement achieved by CrisprBERT. However, as more datasets become available, BERT-like embeddings and models should be better able to capture the essential DNA–RNA mismatch pairs, ultimately enhancing off-target detection. Meanwhile, most published deep learning models are trained on the DeepCRISPR dataset, which contains off-target pairs for only 30 sgRNAs (18 sgRNAs from HEK293 and 12 sgRNAs from K562), or on subsets of the DeepCRISPR dataset. In this study, we expanded the training dataset to 140 sgRNAs by incorporating the 110 sgRNAs from human primary cells in the recent CHANGE-Seq data. However, preliminary results indicated that simply combining the two datasets did not yield a significant improvement in performance.
We noted that the two datasets were generated by different protocols and that the CHANGE-Seq dataset contained many more positive pairs for each sgRNA than the DeepCRISPR dataset. How to integrate datasets from different cell lines, different platforms, and even different species remains an important question in the field. The CrisprBERT model developed in this study was not pre-trained; it was trained from scratch to learn an effective embedding of the input sgRNA–DNA sequence pairs. Apart from the embedding dimension, we adopted the default settings for training a BERT model from scratch; for the details, users are referred to the original documentation provided by the HuggingFace team. Potentially, a BERT-like model could be pre-trained on biological "vocabulary" and "sentences." This, however, will require biological context-specific tasks, in contrast to those used to train current BERT-like models in the NLP field. Moreover, heterogeneous data integration, data augmentation, and effective transfer learning strategies might help pre-train the BERT model and learn deep feature representations.
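For readers who want to see what "training a BERT model from scratch" looks like in practice, the sketch below configures a small, randomly initialized BERT classifier over a doublet vocabulary with HuggingFace Transformers. All hyperparameter values (hidden size, layer and head counts, sequence length) are placeholders of our choosing and not CrisprBERT's published settings.

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# A compact BERT built from scratch (no pre-trained weights) for a
# 256-token doublet vocabulary; all sizes below are illustrative only.
config = BertConfig(
    vocab_size=256 + 4,          # doublets plus special tokens ([CLS], [SEP], ...)
    hidden_size=128,             # embedding dimension (the one tuned setting)
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=256,
    max_position_embeddings=32,  # enough for ~22 doublets plus specials
    num_labels=2,                # off-target vs. not
)
model = BertForSequenceClassification(config)  # random init, trained from scratch

# Dummy forward pass: a batch of 8 encoded sgRNA-DNA pairs of length 24
input_ids = torch.randint(0, config.vocab_size, (8, 24))
labels = torch.randint(0, 2, (8,))
out = model(input_ids=input_ids, labels=labels)
print(out.loss.item(), out.logits.shape)  # cross-entropy loss and (8, 2) logits
```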
Modern comparative studies are flooded with biological trait data of varying scope, scale, and complexity. This deluge is due in part to advances in high-throughput phenotyping and sequencing for generating trait data across levels of biological organization, from single cells and tissues to entire organisms, populations, and species. New large-scale trait databases are also rapidly coming online to curate a great deal of biodiversity. The types of traits that can be measured and the questions that can be assessed with this information seem almost endless. However, as the complexity and breadth of comparative data continue to expand, so do the computational demands for analyzing them. In the wake of these advances, the last few decades have seen a resurgence in the sophistication of probabilistic models for studying trait evolution. A number of software tools exist for simulating and fitting models of continuous trait evolution according to Brownian motion (BM) and related processes, including the popular packages ape, geiger, and phytools. These approaches represent marked progress in simulation, inference, and mathematical modeling of evolution, which have been extended to incorporate additional considerations, features, and processes of evolution, including extensions of BM such as the Ornstein-Uhlenbeck (OU), Early-Burst (EB), and Pagel's Lambda, Delta, and Kappa models. These models have been tailored to address a broad spectrum of biological questions, statistical challenges, and data types. Built upon the principles proposed in Felsenstein, these models have emerged as a cornerstone of modern phylogenetic comparative methods (PCMs) central to comparative biology in the 21st century. While such advances hold great promise for improving evolutionary inference, a persistent question exists: how accurately do current models capture evolutionary processes in nature? Addressing this question requires a deeper understanding of current models and their alignment with empirical trait data. Fortunately, a promising approach for learning about a model involves simulating many replicate datasets under that model. Simulation-based strategies can help us better understand expected model outcomes, their predicted trait distributions, and other considerations for studying real trait data collected from nature. We can leverage large-scale simulations to understand theoretical and practical applications of model inference and the performance of statistical procedures under particular experimental and evolutionary conditions. Moreover, such strategies can be especially helpful when likelihood functions are expensive to compute or unavailable, and for methods that make use of simulations directly for inference, including machine learning techniques, Bayesian approaches such as posterior prediction and approximate Bayesian computation, and maximum likelihood-based methods. Yet, the computational demands of conducting effective and well-organized simulations under complex evolutionary models can quickly become infeasible, or at least burdensome, as the scale of analysis increases, imposing a significant barrier. Moreover, it is often desirable (if not necessary) to incorporate variability in the evolutionary processes and parameters that affect trait distributions across replicates, to accommodate uncertainty or limit conditions to an expected range, rather than fixing them to a constant value for all replicates.
For instance, many models of trait evolution are based on principles of BM, which includes an ancestral state z0 (i.e. the trait value at the root node of a phylogeny) and an evolutionary rate parameter σ². Conducting many replicate simulations with the same fixed values for z0 and σ² may be neither helpful nor realistic. Instead, we may prefer sampling parameter values from a particular distribution to accommodate evolutionary variation across replicates. This can be accomplished, e.g. by sampling values of σ² from an exponential, uniform, or other applicable continuous distribution. Likewise, we can sample values of other relevant evolutionary parameters when conducting simulations under other models (e.g. sampling the α parameter of the OU model). Probabilistic trait models thus provide valuable frameworks for understanding evolution. However, what is sometimes less clear or accessible is the expected trait distribution under some complex models (such as those incorporating non-Brownian processes), how large-scale simulations can be conducted efficiently with phylogenetic transformations, and perhaps how current approaches to model fit and inference behave in realistic conditions. What also remains uncertain is model inference performance for diverse phylogenetic backgrounds and in the presence of trait measurement error. Moreover, recent modeling efforts include complex evolutionary processes known to present statistical challenges, including an "ancestral shift model" (termed "AncShift" here), which prompted discussions about the need to reassess current models and assumptions; and yet, straightforward simulation frameworks under this model are lacking. This model incorporates instantaneous jumps in the mean trait value on ancestral branches of the phylogeny, which violates the continuous trait evolution assumed by models based on BM. Additionally, the local rates model (termed "lrates" here) allows traits to evolve at different rates across different branches of the phylogeny, which can further complicate model fitting and inference. Because many canonical models of trait evolution are based on principles and extensions of BM, they can be reformulated as phylogenetic transformations, holding promise for incorporating more complex models and novel simulations that include multiple process levels, such as a "stacked" BM+AncShift model that integrates features of both processes. Regardless of whether model understanding, model inference, or both are the desired goals, the capability to conduct large-scale simulations under a set of target models is therefore imperative. Here, we introduce the package TraitTrainR, which is developed in R 4.4.0 and includes a comprehensive suite of functions tailored for organized, flexible, and large-scale simulations of trait evolution. To facilitate effective and efficient simulation experiments, TraitTrainR incorporates great flexibility in experimental and evolutionary parameters chosen by users (see Section 2.2), automated computation of phylogenetic transformations, and incorporation of measurement error directly into the simulation process.
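TraitTrainR itself is an R package, but the core idea of drawing a fresh σ² for every replicate and then simulating BM along the branches can be sketched in a few lines of Python. The toy tree, the exponential rate, and the fixed ancestral state below are arbitrary illustrative choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy rooted tree: node -> (parent, branch length); tips are A, B, C.
tree = {"root": (None, 0.0),
        "n1":   ("root", 0.5),
        "A":    ("n1", 1.0),
        "B":    ("n1", 1.0),
        "C":    ("root", 1.5)}
tips = ["A", "B", "C"]

def simulate_bm(tree, z0, sigma2, rng):
    """Brownian motion: each branch adds Normal(0, sigma2 * branch_length)."""
    z = {"root": z0}
    for node, (parent, bl) in tree.items():  # parents listed before children
        if parent is not None:
            z[node] = z[parent] + rng.normal(0.0, np.sqrt(sigma2 * bl))
    return {t: z[t] for t in tips}

# One replicate per draw of sigma2 ~ Exponential(mean = 1), with z0 fixed at 0.
replicates = [simulate_bm(tree, z0=0.0, sigma2=rng.exponential(1.0), rng=rng)
              for _ in range(1000)]
print(replicates[0])
```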
Models included in TraitTrainR represent extensions of the BM model, which can be reformulated as phylogenetic transformations that define the outcome of trait evolution as multivariate normal according to the ancestral states, evolutionary parameters, tree topology, and branch lengths, and TraitTrainR can also “stack” certain evolutionary models on top of a BM-based model. Specifically, TraitTrainR currently includes four different potential “stacking” options: “standard” (BM model or extension only), “lrates”, “AncShifts”, or both “lrates” and “AncShifts” combined. These variations of combined models have not been included within comparable simulation software packages, and thus, allow the user to explore novel modeling scenarios by combining processes. TraitTrainR first transforms the input phylogeny according to the primary model type, followed by any stacked model settings. This strategy allows users to simulate trait evolution under more complex evolutionary scenarios that are not available in current simulation software; e.g. a BM model with multiple ancestral shifts, an OU model with localized rate shifts, an EB model with both stacked processes, or perhaps some other combination. Another advance of TraitTrainR is the extensive customization options for both input and output settings, enabling variability in evolutionary models across replicates as well as flexibility in returned output formats. For example, values of the σ 2 rate can be fixed for all replicates (e.g. σ 2 = 1 ), or sampled from any number of applicable continuous distributions, including a uniform (with some minimum and maximum), exponential (with some rate), gamma distribution (with some shape and scale), or most any other appropriate distribution, or set of user-specified values. For models that include distinct rate shifts, the user provides a matrix of rate values and shift locations (time intervals or lineages), which permits replicates generated with different shift locations and rates. Likewise, the AncShift model can be specified to include multiple shifts in the ancestral state across the tree, which can be varied or fixed across replicates. Variability in among-trait associations can be incorporated by using a custom among-trait covariance matrix for each replicate for multi-trait simulations. A key advantage of TraitTrainR is flexibility in output formats, including: raw trait measurements, phylogenetic independent contrasts computed using the input tree, PICs computed using the input tree scaled to unit depth, phylogenetic transformations using phylogenetic generalized least squares (PGLS) principles , and PGLS-based transformations using the depth-scaled input tree. The scope of TraitTrainR currently includes a total of 44 models, spanning 11 primary models each with four options for model stacking . The TraitTrainR package includes a detailed manual, quick-start guide, and tutorial (see Supplementary Material and the TraitTrainR website), and dependencies include ape , geiger , and phytools employed for various simulation functions and phylogenetic transformations. 
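The phylogenetic-transformation view used by TraitTrainR, in which tip values are multivariate normal with covariance given by the branch length shared between tips, can also be sketched directly, here with normally distributed measurement error added to the diagonal. The toy tree, σ², and error variance are again arbitrary; TraitTrainR performs these transformations internally in R.

```python
import numpy as np

# Same toy tree as before: node -> (parent, branch length); tips A, B, C.
tree = {"root": (None, 0.0), "n1": ("root", 0.5),
        "A": ("n1", 1.0), "B": ("n1", 1.0), "C": ("root", 1.5)}
tips = ["A", "B", "C"]

def depth(node):
    d = 0.0
    while tree[node][0] is not None:
        d += tree[node][1]
        node = tree[node][0]
    return d

def ancestors(node):
    path = [node]
    while tree[node][0] is not None:
        node = tree[node][0]
        path.append(node)
    return path

def mrca_depth(a, b):
    anc_b = set(ancestors(b))
    for n in ancestors(a):        # first shared node on the path to the root
        if n in anc_b:
            return depth(n)

# BM covariance: C[i, j] = path length shared from the root (depth of the MRCA).
C = np.array([[depth(i) if i == j else mrca_depth(i, j) for j in tips] for i in tips])

z0, sigma2, me_var = 0.0, 1.0, 0.1
cov = sigma2 * C + me_var * np.eye(len(tips))   # measurement error on the diagonal
x = np.random.default_rng(1).multivariate_normal(np.full(len(tips), z0), cov)
print(C, x, sep="\n")
```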
The primary inputs required by TraitTrainR are a phylogeny for simulation and a ModelSimulationSettings list object that encompasses all user-defined options, including the desired model(s), their parameter values (or vector of parameter values), ancestral states of replicates (or vector of ancestral states), among-trait covariance matrices (for multi-trait simulations), output formats, options for automatically simulating normally distributed measurement error, and information for model stacking if desired. Thus, TraitTrainR allows users to specify an array of experimental and evolutionary settings to define the scope of simulation sessions. The primary function that users interact with is termed TraitTrain, which requires the input phylogeny and the ModelSimulationSettings list detailed below, alongside any additional options requested by the user. Selecting a model of trait evolution is fundamental to phylogenetic comparative studies and provides insight into the mode and tempo of trait change expected on a phylogeny. Correctly identifying the true model of evolution for a given studied trait is therefore a critical step toward our understanding of evolutionary and comparative biology. We applied TraitTrainR to investigate the problem of model selection using three empirical phylogenetic case studies: (i) a phylogeny of 76 Arthropods, (ii) 34 Penicillium fungi, and (iii) nine eutherian mammals. Varying the tree sizes allowed us to explore applications of TraitTrainR to large (Arthropod), moderate (fungal), and small (primate) trees. We envision many potential applications of TraitTrainR, and through these examples we aimed to demonstrate the use of TraitTrainR for tackling a critical question: how does model selection perform with and without trait measurement error, and how might that manifest in statistical power (or lack thereof) to find the true evolution model in empirical phylogenetic trait studies? Each phylogeny was obtained from its respective publication and subsequently used as input by TraitTrainR to simulate 10⁴ replicates for each of seven primary focal models (model details provided in Table 1). Specifically, we downloaded the Newick-formatted phylogeny from each respective study. Values for all parameters were sampled from probability distributions to incorporate variability in evolutionary processes across replicates and were set to reflect the bounds of model parameter values used by the function fitContinuous in geiger. Distributions for each parameter of the seven models are shown in Table 1. After simulation, maximum likelihood estimation was conducted using fitContinuous to fit the models and calculate the Akaike information criterion (AIC). That is, for each replicate generated by TraitTrainR, a trait dataset was simulated according to one of seven models with varying parameter values (Table 1), and model selection was then conducted using AIC to find the best-fit model. This approach allows us to evaluate whether the true data-generating model would indeed be recovered as having the lowest AIC among the seven candidate models for each replicate. AIC is a gold standard in evolutionary studies for likelihood-based model selection that seeks to balance goodness of fit (likelihood) with model complexity by penalizing the likelihood by the number of parameters. For example, many studies seek to compare the fit of a simple BM process against a more complex OU model that includes attraction toward an optimum, or to address similar questions.
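As a minimal, self-contained illustration of AIC-based model selection (not the geiger fitContinuous workflow used in the study), the sketch below fits two candidate models to a vector of tip values by maximum likelihood, namely BM with the toy phylogenetic covariance from the earlier sketch versus a non-phylogenetic white-noise model, and keeps the model with the lowest AIC. The data vector and covariance matrix are invented for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_mvn_model(x, C):
    """ML estimates of root state z0 and rate sigma2 for x ~ MVN(z0*1, sigma2*C)."""
    n, ones, Cinv = len(x), np.ones(len(x)), np.linalg.inv(C)
    z0 = (ones @ Cinv @ x) / (ones @ Cinv @ ones)
    resid = x - z0
    sigma2 = (resid @ Cinv @ resid) / n
    logL = multivariate_normal.logpdf(x, mean=np.full(n, z0), cov=sigma2 * C)
    return z0, sigma2, logL

def aic(logL, k):
    return 2 * k - 2 * logL

# C_bm: BM covariance from the toy tree; x: observed (here invented) tip values.
C_bm = np.array([[1.5, 0.5, 0.0], [0.5, 1.5, 0.0], [0.0, 0.0, 1.5]])
x = np.array([0.9, 1.3, -0.4])

candidates = {"BM": C_bm, "white_noise": np.eye(3)}   # both have k = 2 parameters
scores = {name: aic(fit_mvn_model(x, C)[2], k=2) for name, C in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "best:", best)
```

In the study itself, seven candidate models were fitted to each replicate and the winning (lowest-AIC) model was tallied against the generating model.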
By constructing confusion matrices, we summarized the accuracy of AIC model selection across replicates generated by TraitTrainR. The framework of TraitTrainR incorporates flexibility for multiple trait simulations, and thus, we sought to apply TraitTrainR to also understand the performance of AIC-based model selection when two traits are analyzed using multivariate model selection . Specifically, we used TraitTrainR to simulate 10 4 replicates for each of three models (BM, OU, and EB) for analyses of two traits based on the larger Arthropod phylogeny. As with our seven model applications described above, we also varied the amount of measurement error (variance), and AIC was used to assess the relative fit of each model using the R package mvMORPH . Our applications of TraitTrainR highlight challenges in selecting the correct model that generated the trait data in all three phylogenetic case studies; these findings are apparent for simulations both with and without measurement error . Generally, we find the highest accuracy for the largest analyzed tree , which is expected given the increased sample size, followed by the fungal and primate case studies, respectively. Yet, measurement error had a major effect on reducing model selection accuracy, and allowing standard error to be estimated during model fitting helped little in many cases . This finding may result from elevated noise-to-signal ratios when introducing measurement error . Estimation of error requires an additional parameter, which may explain why simpler models (i.e. BM) tended to be favored by lower AIC . However, accuracy to recover the OU model was highest for the medium-sized fungal phylogeny , suggesting that tree size is not the only determinant of model selection accuracy, and that model selection accuracy differs depending on the structure of the empirical tree. For this case study, measurement error influenced model selection toward the lambda model , whereas allowing the model to estimate error resulted in a preference for the simpler BM model . For these seven model demonstrations, all analyses and case studies struggled to recover the trend model for these single trait simulations. Our two-trait simulations also found evidence of relative reductions in model selection accuracy as measurement error increased , which reflect similar patterns found in OU-based multivariate studies . Collectively, our applications reveal inherent challenges of evolutionary model selection and impacts of measurement error (and lack of robustness when such error is estimated), underscoring the applicability of TraitTrainR for investigating important statistical and evolutionary questions under realistic expectations of trait data quality. Our findings also highlight the value of simulation studies for investigating the feasibility and power for discerning trait models for any empirical system, which can be examined even prior to data collection. Finally, we also emphasize that our results reflect only a specific set of case studies and explored parameter values ( Table 1 ). Though other studies have identified similar challenges with model selection and interpretation , such findings may be relevant to other datasets, trees, and values of evolutionary parameters. Future studies will clarify the challenges of model selection under various evolutionary and experimental settings. Supplementary data are available at Bioinformatics Advances online. None declared. 
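The confusion-matrix summary used above can be reproduced by tallying (generating model, selected model) pairs across replicates; the labels below are a toy example, not the study's results.

```python
import numpy as np
import pandas as pd

models = ["BM", "OU", "EB", "lambda", "delta", "kappa", "trend"]
# true_model[i] generated replicate i; selected_model[i] had the lowest AIC.
true_model = ["BM", "OU", "OU", "EB", "trend", "BM"]         # toy example
selected_model = ["BM", "OU", "lambda", "EB", "BM", "BM"]

cm = pd.crosstab(pd.Categorical(true_model, categories=models),
                 pd.Categorical(selected_model, categories=models),
                 rownames=["generating"], colnames=["selected"], dropna=False)
cm = cm.reindex(index=models, columns=models, fill_value=0)   # keep all 7 models
accuracy_per_model = np.diag(cm) / cm.sum(axis=1).replace(0, np.nan)
print(cm, accuracy_per_model, sep="\n\n")
```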
This research was supported by startup funds from the University of Arkansas, the Arkansas High Performance Computing Center, and National Science Foundation grant IOS-2307044 to T.A.C. and R.A. M.D. was supported by National Institutes of Health grant R35GM128590, and National Science Foundation grants DBI-2130666 and DEB-2302258. D.D.M. was supported by National Science Foundation grant DEB-2110053. R.A. was also supported by funding from the Arkansas BioScience Institute.
Mitochondria are essential organelles found in almost all eukaryotic cells and are indispensable for cellular bioenergetics, metabolism and homeostasis. One of their main objectives is to produce ATP through oxidative phosphorylation (OXPHOS). OXPHOS occurs within the inner mitochondrial membrane, where electrons are shuttled along an Electron Transport Chain (ETC) mediated by the mobile electron carriers, Coenzyme Q (CoQ) and cytochrome C. Electron transfer through each complex is coupled to proton translocation from the mitochondrial matrix to the intermembrane space, which generates a Proton Motive Force (PMF) across the inner membrane that is used by ATP synthase to phosphorylate ADP to ATP. Several other metabolites are also directly oxidized by the ETC by reducing CoQ. In mammals, these include the mitochondrial glycerol-3-phosphate dehydrogenase (G3PDH), dihydroorotate dehydrogenase (DHODH), proline, and the electron transfer flavoprotein dehydrogenase, the first step of mitochondrial fatty acid oxidation. Apart from the production of cellular energy, mitochondria are integral to various cellular and metabolic processes including pacing organism-specific development rates, apoptosis, calcium signalling, and regulating reactive oxygen species (ROS) production, which itself is an important secondary messenger. Mitochondria also generate metabolic intermediates crucial for biosynthetic pathways and redox regulation. Due to their central roles in cellular metabolism, signalling and bioenergetics, dysregulated mitochondrial metabolism is associated with various human diseases, emphasizing their critical role in maintaining cellular health. Understanding the intricacies of mitochondrial metabolism is therefore essential for advancing knowledge of cell biology, physiology, and medicine. Mitochondrial structure and proteome content vary across tissues. Considering the metabolic roles played by proteins, proteomic changes would reroute metabolism to sustain different biological objectives in various cellular contexts. Therefore, mitochondrial metabolism and function are highly specialized to meet diverse cellular functions and bioenergetic needs. This is strongly evidenced in cardiomyocytes, which are responsible for the control of the rhythmic beating of the heart and rely heavily on ATP to achieve maximal cardiac output. Brown Adipose Tissue (BAT) is a specialized type of adipose tissue with unique mitochondrial properties that permit thermogenic heat generation. One key characteristic of brown adipocyte mitochondria is a high abundance of uncoupling protein 1 (UCP1), which is responsible for uncoupling OXPHOS from ATP production. This uncoupling leads to the dissipation of the PMF across the inner mitochondrial membrane as heat, a process crucial for non-shivering thermogenesis, thus highlighting an alternate biological objective of the mitochondria within BAT. A better understanding of mitochondrial metabolism could, for instance, help reduce the prevalence of cardiac and other chronic metabolic diseases such as diabetes. To give just one example, dysregulated ATP synthase activity following activation of inhibitory factor 1 (IF1) is implicated in a wide range of metabolic diseases including diabetes. Systems-level modelling of mitochondrial metabolism is essential to provide novel and testable model-driven insights into mitochondrial function and disease.
Flux Balance Analysis (FBA) is a computational method that implements linear programming in conjunction with a metabolic reconstruction to predict metabolic fluxes on the systems level . By integrating existing knowledge of mitochondrial biology into such a modelling framework, researchers can specifically analyse mitochondrial metabolism. Omics data, such as transcriptomics or proteomics can be integrated into a metabolic reconstruction using a variety of methods such as E-Flux /E-Flux2 , to produce context-specific metabolic models. Thus, metabolic modelling can facilitate a better understanding of the metabolic differences between tissues or disease conditions. The mitochondrial metabolism of humans and mice is included in several metabolic reconstructions. Recon 1 was the first generic human metabolic model . Recon 1 has been updated to Recon R2, which included additional biological information and the correction of various modelling errors such as Recon 1’s inability to correctly predict realistic ATP yields . Recon R2 was subsequently upgraded to Recon 3D, which includes a total of 13 543 metabolic reactions and extensive human gene-product-reaction (GPR) associations . In parallel to the recon lineage of human metabolic models, a Human Metabolic Reaction series (HMR1 and 2) were developed and used to specifically model a human adipocyte and a hepatocyte, respectively, containing 6160 and 7930 metabolic reactions. Metabolic information from HMR2 was then complemented with information from Recon 3D to produce a unified metabolic model of human metabolism, called Human1, now containing over 13 000 reactions, 10 000 metabolites and 3625 genes. Human1 has since been used as a template to produce specific genome scale metabolic models of the fruit fly, worm, zebrafish, rat, and mouse using orthologue mapping and identification of species-specific metabolism using literature and databases. The mouse specific metabolic model remains the most concise mouse metabolic model to date and contains more metabolic reactions than its predecessor, iMM1865, which was produced using a top-down orthology-based methodology by mapping human genes of Recon 3D to mouse genes . One challenge facing predictive modelling at the genome-scale level is that large models are more error prone than smaller models. This is a consequence of missing knowledge and/or incorrect annotation. For example, the reconstruction and interpretation of the GPR rules adds uncertainty to the annotation process, and the process of constructing genome scale models involves gap filling that connects dead-end metabolites using reactions inferred from other models. This is essential to satisfy steady-state metabolism, however, this step is inherently uncertain as the new reactions might not be supported by the genome . Other sources of error include missing information relating to metabolite mass due to incorrect formulas , incorrect parameterization of reaction directionality constraints and issues relating to the incorrect compartmentalization of reactions and metabolites. These uncertainties accumulate and can account for mispredictions that include the incorrect operation of metabolite shuttles and the reversal of proton pumping , and the generation of infeasible metabolic cycles. As such, using large genome-scale models to specifically predict mitochondrial metabolism can, therefore, result in mispredictions . 
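The FBA calculation introduced at the start of this section, maximizing a linear objective over fluxes subject to steady-state mass balance and bounds, can be demonstrated with a deliberately tiny COBRApy model. Every identifier below (metabolites, reactions, bounds) is invented for the toy example and is unrelated to mitoMammal's actual reaction set.

```python
from cobra import Model, Metabolite, Reaction

model = Model("toy")
a = Metabolite("A_c", compartment="c")
b = Metabolite("B_c", compartment="c")

ex_a = Reaction("EX_A", lower_bound=0, upper_bound=10)      # A enters the system
ex_a.add_metabolites({a: 1.0})

conv = Reaction("A_to_B", lower_bound=0, upper_bound=1000)  # 1 A -> 2 B
conv.add_metabolites({a: -1.0, b: 2.0})

dm_b = Reaction("DM_B", lower_bound=0, upper_bound=1000)    # drain B (objective)
dm_b.add_metabolites({b: -1.0})

model.add_reactions([ex_a, conv, dm_b])
model.objective = "DM_B"          # maximize B production: a small linear program
solution = model.optimize()
print(solution.objective_value)   # 20.0, limited by the 10-unit uptake of A
print(solution.fluxes)
```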
Several concise models of human mitochondrial metabolism exist , with MitoCore representing the latest and most comprehensive model of human cardiomyocyte mitochondrial metabolism . MitoCore includes the ETC within its reconstruction and can accurately model the PMF associated with ATP production. This model has successfully been applied to model fumarase deficiency , impaired citrate import and predicted accurate respiratory quotients on glucose and palmitate substrates which demonstrates MitoCore’s potential to model human cardiomyocyte mitochondrial metabolism. Mice are often employed as a model organism in mitochondrial research due to their highly similar structure, function and genetic homology with human mitochondria. This similarity makes mice a valuable model system for advancing our understanding of mitochondrial biology, mitochondrial dysfunction and disease, and for exploring potential interventions for mitochondrial-related disorders in humans. Because of these similarities, mouse mitochondrial metabolism is routinely compared to human mitochondrial metabolism in diverse biological contexts . Despite the prevalence of mice in vivo , in vitro , and in silico models, there are no concise in silico models of mouse mitochondrial metabolism. To address this limitation, and to valorize the opportunity presented by mitochondrial similarity, in this work, we have created mitoMammal, a mitochondrial metabolic network which can be used for constraint-based metabolic modelling of human and mouse mitochondria. Importantly, mitoMammal can be contextualized with omics data emerging from humans or mice, allowing for the capacity to model the metabolism of both species. To demonstrate this novelty, we have integrated mitochondrial transcriptomic data from Brown Adipocytes (BAs), and then mitochondrial proteomic data from mice BAT and cardiac tissues. We found that integrating proteomic and transcriptomic data from humans and mice into mitoMammal predicted proline dehydrogenase and G3PDH reduction of CoQ, the export of hexadecanoic acid from BAT tissue, and glycine import to sustain cardiomyocyte metabolism. To build the mitoMammal mitochondrial metabolic model, we identified the mouse orthologues of MitoCore’s GPRs using BioMart and the ENSEMBL database , as well as orthology information stored within mitoXplorer2 . This resulted in 389 mouse orthologues out of the original complement of 391 MitoCore genes ( Supplementary Table S1 ). The set of mitoCore GPR rules was compiled to their corresponding logical expressions for mitoMammal based on orthology relations between human and mouse genes. A summary of mitoMammal construction is represented in Fig. 1A (see also Supplementary Table S1 ). Gene modifications in the mouse version of the metabolic model reconstruction are listed in Table 1 and discussed below. The discovery that dihydroorotate can reduce CoQ in mouse mitochondria suggests that this is a conserved feature of all mammalian mitochondria . MitoCore was missing the reduction of CoQ by DHODH within the de novo pyrimidine synthesis pathway, while it contained glutamine metabolism, which is the starting substrate for this pathway. Initially, glutamine is converted to carbamoyl phosphate facilitated by carbamoyl phosphate synthase. Carbamoyl phosphate is then metabolized to carbamoyl aspartate through the activity of aspartate carbamoyltransferase, which is subsequently metabolized into dihydroorotate by the enzyme dihydroorotase . 
In mammals, these three enzymes are part of a single multifunctional protein abbreviated as CAD (Carbamoyl Aspartate Dihydroorotase). Dihydroorotate then reduces CoQ to produce orotate, facilitated by the enzyme dihydroorotate acid dehydrogenase (DHODH) that sits at the surface of the outer mitochondrial membrane. As such, orotate is never imported into the mitochondria and remains cytoplasmic . We included these metabolic reactions and new metabolites in mitoMammal. Orotate removal from the model was implemented by the addition of a demand reaction to maintain flux consistency. In total, five new reactions were added that incorporate four new metabolites and two new genes. Because the ETC is at the heart of mitoMammal, we closely inspected the GPR rules of the 5 ETC complexes and found a number of paralogous genes that were bound by an AND relationship. Furthermore, by integrating gene expression data, we observed that fluxes of Complex I and IV of the respiratory chain in the mitoMammal model, and hence also in MitoCore, were strongly reduced, or even shut down completely. We analysed the gene expression patterns of the paralogs and then corrected paralogous gene pairs to an OR relationship. These specifically included (mentioned as human paralog and mouse paralog pairs): Complex I: Ndufb11b/Ndufb11b [ENSMUSG00000031059/ENSMUSG00000061633 (mouse only)]. NDUFA4/NDUFA4L2 and Ndufa4/Ndufa4l2 . Complex IV: COX4I1/COX4I2 and Cox4i1/Cox4i2 . COX6A1/COX6A2 and Cox6a1/Cox6a2 . COX6B1/COX6B2 and Cox6b1/Cox6b2 . COX7A1/COX7A2 and Cox7a1/Cox7a2/Cox7a2l . COX8A/COX8C and CoX8a/Cox8c . We furthermore added UCP1 [ENSG00000109424, Ucp1 in mouse ] to the model, as this gene was not included in the original MitoCore model due to the model’s specificity for heart metabolism. The original MitoCore model was encoded using Systems Biology Markup Language (SBML) level 2 annotation. We updated the mitoMammal to the most recent, relevant specification of SBML level 3 (version 1) and validated the model for correctness using the online SBML validation tool [ https://synonym.caltech.edu/validator_servlet/index.jsp ; ]. Because the MitoMammal metabolic model can integrate -omics data from two, instead of one species, we modified the E-Flux algorithm by allowing the user to select the organism the data originates from. The adapted algorithm then uses -omics data to constrain the reactions specific to the chosen species. All other features of the original E-Flux method were maintained as in the original description of the algorithm . The adapted E-Flux algorithm was used to constrain mitoMammal with mouse proteomic data and transcriptomic data from humans. The adapted E-Flux algorithm first selects the data for the genes or proteins that are in the model and scales everything between 0 and 1 by dividing by the 90th percentile and values greater than 1 are capped at 1. These scaled values are then used to calculate the upper-bound of each reaction based on the GPR. For reactions that require multiple genes that all have to be expressed and are thus linked by an AND relationship, we assume as the upper bound the value of the gene with lowest expression. In case of an OR relationship between genes, each individual gene can contribute to the reaction and the sum of their values is used as the upper bound. This algorithm corresponds to the original E-Flux algorithm and has been adapted in python to work with COBRApy . 
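The adapted E-Flux rule described above (scale expression by the 90th percentile, cap at 1, take the minimum over AND-linked genes and the sum over OR-linked genes, and use the result to set the reaction's upper bound) can be sketched as follows. The nested-tuple GPR representation and the example gene names are simplifications we introduce for readability; the actual implementation operates on the GPR strings stored in the COBRApy model.

```python
import numpy as np

def scale_expression(expr):
    """Scale expression/protein values to [0, 1] by the 90th percentile, capped at 1."""
    p90 = np.percentile(list(expr.values()), 90)
    return {g: min(v / p90, 1.0) for g, v in expr.items()}

def gpr_value(gpr, scaled, default=1.0):
    """GPR as nested tuples: ('and', ...) -> min, ('or', ...) -> sum, str -> gene value."""
    if isinstance(gpr, str):
        return scaled.get(gpr, default)
    op, *args = gpr
    vals = [gpr_value(a, scaled, default) for a in args]
    return min(vals) if op == "and" else sum(vals)

def apply_eflux(model, gprs, expr, vmax=1000.0):
    """Set each reaction's upper bound (and mirror the lower bound if reversible)."""
    scaled = scale_expression(expr)
    for rxn_id, gpr in gprs.items():
        ub = vmax * gpr_value(gpr, scaled)
        rxn = model.reactions.get_by_id(rxn_id)
        rxn.upper_bound = ub
        if rxn.lower_bound < 0:          # reversible reaction: symmetric constraint
            rxn.lower_bound = -ub

# Example: a Complex IV-like rule with OR-linked paralogs and an AND-linked core subunit
gprs = {"CIV": ("and", "MT-CO1", ("or", "COX4I1", "COX4I2"))}
expr = {"MT-CO1": 80.0, "COX4I1": 30.0, "COX4I2": 0.0}
print(gpr_value(gprs["CIV"], scale_expression(expr)))
```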
The adapted E-Flux algorithm was used to constrain mitoMammal with mouse proteomic data and transcriptomic data from human and mouse. For mouse simulations, we integrated proteomic data from a recent study that extracted the mito-proteomes of isolated mitochondria from a range of mouse tissues . Normalized protein counts of cardiac and brown adipose tissue were scaled between 0 and 1. For cardiac tissue, we optimized ATP hydrolysis and for BAT simulations, we optimized the UCP reaction considering its essential role in producing non-shivering heat in this tissue. We used normalized RNA-sequencing data from Rao to model an in vitro differentiated hiPSC-derived brown adipocyte (BA). Normalized read counts were scaled between 0 and 1. The UCP reaction was chosen to be optimized considering the essential role that UCP1 plays in uncoupling ETC from ATP synthesis in BAs which is a prerequisite for producing non-shivering heat. Parsimonious FBA was performed using Python (version 3.8.5) in conjunction with the COBRApy toolbox , using the default ‘GLPK’ solver. The mitoMammal metabolic model, along with Jupyter notebooks and data used in this work are available at: https://gitlab.com/habermann_lab/mitomammal . This work aimed to produce a generic mammalian metabolic model of mitochondrial metabolism that incorporates new knowledge on CoQ fuelling. We first translated the genes from the human MitoCore model into mouse genes using orthology inference to create the basic mitoMammal model. Key metabolic pathways that include the TCA cycle, the Malate Aspartate Shuttle (MAS); OXPHOS and ATP synthesis; the Glycine Cleavage System, the proline cycle and fatty acid oxidation were also retained from the original model. MitoMammal now includes de novo pyrimidine synthesis from glutamate leading to the reduction of the CoQ complex by the enzyme DHODH. MitoMammal contains 780 genes encoding 560 metabolic reactions that involve 445 metabolites. The complete lists of reactions, metabolites, and associated fluxes from each simulation are available in Supplementary Table S1a–c , respectively. The core metabolism and bioenergetics with associated import/export reactions of the model are depicted in Fig. 2 . MitoMammal is based on MitoCore, a human specific cardiomyocyte mitochondrial model. MitoMammal was first tested on its ability to correctly produce accurate ATP levels from glucose oxidation. All nutrient input reactions except glucose and oxygen were constrained to zero to reflect aerobic glycolytic conditions. Maximization of ATP hydrolysis was used as the objective function for these simulations and the model was then optimized using parsimonious FBA for all simulations reported in this work. As expected, MitoMammal correctly predicted the production of 31 molecules of ATP from 1 molecule of glucose . To demonstrate mitoMammal’s ability of modelling mouse cardiac mitochondrial metabolism, we integrated proteomic data harvested from mitochondria isolated from mouse cardiac tissue and optimized ATP hydrolysis. This resulted in 330 constrained reactions out of the complement of 560 reactions. In satisfying the objective subject to these constraints, mitoMammal predicted the import of ɑKG, H 2 O, oxygen, oxaloacetate, glutamine, 3-mearcaptoacetate, and glucose. The model also predicted the export of alanine, NO, citrulline, lactate, fumarate, citrate, cysteine, NH 4 , CO 2 , isocitrate, hydrogen, and succinate . 
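Once the bounds have been set from omics data, running parsimonious FBA against a chosen objective takes only a few lines of COBRApy. The SBML file name and the reaction identifiers used below are placeholders; users should substitute the actual IDs from the mitoMammal model.

```python
import cobra
from cobra.flux_analysis import pfba

model = cobra.io.read_sbml_model("mitoMammal.xml")   # file path is a placeholder
model.solver = "glpk"                                # the default solver used here

# Choose the biological objective for the tissue being modelled, e.g. ATP hydrolysis
# for cardiomyocytes or the UCP reaction for brown adipocytes. "UCP" is a
# hypothetical ID; inspect the model for the real one, e.g.:
#   [r.id for r in model.reactions if "UCP" in r.id.upper()]
model.objective = "UCP"

solution = pfba(model)           # parsimonious FBA: minimal total flux at the optimum
print(solution.fluxes["UCP"])
top = solution.fluxes.abs().sort_values(ascending=False).head(10)
print(top)                       # reactions carrying the largest absolute fluxes
```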
Flux predictions revealed that the flux of pyruvate emerging from glycolysis was partitioned between lactate production in the cytoplasm, and pyruvate import into the mitochondria. This is in agreement with the literature that reports a mitochondrial involvement of lactate production . It is now understood that the shuttle of lactate from and between cardiomyocytes to other cells facilitates lactate supply to cells in need of lactate, and acquired lactate plays a plethora of important roles such as cell signalling , the regulation of cell proliferation and development of organs and in the coordination of vascular development and progenitor cell behaviour in the developing mouse neocortex . TCA cycle fluxes were sustained by the import of citrate, ɑKG, fumarate, and malate. Glycine was imported into the mitochondria and converted to glutamate. We next wanted to show the predictive power and usability of mitoMammal to predict mouse mitochondrial metabolism in a BA cell by integrating mitochondrial proteomic data extracted from brown adipose tissue (BAT) . Following data integration, we then optimized flux towards the UCP reaction. From the model’s complement of 560 reactions, our modified E-Flux algorithm constrained 329 reactions. In order of decreasing flux magnitude, mitoMammal predicted the import of hydrogen, citrate, ɑKG, fumarate, cysteine, sulfate, glutamate, acetoacetate, butanoic acid, glycine, oxaloacetate, aspartate, alanine, and O 2 . Secreted metabolites consisted of malate, propionate, lactate, glutamine, hexadecenoic acid, thiosulfate, NH4, isocitrate, succinate, and CO 2 . In this simulation, citrate, fumarate, ɑKG and to a lesser extent, malate were predicted to be imported into the mitochondria to establish steady-state TCA cycle fluxes. Imported ɑKG was metabolized into succinyl-CoA within the TCA, and into 3-Mercaptopyruvic acid (mercppyr) exterior of the TCA cycle which then fed into pyruvate metabolism. Pyruvate metabolism was also established by the import of alanine and with conversion of malate into pyruvate. The majority of pyruvate was converted into acetyl-CoA and with the addition of citrate, channelled flux towards fatty acid synthesis and the export of hexadecanoic acid. Citrate was imported into mitochondria and assimilated into the TCA cycle, and upon conversion to Isocitrate, was then partially exported from mitochondria. Complex I (CI) was predicted to be reduced by NADH emerging from the TCA cycle, which injected electrons into the ETC and reduced the CoQ complex. MitoMammal also predicted the reduction of CoQ with proline via the proline dehydrogenase reaction (PROD2mB, encoded by the PRODH gene). CII was predicted to operate in reverse and reduce fumarate leading to succinate production and its subsequent export. From CoQ, electrons were passed along the ETC towards CIII and CIV which produced PMF, however, ATP synthase (CV) in this situation was predicted to be inactive, and the UCP reaction was active and carried the largest flux in this simulation. Mitochondrial uncoupling via UCP1 is a process that expends energy by oxidizing nutrients to produce heat, instead of ATP. To better understand the role played by the UCP reaction in BAT tissue, we next examined the reactions that would consume the newly uncoupled protons after their re-entry into the mitochondria to identify novel functionalities of the UCP reaction in BAT. Twenty proton-consuming reactions were identified and are shown in Supplementary Table S2 . 
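The search for reactions consuming the protons that re-enter the matrix through UCP1 can be reproduced from a flux solution by checking, for every reaction touching the matrix proton, whether its stoichiometric coefficient times its flux gives net consumption. The metabolite identifier "h_m" and the flux threshold below are assumptions; the real identifier should be looked up in the model.

```python
def proton_consuming_reactions(model, solution, h_id="h_m", tol=0.01):
    """Return (reaction id, net H+ consumption rate) for reactions that remove
    matrix protons in the given flux solution (coefficient * flux < 0)."""
    h = model.metabolites.get_by_id(h_id)   # "h_m" is a placeholder metabolite ID
    consumers = []
    for rxn in h.reactions:                 # only reactions involving the proton
        rate = rxn.metabolites[h] * solution.fluxes[rxn.id]  # net H+ production
        if rate < -tol:                     # negative value => net consumption
            consumers.append((rxn.id, -rate))
    return sorted(consumers, key=lambda t: t[1], reverse=True)

# Example usage (with a model and pFBA solution already in memory):
# for rxn_id, rate in proton_consuming_reactions(model, solution):
#     print(f"{rxn_id}\t{rate:.3f}")
```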
The largest subset of these reactions performed metabolism of fatty acid and consisted of MECR14C and MECR16C which are responsible for fatty acid elongation of 3-Hydroxy Tetradecenoyl-7 Coenzyme A and 3-Hydroxyhexadecanoyl Coenzyme A respectively. Also belonging to this group were the reactions MTPC14, MTPC16, r0722, r0726, r0730, r0733, and r0791 and all performed fatty acid oxidation roles and released NADH within the mitochondria. The remaining reactions of this subset all consumed mitochondrial NADP. 4 more reactions performed metabolite transport functions with a citrate-carrying reaction carrying the greatest flux of this analysis. This reaction exports isocitrate and protons in exchange for citrate import. The model predicted flux associated with the characterized mitochondrial carrier responsible for the export of phosphate and photons (Plt2mB) out of the mitochondria. The citrate-malate antiporter (CITtamB) was also predicted to be active in exporting malate and protons in exchange for the import of citrate. Uncoupled protons were predicted to leak out of the mitochondria, facilitated by the Hmt reaction. A further subset of 3 reactions were implicated with amino acid metabolism. Within this subset, the reaction to carry the largest flux was 3-Mercaptopyruvate: Cyanide Sulfurtransferase in mouse BAT, which converts mercaptopyruvate and sulfate into pyruvate and thiosulfate. Also within this subset is the P5CRxm that involves the production of proline, and finally the methylmalonyl Coenzyme A decarboxylase reaction (MMCDm) which converted methylmalonyl-CoA into propionyl-CoA. The remaining reaction predicted to metabolize uncoupled protons was the CI reaction of the OXPHOS subsystem. Next, we wanted to demonstrate mitoMammal’s ability of modelling human mitochondrial metabolism. To this end, we integrated transcriptomic data from a brown adipocyte (BA) that was differentiated from an IPSC and optimized the UCP reaction. This resulted in constraining 488 reactions out of the complement of 560 reactions. Analysis of the resulting fluxes revealed that 15 metabolites were predicted to be imported into mitoMammal to support steady-state mitochondrial BA metabolism. Similar to the mouse model, H+, glutamate, cysteine, aKG, aspartate, O 2 , oxaloacetate, fumarate, and glycine were imported, however, with different magnitudes. The largest flux was again associated with H+ import. In addition, glutamine, glucose, formate, citrate, Fe 2 , and argininosuccinate were imported. Similar excreted metabolites included NH 4 , CO 2 , isocitrate, malate, lactate, propionate, and hexadecanoic acid. Opposed to the mouse model, alanine was exported, and not imported. In addition, the human model secreted proline, H 2 O, urea, NAD, folate, and phosphate . Similar to the mouse simulation , the import of citrate, fumarate, ɑKG and malate were predicted to contribute to sustaining steady-state TCA cycle fluxes. In human BAs, pyruvate emerging from glycolysis was predicted to be imported into the mitochondria and converted to alanine which was then, opposite to mouse BAs, exported out of the mitochondria. Citrate played a dual role and was also metabolized into acetyl-CoA which subsequently fed into endogenous fatty acid synthesis via acetyl-CoA, which agrees with the literature that describes mammalian BAT as possessing high endogenous fatty acid synthesis activity . In particular, the model predicted the synthesis and export of hexadecanoic acid and 5-Aminolevulinate (5aopm) from the mitochondria. 
Fluxes through ETC were similar to the mouse BAT simulation, except now, CV which in this simulation was predicted to operate in reverse and consumed ATP. Similar to before, CII was predicted to operate in reverse. The model furthermore predicted the reduction of CoQ with proline via the proline dehydrogenase reaction (PROD2mB, encoded by the PRODH gene). PRODH forms part of the proline cycle that regenerates proline via pyrroline-5-carboxylate which, in contrast to the mouse model, leads to the subsequent export of proline. In addition, we predicted the reduction of CoQ by G3PDH, which was not predicted in the mouse simulation. The reaction carrying the greatest flux in this simulation was again the UCP reaction which uncoupled the ETC from ATP production. We then analysed all proton consumption reactions predicted to be active as a consequence of optimal UCP1 activity. All reactions that carry a flux greater than 0.01 are also shown in Supplementary Table S2 . As with the previous simulation of mouse BAT, the reaction to carry the greatest flux was attributed to the citrate-carrying reaction . The largest subset of reactions was also implicated with the same fatty acid metabolic reactions as reported in the previous simulation, however, carrying much reduced predicted fluxes. The next subset of reactions again, all involved transport functions with the first that exported phosphate and protons (Plt2mB) out of the mitochondria, and the citrate-malate antiporter (CITtamB). Uncoupled protons were also predicted to leak out of the mitochondria, as facilitated by the proton-transport reaction. The final reaction predicted to be active in this simulation, which also was predicted to be active in the previous simulation, was the Pyrroline-5-Carboxylate Reductase reaction (P5CRxm). Similarly to the mouse BAT simulation, complex I of the ETC was predicted to be active, however with a lower flux magnitude. The model also predicted a subset of reactions implicated with amino acid metabolism to be active in this simulation that was not predicted to be active in the mouse BAT simulation. Instead of predicting flux through the r0595 reaction that is responsible for methionine and cysteine metabolism, the model predicted the consumption of uncoupled protons by 5-Aminolevulinate Synthase (ALASm) which metabolizes glycine into 5aop_m. The final reaction of this subsystem involved the Glycine-Cleavage Complex which converts glycine and lipoyl protein (lpro_m) into amino-methyl dihydrolipoyl protein. The remaining reactions that were predicted to metabolize uncoupled protons following UCP reaction optimization and specific to the human simulations were Malate dehydrogenase (MDMm), a reaction belonging to folate metabolism (MTHFCm) and finally a reaction involved in the urea cycle (G5SDym). We present mitoMammal, the first mitochondrial metabolic network reconstruction that serves for modelling mitochondrial metabolism for two species, mouse and human. MitoMammal contains two sets of GPR rules, one set of mouse genes, and another set of human genes, meaning the model can be constrained by integrating -omics data from these two organisms. 
Given the high similarity between mouse and human mitochondrial metabolism, we had the choice between two possible ways to model murine metabolism with -omics data-based constraints: either we would transform mouse gene identifiers to human and use the human MitoCore model for subsequent constraint-based modelling; or we could generate a mitochondrial metabolic model based on MitoCore that could be used for both species. We chose the latter, as it first makes the workflow for modelling mito-metabolism for the user straightforward; and second, it also allows the researcher to consider metabolic differences between the two organisms as each organism comes with its own set of GPR rules. We further added the DHODH reduction of CoQ following pyrimidine synthesis as this pathway was absent in MitoCore. As such, mitoMammal is the most comprehensive metabolic model of mammalian mitochondria to date. To demonstrate the model’s ability to model mouse and human mitochondrial metabolism we first verified mitoMammal’s ability to capture realistic rates of ATP production. We then constrained mitoMammal by integrating proteomic data extracted from mouse cardiac tissue and optimized ATP production. Predicted fluxes included lactate production from pyruvate and the assimilation of pyruvate into the TCA cycle, the import of glycine into the mitochondria and the involvement of CV within OXPHOS to produce optimal ATP to support cardiomyocyte mitochondrial function. The model also predicted the reduction of CoQ by CI, yet fatty acid oxidation to support ATP synthesis was not predicted. These predictions are in agreement with data reported on immature cardiomyocytes, which express low levels of fatty acids and high levels of lactate in the blood that activates anaerobic glycolysis as the major source of ATP production . We hypothesize that the reversal of CII in heart is an artefact due to missing values in the proteomics data we used. We found that several proteins that are part of the ETC were not detected in the dataset from . We confirmed this further by using mouse bulk transcriptome data from the Tabula muris project from heart tissue of 18 months old mice, where flux through the respiratory chain was as expected and high, including a forward flux through CII ( Supplementary Table S3 ). Given this experience, we hypothesize that the original MitoCore model was not used in combination with gene expression data, which left incorrect GPR rules undetected. The resulting predictions of the model also suggest that using constraints based on gene expression data is an excellent method to validate the correctness of GPR rules in genome-scale metabolic models, as it will reveal problems of the constructed model with respect to gene paralogs whose expression is restricted to specific tissues (the gene Ndufb11b, as an example, is only expressed in testis and, weakly, in the intestine). In this simulation, glycine was predicted to be imported into mitochondria and converted to glutamate. Glycine has been shown to protect against doxirubicine induced heart toxicity in mice which validates this prediction, and highlights the important role of glycine metabolism in cardiomyocytes in sustaining steady-state metabolism. 
Glycine has been shown to increase the ATP content of mitochondria isolated from cardiac cells, which serves as another validation, however, in this simulation we chose to optimize ATP production, so understanding if glycine plays an essential role in mitochondrial metabolism to support optimal ATP yields requires further research, and suggests another application of how mitoMammal can further our knowledge in this respect. Lactate is reported to fulfil important purposes that include providing an energy source for mitochondrial respiration, and being a major gluconeogenic precursor. As such, it is heavily involved in cellular signalling . Several basic and clinical studies have revealed the role that lactate plays in heart failure with the consensus that high blood lactate levels indicate poor prognosis for heart failure patients . Current research on this topic aims to target lactate production, regulate lactate transport, and modulate circulating lactate levels in an attempt to find novel strategies for the treatment of cardiovascular diseases. The in-depth knowledge gained by metabolic modelling with mitoMammal could also facilitate advances in this field. To further demonstrate the usability of mitoMammal with alternative objective functions, and to highlight the ability of mitoMammal to model mouse and human metabolism, we integrated proteomic data extracted from the isolated mitochondria of mouse BAT and integrated transcriptomic data of human BAs . For both simulations, we optimized the UCP reaction considering its central role in uncoupling electrons from the ETC and sustaining BAT metabolism . This leads to the dissipation of the PMF across the inner mitochondrial membrane which is essential for BAT function. Despite modelling two species with different -omics datasets, modelling BA metabolism with either human transcriptome or mouse proteome data resulted in several similar flux predictions. One such prediction relates to the metabolism of hexadecanoic acid, also known as palmitic acid, which has been shown to increase BA differentiation, decrease inflammation and improve whole-body glucose tolerance in mice and humans . These data validate the predictions of hexadecanoic acid metabolism in both simulations. Elevated levels of proline have been measured in mammalian BA tissue and elevated levels of proline dehydrogenase have also been associated with BA differentiation, and thermogenesis and are correlated with UCP1 activity . In both these simulations, mitoMammal indeed predicted proline reduction of CoQ via proline dehydrogenase, which is in line with these published data. Furthermore, it has been proposed that CoQ reduction by proline dehydrogenase activates ROS production which then activates signalling pathways that facilitate hormone-independent lipid catabolism and support adipose tissue thermogenesis . Both simulations of BA metabolism predicted the reverse activity of CII. It has been experimentally demonstrated that CII can work in reverse in bacterial mitochondria and mammalian mitochondria . There is an increasing evidence that reversal of Complex II is relevant for brown adipocytes in mice. CII reversal has been experimentally verified in conditions where oxaloacetate correlates to a reverse CII activity in mice BAT . The authors demonstrate that high UCP levels resulted with a reduced mitochondrial membrane potential, which then consequently lowered the NADH/NAD+ ratio, increased oxaloacetate accumulation and reversed CII. 
The authors proposed a physiology relevant role of CII reversal in regulating ROS production. Metabolic models predict steady-state metabolism, and thus without modification, cannot account for metabolite accumulation, but as observed in Fig. 4A we do predict an import of OAA within the mitochondria which again serves as model validation. Similarly, OAA is also predicted to be imported into the mitochondria following the integration of human BAT transcriptomic data . We also observed differences in metabolic fluxes when comparing the predictions following human transcriptomic data and mouse proteomic data integration. Following human transcriptomic data integration, the model predicted the import of pyruvate into the mitochondria which was not predicted following mouse proteomic data integration. Instead, pyruvate was predicted to be converted to mercaptopyruvate (mercppyr). The simulation involving integrating human transcriptomic data also predicted the export of 5aopm which was not predicted when integrating mouse proteomic data. 5aopm is a precursor metabolite of the heme biosynthesis pathway and is required for adipocyte differentiation . Disrupted heme biosynthesis in human and mouse adipocytes has been shown to result in decreased adipogenesis, impaired glucose uptake, and reduced mitochondrial respiration . These experimental discoveries of 5aopm therefore serve to further validate flux predictions following transcriptomic data integration and account for the misprediction associated with integrating proteomic data. Alanine was also predicted to be exported into the mitochondria for the human transcriptome simulation, yet the mouse proteomic simulation predicted the import of alanine. Alanine import and export into mammalian BAT tissue has been previously reported; however, the more comprehensive analysis reported by describes that alanine is an abundant circulating amino acid and functions as a nitrogen carrier where it is transported to the liver for nitrogen release. In their paper, the authors observed a net zero exchange flux and account for this to an equivalent uptake and release flux of alanine. As such, the model’s prediction of alanine import could be correct concerning mice metabolism . Regarding human BAT metabolism, it is understood that accumulation of glutamate may increase the transamination of pyruvate to alanine , which mitoMammal predicts, but much less is known of the fate of alanine and further research is necessary to validate the specific prediction of the directionality of alanine metabolism in human BATs. One remaining difference between the predictions is the activity of ATP synthase (CV) which was reported to operate in reverse following integration of RNA sequencing data and predicted to be inactive following integration of proteomic data from mice. MitoMammal represents the activity of CV as a Boolean representation of 14 genes that share an ‘AND’ relationship and so all 14 genes, or proteins need to be expressed to correctly produce all the individual subunits for a fully functional enzyme. For these GPRs, all 14 RNA sequencing transcripts were quantified, and because of the known reversibility of CV, our adapted E-Flux algorithm constrained the upper and lower bounds that correlated to the lowest transcript level of these 14 genes. As a consequence, mitoMammal in this simulation predicted the reverse activity of CV. 
For the proteomic simulation, however, two of the 14 proteins were not identified and an additional 4 proteins were recorded as zero counts. As such, CV in this simulation was effectively constrained to zero and took no part in sustaining metabolic flows. ATP synthase (CV) is well known to operate in reverse under a wide range of physiological conditions to generate a mitochondrial membrane potential through ATP hydrolysis, and the capacity for ATP hydrolysis has been observed in mitochondria isolated from BAT from mice and from humans. Reversal of ATP synthase in mice has recently been attributed to the regulation of Inhibitory Factor 1 (IF1) (encoded by Atp5if1/ATP5IF1), which, when activated, inhibits the reverse activity of ATP synthase. The work by demonstrates that downregulation of IF1 is critical to support ATP hydrolysis, by allowing ATP synthase to operate in reverse, which then permits non-shivering thermogenesis in mouse BAT. As such, these findings serve to validate the predictions made following the integration of human transcriptomic data, and highlight limitations of proteomic data in terms of missing data, as discussed in and . We have quantified this by noting that integrating transcriptomic data resulted in constraining more reactions than proteomic data [489 reactions (BA, human) versus 329 (BAT, mouse) or 330 (cardiac, mouse)]. Regarding the other reactions of the ETC, human BAT transcriptomic data integration predicted the reduction of CoQ by G3PDH and proline, yet for the mouse simulation with proteomic data, proline reduced CoQ and G3PDH was predicted to be inactive. G3PDH reduction of CoQ has been experimentally determined for BAT in both humans and mice. G3PDH is involved in the glycerol 3-phosphate shuttle, which, similarly to the MAS, shuttles reducing power in the form of NADH from the cytoplasm into the mitochondria. G3PDH then oxidizes the imported NADH to NAD+ and releases electrons that reduce CoQ. Both mouse and human BAT express high levels of G3PDH, and knockout of G3PDH in both species is associated with metabolic disease, including type 2 diabetes mellitus and obesity. Given this, we believe the prediction of an inactive G3PDH flux in mice associated with proteomic data integration to be a misprediction, as the G3PDH protein abundance was identified in the dataset, so the upper bound was constrained to a corresponding positive value and the lower bound was constrained to zero. We therefore attribute this error to the E-Flux methodology, which only constrains the upper bound and neglects to constrain the lower bound. This, combined with linear programming to maximize an objective reaction, meant that in the context of the mouse, the lower bound of zero associated with the G3PDH reaction was used to optimize flux towards the UCP reaction, thereby ignoring the reaction's involvement in satisfying the objective. In the context of the human simulation that used transcriptomic data, the model predicted G3PDH activity to optimize the UCP reaction. Some limitations of this type of constraint-based modelling have to be acknowledged. First, it is important to highlight the challenge, inherent to FBA, of defining the correct biological objective reaction to optimize. While biomass as an approximation for bacterial growth is most likely justifiable in many cases, it is difficult to assume a correct, and above all unique, objective function for eukaryotic cells.
There are promising developments in the field to circumvent this problem, including context-specific multiobjective optimization, or avoidance of an explicit objective altogether by using flux-sampling methods, though each approach comes with its own set of challenges. Another limitation of this study is that mitochondria within a cell are numerous, and here we are assuming that all mitochondria within one tissue conduct identical metabolism and operate independently from one another, which may not be realistic. Mitochondrial activity is also influenced by crosstalk with organelles such as the Golgi apparatus and endoplasmic reticulum. We chose here to specifically ignore this crosstalk in choosing the MitoCore model as a small and concise model that is capable of modelling mitochondrial metabolism. The contribution of other organelles is thus limited to observed imported and exported metabolites which, if experimentally known, can be used to constrain the model. One future research opportunity could be to establish small and precise models of other organelles, such as the ER or peroxisomes, which could then be connected via import/export reactions. Finally, we chose the E-Flux algorithm to integrate expression data with mitoMammal. As reviewed in , numerous methods of -omics data integration are available in addition to E-Flux. For example, Gene Inactivity Moderated by Metabolism and Expression (GIMME) compares omic expression levels to a threshold to determine sets of active reactions in a metabolic model, while the Integrative Metabolic Analysis Tool (iMAT) uses expression data to categorize reactions into high-, moderate-, or low-activity subsets. Both of these methods incorporate expression data into metabolic models by reducing gene expression levels to discrete binary states. The E-Flux method, however, constrains the upper bound of a reaction to a continuous value that is relative to the expression level of the corresponding gene. Because of this, the E-Flux approach offers a more physiologically relevant method of data integration, which is why we used this algorithm in this work. One related limitation, as reported with the original E-Flux method, is that the method only constrains the upper bounds of irreversible reactions, for reversible reactions sets the lower bound to the negative of the upper bound, and assumes that expression of a gene is proportional to its activity. An algorithm that could constrain both the upper and lower reaction bounds would, therefore, turn this challenge into an opportunity by further reducing the solution space to yield more accurate predictions. A further limitation of this work relates to the concise nature of mitoMammal and its reliance on omics data specific to the mitochondria. For this, a good and complete dataset is required, as incomplete data are not adequate to fully constrain the model. This includes, for instance, accurately inferred gene expression data for mitochondria-encoded genes. We have demonstrated that mitoMammal can be used with different objective functions, which is a crucial step in constraint-based metabolic modelling. In our simulations of heart metabolism, as a consequence of optimizing maximum ATP production, metabolic flux was predicted to avoid the UCP reaction. This prediction has also been experimentally validated in the work of who show that the UCP1 protein is inactive in cardiac tissue yet active in BAT cells, which highlights the metabolic flexibility of mitochondria in supporting tissue-specific function.
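To make the objective-function discussion concrete, here is a brief, illustrative COBRApy sketch — not the authors' code — of switching between an ATP-centred and a UCP-centred objective, together with flux sampling as the objective-free alternative mentioned above. The file name and reaction identifiers are assumed placeholders.

```python
# Illustrative sketch of objective switching and flux sampling in COBRApy.
# "mitoMammal.xml", "ATP_demand", and "UCP" are placeholder names, not the
# model's actual identifiers.
import cobra
from cobra.sampling import sample

model = cobra.io.read_sbml_model("mitoMammal.xml")

# Cardiac-style simulation: maximize ATP production.
model.objective = "ATP_demand"
atp_solution = model.optimize()

# Brown-adipocyte-style simulation: maximize flux through the UCP reaction.
model.objective = "UCP"
ucp_solution = model.optimize()

# Objective-free alternative: sample the feasible flux space instead.
flux_samples = sample(model, 500)          # pandas DataFrame of feasible flux vectors
print(flux_samples["UCP"].describe())      # distribution of UCP flux across samples
```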
| Study | biomedical | en | 0.999996 |
PMC11696728 | Alzheimer’s disease (AD) is a progressive neurodegenerative disease that is currently regarded as being irreversible ( 1 , 2 ). While the pathogenesis of AD is incompletely understood, the pathological hallmarks of this disease include the formation of senile plaques composed of β -amyloid (Aβ) deposits and neurofibrillary tangles, together with chronic inflammatory responses that entail the activation and proliferation of glial cells, dysfunctional synaptic activity, and the degeneration and death of neurons ( 3 , 4 ). Inflammatory responses arise in the tissue surrounding Aβ deposits, and cerebral microvascular Aβ deposition has repeatedly been established as a driver of neuroinflammation in AD patients ( 5 ). The astrocytic and microglial activation evident in AD patients is associated with the release of a range of pro-inflammatory cytokines and chemokines that further propagate the neuroinflammatory cascade ( 6 , 7 ). The high levels of inflammatory mediators and complement cascade activity within AD patient brain tissue provide strong support for the pathogenic role of inflammation in this disease. The induction of this complex neuroinflammatory cascade is believed to be largely mediated by adhesion molecules and chemokine signaling. In this study, the senescence-accelerated mouse prone 8 (SAMP8) model was leveraged to better study the inflammatory microbiota-gut-brain axis and its association with AD. A growing body of evidence suggests that the gastrointestinal microflora is important not only for gut homeostasis, but also for the function of distant organs such as the brain ( 8 , 9 ). A complex bidirectional communication system referred to as the microbiota-gut-brain axis has been proposed to explain this regulatory relationship ( 10 ). Through this microbiota-gut-brain axis, the dysbiosis of the gut microflora can impact psychiatric symptoms and cognitive function ( 11 ), while also modulating the homeostatic balance of immune activity within the brain, potentially contributing to the initiation or progression of age-related neurodegenerative conditions including AD, multiple sclerosis, and Parkinson’s disease ( 12 , 13 ). Mechanisms whereby gut microbes can reportedly affect the progression of AD include the differential regulation of neuroinflammatory activity, oxidative stress, Aβ deposition, and other factors linked to neuronal death ( 14 , 15 ). The most recent research evidence supports the ability of the gastrointestinal microflora to help delay aging-related processes and alleviate cognitive impairment, in part via the mitigation of oxidative stress ( 16 ). The precise mechanisms whereby these gut microbes can counteract aging-related processes through this microbiota-gut-brain axis, however, remain to be firmly established ( 17 ). Traditional Chinese medicine (TCM) strategies have long been used to prevent or treat neurodegenerative diseases, and interest in their use has risen substantially in recent years. Asparagus cochinchinensis (AC) (Lour.) Merr. (Asparagi radix), also known as Tiandong, is an herb that is widely used in TCM practices ( 18 ). A member of the Liliaceae family, AC is used to nourish Yin, clear the lungs, moisten dryness, and promote the secretion of fluid under TCM theory ( 19 ). AC is a perennial plant found growing in China, Korea, and Japan, and it has been applied in TCM practices to treat conditions including coughs, fevers, inflammatory diseases, renal diseases, brain diseases, and breast cancer ( 20 ).
Polysaccharides are natural polymers consisting of more than 10 monosaccharide units joined by glycosidic linkages in linear or branching chains, and they can have very large molecular weights ( 21–23 ). Naturally derived polysaccharides can have a range of immunomodulatory effects in clinical settings ( 24 ). Polysaccharides can also influence developmental processes, exerting a diverse range of antioxidant ( 25 , 26 ), anti-inflammatory, hypoglycemic, antithrombotic, anticoagulant, antiviral, antitumor, and anti-complement activities ( 27–29 ). The precise processing technologies employed for AC can impact its functional, physicochemical, and microstructural properties. While a variety of plant polysaccharide extraction methods have been described ( 30 , 31 ), only an aqueous extraction-based strategy has thus far been reported for the isolation of AC polysaccharides (ACPs) ( 18 , 32 ). Moreover, SAMP8 mice are a subline of the SAM model first generated in the 1970s at Kyoto University that experience pronounced memory and learning impairments that worsen with progressive aging, making them ideally suited to studies of age-related disease. Importantly, these mice present with the overproduction of Aβ such that they are regarded as a model of early AD pathology ( 33 ). In this study, an acid extraction strategy was used to aid in ACP extraction, after which gas chromatography–mass spectrometry (GC–MS), gel permeation chromatography (GPC), high-performance anion-exchange chromatography (HPAEC), Fourier-transform infrared spectroscopy (FT-IR), and nuclear magnetic resonance (NMR) spectroscopy were employed for the structural characterization of the isolated polysaccharides. After characterizing ACP preparations, the impact of ACP administration on neuroinflammation and Aβ deposition within the brains of rapidly aging SAMP8 mice was evaluated, and its effects on oxidative stress and inflammation in the brains of these mice were assessed based on the levels of inflammatory mediators and Aβ present therein. Asparagus cochinchinensis was obtained from Guangxi (China). All reagents were from Sinopharm Chemical Reagent Co., Ltd. (Shanghai, China) and of analytical grade unless otherwise indicated. After drying at 45°C, AC roots were pulverized with a mortar in the laboratory and passed through a 40-mesh screen to generate a fine powder from which ACPs were subsequently extracted. While certain polysaccharides can be extracted through the use of diluted acid solutions, others, particularly acidic polysaccharides or those that contain uronic acid, can be extracted more readily using alkaline solutions. For this study, an acid-assisted extraction procedure was employed. Briefly, 5.0 g of AC powder was refluxed twice with 85 mL of 0.1 mol/L HCl solution for 2.5 h at 85°C, after which these extraction solutions were neutralized, concentrated, and precipitated using a final 75% ethanol concentration for 12 h at 4°C. Next, 80 mL of distilled water was used to suspend 6 g of this crude polysaccharide preparation, followed by centrifugation (10 min, 10,000 ×g) with subsequent separation of the supernatant using a DEAE Sepharose Fast Flow column and elution using water and solutions containing various concentrations of NaCl (0, 0.1, and 0.3 mol/L). The different eluates were concentrated, dialyzed, and lyophilized to generate the ACP-A, ACP-B, and ACP-C fractions, of which 100 mg of ACP-B was dissolved in 4 mL of 0.1 mol/L NaCl followed by centrifugation (10 min, 10,000 ×g).
The supernatant fraction was then separated with a Superdex™200 column and eluted using 0.1 mol/L NaCl to yield the major polysaccharide (ACP). The respective phenol-sulfuric acid, bovine serum protein-Coomassie bright blue, and Folin–Ciocalteu methods were used for analyses of total carbohydrate, protein, and phenolic content ( 32 ). ACP molecular weight values were measured via GPC with a Sugar KS 805 column (50 × 8.0 mm, Shodex, Tokyo, Japan) and a differential refractive index detector, using 0.02 M sodium phosphate buffer (pH 6.8) as an eluent at a 1 mL/min flow rate with a column temperature of 30°C. A 30 μL injection volume was used, and a range of dextran standards (4,320, 12,100, 73,800, 121,000, 289,000, and 491,000 Da) were used to generate a calibration curve for molecular weight. ACP monosaccharide composition was analyzed with an ion exchange chromatography-pulse amperometric detection system (IEC-PAD, Thermo Fisher, United States) ( 34 ). Briefly, 5 mg ACP samples were hydrolyzed for 24 h with 3 mL of trifluoroacetic acid at 100°C, followed by the addition of 5 mL of methanol three times. Samples were evaporated until dry, after which the residue was dissolved in 10 mL of 1 M NaOH. Samples were next filtered and injected into an ion chromatography system using an AS-AP autosampler and a Carbopac PA-20 column (3 × 150 mm, Dionex). Monosaccharides present in ACP were determined with reference to 9 benchmark compounds (fucose, arabinose, galactose, glucose, xylose, mannose, fructose, galacturonic acid, and glucuronic acid). An FT-IR instrument (Vertex 70, Bruker, Germany) was used for FT-IR analyses with a 4,000–400 cm −1 spectral range, measuring sample transmittance for KBr pellets with a width of 7 mm. ACP sample (50 mg) were dried overnight under vacuum, followed by resuspension in 0.6 mL of D 2 O. These samples were then analyzed to generate 1 H NMR, 13 C NMR, DEPT 135 NMR, 2D 1 H- 1 H COSY, HSQC, and HMBC NMR spectra at 25°C with a Bruker Avance III 600 spectrometer (Bruker, Germany) and a PABBO probe (5 mm, BB/19F-1H/D, Z-GRD). As an internal standard, acetone was selected, with respective 31.45 and 2.225 ppm shifts for 13 C and 1 H NMR relative to acetone. Standard Bruker software and MestNova were used to process all resultant data. ACP surface morphology was assessed with a Quanta 250 FEG (FEI, America) SEM instrument. Briefly, samples were coated with a thick layer of gold, placed onto the substrate, and imaged at 10 kV with 1K–100K magnification under high vacuum. In total, 24 male SAMP8 mice and 6 senescence-resistant controls (SAMR1, NC) with a body weight of ~25.0 g were obtained from Beijing Weitong Lihua Laboratory Animal Co., Ltd. These animals were housed in a controlled environment (22°C, 50–70% humidity, 12 h light/dark cycle) with free food and water access. Animal experiments were performed as per the guidance of the Institute of Animal Care and User Committee (IACUC). Animals were randomized into wild-type control (SAMR1 mice, NC), control (SAMP8 mice), ACP (SAMP8 mice treated with 25, 50, or 100 mg/kg/d ACP), and positive control (SAMP8 mice treated with 1.667 mg/kg/d donepezil HCl) groups. After a 7-week dosing period, murine cognitive function was assessed with the Morris water maze (MWM) test, using a slightly modified version of an approach reported previously ( 35 ). Briefly, mice were trained for 5 d using a circular basin (90 cm high, 100 cm in diameter) containing water at 22°C, with a hidden platform located 1 cm beneath the water surface. 
On each day of training, mice completed four trials, beginning each trial in a different quadrant. Mice were assessed to determine whether they reached the platform within 90 s, and each mouse was allowed to rest on the platform for 15 s after locating it before being removed from the tank. When mice failed to reach the platform, they were manually guided to it and allowed to rest there for 15 s. After this 5-day training period, mice were placed opposite the location of the hidden platform, which was removed, and the number of platform crossings within 90 s as well as the proportion of time spent in that target quadrant were recorded with the Smart v3.0 Small Animal Behavioral Recording and Analysis System (Reward Corporation). An E.Z.N.A. ® soil DNA Kit (Omega Bio-Tek, GA, United States) was used to extract total genomic DNA from fecal samples as directed, after which DNA quality and concentration were measured using 1.0% agarose gel electrophoresis and a NanoDrop ® ND-2000 spectrophotometer (Thermo Scientific Inc., United States), followed by storage at −80°C. The 338F (5′-ACTCCTACGGGAGGCAGCAG-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′) primers targeting the V3–V4 hypervariable region of the 16S rRNA gene were used to amplify the isolated DNA ( 36 ) with an ABI GeneAmp ® 9700 PCR thermocycler (ABI, CA, United States). Each PCR reaction consisted of 4 μL of 5× Fast Pfu buffer, 2 μL of 2.5 mM dNTPs, 0.8 μL of each primer (5 μM), 0.4 μL of Fast Pfu DNA polymerase, 0.2 μL of BSA, 10 ng of template DNA, and ddH 2 O to 20 μL. Thermocycler settings were: 95°C for 3 min; 27 cycles of 95°C for 30 s, 55°C for 30 s, and 72°C for 45 s; 72°C for 10 min; and a final hold at 4°C. Sample amplification was conducted in triplicate, and PCR products were extracted following 2% agarose gel electrophoresis using an AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, CA, United States) as directed, with quantification then being performed with a Quantus™ Fluorometer (Promega, United States). Data are derived from three or more independent experiments and are presented as means ± standard deviation (x ± s). All analyses were performed using GraphPad Prism 8.0. When comparing data with a normal distribution and homogeneous variance among multiple groups, one-way ANOVAs with the Student–Newman–Keuls (SNK) multiple comparison test were used. p < 0.05 was regarded as the threshold for significance. Here, an acid-assisted extraction approach was used to isolate ACP, with a yield of 3.67% of the dry weight of the raw AC input material. Following ACP deproteinization and decolorization, samples were fractionated and purified with DEAE-52 and Sephadex G-200 chromatography columns. Initial DEAE-52 column purification yielded three fractions, eluted with deionized water, 0.1 mol/L NaCl, and 0.3 mol/L NaCl, with respective yields of 23.9, 10.24, and 8.9%. ACPs-B was subjected to further Sephadex G-100 chromatography column purification, yielding a single symmetrical ACP peak with a 91.2% yield (not shown). In view of the product yield and the experimental effects in animals, we finally selected ACPs-2 (ACP). ACP had a calculated molecular weight of 15,580 Da, and its polydispersity index was ~1.14; with this value being close to 1, this ACP fraction was regarded as likely being homogeneous ( Table 1 ). Infrared spectral analyses are commonly used to detect functional groups including O–H, C–H, and C=O.
ACP samples exhibited FT-IR signals at 3330, 2935, 1,413, and 1,055 cm −1 that are characteristic of polysaccharides ( 30 ). The peak at 3411 cm −1 was consistent with O–H hydrogen bond vibrations, while the peak at 2933 cm −1 was consistent with C–H vibrations, and together the peaks at 3411 cm −1 and 2,933 cm −1 were indicative of a polysaccharide sample ( 24 ). Absorption bands in the 1,500–400 cm −1 region were sensitive to changes, with the peaks at 1036 cm −1 and 1,109 cm −1 confirming that C–O–C and C–O–H stretching vibrations were present, indicative of a pyranose ring. Signals were also observed at 529 cm −1 and in the 1,100–1,420 cm −1 region. ACP was found to primarily consist of glucose, galactose, L-fucose, and fructose at an 82.14:12.23:2.61:2.49 ratio, with trace levels of xylose, arabinose, and rhamnose at a 0.48:0.04:0.02 ratio ( Table 2 ). In other reports, ACP was indicated to include fructose and glucose at a 93.3:6.7 molar ratio ( 37 ), with other studies having also described ACP preparations containing xylose, arabinose, glucose, rhamnose, mannose, galactose, glucuronic acid, and galacturonic acid ( 38 ). These differences in monosaccharide composition and molar ratios are likely related to differences in ACP sources or the methods employed for extraction and purification. Methylation analyses can offer significant insight regarding polysaccharide structural characteristics. As ACP contained little uronic acid, the direct and complete methylation of all its free OH groups was achieved. Following the hydrolysis, reduction, and acetylation of the permethylated polysaccharides to isolate PMAAs, these samples were subjected to GC–MS analysis ( 38 ). Peak areas in the GC chromatogram were compared to compute the molar percentage ratios for the various sugar residue types, ultimately revealing the presence of Glc p -(1→, →2)-Glc p -(1→, →6)-Glc p -(1→, →4)-Glc p -(1→, →3,4)-Glc p -(1→, →2,4)-Gal p -(1→, →4,6)-Gal p -(1→, and →3,4,6)-Gal p -(1 → sugar linkages in ACP at molar percent ratios of 23.70:1.30:3.55:50.77:6.91:1.10:11.50:1.18 ( Table 3 ). Based on the monosaccharide and methylation linkage analysis results, ACP was primarily composed of glucose (50.77%), consistent with the →4)-Glc p -(1 → residues in the backbone of the ACP structure. An NMR approach was next used to gain further insight into the structural characteristics of ACP, including the glycosidic bond connections and configurations present therein. Virtually all protons in the ACP 1 H NMR spectra were in the 3.00–5.30 ppm range, as is normal for polysaccharides. The strong signal peak at 4.70 ppm corresponds to the D 2 O solvent peak. Glycosidic bond configurations can be determined based on anomeric proton chemical shifts, with α- and β-configuration polysaccharide molecules generally exhibiting these shifts in the 5.0–5.5 ppm and 4.5–5.0 ppm ranges, respectively. The majority of the ACP proton chemical shifts fell in the 2.15–2.75 and 3.15–4.25 ppm ranges. Anomeric proton signals at 4.50, 4.55, 4.59, 4.61, and 4.86 ppm were assigned to β-pyranose units, while signals at 5.13, 5.18, and 5.19 ppm were assigned to α-pyranose units. These data were consistent with large numbers of glycosidic bonds in the β configuration together with a relatively limited number in the α configuration in ACP. Chemical shifts in the one-dimensional carbon NMR spectrum were spread over a wider range than those in the proton NMR spectrum, with slightly higher resolution.
These carbon spectral data can be employed to determine the positions and molecular conformation of groups. Polysaccharide heterocephalic carbon signals are generally observed in the δ C 90–113 ppm range, with α -type glycosidic bonds exhibiting shifts < δ C 102 ppm. ACP exhibited bonds in the β -configuration, with seven signal peaks in the δ C 90–113 ppm range including signals at δ C 92.10, 95.93, at 98.10 ppm, supporting the presence of both α-and β-type bonds within ACP. Furanose C3 and C5 signals were present in the δ 82–84 ppm range, while pyranose C3 and C5 signals were < 80 ppm, enabling differentiation between the two. For residue A, anomeric hydrogen and carbon chemical shifts (4.61/95.93 ppm) were consistent with glucose in the β-configuration. In the COSY map, H2 (3.2 ppm) of residue A was determined based on cross peak 4.61/3.2 ppm, and H3 (3.85 ppm) of residue A was determined based on cross peak 3.2/3.85 ppm. H4 (3.45 ppm) of residue A was determined based on cross peak 3.85/3.45 ppm, and H5 (4.04 ppm) of residue A was determined based on cross peak 3.45/4.04 ppm. The H6 (3.83, 3.69 ppm) of residue A was determined from the cross peak 4.04/3.83, 3.69 ppm, which can be attributed to the chemical shift of hydrogen on the complete sugar ring. Then, the chemical shift of C on the sugar ring is attributed by HSQC signal. The chemical shift of C1 of residue A is 95.93 ppm, the chemical shift of C2 of residue A is 74.16 ppm, the chemical shift of C3 of residue A is 69.53 ppm, and the chemical shift of C4 of residue A is 75.66 ppm. The C5 chemical shift of residue A is 73.48 ppm, the C6 chemical shift of residue A is 60.75 ppm, and the chemical shift of C1 and C4 is shifted to the low field, indicating that the residue is replaced at the O-1, O-4 position of the sugar ring. Combined with the results of methylation analysis and literature reports, it is inferred that the residue A was concluded to correspond to →4)- β -D-Glc p -(1 →. For residue B, anomeric hydrogen and carbon chemical shifts (5.19/92.11 ppm) were consistent with glucose in the α -configuration. In the COSY map, H2 (3.49 ppm) of residue B was determined based on cross peak 5.19/3.49 ppm, and H3 (3.7 ppm) of residue B was determined based on cross peak 3.49/3.7 ppm. H4 (3.95 ppm) of residue B was determined based on cross peak 3.7/3.95 ppm, and H5 (4.2 ppm) of residue B was determined based on cross peak 3.95/4.2 ppm. The H6 (3.88, 3.72 ppm) of residue B was determined from the cross peak 4.2/3.88, 3.72 ppm, which can be attributed to the chemical shift of hydrogen on the complete sugar ring. Then, the chemical shift of C on the sugar ring is attributed by HSQC signal. The chemical shift of C1 of residue B is 92.11 ppm, the chemical shift of C2 of residue B is 71.47 ppm, the chemical shift of C3 of residue B is 71.37 ppm, and the chemical shift of C4 of residue B is 69.27 ppm. The C5 chemical shift of residue B is 71.44 ppm, the C6 chemical shift of residue B is 61.24 ppm, and the chemical shift of C1 is shifted to the low field, indicating that the residue is replaced at the glycocyclic O-1 position. Combined with the results of methylation analysis and literature reports, it is inferred that the carbohydrate residue B may be α -D-Glc p -(1→). For residue C, anomeric hydrogen and carbon chemical shifts (5.13/94.1 ppm) were consistent with galactose in the α-configuration. 
In the COSY map, H2 (3.89 ppm) of residue C was determined according to the cross peak 5.13/3.89 ppm, and H3 (3.6 ppm) of residue C was determined according to the cross peak 3.89/3.6 ppm. H4 (4.06 ppm) of residue C was determined according to the cross peak 3.6/4.06 ppm, H5 (3.8 ppm) of residue C was determined according to the cross peak 4.06/3.8 ppm, and H6 (3.74, 3.88 ppm) of residue C was determined according to the cross peak 3.8/3.74, 3.88 ppm, which can be attributed to the chemical shift of hydrogen on the complete sugar ring. Then, the chemical shift of C on the sugar ring is attributed by HSQC signal. The chemical shift of C1 on residue C is 94.1 ppm, the chemical shift of C2 on residue C is 70.4 ppm, the chemical shift of C3 on residue C is 72.76 ppm, and the chemical shift of C4 on residue C is 75.94 ppm. The C5 chemical shift of residue C is 71.52 ppm, and the C6 chemical shift of residue C is 67.56 ppm. The chemical shifts of C1, C4, and C6 are shifted to the lower field, indicating that the residue is replaced at the positions of O-1, O-4, and O-6 in the sugar ring. Combined with the results of methylation analysis and literature reports, it is inferred that the sugar residue C may be →4,6)- α -D-Gal p -(1 → . For residue D, anomeric hydrogen and carbon chemical shifts (4.86/93.69 ppm) were consistent with glucose in the α-configuration. In the COSY map, H2 (3.37 ppm) of residue D was determined based on cross peak 4.86/3.37 ppm, and H3 (4.07 ppm) of residue D was determined based on cross peak 3.37/4.07 ppm. H4 (3.48 ppm) of residue D was determined based on cross peak 4.07/3.48 ppm, and H5 (3.86 ppm) of residue D was determined based on cross peak 3.48/3.86 ppm. The H6 (3.98, 3.77 ppm) of residue D was determined from the cross peak 3.86/3.98, 3.77 ppm, which can be attributed to the chemical shift of hydrogen on the complete sugar ring. Then, the chemical shift of C on the sugar ring is attributed by HSQC signal. The chemical shift of C1 of residue D is 93.69 ppm, the chemical shift of C2 of residue D is 69.66 ppm, the chemical shift of C3 of residue D is 75.66 ppm, and the chemical shift of C4 of residue D is 75.94 ppm.
The C5 chemical shift of residue D is 72.77 ppm, and the C6 chemical shift of residue D is 63.38 ppm. The chemical shifts of C1, C3, and C4 are shifted to the low field, indicating that the residue is replaced at the positions of O-1, O-3, and O-4 in the sugar ring. Combined with the results of methylation analysis and literature reports, it is inferred that the sugar residue D may be →3,4)- α -D-Glc p -(1 → . Based on these results from one-dimensional NMR ( 1 H and 13 C) and two-dimensional NMR (HSQC and HMBC) approaches, the fine structure of ACP was ultimately determined, as shown in Figure 2 and Table 4 . To further assess the structure of the backbone of ACP, 1D-and 2D-NMR spectra were analyzed, assigning 1 H and 13 C NMR signals based on correlations in the HMBC and NOESY spectra and values that have been reported in the literature. The HMBC and NOESY spectra can effectively reveal glycosylic linkages between sugar residues, and they can also reveal intra-residue connections, as shown in ACP. The NOESY spectrum was used to analyze connections in ACP, as it exhibited lower signal intensity at the cross peaks of the HMBC spectrum. Some cross peaks were observed in the NOESY spectrum , including peaks corresponding to H1 of sugar residue A has A cross peak 4.61/3.45 ppm with H4 of sugar residue A, the H1 of sugar residue A has a cross peak 4.61/4.36 ppm with H4 of sugar residue E, and the H1 of sugar residue A has a cross peak 4.61/4.07 ppm with H3 of sugar residue D. The H1 of sugar residue B has A cross peak 5.19/3.45 ppm with H4 of sugar residue A, the H1 of sugar residue C has a cross peak 5.13/3.45 ppm with H4 of sugar residue A, and the H1 of sugar residue D has a cross peak 4.86/3.88 ppm with H6 of sugar residue C. Therefore, based on one-dimensional and two-dimensional NMR information and methylation analysis, it is concluded that the polysaccharide is mainly composed of →4)- β -D-Glc p -(1 → and a small amount→4,6)- α -D-Gal p -(1 → and →3,4)-α-D-Glc p -(1 → and so on. Branched chain is mainly composed of α-D-Glc p -(1 → 4)-β-D-Glc p -(1 → connected to the sugar residues α-D-Glc p -(1 → 4)-β-D-Glc p -(1 → O-4 position or sugar residues of α-D-Glc p -(1 → 4)-β-D-Glc p -(1 → O-3 position. SEM approaches are often employed to assess the surfaces and microstructural characteristics of polysaccharides, offering insight into macromolecule morphology, shape, size, and porosity ( 39 ). SEM imaging revealed that all ACP samples presented with block-like structures after acid and water treatment . This may be attributable to cavitation activity, turbulence shearing, and instantaneous high pressures. Acid-assisted extraction can also disrupt cellular structures, increasing the contact area between the liquid and raw material phases. This may explain the block-like surface characteristics of ACP. Indeed, extraction and purification strategies have been confirmed to influence polysaccharide shape and surface topological characteristics. The impact of ACP administration on spatial memory in SAMP8 model mice was assessed with the MWM, monitoring the swimming paths of mice during testing . Relative to the NC group, SAMP8 model mice exhibited fewer platform crossings, while tighter paths and more platform crossings were observed for mice in the ACP50 and ACP100 groups as compared to the SAMP8 group. A similar improvement was also evident in the donepezil group. 
Compared to NC controls, SAMP8 mice also exhibited significantly increased Target Zone (%) and Fast Time in the Target Zone (s) values together with decreased Mean Speed in Target Zone and Latency 1st Entrance to Zone (s)-Target values, consistent with impaired spatial learning and memory. A significant increase in the Latency 1st Entrance to Zone (s)-Target was also observed in the ACP25 group ( p < 0.05), while significantly increased Distance in Target Zone (%) ( p < 0.001) and Fast Time in Target Zone (s) ( p < 0.05) were observed for the ACP50 group. The SAMP8 phenotype can result in dysfunction of the epithelial barrier owing to an increase in intestinal permeability. The impact of ACP on intestinal barrier integrity in SAMP8 mice was therefore assessed based on intestinal morphology and the expression of TNF- α , MUC-2, and SCFA receptors. Representative H&E-stained sections of colon tissue are presented in Figure 5A , revealing clear crypt structures and the absence of inflammatory infiltration or damage in mice from the NC group. In contrast, pronounced mononuclear cell infiltration and crypt deformities were evident in SAMP8 mice, whereas ACP administration reversed these effects, consistent with the ability of such treatment to abrogate chronic inflammation and enhance the integrity of the epithelial barrier in the colon of these SAMP8 mice. Significantly reduced TNF- α expression was also detected in the colon of ACP-treated mice relative to SAMP8 model controls. Goblet cells produce the mucin MUC-2, and ACP treatment significantly lowered Muc2 mRNA levels relative to the SAMP8 group. ACP is thus capable of augmenting intestinal epithelial integrity through the enhanced secretion of mucins and the suppression of TNF-α secretion into the systemic circulation. Short-chain fatty acids (SCFAs) are the primary polysaccharide metabolites within the intestines. Levels of acetic, propionic, n-butyric, and i-valeric acids in the colon contents of model mice were significantly reduced relative to those in samples from NC mice, whereas ACP administration significantly reversed this change, consistent with the ability of ACP to promote SCFA production within the colon of SAMP8 mice. Moreover, ACP administration significantly enhanced Gpr43, Gpr41, and Gpr109A expression relative to the SAMP8 group. ACP also significantly increased lactic acid levels in the colon contents from these mice. Together, these data support the ability of ACP to increase SCFA production, thereby promoting GPR upregulation within the colon. The aging process has been linked to both gastric dysfunction and degenerative changes in the nervous system that can contribute to gastrointestinal dysbiosis, reflected by impairment of the makeup and function of the gut microbiota ( 40 ). The composition of the gut microflora can also reportedly affect the rate of aging ( 41 ), with dysbiosis being closely related to AD incidence and progression. These gut microbes can engage in communication with the central nervous system via endocrine, immunological, and neural pathways, potentially contributing to the pathogenesis of neurodegeneration through the production of deleterious compounds, the regulation or secretion of neurotransmitters, and the induction of neuroinflammation. In this study, a 16S rDNA sequencing approach was used to evaluate changes in the gut microflora of the analyzed mice.
Based on the results of behavioral and oxidative stress analyses, ACP100 treatment yielded therapeutic efficacy superior to that of ACP50 and ACP25. Accordingly, the ACP100 dose was selected for use in these experiments exploring the pharmacodynamic effects of ACP in an effort to better understand its anti-aging mechanisms. Sequencing of the V3–V4 hypervariable region in fecal DNA samples from mice in the NC, SAMP8, and ACP treatment groups was conducted, after which the Ace, Chao1, Shannon, and Simpson indices were used to evaluate microbial alpha diversity, revealing pronounced differences in the microbial species present in these samples among groups. Relative to the NC group, a significant reduction in gut microbiota diversity was evident in SAMP8 mice, while this diversity was restored with ACP treatment. Further analyses of the gut microbiota in these mice were conducted at the phylum and genus levels. Dominant phyla in NC mice included Bacteroidota, Actinobacteriota, Patescibacteria, Firmicutes, and Campilobacterota , with the same composition being evident in the other groups. A change in the gut microbiota composition was evident in the SAMP8 mice. A significant change in the aging-related B/F ( Bacteroidota/Firmicutes ) ratio was also evident, with respective values of 1.51, 0.52, and 1.74 in the NC, SAMP8, and ACP treatment groups. At the genus level, SAMP8 mice exhibited increases in the proportions of Mucispirillum, Actinobacteriota, Firmicutes, and Campilobacterota together with reductions in the proportions of Bacteroides, Akkermansia, and Lactobacillus . Following ACP treatment, these changes were reversed. AD is a form of chronic neurodegenerative disease with a complex and incompletely understood pathological basis, such that there is a pressing need to explore novel approaches to treating affected patients ( 42 , 43 ). Many different processes and signaling pathways are involved in AD, with clear roles for inflammation, apoptosis, and oxidative stress in this setting ( 4 ). The dysbiosis of the gut microflora can also impact hippocampal Aβ clearance in AD patients, further potentiating disease development ( 44 ). Mechanistically, this loss of intestinal homeostasis can compromise the integrity of the intestinal barrier, resulting in the extravasation of inflammatory mediators that ultimately trigger or exacerbate inflammatory disease-related processes. Changes in the structural composition of the gut microflora have been shown to be associated with direct or indirect changes in neurotransmitter levels and the production of bacterial metabolites, which serve as signaling intermediaries between the gut and the brain. Through this pathway, gut microbes can influence host biochemical and neurophysiological processes, and can modulate neuroinflammation in the brain by disrupting blood–brain barrier integrity. These processes ultimately result in altered brain function and behavior, and can contribute to the pathogenesis of AD. Changes in intestinal flora abundance and/or function can result in damage to the intestinal tissue, disrupting intestinal mucosal stability and integrity while triggering a range of inflammatory responses. The release of gut microbe-derived metabolites, including SCFAs and 5-HT, can also trigger depressive symptoms in the brain, which can also occur as a result of the effects of these microbes on the hypothalamic–pituitary–adrenal axis.
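For illustration only — this is not the study's analysis pipeline — the alpha-diversity indices and the B/F ratio reported above can be computed from a taxon-abundance table along the following lines; the counts shown are toy values, not the study's data.

```python
# Illustrative alpha-diversity and B/F ratio calculations from toy phylum counts.
# (Ace and Chao1 additionally require singleton/doubleton counts and are omitted.)
import numpy as np

def shannon(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

def simpson(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - (p ** 2).sum()          # Gini-Simpson form

# Toy phylum-level counts for a single sample (placeholder numbers).
phyla = {"Bacteroidota": 4200, "Firmicutes": 2800, "Actinobacteriota": 900,
         "Patescibacteria": 300, "Campilobacterota": 150}

counts = list(phyla.values())
print(f"Shannon   = {shannon(counts):.3f}")
print(f"Simpson   = {simpson(counts):.3f}")
print(f"B/F ratio = {phyla['Bacteroidota'] / phyla['Firmicutes']:.2f}")
```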
Plant polysaccharides can serve as a form of prebiotic that can be used by intestinal microbes to stimulate the growth of beneficial bacteria, thereby potentially modulating the development of AD via the microbiota-gut-brain axis. Here, ACP was identified as a novel polysaccharide extracted from A. cochinchinensis that subsequently underwent structural characterization and analyses of its in vivo anti-AD effects. Structural characteristics can be used to categorize polysaccharides as glucans, mannoglucans, fructans, pectins, galactans, and arabinogalactans. An inulin-type fructan with a molecular weight of 2,690 Da, denoted AC neutral polysaccharide, was previously isolated from A. cochinchinensis (Lour.). One-dimensional NMR, two-dimensional NMR, and methylation analyses ultimately revealed that the polysaccharide is mainly composed of →4)- β -D-Glcp-(1 → residues, with smaller amounts of →4,6)- α -D-Galp-(1 →, →3,4)-α-D-Glcp-(1 →, and other residues. The branched chains are mainly composed of α-D-Glcp-(1 → 4)-β-D-Glcp-(1 → units connected to the backbone sugar residues at the O-4 or O-3 positions. These assays revealed the ability of ACP to protect against intestinal dysbiosis and cognitive impairment in AD model mice. This polysaccharide was able to enhance learning and memory in these SAMP8 mice while also mitigating oxidative stress within the brain. Prior research suggests that gastrointestinal dysbiosis can trigger innate immune activity that results in mild chronic inflammation, which contributes to age-related degenerative processes, cognitive impairment, and the aging process as a whole. With more advanced age, gut microbiota diversity also declines, with accompanying reductions in Bifidobacteria levels and Firmicutes and Proteobacteria enrichment ( 45 ). In this study, the gut microbiota of SAMP8 mice exhibited disturbances with respect to the abundance of Bacteroidota, Actinobacteriota, Patescibacteria, Firmicutes, and Campilobacterota . Aging is also associated with an increase in the F/B ratio, influencing cognitive impairment and oxidative stress in the aging context ( 46 ). ACP-treated mice exhibited the reversal of these changes, suggesting that ACP was capable of alleviating oxidative stress and overcoming learning and memory deficits in part via modulating the gut microbiota composition in these SAMP8 mice. In summary, a polysaccharide extracted from A. cochinchinensis can protect against Alzheimer’s disease by regulating the microbiota-gut-brain axis. Based on its structural characteristics, this polysaccharide can be classified primarily as a glucan, and its molecular weight was 15,580 Da. The main chain is mainly composed of →4)- β -D-Glc p -(1 → residues, with smaller amounts of →4,6)- α -D-Gal p -(1 → and →3,4)-α-D-Glc p -(1 → residues. The animal experiments showed that ACP was able to enhance learning and memory in these SAMP8 mice while also mitigating oxidative stress within the brain. ACP also reversed the disturbances in the abundance of Bacteroidota, Actinobacteriota, Patescibacteria, Firmicutes, and Campilobacterota in SAMP8 mice. Therefore, ACP has the potential to prevent Alzheimer’s disease. | Study | biomedical | en | 0.999996
PMC11696751 | Heavy-metal pollution stands as a major environmental concern, with sources from the natural world, as well as industry, mining, and fossil fuel burning, all demanding immediate and concerted action to mitigate their deleterious effects on both the environment and humans. 1 − 3 The impact on human health is particularly concerning, as heavy metals such as lead, mercury, and cadmium have well-characterized toxicities in the body. 4 − 7 Heavy metals can damage organelles such as mitochondria, lysosomes, and the cell membrane, as well as enzymes, DNA, and nuclear proteins, leading to DNA damage, cell cycle disruption, apoptosis, or carcinogenesis. 8 Long-term exposure to heavy metals can also disrupt the endocrine and immune systems, leading to chronic health issues. 9 , 10 Yet, implementing traditional remediation techniques often proves economically burdensome, highlighting the need to seek cost-effective alternatives. 11 − 13 Sustainable solutions such as bacterial bioremediation are good alternatives within which there are many strategies for addressing heavy-metal contamination. 11 , 14 In the pursuit of harnessing their potential for bioremediation, dedicated research efforts have focused on the isolation and characterization of metal-tolerant bacteria (MTB) from a wide array of natural habitats. 15 − 18 These efforts seek to uncover bacterial species endowed with specialized mechanisms tailored to withstand and neutralize the toxic effects of heavy metals such as the secretion of small-molecule metal chelators, commonly referred to as metallophores. 19 − 21 Metallophores are molecules that exhibit high affinities for metal ions, 21 − 26 they have proven roles in removing and detoxifying toxic metals. 22 , 25 − 27 For example, deferoxamine B is a natural siderophore, originally discovered in a soil bacterium, Streptomyces pilosus , and it has been pharmaceutically used as an antidote to iron toxicity (Desferal) for decades. 28 In addition to the ferric ion, more than 20 different metal complexes of deferoxamine B have been characterized. 29 Moscatello and co-workers used a yersiniabactin, a metallophore produced by different Gram-negative bacteria, 23 , 30 , 31 immobilized within a packed-bed column for continuous removal of copper and nickel from industrial wastewater. 32 In a similar study, Ahmadi et al. used a heterologous biosynthetic system to produce yersiniabactin for the removal of a copper–zinc mixture from water. 33 This prior work demonstrates the need for novel metallophores to expand these applications and expand the toolkit of small molecules available for environmental bioremediation. Metal-binding small molecules are diverse and defy simple classification into groups like siderophores, a term specific to small molecules which assist in iron acquisition, which is often used exclusively from the metallophore label for metal chelators that aid in heavy-metal resistance. 23 , 26 , 34 However, as we seek metallophores for bioremediation purposes, it is increasingly recognized that metallophores can fulfill multiple ecological roles. 21 , 24 Siderophores are secreted by bacteria to chelate iron from the environment to make it bioavailable for the bacterial cell to grow and survive, 35 , 36 but previous studies have shown siderophores with dual functions, such as delftibactin and yersiniabactin metallophores. 
23 , 26 , 34 Those two siderophores also possess the ability to chelate or biomineralize toxic noniron heavy-metal ions, e.g., copper or gold, in addition to their role in bacterial iron acquisition. 23 , 26 , 34 These observations raise the possibility that other siderophores may have dual-role activities in heavy-metal resistance. Hypothesizing that in evolving dual-role functionalities, these metallophores may better bind heavy-metal pollutants, versus siderophores selective exclusively for iron, underpins our interest in these metallophores. The soil microbiomes encompass a wide array of microbial communities. 37 To isolate from soil microbiomes particular bacteria with specific functions, it is necessary to employ enrichment culture techniques that target desired microbes out of a larger community, such as the pioneering work on an aerobic nitrogen-fixing Azotobacter bacterium by Beijerinck. 38 To that end, isolating MTB can be simply accomplished by using inhibitory concentrations of metals in various media to prevent the growth of susceptible microorganisms and enrich for MTB. 15 − 18 Prior work has investigated heavy MTB, for applications such as plant growth-promoting characteristics (including siderophore production). 39 Majewska and co-workers isolated siderophore producers, then tested these bacteria for their ability to bind to other metals. 40 In this study, we wanted to enrich for MTB and then screen those bacteria for siderophore production, as any dual-role metallophore producers should be positive hits in both screens. Soil samples were obtained from a former mining site contaminated with heavy metals, the Carpenter Snow Creek Superfund National Priorities List Site; which was a producer of silver, lead, and zinc with residual tailings and low-grade ore also contaminated with copper. 41 Initially, the soil microbiome was screened for MTB by using metal-treated plates. Copper was selected for its presence at our study site, 41 while cerium was selected for its trivalent oxidation state which we hypothesized would lead to different small molecule–metal interactions than the divalent cupric ions. Cerium is also an inner transition metal, which is valuable to industry, has significant US supply risks, and for which novel recycling methods (such as metallophore-based techniques 24 , 32 ) are needed. 43 Subsequently, a secondary screen using the Chrome Azurol S (CAS) assay 42 was employed to isolate only those MTB that produce metallophores. Metallophore-producing MTB were isolated from the Carpenter Snow Creek soil by using our dual-screen method. In our effort to identify bacteria capable of surviving metal stress, MTB, we employed 1/5× diluted Luria−Bertani-Lennox (1/5 LB, Sigma-Aldrich), International Streptomyces Project medium 4 (ISP-4), 62 or our lab’s Defined medium for Siderophores (DMS) 45 supplemented with either copper or cerium. From these plates, a total of fifty-one distinct colonies were picked based on exhibiting unique morphological characteristics. Under the second screening step utilizing CAS-dyed plates of either 1/5 LB or DMS (note: colonies from the ISP-4 plates were tested for CAS activity on the 1/5 LB plates), only 17 of the original fifty-one colonies displayed a positive response in the CAS assay indicative of metallophore secretion. These identified CAS-positive hits were subsequently cultured in liquid media to prepare long-term frozen stocks, and bacterial isolates were given a strain code of BL-MT-01 through BL-MT-17. 
The bacterial isolates underwent 16S rRNA sequencing to identify the taxonomy of the isolates and to streamline the selection process for further investigation. Analysis of the 16S rRNA results using multiple sequence alignment unveiled that several isolates had identical sequences, which were presumed either to represent repeated isolations or the same species; though insufficient discrimination based on the 16S gene is also a possibility as has been observed using comparison to whole genome-based methods. 44 Based on our 16S analysis, a representative strain of the eight distinct bacterial species was chosen, which was compared by BLAST to known sequences . The 16S sequences used were uploaded to GenBank under accession numbers PP868352 – PP868368 (see Table S1 for the complete list). In our efforts to validate this dual screening methodology for targeting the metallophore producers, we selected the Cupriavidus strain, BL-MT-10, for further genome mining because of this genus’s known production of metallophores, such as cupriachelin or taiwachelin. 45 , 46 The high molecular weight (HMW) DNA of the Cupriavidus strain was extracted and sequenced. The sequenced genome for this isolate was assembled into a circular chromosome of 4.47 Mb, a chromid of 3.72 Mb, and three plasmids with lengths of 40.6, 198, and 725 kb. This genome was compared against other members of the Cupriavidus genus via the OrhoANI tool (OAT) 47 which showed a >96% similarity to both Cupriavidus basilensis strains DSM 11853 and 4G11. Given this result, we have identified our strain as a member of this species, referred to throughout the rest of this report as C. basilensis BL-MT-10. The assembled genome of BL-MT-10 was mined for biosynthetic gene clusters (BGCs) that might potentially be encoded for metallophore production using antiSMASH. 48 The result revealed a metallophore BGC with 88% similarity to the known taiwachelin pathway from Cupriavidus taiwanensis LMG19424 46 on the 725 kb plasmid. To investigate if the identified homologue of the taiwachelin BGC in C. basilensis BL-MT-10 produced a similar molecule, we isolated and profiled the excreted metabolome of C. basilensis BL-MT-10 grown in DMS. C. basilensis BL-MT-10 metabolites were analyzed using the LCMS method our laboratory previously established for metallophore identification. 45 The acquired data were scrutinized using the Global Natural Product Social (GNPS) platform. 49 Within the data set, we primarily searched for the predicted [M + H] + mass for the taiwachelin (963 m / z ) to verify if the metabolomic profile of this bacterium matches the annotation of the metallophore BGC observed in the antiSMASH genomic mining. We identified a distinct cluster of masses exhibiting m / z values of 947, 963, and 991. To gain deeper insights into the structural characteristics of these compounds, we manually investigated the fragmentation spectrum of the 963 and 991 parent ions which were consistent with taiwachelin ( 1 ) 46 and an analogue with a lipid-tail modification . This comprehensive approach, integrating metabolomic analysis with genomic insights, provided robust evidence supporting the secretion of metallophores by C. basilensis BL-MT-10, with taiwachelin lipopeptides identified as key candidates. The initial MS/MS fragmentation analysis suggested that the annotated metabolites included taiwachelin ( 1 ), a known metallophore reported not to bind copper. 
46 To obtain enough material for the isolation and characterization of this metallophore, we cultured BL-MT-10 at a 2 × 1 L scale using our DMS. 45 The crude extract (ca. 200 mg) was fractionated via reversed-phase solid-phase extraction (RP-SPE), yielding 48 mg from the 50% MeCN fraction. This fraction was extensively purified using RP-HPLC, resulting in the isolation of compound 1 (20 mg). The isolated compound 1 was then subjected to NMR analysis and compared with previously reported data on taiwachelin. The NMR data of 1 showed agreement with the previously reported data for this molecule . 46 This result was also consistent with our MS/MS fragmentation analysis and the high homology between the metallophore BGC in C. basilensis BL-MT-10 and the original producer, C. taiwanensis LMG19424. 46 A crude assessment of the metal binding capacity of taiwachelin showed that when the pure compound was mixed with ferric iron, a stable complex formed, which could be detected as a 1016.3973 m / z [M – 2H + Fe] + adduct (−1.7 ppm) using our LCMS method, suggesting strong binding in competition with the formic acid-acidified LCMS buffers. However, when mixed with Cu 2+ , Zn 2+ , or Ce 3+ ions, any adducts that formed with taiwachelin were not stable to the LCMS conditions and only the native taiwachelin [M + H] + adduct was detected, similar to prior results with this molecule. 46 With literature precedence to support the notion that metal-binding small molecules do not always fall into easily categorized groups such as siderophores to aid iron acquisition or metal chelators to aid in heavy-metal resistance but instead that metallophores can have multiple roles, 23 , 26 , 34 we set out to develop a way to enrich strains which produce these molecules. Our dual-screen method selectively isolates bacteria capable of both resisting heavy-metal stress and producing siderophores, as this profile is what we hypothesize will be present in strains using metallophores to resist heavy-metal toxicity from the larger pool of the soil microbiome. Previous research has focused on isolating MTB using media supplemented with toxic metals. 50 − 57 However, the application of the dual filtration method to specifically target bacteria capable of secreting metallophores is less studied. In our methodology, we employed two steps, the initial step involves the utilization of copper and cerium as a primary screening tool to identify bacteria that exhibit tolerance toward the applied heavy metals. Bacterial resistance to heavy metals can occur through different mechanisms, only one of which is the secretion of metallophores. 58 , 59 Given this, the subsequent CAS assay is employed to allow the selection of bacteria secreting metallophores, specifically siderophores. Using this approach, we successfully isolated eight new metal-tolerant metallophore-producing bacteria. We validated the utility of this workflow by using LCMS-based metabolomics and genome mining studies of C. basilensis BL-MT-10 to reveal the production of taiwachelin within this strain, to our knowledge the first report of this molecule from this species. There are possibilities for refining this methodology, particularly in terms of isolating and identifying other metal-specific metallophore producers in light of the lack of observed copper binding by taiwachelin. A revised method could replace copper and cerium with different metals, or similarly, the second filtering step could be adapted by complexing the CAS dye with these alternative metals. 
In addition, we can utilize the MassQL tool to aid in the metabolomic identification of novel metal-bound small molecules. 45, 61 Future investigations will be needed to prove the hypothesized dual role of 1, as both a siderophore that aids in iron acquisition and a metallophore that sequesters toxic heavy metals. Given the lack of observed copper binding by 1, investigations of the BGC regulation are needed to show whether copper metal stress, the screen used to isolate C. basilensis BL-MT-10, induces taiwachelin production. We will also apply our isolation methods to the other bacterial isolates from this work to build a repository of novel metallophores, allowing investigations of their metal–metallophore interactions and thereby helping to identify candidates for heavy-metal bioremediation. In May 2022, soil samples were collected from the Carpenter Snow Creek Mining District, a Superfund site in Montana, United States. Samples were collected from surface soil within a depth of 20 cm, directly transferred to 50 mL sterile Falcon tubes, and preserved at 4 °C until processing. To extract the soil bacteria, approximately 0.5 g of soil from each sample was added to 1 mL of sterile phosphate-buffered saline and vortexed at room temperature for 5 min at maximum speed. Subsequently, the samples were allowed to stand undisturbed for an additional 5 min, facilitating the settling of any solid particles. Following this sedimentation step, the supernatants containing suspended microbial cells were used in the next step to inoculate the metal-toxified solid agar plates: a 50 μL aliquot of each sample was spread with a sterile glass plate spreader and allowed to grow on 15% agar-solidified plates of 1/5 LB, ISP-4, or DMS. These media (1/5 LB, ISP-4, and DMS) were supplemented with 5 mg/L of various d-amino acids (d-valine, d-methionine, d-leucine, d-phenylalanine, d-threonine, and d-tryptophan, obtained as high-purity compounds from different suppliers) as described by Nguyen and colleagues. 63 A trace-metal solution of H3BO3, MnCl2·4H2O, ZnSO4·7H2O, Na2MoO4·2H2O, CuSO4·5H2O, and Co(NO3)2·6H2O, as in BG-11 medium, 64 was also added (1 mL/L) to the culture media. Each medium was also toxified with either copper(II) chloride or cerium(III) chloride at concentrations of 2.5 or 5 mM, respectively. To mitigate the risk of water evaporation and ensure optimal conditions for bacterial growth, each plate was wrapped with parafilm and incubated at 28 °C. Metal-toxified plates were inspected daily to pick any colonies with different morphological features for further analysis. These colonies were picked directly into the next step, the CAS assay. CAS plates were prepared as previously described by Louden and co-workers, 65 with a slight modification, namely the use of either 1/5 LB or DMS as the base medium instead of Minimal Media 9. Colonies picked from metal-toxified plates were recultured on their corresponding CAS medium (except for ISP-4 medium plates, which were recultured on 1/5 LB-CAS plates, as ISP-4 failed to form a stable blue color with the CAS dye). Plates were incubated at 28 °C and monitored for metal-binding small molecule secretion over 2 weeks. Bacteria that showed a positive yellow halo in the CAS assay were then picked and cultured in a liquid medium of either 1/5 LB or DMS with both d-amino acid and trace-metal supplementation as described above.
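For readers reproducing the metal-toxified media, the following minimal Python sketch converts the target molar concentrations into weigh-out masses per liter. The salt formula weights below correspond to the anhydrous salts and are assumptions for illustration; the hydrate actually used (and hence the correct formula weight) should be taken from the bottle label.

```python
# Assumed formula weights (g/mol) for the anhydrous salts; adjust if a hydrate is used.
FORMULA_WEIGHTS = {"CuCl2": 134.45, "CeCl3": 246.48}

def mass_per_liter(salt, molarity_mM, volume_L=1.0):
    """Grams of salt needed to reach the target concentration in the given volume."""
    return FORMULA_WEIGHTS[salt] * (molarity_mM / 1000.0) * volume_L

for salt, conc in (("CuCl2", 2.5), ("CeCl3", 5.0)):
    print(f"{salt}: {mass_per_liter(salt, conc):.3f} g per liter for {conc} mM")
```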
After growth to turbidity, 25% glycerol stocks of those bacterial cultures were prepared by diluting an aliquot of the cultures 1:1 with sterile 50% glycerol and kept at −70 °C. These frozen stocks could be restarted by streaking out on plates for further investigations, as detailed below. For each bacterial isolate, approximately 5 mL of a fresh turbid liquid culture was used to extract DNA for 16S rRNA sequencing. DNA extraction was carried out using the OMEGA Bio-Tek E.Z.N.A. Bacterial DNA kit, following the manufacturer's instructions without using the optional bead-beating step. Subsequently, the extracted DNA samples underwent 16S sequencing using the commercial vendor GENEWIZ's Bacterial Identification Sanger-based service, which targets V1 to V9. 66 Vendor sequence data were then analyzed using Geneious software (version 2021): sequences of our bacterial isolates were trimmed to remove high-error portions of the Sanger runs and then compared against each other and against sequences within the GenBank public database. HMW genomic DNA was isolated from the C. basilensis BL-MT-10 bacterium grown on 5 mL of DMS medium at 28 °C for 48 h. Following the incubation period, the bacterial cells were collected by centrifuging the entire culture broth at 21,000 rcf and 13 °C for 5 min. The bacterial pellet was subjected to HMW DNA isolation using the NucleoBond HMW DNA kit (Macherey-Nagel, Germany) following the manufacturer's protocol with a modification in the lysis step. Briefly, the bacterial pellet underwent lysis using the bacterial cell lysis protocol as utilized in the OMEGA Bio-Tek E.Z.N.A. Bacterial DNA kit. For this step, TE buffer (100 μL) and lysozyme (10 μL) were added to the bacterial cell pellet, and this mixture was allowed to incubate for 10 min. Following this incubation period, an addition of TL buffer (100 μL) and proteinase K (20 μL) was made, followed by an hour-long incubation at 65 °C. Subsequently, 5 μL of RNase was introduced into the microcentrifuge tube and kept at room temperature for 5 min. After the lysis stage, the HMW DNA was isolated following the instructions detailed in the protocol of the NucleoBond HMW DNA kit. DNA was quantified using a Qubit fluorometer before shipping to the commercial vendor Plasmidsaurus for nanopore sequencing. The assembled genome obtained for Cupriavidus basilensis was then analyzed with the online genome mining software antiSMASH 7.0 48 to assess its potential metallophore biosynthetic capacity. C. basilensis was grown in 2 × 5 mL of DMS medium for 3 days at 180 rpm and 28 °C. Clear supernatants were obtained by centrifuging the C. basilensis cultures for 10 min at 21,000 rcf and 13 °C. These supernatants were subsequently fractionated using an RP-SPE column, eluting with 1000 μL each of Milli-Q H2O, 50% aqueous MeCN, and MeCN. The collected fractions were then subjected to LCMS analysis to screen for metallophores. We found that these metabolites were eluted in the 50% aqueous MeCN fraction. The LCMS system was equipped with a Core–Shell Kinetex, 2.6 μm 50 × 2.1 mm 100 Å EVO C18 column from Phenomenex. The LC gradient pump method employed 0.1% formic acid–acidified H2O (redistilled) as solvent A and 0.1% formic acid–acidified MeCN (LCMS grade, various suppliers) as solvent B. Our LCMS method for metallophore identification was used.
45 Briefly, the gradient program consisted of an initial elution with 90% solvent A and 10% solvent B for 3 min, followed by a linear gradient to 25% solvent B at 5 min, further followed by a linear gradient to 99% solvent B over 7.5 min, with a 3 min hold at 99% solvent B, and finally a return to the initial elution conditions over 2 min, followed by a 2.5 min re-equilibration, all maintained at a flow rate of 450 μL/min. LCMS data analysis was conducted either manually or using the GNPS molecular networking tool to construct a molecular network. 49 The network was generated with a Small Data Preset as a Networking Parameter, using specific settings including a minimum matched fragment ion value of 6, a minimum cluster size setting of 2, and a cosine score of 0.55. These data are available publicly from the MassIVE archive with accession ID: MSV000094901. To isolate taiwachelin from the C. basilensis bacterium, a starter culture of 1 mL was introduced into 1 L of liquid DMS, divided into two separate batches, and placed in a 2.8 L baffled Erlenmeyer flask. The bacteria were allowed to grow at 28 °C with shaking at 180 rpm for 48 h. Subsequently, the metabolites were harvested from liquid cultures (2 × 1 L) by shaking with HP-20 resin (20 g/L) at 180 rpm for 2 h using an orbital shaker. The resulting suspension was filtered through filter paper to eliminate culture supernatant and cells, after which the remaining resin was rinsed with 1 × 500 mL of Milli-Q-purified water. The metabolites adsorbed to the resin were then eluted by using 4 × 100 mL of methanol. This methanol extract was concentrated via rotary evaporation, and the presence of metallophores was confirmed through LCMS analysis. The crude extract was subsequently fractionated using RP-SPE with a 5 g C18 column. Elution was carried out sequentially with 20 mL portions of H 2 O, 50% MeCN/H 2 O, and then MeCN. The fraction containing 50% MeCN/H 2 O was further purified via RP-HPLC , utilizing a C18 semipreparative column (Phenomenex Luna, 250 × 10 mm, 5 μm) with a flow rate of 3 mL/min. A gradient method of 0.1% formic acid–acidified H 2 O (Milli-Q) as solvent A and 0.1% formic acid–acidified MeCN (HPLC grade, Fisher Scientific) as solvent B was used. The gradient from 45% to 75% B over 15 min facilitated the isolation of the pure compounds. HPLC flowthrough was collected in 1 mL volumes, and peaks were identified by a 210 nm chromatogram. The organic solvent was removed from the collected fractions by rotary evaporation, and subsequently, the remainder was frozen and dried using a freeze-dryer (a Labconco Dry System/FreeZone 2.5 lyophilizer). The pure sample was then subjected to NMR and LCMS analyses. The NMR data sets are publicly available at the Natural Products Magnetic Resonance Database under archive number: NP0333454. 2.2 mg portion of isolated lyophilized taiwachelin was diluted into 221 μL of redistilled Milli-Q water (a 10 mM solution). Separately, metal salts, FeCl 3 ·6H 2 O (Sigma-Aldrich, ACS grade), ZnCl 2 (Fisher, ACS grade), CeCl 3 ·6H 2 O (Aldrich 99.9%), and CuCl 2 ·2H 2 O (Sigma-Aldrich), were also prepared as 10 mM stocks in Milli-Q water. An aliquot of the metallophore was mixed with the metal salt, and this mixture was diluted to 200 nM in 10% acetonitrile in water. These mixtures were run on LCMS with the method described above. 
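To make the gradient program described above easier to reuse, here is a small Python sketch that encodes it as a list of (time, %B) breakpoints and linearly interpolates %B at any time point. The cumulative breakpoint times are our reading of the prose description and should be checked against the actual instrument method before reuse.

```python
# (time in min, %B) breakpoints transcribed from the gradient description above
GRADIENT = [(0.0, 10), (3.0, 10), (5.0, 25), (12.5, 99), (15.5, 99), (17.5, 10), (20.0, 10)]
FLOW_UL_PER_MIN = 450

def percent_b(t):
    """Linearly interpolate %B at time t (min) from the breakpoint table."""
    if t <= GRADIENT[0][0]:
        return GRADIENT[0][1]
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][1]

for t in (0, 4, 8, 13, 16, 18):
    print(f"t = {t:4.1f} min -> {percent_b(t):5.1f}% B at {FLOW_UL_PER_MIN} uL/min")
```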
Novel colonies from our strain isolation efforts were worked with in a Labconco Purifier Class I Safety Enclosure until 16S sequencing identified their nearest relatives as Biosafety Level 1 (BSL-1) organisms, at which point they were treated as such. Any group replicating our workflow should also treat uncharacterized bacteria as BSL-2, unless shown otherwise. | Review | biomedical | en | 0.999997 |
PMC11696755 | Wellbore instability is one of the most challenging problems in the oil and gas industry and is the reason for most drilling difficulties. It is estimated to have caused significant annual global losses, and 90% of these problems occur in low-permeability shale, which represents 75% of all drilled formations. 1 Lately, the proven oil and gas resources discovered in deep-water fields have boomed progressively, and the need for the recovery of these petroleum reserves makes it imperative to drill in extremely harsh environments, where the pressure and temperature are very high, and the formation is chemically active. To face this major challenge, it is essential to accurately consider all the factors affecting wellbore stability, including stresses, pressure, temperature, and chemical effects. To understand how the poroelastic, thermal, and chemical effects influence wellbore stability, one should understand how these effects influence the pressure, temperature, and stress distributions around the wellbore and how they are interdependent. Starting with the poroelastic effect, drilling using mud with a different pressure than the formation pressure causes fluid diffusion between the formation and the wellbore, which in turn changes the stresses and pressure around the wellbore. Moving to the thermal effect, drilling mud has a different temperature than the surrounding rock, which changes continuously by contact with the formation during circulation. This temperature change causes heat transfer between the wellbore and the formation by conduction and convection cooling the rock at larger depths and heating it at shallower ones. 2 Heat transfer has two impacts on wellbore stability. First, the stress profile surrounding the wellbore is changed by the generated thermal stresses. Second, the distribution of pore pressure is impacted by temperature fluctuations. Finally, the chemical effect is caused by the difference in salinity between the drilling mud and the formation fluid. This salinity difference causes water and salts to transfer between the wellbore and the formation. This affects wellbore stability by changing the pressure and stress distribution and reducing shale strength around the wellbore. The first attempt to study wellbore stability was by using a time-independent linear elastic model to calculate the concentrated stresses around the wellbore and compare them with rock strength using proper failure criterion. 3 − 5 The poroelastic theory was first developed by Biot 6 and was further developed by Detournay and Cheng, 7 who studied the poroelastic effects on delayed borehole instability and shear failure initiation inside the rock. 7 Cui et al. 8 also developed a time-dependent poroelastic model for inclined boreholes using a loading decomposition scheme. 8 , 9 Palciauskas and Domenico 10 first introduced the thermoporoelastic theory by studying the mechanical response of rock to heating during nuclear waste storage. It was further developed and several studies conducted wellbore stability analysis based on linear thermoporoelastic models neglecting convective heat transfer. 11 − 13 Roohi et al. 14 used a linear thermoporoelastic model to estimate the optimum reamer/bit size ratio in reaming while drilling (RWD) technology. The assumption of neglecting the convection heat transfer in mid or high-permeability formations is not valid. 
Therefore, Wang and Dusseault 15 consider this convection effect in their thermoporoelastic model for steam injection in high permeability formation. Chen and Ewy 16 studied both conductive and convective heat transfer for both a permeable and an impermeable boundary. Also, a fully coupled conductive-convective thermoporoelastic model during drilling in high-permeability sandstone was developed by Farahani et al., 2 and Gomar et al. 17 Thermal osmosis and thermal filtration effects were also considered in some studies such as Zhou et al., 18 Gao et al., 19 Liu et al., 20 and more recently, Fan and Jin 21 , 22 have studied the poroelastic and thermal convective effects considering the shale as a semipermeable boundary for nonhydrostatic in situ stress conditions. Some researchers have considered the chemical effect in analyzing wellbore stability taking into account thermal stresses and the flux of both water and solutes from drilling fluids into and out of shale formations. 23 − 25 Chen and Ewy 26 have used a chemo-poroelastic model to calculate pressure, stresses, and critical mud weights with and without including the undrained loading effect. Chen et al. 1 have studied the effects of mechanical forces and poroelasticity, as well as chemical and thermal effects on shale behavior. The effect of shale hydration on strength reduction has been studied by many researchers using different drilling fluids and shale samples at different soaking times. 27 − 29 Additionally, some studies have considered the impact of other factors on wellbore stability such as the presence of fractures, 30 rock strength anisotropy, 31 and the anisotropy of hydraulic and thermal conductivity of the rock formation. 32 Additionally, several research efforts have been invested in order to evaluate the mechanical, chemical, and thermal effects on wellbore stability, 33 while investigating the effect of different failure criteria as by Aslannezhad et al. 34 , 35 They investigated the effect of variation in temperature, mud salinity, and cohesion on the determination of a safe mud window. In the solution of their model, they used the complementary error function approach to describe transient phenomena of the temperature and pressure. Although this approach is widely used to obtain an analytical solution to the problem, it is primarily useful for short-term, transient analysis. While significant progress has been made in understanding and modeling wellbore stability under complex conditions, a comprehensive and integrated approach that simultaneously considers the coupled effects of poroelasticity, thermal, and chemical processes, as well as the influence of different failure criteria on wellbore stability in deep, high-pressure, high-temperature environments remains limited. This paper investigates the individual and coupled effects of poroelasticity, thermal, and chemical processes on wellbore stability in deep, high-pressure, high-temperature environments. The paper also assesses the influence of different failure criteria on wellbore collapse predictions. From that extent, four numerical models are developed to calculate the stresses acting on the wellbore according to the individual effects of poroelasticity, thermal, and chemical processes. The coupled interactions between poroelastic, thermal, and chemical effects on wellbore stress and pressure distributions are investigated. 
Additionally, the performance of four failure criteria in predicting wellbore collapse under various loading conditions and environmental factors is evaluated. The paper is divided into four main sections. In the first section, a brief introduction and a review of the body of literature are presented. In Section 2, a description of the model development process is presented; in this section, the mathematical description of the four models is elaborated. In Section 3, the results of the models described are presented, in addition to a discussion of the results. The validation of the models is also presented. Finally, the conclusions and recommendations are highlighted in Section 4. Modeling wellbore stability involves five key steps. First, the in situ principal stresses are converted into the wellbore coordinate system. Second, the distribution of temperature, pressure, and stresses around the wellbore is computed using the various models (elastic, poroelastic, thermoporoelastic, and chemi-thermoporoelastic). Third, the three principal stresses at each point around the wellbore are determined. Fourth, a failure criterion is applied to assess whether the wellbore can sustain the applied stresses or if failure is imminent; these criteria help predict the potential collapse area around the wellbore for a given mud weight. Finally, the computed principal stresses are compared to rock strength using the applied failure criterion to establish the safe mud window and optimal wellbore trajectory for drilling. These steps are explained in more detail in the following subsections. The in situ principal stresses (σ_v, σ_H, σ_h), representing the vertical, maximum horizontal, and minimum horizontal stresses, respectively, are transformed into the local wellbore coordinate system (σ_xx, σ_yy, σ_zz), with its z-axis coinciding with the borehole axis, at any azimuth and inclination angle (β, α) using the following transformation from Abousleiman et al., 36 as shown in Figure 1:

σ_b = L σ_s L^T (1)

where σ_s is the diagonal tensor of the in situ principal stresses, σ_b contains the stress components (σ_xx, σ_yy, σ_zz, σ_xy, σ_xz, σ_yz) in the wellbore frame, and L is the direction-cosine (rotation) matrix, whose entries (eq 2) are functions of the wellbore azimuth β and inclination α. The development of a comprehensive model to analyze the stress distribution around a wellbore, considering various influencing factors such as hydraulic, thermal, and chemical effects, is outlined next. We begin by discussing the foundational linear elastic model based on Kirsch's solutions, which serves as the starting point for our analysis. Kirsch's solutions describe the stress distribution around a circular hole in an infinite, homogeneous, isotropic elastic medium, providing the basis for understanding the wellbore stress response under different conditions. Building on this, poroelastic effects are incorporated to account for fluid pressure interactions within the formation, followed by thermoporoelastic effects to include thermally induced stresses. Finally, we extend the model to chemo-thermoporoelasticity, capturing the combined influence of chemical reactions, temperature changes, and fluid pressure on the stress state around the wellbore. We start from the linear elastic model, which assumes that the concentrated stresses around the wellbore result only from removing the rock column during drilling, ignoring hydraulic, thermal, and chemical effects. These stresses can be calculated using Kirsch's solutions 37 (eqs 3–8):

σ_rr = (σ_xx + σ_yy)/2 (1 − R_w²/r²) + (σ_xx − σ_yy)/2 (1 − 4R_w²/r² + 3R_w⁴/r⁴) cos 2θ + σ_xy (1 − 4R_w²/r² + 3R_w⁴/r⁴) sin 2θ + P_w R_w²/r² (3)

σ_θθ = (σ_xx + σ_yy)/2 (1 + R_w²/r²) − (σ_xx − σ_yy)/2 (1 + 3R_w⁴/r⁴) cos 2θ − σ_xy (1 + 3R_w⁴/r⁴) sin 2θ − P_w R_w²/r² (4)

σ_z = σ_zz − ν [2(σ_xx − σ_yy)(R_w²/r²) cos 2θ + 4σ_xy (R_w²/r²) sin 2θ] (5)

σ_rθ = [(σ_yy − σ_xx)/2 sin 2θ + σ_xy cos 2θ] (1 + 2R_w²/r² − 3R_w⁴/r⁴) (6)

σ_rz = (σ_xz cos θ + σ_yz sin θ)(1 − R_w²/r²) (7)

σ_θz = (−σ_xz sin θ + σ_yz cos θ)(1 + R_w²/r²) (8)

where the subscripts (rr, θθ, and z) denote the stresses in cylindrical coordinates in the radial, tangential, and axial directions, respectively. The terms σ_rθ, σ_rz, and σ_θz represent shear stress components in the radial-tangential, radial-axial, and tangential-axial planes, respectively. σ_xx, σ_yy, and σ_zz refer to the far-field normal stresses in the Cartesian coordinate system, aligned with the x, y, and z directions, while σ_xy, σ_xz, and σ_yz denote shear stresses. R_w is the radius of the wellbore, and r is the radial distance from the wellbore center where stresses are evaluated. The angle θ is measured in the cylindrical coordinate system from a reference direction. P_w is the wellbore pressure, and, finally, ν denotes the Poisson's ratio of the rock.
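The following Python sketch evaluates the Kirsch-type expressions in the form reconstructed above (eqs 3–8). The variable names, units, the compression-positive sign convention, and the illustrative input values are our assumptions for demonstration, not part of the original text.

```python
import math

def kirsch_stresses(r, theta, Rw, Pw, s, nu):
    """
    Elastic (Kirsch-type) stress components around a circular wellbore.

    r, theta : radial distance (m) and angle (rad) around the hole
    Rw, Pw   : wellbore radius (m) and internal mud pressure (Pa)
    s        : dict of far-field stresses in the borehole frame,
               keys 'xx', 'yy', 'zz', 'xy', 'xz', 'yz' (compression positive)
    nu       : Poisson's ratio
    """
    a2 = (Rw / r) ** 2
    a4 = a2 ** 2
    c2t, s2t = math.cos(2 * theta), math.sin(2 * theta)

    srr = (0.5 * (s['xx'] + s['yy']) * (1 - a2)
           + 0.5 * (s['xx'] - s['yy']) * (1 - 4 * a2 + 3 * a4) * c2t
           + s['xy'] * (1 - 4 * a2 + 3 * a4) * s2t
           + Pw * a2)
    stt = (0.5 * (s['xx'] + s['yy']) * (1 + a2)
           - 0.5 * (s['xx'] - s['yy']) * (1 + 3 * a4) * c2t
           - s['xy'] * (1 + 3 * a4) * s2t
           - Pw * a2)
    szz = s['zz'] - nu * (2 * (s['xx'] - s['yy']) * a2 * c2t + 4 * s['xy'] * a2 * s2t)
    srt = (0.5 * (s['yy'] - s['xx']) * s2t + s['xy'] * c2t) * (1 + 2 * a2 - 3 * a4)
    srz = (s['xz'] * math.cos(theta) + s['yz'] * math.sin(theta)) * (1 - a2)
    stz = (-s['xz'] * math.sin(theta) + s['yz'] * math.cos(theta)) * (1 + a2)
    return {'rr': srr, 'tt': stt, 'zz': szz, 'rt': srt, 'rz': srz, 'tz': stz}

# Example: stresses at the wellbore wall, 90 degrees from the x-axis (illustrative numbers)
far_field = {'xx': 30e6, 'yy': 20e6, 'zz': 30e6, 'xy': 0.0, 'xz': 0.0, 'yz': 0.0}
print(kirsch_stresses(r=0.1, theta=math.pi / 2, Rw=0.1, Pw=15e6, s=far_field, nu=0.25))
```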
The hydraulic diffusion effect is addressed through the poroelastic model, which accounts for the changes in pressure and stresses resulting from fluid exchange between the wellbore and the surrounding formation. This exchange is driven by the pressure differential between the drilling mud and the formation pore pressure. Based on Biot's theory, 6 the transient hydraulic diffusion is governed by

∂P/∂t = c_f (∂²P/∂r² + (1/r) ∂P/∂r) (9)

where P is the pore pressure, t is time, and c_f is the diffusivity coefficient for fluid flow in the porous medium. The coupling between the stress–strain relationship and the hydraulic diffusion is done through the constitutive equation for a poroelastic medium:

σ_ij = 2G ε_ij + λ δ_ij ε_kk + α_p P δ_ij (10)

Here, G is the shear modulus of the rock, ε_ij is the strain tensor component, λ is Lamé's first parameter, δ_ij is the Kronecker delta, which is 1 when i = j and 0 otherwise, ensuring that the λ δ_ij ε_kk term only affects the normal components of the stress tensor, and α_p is Biot's coefficient. The thermal effects are considered by coupling the transient temperature variation with the hydraulic formation pressure and the induced thermal stresses due to the expansion/contraction of the rock grains. The transient temperature distribution is given by eq 11, where T is the formation temperature and c_T is the thermal diffusivity of the rock; the left-hand side of the equation represents the transient heat accumulation, the first term on the right-hand side represents the heat transfer by diffusion, and the second term on the right-hand side represents the heat transfer by convection. For low-permeability formations, as is the case in shale, this last term can be neglected, 38 and eq 11 reduces to

∂T/∂t = c_T (∂²T/∂r² + (1/r) ∂T/∂r) (12)

Therefore, eq 9 can be rewritten to account for the pressure/temperature coupling as

∂P/∂t = c_f (∂²P/∂r² + (1/r) ∂P/∂r) + c_ft ∂T/∂t (13)

with c_ft a coupling coefficient that links temperature change to pore pressure. Finally, the chemical effect is considered by the chemi-thermoporoelastic model, which incorporates the changes in pressure, stresses, and shale strength due to the transfer of water and salts between the wellbore and the formation, driven by salinity differences. The pressure change is quantified using the following equation: 1

P_π = I_m (R T / V) ln(a_wm / a_wsh) (14)

where P_π represents the osmotic pressure, R is the universal gas constant, T is the formation temperature, V is the partial molar volume of water, I_m is the shale membrane efficiency, and a_wm and a_wsh denote the water activity of the drilling mud and shale, respectively. Research has extensively examined the water activity and membrane efficiency of shale, along with the factors influencing them. 27, 39, 40 Generally, higher fluid salinity results in lower water activity. In addition, lower membrane efficiency allows ions to move more freely between the mud and the shale, reducing osmotic diffusion.
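To give a feel for the magnitude of the osmotic term, the small Python sketch below evaluates eq 14 in the form reconstructed above. The water activities, membrane efficiency, temperature, and partial molar volume are illustrative example values, not parameters taken from this study.

```python
import math

R_GAS = 8.314      # universal gas constant, J/(mol K)
V_WATER = 1.8e-5   # approximate partial molar volume of water, m^3/mol

def osmotic_pressure(T_kelvin, membrane_efficiency, a_w_mud, a_w_shale):
    """Osmotic pressure (Pa) from the water-activity ratio, following eq 14 as reconstructed above."""
    return membrane_efficiency * (R_GAS * T_kelvin / V_WATER) * math.log(a_w_mud / a_w_shale)

# Illustrative inputs: 100 C formation, 10% membrane efficiency, high-salinity mud
P_pi = osmotic_pressure(T_kelvin=373.15, membrane_efficiency=0.10, a_w_mud=0.90, a_w_shale=0.98)
print(f"Osmotic pressure: {P_pi / 1e6:.2f} MPa")  # negative: pressure at the wall is reduced
```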
This proposed model captures the transient nature of the pore pressure and temperature distributions through its pore pressure variation, temperature variation, and chemical instability components. Although the base linear elastic solution considers a steady-state condition, eqs 9 and 12 describe the time-dependent evolution of pore pressure and temperature, respectively, considering the effects of fluid diffusion and thermal conductivity in the formation. Furthermore, eq 13 introduces a coupled temperature–pressure effect, accounting for temperature-induced changes in pore pressure. This approach allows for dynamic stress redistribution around the wellbore as temperature and pore pressure evolve over time. By incorporating these transient effects, the model provides a more comprehensive and realistic analysis of wellbore stability. In this section, the numerical methods employed to solve the governing equations for stress calculations around the wellbore are outlined. The finite difference method with a forward approximation is utilized to discretize and solve these equations, and four distinct models are defined: the linear elastic model, the poroelastic model, the thermoporoelastic model, and the chemi-thermoporoelastic model. First, for the linear elastic model, the stresses are directly computed using Kirsch's solutions as presented in eqs 3–8. These solutions provide analytical expressions for the stress components around a circular wellbore in an infinite elastic medium. Second, in the poroelastic model, the finite difference method is employed to discretize both the pressure equation (eq 9) and the stress–strain relationship (eq 10), giving the discretized pressure update of eq 15. The hydraulically induced stresses are expressed by eqs 16–18, where P_f(r, t) = P(r, t) – P_o. The total stress is then computed by adding the hydraulically induced stresses to the mechanical stresses from the linear elastic model. Third is the thermoporoelastic model, which extends the poroelastic model by including thermal effects. The transient temperature distribution equation (eq 12) is solved in its discretized form (eq 19), and the coupled pressure/temperature equation (eq 13) is discretized as eq 20. Similarly to the poroelastic model, the induced thermal stresses are calculated according to eqs 21–23, where T_f(r, t) = T(r, t) – T_o. For this model, the induced thermal stresses (eqs 21–23) and the induced hydraulic stresses (eqs 16–18) are added to the linear elastic model; however, for this model, eq 20 is used for calculating the formation pressure. The initial conditions for the pressure and temperature are P(r, 0) = P_o and T(r, 0) = T_o, and the boundary conditions are P(R_w, t) = P_w and T(R_w, t) = T_w at the wellbore wall, with P → P_o and T → T_o far from the wellbore. Finally, the chemi-thermoporoelastic model incorporates chemical effects by accounting for osmotic pressure changes. The osmotic pressure is computed using eq 14 and acts at the interface between the drilling fluid in the wellbore and the formation. It is worth noting that in this work the osmotic pressure is assumed to be steady-state, as the difference in salinity between the formation and the drilling fluid is significant, which makes the effect essentially instantaneous. In that case, the boundary condition for the pressure at the wellbore wall is modified accordingly (eq 24).
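As one plausible realization of the forward (explicit) finite-difference discretization described above, the following Python sketch advances the radial pressure-diffusion equation (eq 9) in time on a uniform grid. The grid, time step, boundary treatment, and property values are illustrative assumptions, and the scheme shown (forward in time, central in space) is our reading of the "forward approximation" wording rather than the authors' exact implementation; a practical code must also respect the explicit stability limit on the time step.

```python
def solve_radial_diffusion(c_f, Rw, r_max, P_o, P_w, n_nodes=200, dt=0.1, t_end=3600.0):
    """Explicit FTCS scheme for dP/dt = c_f * (d2P/dr2 + (1/r) dP/dr) on an annulus Rw..r_max."""
    dr = (r_max - Rw) / (n_nodes - 1)
    r = [Rw + i * dr for i in range(n_nodes)]
    P = [P_o] * n_nodes
    P[0] = P_w                            # constant-pressure wellbore wall
    for _ in range(int(t_end / dt)):
        new = P[:]
        for i in range(1, n_nodes - 1):
            d2 = (P[i + 1] - 2 * P[i] + P[i - 1]) / dr ** 2
            d1 = (P[i + 1] - P[i - 1]) / (2 * dr)
            new[i] = P[i] + c_f * dt * (d2 + d1 / r[i])
        new[0], new[-1] = P_w, P_o        # boundary conditions: wellbore wall and far field
        P = new
    return r, P

# Illustrative run: 0.1 m wellbore, far-field boundary at 10 * Rw, 1 h of diffusion
r, P = solve_radial_diffusion(c_f=1e-6, Rw=0.1, r_max=1.0, P_o=10e6, P_w=15e6)
print(f"P at r = {r[10]:.3f} m after 1 h: {P[10] / 1e6:.2f} MPa")
```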
As shown in Figure 2, the radial stress is always one of the three principal stresses acting on the wellbore, and the θ–z plane contains the other two principal stresses, which can be calculated by the following equations: 41

σ_i = σ_rr (25)

σ_j = (σ_θθ + σ_z)/2 + [((σ_θθ − σ_z)/2)² + σ_θz²]^(1/2) (26)

σ_k = (σ_θθ + σ_z)/2 − [((σ_θθ − σ_z)/2)² + σ_θz²]^(1/2) (27)

where σ_j and σ_k are oriented at angles γ_1 and γ_2 from the z-axis of the wellbore, respectively, and can be calculated by

tan(2γ_1) = 2σ_θz/(σ_θθ − σ_z) (28)

γ_2 = γ_1 + 90° (29)

To assess the stability of the rocks surrounding the wellbore, a failure criterion should be assigned. A rock fails when the surrounding stress exceeds its tensile or shear strength, whichever is reached first, and the type of failure depends on rock lithology and the applied stress. Table 1 presents a comprehensive overview of various rock failure criteria, detailing the governing equations, relevant rock parameters, linearity, and the effect of the intermediate principal stress (σ_2). The table presents five shear failure criteria, beginning with the Mohr-Coulomb criterion, followed by the Drucker-Prager criterion, the Mogi-Coulomb criterion, the Modified-Lade criterion, and finally the Hoek-Brown criterion. Each criterion offers a different approach to modeling rock failure: the Mohr-Coulomb and Mogi-Coulomb criteria assume linearity and, respectively, ignore or consider σ_2, while the Drucker-Prager, Modified-Lade, and Hoek-Brown criteria are nonlinear, with varying treatments of σ_2. 31, 37 Additionally, the study includes the tensile failure criterion, which addresses conditions where tensile stress leads to failure. This criterion simply compares the minimum principal stress with the rock tensile strength: tensile failure of the rock is assumed to take place if the minimum effective principal stress (σ_3′) acting on the rock exceeds the tensile strength. A MATLAB code is developed that contains the set of equations described previously, in addition to the failure criteria detailed in Table 1, and that aims to predict collapse failure according to the input parameters and also to set the safe mud window. This code integrates the various rock failure criteria, allowing for a comprehensive stability assessment of wellbore rocks. The code evaluates the input parameters to determine the likelihood of tensile or shear failure; by identifying these failure points, the code helps in defining the safe mud weight window necessary to maintain wellbore stability. The algorithm that the code follows is presented in Figure 3, illustrating the logical flow. The code first imports the required input data: the formation elastic, thermal, and chemical properties and the in situ principal stresses, which can be determined from logging data, together with the mud pressure and the mud thermal and chemical properties. Second, at any inclination and azimuth angle, the in situ principal stresses are transformed into the wellbore coordinate system. The pressure and stress distributions are then calculated using the different models (elastic, poroelastic, thermoporoelastic, chemi-thermoporoelastic), and the calculated concentrated normal and shear stresses are transformed into the three principal stresses acting at each point around the wellbore. Finally, the calculated effective principal stresses are compared with the rock shear and tensile strength using the different shear and tensile failure criteria to predict the collapse area at any specific mud weight and the mud window at any inclination and azimuth angle.
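The following Python sketch illustrates the two steps just described: collapsing the (σ_θθ, σ_z, σ_θz) components into the in-plane principal stresses (eqs 26 and 27 as reconstructed above) and then testing a point with a linear Mohr-Coulomb shear check. The cohesion, friction angle, Biot coefficient, and stress values are illustrative assumptions, and effective stresses are formed with a simple σ − αP correction.

```python
import math

def theta_z_principal(s_tt, s_zz, s_tz):
    """In-plane principal stresses in the theta-z plane (compression positive)."""
    mean = 0.5 * (s_tt + s_zz)
    radius = math.sqrt((0.5 * (s_tt - s_zz)) ** 2 + s_tz ** 2)
    return mean + radius, mean - radius

def mohr_coulomb_fails(sigma1_eff, sigma3_eff, cohesion, friction_deg):
    """True if the effective stress state violates the linear Mohr-Coulomb criterion."""
    phi = math.radians(friction_deg)
    ucs_term = 2 * cohesion * math.cos(phi) / (1 - math.sin(phi))
    q = (1 + math.sin(phi)) / (1 - math.sin(phi))
    return sigma1_eff > ucs_term + q * sigma3_eff

# Illustrative stress state at the borehole wall (Pa), pore pressure, and Biot coefficient
s_rr, s_tt, s_zz, s_tz, pore_p, alpha = 15e6, 55e6, 32e6, 2e6, 10e6, 1.0
sj, sk = theta_z_principal(s_tt, s_zz, s_tz)
principals = sorted([s_rr, sj, sk], reverse=True)                 # sigma1 >= sigma2 >= sigma3
sigma1_eff, sigma3_eff = principals[0] - alpha * pore_p, principals[-1] - alpha * pore_p
print("shear failure predicted:", mohr_coulomb_fails(sigma1_eff, sigma3_eff, cohesion=5e6, friction_deg=30))
```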
The numerical solution developed in the previous section is verified by comparing it with the models presented by Ding et al. 32 for wellbore stability analysis, which account for the effects of anisotropic thermal and hydraulic conductivity. In their work, they introduced two models: the first is a semianalytical solution that uses the Stehfest method for Laplace inversion, referred to in this work as the Laplace inversion method; the second is an analytical solution that assumes early-time and small-radial-distance conditions, which will be referred to as the error function method. The verification results, obtained using the input data from Ding et al. 32 shown in Table 2, are presented in Figure 4. The figure shows the results from the present model in comparison with the models from Ding et al. 32 in terms of the temperature distribution, pressure distribution, induced hydraulic stresses, and induced thermal stresses. The comparison is carried out by considering both isotropic (ICC) and anisotropic (ACC) conditions at two distinct times, representing short-term and long-term behavior (t* = 0.1 and t* = 10). Here, the dimensionless time t* is defined as the elapsed time normalized by the characteristic diffusion time based on the wellbore radius. For the anisotropic cases (ACC), the concept of effective diffusivity has been utilized as proposed in ref (32), with the effective hydraulic and thermal diffusivities evaluated according to eqs 30 and 31, where c_f,e and c_T,e are the effective hydraulic and thermal diffusivities, respectively, and the subscripts 1–3 denote the respective quantity along the different axes. In this case, c_f,1 = c_f,2 = c_f,∥ and c_f,3 = c_f,⊥, and the thermal diffusivity is treated in the same way. The angle α is the angle between the gradient and the axes. For the comparison between the results, the axes of the bedding are aligned with the principal axes, as in the reference. For the isotropic case (ICC), only the diffusivity along the bedding planes is considered. The results demonstrate a strong agreement between the present model and the Laplace inversion method across all the conditions evaluated. This consistency is evident for both isotropic (ICC) and anisotropic (ACC) hydraulic and thermal conductivity scenarios at different time scales (t* = 0.1 and t* = 10), confirming the validity of the current work. However, a noticeable discrepancy arises between the error function method and the other two methods, particularly at t* = 10. This deviation results from the assumptions made in the error function method, which are based on early-time and small-radial-distance conditions, as described by the original authors. 32 The simplifications inherent in this approach limit its accuracy at later times, leading to the observed differences. The comparison presented in the figures confirms both the verification and validation of the present solution. In this section, the present model is applied to analyze the stability of the wellbore using data from Tables 2 and 3, assuming isotropic conditions for the thermal and hydraulic diffusivity. As mentioned earlier, the four time-dependent wellbore stability models are considered to calculate temperature, pressure, and stress distributions, as well as strength reduction, when drilling fluids with varying temperatures and salinities are used. Figure 5a,b shows the pressure distribution at different radii from the wellbore using the different wellbore stability models after 1 and 24 h of formation exposure to the drilling fluid, respectively. A slight increase in the formation pressure, with a maximum magnitude at the wellbore walls, can be observed in the poroelastic model.
This is because, under the overbalanced drilling conditions, fluid diffusion occurs from the wellbore into the formation, which leads to an increase in the formation pressure. By examining the thermal effect, a significant influence on formation pressure is noted. It is observed that after 1 h of exposure to the drilling fluid, the formation pressure increases from 10 to 24 MPa when the temperature of the drilling fluid exceeds the formation temperature by 60 °C (ΔT = +60 °C). Conversely, when the formation temperature exceeds the drilling fluid temperature, a notable decrease in formation pressure is observed. An increase in temperature causes the expansion of the formation fluid, rock grains, and structure. For a given increase in temperature, the volume change of the formation fluid inside the pore spaces is greater than the volume change of the porosity, and hence the pore pressure increases. The pressure is then dissipated by Darcy flow, as can be noticed in Figure 5b: the peak pressure magnitude decreases and is also shifted deeper into the formation. The magnitude of this phenomenon and the dissipation time needed for it depend on the thermal diffusivity in comparison with the hydraulic diffusivity of the rock. In the case of low hydraulic diffusivity and high thermal conductivity, as in clay formations, this effect is maximized. This effect has also been noted by previous research. 42, 43 It is acknowledged that a temperature difference of ±60 °C may seem large under conventional drilling conditions. However, such temperature differences are not uncommon in certain drilling environments. For instance, geothermal wells often experience significant temperature gradients, with temperature differences of this magnitude being typical in many geothermal drilling scenarios. 44–46 Similarly, high-pressure high-temperature (HPHT) wells, as well as deep offshore wells and wells drilled in permafrost regions, can also experience significant temperature differences due to extreme depth, pressure, and environmental conditions. 47 Finally, a noticeable effect on the formation pressure distribution is also observed when the chemical effect is considered. Drilling fluid with lower salinity (higher water activity) causes fluid diffusion from the wellbore into the formation by osmotic pressure, which increases the pore pressure, while higher-salinity mud reverses the effect. In comparison to the thermal effect, the chemical effect has a shallow influence near the wellbore walls. As can be seen from Figure 5a, the change in the formation pressure after 1 h due to osmosis only reached 1.2 times the wellbore radius and extended to 1.8 times after 24 h. For wellbore stability analysis, it may be more relevant to investigate the resultant stresses acting on the formation and to further analyze whether the formation will be able to hold these stresses without failure. For that reason, a comparison between the resultant stresses considering the four main effects is performed, as shown in Figure 6. By examining the results in comparison to the base model (elastic), one can observe a small increase in the radial and tangential stresses when the poroelastic model is utilized. Considering the thermal effects, a significant change in both radial and tangential stresses can be observed in comparison to the elastic model for both heating and cooling scenarios. Here, we can differentiate between two effects.
First is the stresses resulting from the expansion/contraction of the solid material of the rock due to the temperature difference, including the pore space itself. And second is the stresses resulting from the expansion/contraction of fluid inside the pore space due to the temperature change. To differentiate between both effects, one should analyze Figure 6 in correlation with Figure 5 . Initially, a rapid and shallow change in stresses is observed, which is attributed to the immediate pressure increase within the pore spaces. This is evident in Figure 6 a,c, which show the stress distribution after 1 h. At the wellbore wall ( r = R w ), the tangential stress reaches 68 MPa in the thermoporoelastic model, compared to 58 MPa in the elastic model. For radial stresses, both models show equal stress at the wellbore wall initially. However, deeper inside the formation, at r = 1.7 R w , the thermoporoelastic model records a maximum radial stress of 24 MPa, compared to 20 MPa for the elastic model. Notably, for the thermoporoelastic model, the stresses start to decrease as one moves deeper into the formation, a trend also observed for tangential stresses. However, in Figure 6 b, 6 d, which display the results after 24 h, the stress distribution within the formation becomes more monotonic, with no rapid changes. This gradual change reflects the slower process of thermal expansion/contraction of the rock matrix. In the case of cooling, these phenomena are mirrored, with the stresses decreasing instead. These variations in stress are evident in both radial and tangential stresses. It is worth noting that after 1 h, the thermoporoelastic model reveals an interesting behavior with the tangential stress observed at a radius greater than 1.9 times the wellbore radius ( r > 1.9 R w ) where the effect of heating is minimal, as demonstrated in Figure 6 c. One explanation for that result is at distances further away from the wellbore, the expansion effects are more pronounced in the surrounding formation, where the pressure buildup has dissipated, leading to a localized reduction in stress, which creates a zone of minimum stress deeper in the formation. The chemical interactions between the fluid and the rock formation also show a role in stress distribution when considering fluids of varying salinity. In comparison to the thermal effect, the chemical effect is much slower, however, it can still influence both radial and tangential stresses over time. For radial stresses, after 1 h as in Figure 6 a, the high salinity fluid causes a slight decrease in the near wellbore area ( r = 1.7 R w ). Beyond this point, the radial stresses align with those predicted by the thermoporoelastic model, and the chemical effect vanishes. This slight decrease in radial stress can be attributed to the osmotic pressure differences and ion exchange processes, which slightly alter the stress distribution close to the wellbore. On the other hand, for low salinity fluid, the radial stress distribution closely follows the thermoporoelastic model, making it difficult to distinguish any significant difference between the chemi-thermoporoelastic and thermoporoelastic models at this stage. However, after 24 h, the chemical effect becomes more pronounced. In the high salinity case, the influence on radial stresses extends deeper into the formation, reaching a radius of approximately 3 R w . 
The more significant effect observed in the high salinity case after 24 h is likely due to the prolonged interaction between the ions in the fluid and the rock matrix, which leads to more changes in pore pressure and stress redistribution over time. Concerning the tangential stresses, the impact of the chemical effects acts differently. After 1 h, the change in tangential stresses due to chemical effects is limited to a radius very shallow to the wellbore walls ( r = 1.15 R w ). However, as time progresses, the tangential stresses are further influenced by the ongoing chemical reactions, with the effect reaching a radius of about 1.5 R w after 24 h. This gradual expansion of the affected zone indicates a slow but steady redistribution of tangential stresses as the chemical interactions progress. To translate the discussion of the previous section into practical applications within the drilling industry, it is essential to analyze the predicted stability and identify the stable regions around the wellbore. As elaborated in the previous section, the complex interactions between thermal, chemical, and poroelastic effects significantly influence the stress distribution, which in turn affects wellbore stability. Each factor contributes individually to stability conditions. Thermal effects primarily induce stress through the expansion or contraction of both rock and pore fluid, while chemical interactions, driven by osmotic pressures and ion exchanges, gradually alter stress close to the wellbore. These thermal and chemical effects combine with poroelastic responses to fluid diffusion under overbalanced drilling conditions, leading to intricate stress redistribution patterns that influence both radial and tangential stresses over time. Another key factor in this analysis is the failure criterion, which determines the conditions under which the rock surrounding the wellbore may fail or remain stable. Figure 7 shows the predicted collapse failure zone around the borehole using different modeling scenarios around a wellbore. Each plot corresponds to a specific combination of failure criterion and model conditions, as labeled at the top of each plot. The figure is organized as rows and columns. Each row presents different types of models or conditions, and the columns represent the different failure criteria. All figures are generated using the 1 h stress result. From the figure, one can observe that Mohr-Coulomb and Drucker-Prager failure criteria generally show more extensive yielding zones than Mogi-Coulomb and Modified-Lade criteria. The failure criterion significantly impacts the predicted failure zones. Mohr-Coulomb and Drucker-Prager show similar patterns, while Mogi-Coulomb and Modified-Lade criteria exhibit unique stress distribution characteristics. As can be seen, Mohr-Coulomb displays a more uniform yielded zone around the wellbore, indicating broader concentrated yielding occurring at azimuths perpendicular to the maximum horizontal stress. This pattern implies that the Mohr-Coulomb criterion may be more sensitive to the uniformity of applied stresses and may predict washout around the entire borehole wall. Drucker-Prager, on the other hand, is also extensive but slightly more localized than Mohr-Coulomb, and like Mohr-Coulomb, predicts broader yielding areas. In contrast, Mogi-Coulomb and Modified-Lade predict more localized yielding patterns, indicating stress concentration primarily in the direction of the minimum principal stress. 
This results in breakout orientations rather than a washout pattern, as seen in the Mohr-Coulomb case. Mogi-Coulomb and Modified-Lade thus suggest a more anisotropic stress distribution, where stress changes are less pronounced across the wellbore, reducing the likelihood of complete wellbore wall failure. This characteristic implies that these criteria are more sensitive to triaxial stress conditions and predict a gradual initiation of failure that localizes rather than disperses stress, as reflected in the smaller, distinct yielded zones shown in the analysis. The poroelastic model shows a pattern similar to that of the elastic model when it comes to failure zone prediction. Mohr-Coulomb and Drucker-Prager show more conservative behavior, predicting larger yielded zones, while Modified-Lade is less conservative. Thermal effects show a noticeable influence on the predicted yielded zones across all failure criteria utilized in this study. First, cooling induces contraction, generally reducing stress levels but potentially increasing tensile stresses around the wellbore. The Mohr-Coulomb and Drucker-Prager criteria continue to predict failure, with a noticeable reduction compared to the nonthermal poroelastic model. On the other hand, heating exacerbates plastic deformation, increasing the risk of wellbore instability. The differences between failure criteria become less distinguishable, as they all predict the same failure pattern; however, Mohr-Coulomb and Drucker-Prager showed the most significant increases in the yielded zone. Finally, combined chemical and thermal effects, with higher and lower salinity and thermal changes, are examined using the chemi-thermoporoelastic model. Higher salinity enhances the stability of the formation around the wellbore under all four failure criteria. In comparison with the thermoporoelastic model under cooling conditions, one can notice that no failure is predicted in the case of Mogi-Coulomb or Modified-Lade. In the lower salinity case, there is no noticeable difference compared to the thermoporoelastic scenario: the predicted failure zone is almost identical, with only a slight change observed as a small expansion of the yielded zone around the circumference of the wellbore rather than deeper within the formation. Based on these observations, we conclude that the Mohr-Coulomb and Drucker-Prager criteria provide more conservative predictions, making them suitable for applications in high-uncertainty or less-developed fields where stability is critical and a larger safety margin is preferred. These criteria are valuable in situations where limited data are available, as they require only a few key parameters, such as cohesion and friction angle. This makes them advantageous in early-stage field development, where detailed geomechanical data may be lacking. In contrast, the Mogi-Coulomb and Modified-Lade criteria, which predict smaller yielded zones, may be more suitable for well-characterized, more-developed fields with lower uncertainty and greater operational knowledge. Therefore, the choice of failure criterion should be guided by both the specific stability requirements of the drilling operation and the available field data. These results show that the choice of failure criterion significantly impacts the predicted stress distribution and extent of plastic deformation around the wellbore.
Mohr-Coulomb and Drucker-Prager are more conservative, predicting larger plastic zones, while Mogi-Coulomb and Modified-Lade are less conservative, with Modified-Lade often predicting the least deformation. The interaction of thermal and chemical effects with the selected failure criterion can significantly alter the predicted wellbore stability. Heating generally increases the extent of plastic deformation. Cooling tends to reduce the plastic zones, but the overall pattern remains influenced by the failure criterion. As an application of this study, wellbore stability analysis is employed to predict safe mud weight margins, for preventing wellbore collapse and fracturing. It is defined by the difference between the maximum allowable mud weight (to prevent fracture of the formation) and the minimum allowable mud weight (to prevent wellbore collapse). This difference, P window = P frac – P collapse defines the safe operational zone. The larger P window magnitudes correspond to a wider mud window with less risk for instability issue, P window ≤ 0 corresponds to a nonstable wellbore. Wellbore orientation (inclination angle and direction) can alter the resulting stresses acting on the wellbore walls, which can influence the stability of the well. Therefore, in this analysis, different models and failure criteria are used to predict the mud window across various wellbore orientations and inclinations. The results are visualized using stereo net plots as shown in Figure 8 , where Azimuth (0–360° around the circumference) represents the orientation of the wellbore in the horizontal plane, with North at 0° and South at 180°, and inclination is represented radially, with vertical wellbores at the center (0°) and increasing inclination toward the outer edges (90°). Mohr-Coulomb and Modified Lade collapse failure criteria were selected for this analysis, and they represent the most and the least conservative criteria as described in the previous section. A MATLAB code is developed for calculating the minimum and the maximum allowable mud weight for each azimuth and inclination angle as described in Figure 3 . In the elastic model case, the Mohr-Coulomb criterion predicts a critically narrow mud window, particularly for vertical wells, and horizontal wells in the direction of the maximum horizontal stress. However, for high-angle wells, in the direction of the minimum horizontal stress the mud window is wider. When the Modified Lade criterion is applied to the elastic model, the mud window becomes wider, reflecting a less conservative estimate of wellbore stability. As pore pressure and thermal effects are considered (in the poroelastic and thermoporoelastic models), the mud window narrows noticeably in case of heating. The influence of temperature is particularly significant. Under high-temperature conditions, the mud window shrinks significantly, with most well orientations, showing red zones, suggesting instability. An increase in formation temperature contributes to increased pore pressures and radial stress expansion, as well as an increase in tangential stress distribution near the wellbore. This combined stress effect can cause reduced formation integrity, especially in zones prone to thermal expansion, resulting in a narrower mud-weight window. Lower temperatures (−60 °C) somewhat alleviate this issue. The inclusion of chemical effects shows minor effects on the stable mud window. 
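As a small illustration of the mud-window bookkeeping defined above (P_window = P_frac − P_collapse), the Python sketch below classifies a few wellbore orientations from collapse and fracture pressures. The orientation/pressure combinations are invented placeholders, not results from this study.

```python
def mud_window(p_collapse, p_frac):
    """Safe mud-weight window; non-positive values indicate an unstable orientation."""
    return p_frac - p_collapse

# Hypothetical (inclination deg, azimuth deg, collapse MPa, fracture MPa) combinations
orientations = [(0, 0, 14.2, 20.4), (45, 90, 12.8, 19.1), (90, 0, 18.9, 18.1)]

for inc, azi, p_col, p_fr in orientations:
    window = mud_window(p_col, p_fr)
    status = "stable" if window > 0 else "unstable"
    print(f"inc {inc:3d} deg, azi {azi:3d} deg: window = {window:+.1f} MPa ({status})")
```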
It is worth noting that the anisotropic stress state of the formation, defined by the stress magnitudes σ V = 30 MPa, σ H = 30 MPa, σ h = 20 MPa, has a significant effect on wellbore stability. Due to this stress anisotropy, horizontal wells oriented in the direction of the minimum horizontal stress (σ h ) show the most stable conditions, as evidenced by larger blue zones. This occurs because, in these orientations, the differential stresses acting on the wellbore are smaller, which reduces the likelihood of collapse or fracture. In contrast, wells aligned with the maximum horizontal stress (σ H ) are less stable, which is reflected by red zones (negative mud window values) indicating unsafe drilling conditions. Since the vertical stress (σ V ) is equal to the maximum horizontal stress (σ H ), this results in higher stress concentrations around the wellbore and narrows the mud window. In the elastic and poroelastic models, the failure criterion plays an important role in determining the size of the mud window and the regions of stability. In the thermoporoelastic models, which account for temperature effects, both the Mohr-Coulomb and Modified Lade criteria yield very similar results. The convergence of the two criteria under thermal effects suggests that thermal stresses dominate the failure mechanisms, reducing the differences between the two criteria. As a result, both models predict similar regions of stability. Further efforts to demonstrate the effect of time on the stability of the wellbore, the collapse area has been predicted at different times using the chemi-thermoporoelastic model and Mogi-Coulomb failure criterion in the case of cold-low salinity mud. Figure 9 shows that the collapse area increases with time since the cooling effect reduces the tangential stress and the pore pressure near the wellbore diminishes with time as shown in Figure 4 and Figure 6 c, 6 d after 1 and 24 h. Also, more fluid diffuses from the wellbore into the formation by the poroelastic and chemical effect which increases the pore pressure and the tangential stress making the formation more susceptible to collapse. This diffusion is driven by the overbalanced drilling conditions, which cause a differential pressure between the wellbore and the surrounding formation that drives the fluid from the wellbore into the formation. This fluid diffusion increases the pore pressure in the formation and results in an elevated tangential stress near the wellbore. The chemical effect, on the other hand, further enhances fluid diffusion through osmosis. The osmotic pressure difference due to different fluid salinity between the drilling fluid in the wellbore and the formation fluid drives additional fluid flow into the formation. The effect of time on the fracture pressure has been presented using the different models and Mogi-Coulomb failure criterion as shown in Figure 10 . The elastic model does not consider any time-dependent effects, so the fracture pressure is constant with time. In the other models, fracture pressure changes with time until some point ( t = 1 h) when it starts to stabilize. Starting with the poroelastic effect, as time passes, it causes more fluid diffusion inside the formation which increases the pore pressure and decreases the effective stress making the formation more susceptible to fracture so the fracture pressure decreases with time. 
Because the thermal effect on pressure near the wellbore decreases with time, the pore pressure increases with time under cooling conditions (ΔT = −60 °C), which decreases the minimum effective principal stress and makes the formation easier to fracture; therefore, the fracture pressure decreases with time. However, under heating conditions (ΔT = +60 °C), the pore pressure decreases with time, which increases the minimum effective principal stress, making the formation more stable against tensile fracturing. Considering the chemical effect, we can note an increase in the difference between the fracture pressure predicted by the chemi-thermoporoelastic model and that predicted by the thermoporoelastic model for both higher- and lower-salinity muds. This means that the chemical effect increases with time, increasing the fracture pressure for higher-salinity mud (a_wm < a_wsh) and decreasing it for lower-salinity mud (a_wm > a_wsh). As time passes, low-water-activity mud causes more fluid diffusion from the formation to the wellbore by the osmosis effect, which decreases the pore pressure and increases the minimum effective principal stress, making the rock more stable against fracturing. Figure 11 shows the effect of time on the collapse pressure using the different models and the Mogi-Coulomb failure criterion. In comparison with the fracture pressure variation, the collapse pressure takes more time to stabilize (t = 1 day). Also, the thermal effect worsens the formation's stability against collapse with time, increasing the collapse pressure during both cooling and heating of the formation. Moving to the chemical effect, using higher-salinity mud (a_wm < a_wsh) enhances the stability against collapse and decreases the collapse pressure with time, while lower-salinity mud has the reverse effect. Furthermore, to provide a more comprehensive analysis regarding the thermal effects, additional simulations over a temperature difference range extending from +60 to −60 °C were performed, as shown in Figure 12. The figure shows the relationship between the collapse pressure, the fracture pressure, and the temperature difference (ΔT), which is calculated as the difference between the wellbore temperature (T_w) and the formation temperature (T_f). The collapse and fracture pressures were calculated using the chemi-thermoporoelastic model and the Mogi-Coulomb failure criterion at t = 1 min. It is noticeable that the width of the mud window narrows significantly as ΔT increases. As the temperature difference (ΔT) increases in the heating scenario, both the collapse pressure (representing the minimum mud weight required to prevent wellbore collapse) and the fracture pressure (indicating the maximum permissible mud weight to avoid formation fracturing) show an upward trend. However, when the temperature difference ΔT ≥ +5 °C, the collapse pressure begins to exceed the fracture pressure, resulting in a negative mud window. This indicates that maintaining a stable wellbore becomes unachievable under such differential temperature conditions, as no viable mud weight range can simultaneously satisfy both stability criteria. Conversely, in the cooling scenario (−ΔT), the collapse pressure consistently remains lower than the fracture pressure across the range of negative temperature differences. Additionally, it can be observed that the mud window widens as the temperature of the drilling fluid decreases relative to the formation temperature.
This work introduced a novel holistic approach to wellbore stability analysis by integrating poroelastic, thermal, and chemical effects into a comprehensive modeling framework. An in-depth examination of the coupled interactions between these effects was provided, yielding new insights into stress distribution and instability risk in high-pressure, high-temperature environments. Four stability models were used to analyze wellbore stability, comparing four shear failure criteria (Mohr-Coulomb, Drucker-Prager, Mogi-Coulomb, and Modified Lade) to predict collapse areas, safe mud windows, and optimal wellbore trajectories. The study highlighted the significant role of time-dependent effects such as hydraulic, thermal, and chemical interactions, as well as drilling conditions such as mud pressure, temperature, salinity, and wellbore trajectory, in stability analysis. The results show that, due to the high thermal diffusivity of shale, thermal effects have a more pronounced impact on wellbore stability compared to poroelastic and chemical effects. The poroelastic effect increases the collapse area by 5%, while the thermal effect reduces the collapse area by 80% during formation cooling and enlarges it by 140% during formation heating. The chemical effect decreases the collapse area by 20% with higher salinity mud and increases it by 10% with lower salinity mud. Regarding fracture pressure, the hydraulic effect reduces the fracture pressure from 20.4 to 17.4 MPa, a decrease of 15%. The thermal effect decreases fracture pressure by 30% during formation cooling and increases it by 15% during heating. Higher salinity mud enhances fracture stability by increasing fracture pressure by 15%, whereas lower salinity mud decreases it by 7%. Additionally, the anisotropic stress state of the formation significantly impacts wellbore stability, with a larger collapse area observed in the direction of the minimum principal stress. The comparison of four rock failure criteria is a unique contribution of this paper. Mohr-Coulomb and Drucker-Prager, which predicted 15–20% larger collapse areas, provide a more conservative approach. The Mohr-Coulomb criterion predicts a critically narrow mud window for vertical wells and horizontal wells oriented in the direction of the maximum horizontal stress, while for high-angle wells oriented in the direction of the minimum horizontal stress, the mud window is wider. The Modified Lade criterion reflects a less conservative estimate with a wider mud window. To enhance the understanding of wellbore stability and identify key influencing parameters, a global sensitivity analysis using Monte Carlo simulation could be employed, providing valuable insights for robust and resilient wellbore design. | Study | other | en | 0.999997 |
PMC11696767
The coronary artery comprises epicardial arteries (diameter >500 μm), pre-arterioles (500 μm > diameter >100 μm), arterioles (100 μm > diameter >10 μm), and capillaries, with pre-arterioles, arterioles, and capillaries collectively constituting the coronary microcirculation. 1 Coronary microvascular dysfunction (CMD) refers to structural and functional alterations in pre-arterioles and arterioles that can lead to coronary blood flow impairment and ultimately myocardial ischemia. 1 Approximately 50–70 percent of patients with myocardial ischemia and no obstructive arteries are considered to have concurrent CMD. This proportion reaches as high as 80 percent in the female population, correlating with a poor prognosis. 2 , 3 , 4 Despite this prevalence, the key pathogenetic mechanism of CMD remains largely unknown, and effective targeted interventions are lacking. 5 , 6 Consequently, robust exploration of CMD pathogenesis is crucial for studies aimed at improving diagnosis and treatment. Genetics plays an important role in the occurrence and development of coronary artery disease (CAD). Recently, detecting risk loci of CAD has become one of the important methods for identifying high-risk patients or providing them with specific therapies. 7 , 8 Single nucleotide polymorphism (SNP) refers to a DNA sequence polymorphism caused by a single nucleotide variation. 9 The clinical translation of SNP sites in cardiovascular disease, including CAD, has progressed rapidly. For example, detecting SNPs in the proprotein convertase subtilisin/kexin type 9 ( PCSK9 ) gene is beneficial to diagnosing hypercholesterolemia and determining the risk of atherosclerosis (AS) and CAD. 10 Similarly, genotyping of the vitamin K epoxide reductase complex 1 (VKORC1) can predict a high risk of overdose before initiation of anticoagulation therapy and facilitate the development of personalized anticoagulation treatment regimens. 11 Therefore, the search for SNP sites is highly significant for enhanced diagnosis, precise therapies, and improved prognosis of these diseases. Nevertheless, few SNP studies have focused on CMD, and a large number of CMD-related SNPs have yet to be found. In this article, we give an overview of the limited known CMD-associated SNPs from the perspective of coronary microvascular structure and function, and we predict potential loci that could significantly drive the development of CMD by highlighting SNPs associated with its pathogenesis and risk factors. We aim to provide a novel approach for subsequent SNP-related research on the diagnosis, precise prevention, and treatment of CMD. As outlined above, SNPs are variations in DNA sequences that involve the substitution of a single nucleotide base for another within a genome. Typically occurring in non-coding regions, SNPs can influence the structure and function of target proteins, especially when they occur at regulatory sites of genes. This can contribute to the occurrence and development of multiple diseases. 9 First proposed by Eric S. Lander as the third-generation molecular marker in 1996, SNPs are characterized by their abundant presence throughout the genome, ease of automated analysis, and high genetic stability. 12 Over recent decades, multiple advances in sequencing technology and the decreasing costs of genetic testing have facilitated extensive clinical research aimed at identifying disease-associated and disease-causing variants. 
13 To date, SNPs have been widely used in biological and medical research fields such as human disease gene screening, disease diagnosis and risk prediction, and personalized drug screening. 14 , 15 The heritability of risk factors and regulatory function of pleiotropic region genes on cardiovascular disease have been widely acknowledged. 16 Thus, finding CMD-associated SNP sites are beneficial to improve the diagnosis, precise therapy, and prognosis of CMD. The pathogenesis of CMD involves structural and/or functional remodeling of the coronary microcirculation due to the dysfunction of endothelial cells (ECs) and vascular smooth muscle cells (VSMCs), as well as microvascular remodeling. 1 Among them, coronary structural abnormalities mainly include microvascular occlusion and remodeling, while functional abnormalities include the dysfunction of microvascular vasoconstriction and vasodilatation. 17 , 18 However, the mechanisms underlying CMD pathogenesis remain unclear. At present, oxidative stress and the consequent inflammatory response are regarded as the key mechanisms of CMD progression. 19 Impaired endothelial-dependent vasomotor activity is manifested as impaired nitric oxide (NO)-mediated vasodilation due to intracellular reactive oxygen species overproduction or enhanced endothelin-1 (ET-1)-mediated vasoconstriction through activation of RhoA/Rho-kinase pathway. 20 , 21 RhoA/Rho-kinase has also been implicated in VSMC hypercontraction leading to the spasm of coronary vessels and inflammation in ECs and VSMCs. 21 Diabetes, hyperlipidemia, and hypertension are the most important risk factors for CMD. 22 , 23 , 24 Several studies have reported that impaired endothelial NO production in the microcirculation due to endothelial dysfunction and vascular insulin resistance, as well as microvascular rarefaction and diminished angiogenesis, could lead to myocardial perfusion defects in patients or animals with diabetes. 25 , 26 In hypertensive populations, rarefaction and remodeling of intramyocardial coronary circulation, along with left ventricular hypertrophy, could contribute to CMD. 27 , 28 Numerous clinical studies have found that hyperlipidemia significantly impacts endothelium-dependent vasomotor function and acts as a major risk factor for CMD, with elevated levels of total cholesterol and low-density lipoprotein cholesterol. 29 , 30 The constriction of coronary microvessels has been considered to be the functional mechanism of CMD, 1 mediated by vasoconstrictors. Increased release of contractile agonists leads to abnormal vasoconstriction. Endothelin-1 (ET-1) induces the construction and remodeling of resistance arteries via the calcium-independent activation of Rho-kinase and the subsequent phosphorylation of the myosin light chain, causing CMD. 21 The rs9349379-G allele augmented the CMD risk by modulating vasoconstriction with higher plasma ET-1 levels. Patients with the rs9349379-G allele presented peripheral microvessel reactivity to ET-1 and vasoconstriction can be reversed by Zibotentan, an endothelin receptor blocker, indicating a potential targeted intervention for CMD 31 . Figure 1 CMD-related SNPs. CMD, coronary microvascular dysfunction; SNP, single nucleotide polymorphism. Figure 1 Table 1 CMD associated SNPs. 
Table 1 Target gene(s) Variant/alleles Intermediate phenotype References PHACTR1/EDN1 rs9349379-G Plasma ET-1 levels↑ 32 NOS3 rs1799983 GT Endothelial dysfunction 36 , 37 KCNJ11 rs5215 AA, GA, GG rs5218 CC rs5219 GA, GG rs5216 GG, CC Kir6.2 subunit of K ATP+ 37 , 40 PTX3 rs2305619 AA Inflammation 41 rs1333040 TT Impaired neovessel maturation 42 CMD, coronary microvascular dysfunction; ET-1, Endothelin-1; SNP, single nucleotide polymorphism. The diastolic dysfunction of coronary microcirculation includes endothelial-dependent and endothelial-independent vasodilation. 32 Reduced production and increased degradation of endothelial-derived diastolic factors including NO result in impaired endothelial-dependent vasodilation, while the diastolic disorders of VSMCs lead to damaged endothelial-independent vasodilation. 1 , 27 , 33 The production and release of NO are important for endothelial-dependent vasodilation. Synthesized from l -arginine and oxygen by endothelial nitric oxide synthase (eNOS), NO activates the guanylate cyclase pathway or reduces calcium inflow to mediate vasodilation in VSMCs. 34 Recent studies found that rs1799983 GT, the allelic variant of the eNOS gene NOS3 , was more represented in both CMD and CAD subjects than controls with normal coronary arteries, revealing that rs1799983_GT is one of the risk factors for CMD and CAD. 35 , 36 Coronary VSMCs mediate the contraction and relaxation of coronary arteries through calcium-dependent signals so that coronary blood flow rapidly adapts to changes in myocardial oxygen supply. 37 It is suggested that hypoxia-induced coronary vasodilation can be partly explained by hypoxia-induced K ATP + activation. 38 Researchers reported that the SNPs of KCNJ11 , encoding for the Kir6.2 subunit of K ATP+ , play an important role in the susceptibility of CMD and CAD. Rs5215 AA, GA, rs5218 CC, and rs5219 GA were more prevalent in CAD patients, while rs5219 GG increased more in CMD patients. 36 , 39 Additionally, rs5215 GG, rs5216 GG, and rs5216 CC were protective factors for CMD and CAD. 39 CMD may also be secondary to obstruction of the great vessels of the coronary artery after recanalization. 17 The rs2305619 AA of pentraxin 3 (PTX3), involved in inflammation, was associated with a higher incidence of microvascular obstruction in ST-elevation myocardial infarction patients after primary percutaneous coronary intervention and a higher 30-day mortality. 40 Furthermore, rs1333040 TT in the 9p21 chromosome was also more represented with microvascular obstruction in such patients after primary percutaneous coronary intervention. 41 ET-1-related polymorphisms, which augment ET-1 levels, could increase vascular tone and the subsequent dysfunction of coronary artery vasoconstriction. 42 Encoded by EDN1 , the expression of ET-1 is also distally regulated by the PHACTR1 gene and affected by enzymatic cleavage catalyzed by endothelin-converting enzyme 1. Carriers of at least one copy of the rs6458155C allele of the EDN1 gene, the minor rs9349379 G allele of the PHACTR1 gene, and the rs5665 T allele of the ECE gene, exhibited increased plasma ET-1 levels and CAD risk. 2 , 43 , 44 , 45 Meanwhile, the renin-angiotensin system is also activated to produce excessive angiotensin II, combined with angiotensin II type 1 receptor or type 2 receptor, exerting the vasoconstriction or vasodilation of coronary artery. 46 The A1166C CC genotype of angiotensin II type 1 receptor was associated with higher CAD risk and higher incidence of sudden cardiac death. 
47 The gene locus of angiotensin II type 2 receptor has also been found to impact the occurrence of premature CAD. 48 Owing to the effect of vascular tone on coronary microvessels, these gene polymorphisms of ET-1, angiotensin II type 1 receptor, and angiotensin II type 2 receptor could complement potential variants associated with CMD . Figure 2 Prediction of CMD-related SNPs. CMD, coronary microvascular dysfunction; EC, endothelial cell; SNP, single nucleotide polymorphism; VSMC, vascular smooth muscle cell. Figure 2 Table 2 SNPs associated with coronary artery vasoconstriction. Table 2 Target gene(s) Variant/alleles Intermediate phenotype References PHACTR1/EDN1 rs6458155C Plasma ET-1 levels↑ 45 rs9349379 G 44 ECE rs5665 T Endothelin-converting enzyme 1 46 AT1R A1166C CC Vasoconstriction 48 AT2R −1332 GA Vasoconstriction 49 ET-1, Endothelin-1; SNP, single nucleotide polymorphism. ECs mediate coronary artery vasodilation by regulating NO production and the opening and closing of ion channels. 49 The 894 TT genotype for the NOS3 gene, located in exon 7, has been reported to contribute to the increased risk of coronary spasm, CAD, and major adverse clinical events including death. 50 Additionally, the rs3918226C allele was associated with a reduced risk of CAD. 51 The −786C > T polymorphism, located in the promoter region, was also associated with higher CAD risk. 52 Reducing the NO-dependent vasodilation, the lipid phosphate phosphatase 3 ( LPP3 ) rs17114036 was also associated with the risk of CAD. 53 It is also reported that the aldehyde dehydrogenase 2 ( ALDH2 ) alcohol flushing variant, ALDH2∗2 (rs671), impacted the risk of CAD by inducing endothelial dysfunction. 54 Furthermore, coronary microvascular spasm owing to the dysfunction of VSMCs plays an important role in CMD pathogenesis. 55 The leiomodin 1 ( LMOD1 ) gene was implicated in maintaining the phenotype and contractile function of VSMCs. 56 The T allele of rs2820315, located intronically in the LMOD1 gene, contributes to a higher CAD risk. 57 Considering the effects of ECs and VSMCs on coronary microvascular vasodilation, these variations could also be beneficial to predicting the polymorphisms of CMD ( Table 3 ). Table 3 SNPs associated with coronary artery vasodilation. Table 3 Target gene(s) Variant/alleles Intermediate phenotype References NOS3 894 TT rs3918226C Endothelial dysfunction 51 52 LPP3 rs17114036 Endothelial dysfunction 54 ALDH2 rs671 Endothelial dysfunction 55 LMOD1 rs2820315 VSMC differentiation 58 SNP, single nucleotide polymorphism; VSMC, vascular smooth muscle cell. AS has been recognized as one of the key factors of CAD pathogenesis, owing to atheromatous narrowing and subsequent occlusion. 58 A previous study has reported that approximately 80 percent of women with chest pain and no obstructive CAD have AS. 59 Plaque erosion, fissuring, or rupture induced by AS also leads to the obstruction and increased vasoconstriction of microvessels. 60 Furthermore, focal or diffuse CA has also been suggested to be associated with CMD development. 18 EC dysfunction, changeable VSMC proliferation and migration, and inflammation, could be potential genetic links between AS and CMD. 60 , 61 , 62 Therefore, the SNPs associated with AS have great implications for discovering novel variants of CMD ( Table 4 ). Table 4 SNPs associated with coronary atherosclerosis. 
Table 4 Target gene(s) Variant/alleles Intermediate phenotype References CD40 rs1883832C Endothelial dysfunction 65 RGS5 rs1056515 Endothelial dysfunction 66 SMAD3 rs17293632 T VSMC proliferation 67 rs41291957 G > A VSMC phenotypic switch 71 ADAMTS7 rs3825807 rs1994016 VSMC migration 73 74 PECAM1 rs1867624 Vascular barrier integrity and inflammation 58 MIA3 rs67180937 G lower VSMC proliferation and harmful phenotypic transitions 69 LNK/SH2B3 R262W Inflammation 75 IL-10 −592A/C Inflammation 76 IL6R rs2228145 rs4537545 rs7529229 Inflammation 77 CXCL12 rs1746048 Inflammation 78 , 79 CD163 rs7136716 Inflammation 80 PDGFD rs974819 Inflammation 82 MCP-1 rs2857656 CC Inflammation 83 SNP, single nucleotide polymorphism; VSMC, vascular smooth muscle cell. EC dysfunction is pivotal to the initiation and progression of AS. 62 CD40 is involved in the activation of ECs and adhesion of leucocytes, and its interaction with CD40 ligand plays an important role in AS. 63 A previous study revealed that the rs1883832C allele increased the risk of CAD through enhancing CD40 expression and subsequent monocyte adhesion. 64 Moreover, the rs1056515 variant of regulator of G protein signaling 5 ( RGS5 ), accounting for the decreased gene expression, was associated with impaired EC function and increased AS risk. 65 Considering the role EC dysfunction plays in both CMD and AS, these SNPs associated with EC dysfunction might also be implicated in CMD development. Genome-wide association studies (GWASs)have reported that the rs17293632 T allele of SMAD family member 3 ( SMAD3 ) was associated with reduced SMAD3 expression, inhibiting VSMC proliferation and protecting against CAD. 66 , 67 Similarly, the rs67180937 G allele of MIA3 was associated with lower VSMC proliferation and harmful phenotypic transitions in AS. 68 MiR-143 and miR-145 in VSMC could regulate the proliferation of VSMC, and be associated with AS. 69 The rs41291957 G > A variant has been reported to affect miR-143 and miR-145 expression to facilitate VSMC switch to differentiated/contractile phenotype, contributing to a lower CAD risk. 70 A disintegrin and metalloproteinase with thrombospondin 7 (ADAMTS7) has been found to promote VSMC migration by degrading extracellular matrix. 71 The rs3825807 variant has been found to modulate ADAMTS7 maturation to protect against CAD, and rs1994016 of the ADAMTS7 gene was associated with increased risks of CAD and AS. 72 , 73 Given that VSMC proliferation and migration affect CMD, these loci have great potential for predicting CMD-related SNPs. Platelet endothelial cell adhesion molecule-1 (PECAM1) mediates the protection of vascular barrier integrity, the disruption of which leads to the development of chronic inflammatory diseases such as AS. 57 The rs1867624 variant reduced PECAM1 expression, destroyed coronary barriers, and increased CAD risk. 57 The variant of LNK/SH2B3 R262W, affecting platelet–neutrophil aggregates, also displayed increased CAD risk in individuals with JAK2 VF mutation. 74 Furthermore, the −592A/C polymorphism of anti-inflammatory factor interleukin-10 ( IL-10 ) was associated with slow coronary flow and AS in the Han Chinese population. 75 Minor alleles of rs2228145, rs4537545, and rs7529229 of the interleukin 6 receptor ( IL6R ) gene have been also reported to be negatively associated with CAD risk. 76 The rs1746048 variant of C-X-C motif chemokine ligand 12 ( CXCL12 ) modulating plasma CXCL12 levels, was associated with CAD risk and related complications. 
77 , 78 Though the intake of hemoglobin by CD163 inducing a pathogenic or protective macrophage phenotype in AS remains controversial, the minor allele of the rs7136716 genotype could mediate microvessel density and impact the risks of CAD and myocardial infarction by regulating CD163 expression. 79 A previous study reported that platelet-derived growth factor-D facilitated matrix metalloproteinase activity and monocyte migration in AS. 80 In the Han Chinese population, the SNP rs974819 of the PDGFD gene was sex-dependent and influenced CAD risk. 81 Besides, monocyte chemoattractant protein 1 (MCP-1) could promote recruitment of macrophages into atherosclerotic plaque. The rs2857656 CC genotype of MCP-1 contributed to a higher prevalence of carotid artery plaque. 82 Since inflammation is one of the important pathogenic mechanisms of CMD, these variants might be also associated with CMD risk or complications. Diabetes leads to endothelial dysfunction, changes in the levels of hormones, and alteration in the metabolism of VSMCs, which in turn cause the development of microvascular abnormalities. 83 During chronic diabetes, hyperglycemia and insulin resistance reduce eNOS expression in ECs, which causes decreased NO production, decreased endothelium-dependent relaxation, and CMD. 84 Functionally impairment of VSMCs in diabetes also aggravated macrovascular complications such as CAD 85 . The soluble NSF attachment protein receptor (SNARE) complex was involved in metabolic diseases. 86 The rs4717806 A and rs2293489 T minor alleles of syntaxin 1A ( Stx-1A ), a protein component of the SNARE complex, were associated with CAD risk. 87 Previous studies have reported chemerin-induced vascular inflammation and endothelial dysfunction. 88 Chemerin rs17173608 has been found to be a promising indicator for predicting insulin resistance and assessing the severity of CAD. 89 Polymorphisms of solute carrier family 2 facilitated glucose transporter member 1 ( SLC2A1 ) was associated with diabetic microangiopathy, possibly due to their role in the proliferation and extracellular matrix synthesis of VSMCs. 90 The rs1385129 of SLC2A1 was associated with the prevalence of cardiovascular complications in diabetic patients. 91 The rs9658664 of pancreastatin ( PST ), the peptide of which regulates glucose/insulin homeostasis, has conferred an increased risk for diabetes, hypertension, and CAD. 92 As one of the risk factors, diabetes-related SNPs affecting coronary artery structure and function could enlighten us in SNP prediction in CMD ( Table 5 ). Table 5 SNPs associated with CMD risk factors. Table 5 Target gene(s) Variant/alleles Intermediate phenotype References Stx-1A rs2293489 T rs4717806 A Metabolic syndrome and insulin resistance 88 Chemerin rs17173608 Vascular inflammation and endothelial dysfunction 90 SLC2A1 rs1385129 Proliferation and extracellular matrix synthesis of VSMCs 92 PST rs9658664 Diabetes 93 SORT1 rs599839 LDL-C level↓ 97 USF1 rs11576837 Hyperlipidemia 98 SCARB1 rs5888 HDL↑ 100 CETP rs1800775 HDL 101 PCSK9 rs11206510 rs11591147 LDL↓ 103 104 AGT M235T Coronary artery calcium 106 , 107 TLR4 896 G Blood pressure and pulse pressure↓ 108 PPARGC1A Gly482Ser Hypertension 109 PPAR-γ rs1801282 Glucose, cholesterol, triglyceride and ALT↑ 110 MCP-1 2518 A/G Blood pressure↑ 111 CMD, coronary microvascular dysfunction; HDL, high density lipoprotein; LDL, low density lipoprotein; SNP, single nucleotide polymorphism; VSMC, vascular smooth muscle cell. 
Hyperlipidemia has been recognized to play an important role in both CAD and CMD. 33 , 93 Lowering lipoprotein levels has been reported to improve CMD in hyperlipidemic patients. 94 Previous research highlighted that the association of the rs599839 G-allele of SORT1 with reduced low-density lipoprotein and triglyceride levels, and observed the decreased prevalence of CAD and myocardial infarction in subjects with the rs599839 GG genotype. 95 Upstream stimulatory factor 1 (USF1) is a transcription factor associated with familial combined hyperlipidemia and CAD. The rs11576837 variant reduces USF1 expression, improves insulin sensitivity and lipid profiles, and alleviates AS. 96 Scavenger receptor B1, encoded by the SCARB1 gene, mediates selective uptake of high-density lipoprotein cholesteryl esters into steroidogenic cells and the liver, impacting the development of AS through apolipoprotein B-containing particles. 97 It has been shown that the SCARB1 rs5888 AA genotype represents a higher level of large-sized high-density lipoprotein subtype, whereas the population with rs5888 GA and GG types shows increased CAD risk. 98 The rs1800775 variant, located in the promoter of the cholesteryl ester transfer protein (CETP) gene, was also associated with plasma high-density lipoprotein cholesterol level and CAD risk. 99 Proprotein convertase subtilisin/kexin type-9 (PCSK9), binding to low-density lipoprotein receptors on the cell surface and participating in lysosomal degradation, could be a target for dyslipidemia. 100 The genetic variants rs11206510 and rs11591147 were associated with cholesterol levels and contributed to a lower risk of myocardial infarction or CAD. 101 , 102 These hyperlipidemia-associated loci could be used to predict the potential risk SNPs of CMD. Hypertension has been recognized as one of the most important components among genetic risk factors of CAD. 103 The angiotensinogen ( AGT ) gene M235T variant was linked with CAD risk and coronary artery calcium in the CAD population. 104 , 105 A previous study reported that CAD patients with the Toll-like receptor 4 (TLR4) 896 G allele had lower systolic blood pressure and pulse pressure, compared with TLR4 896 A/A allele carrier. 106 The Gly482Ser variant of peroxisome proliferator-activated receptor-gamma coactivator 1-alpha ( PPARGC1A ), a gene related to energy metabolism and mitochondrial biogenesis, was associated with hypertension and CAD. 107 In addition, the loci of several genes including CYP17A1 , GUY1A1 , and ARHGAP42 were also found to be associated with hypertension and CAD. 103 Hypertension could be a risk for peroxisome proliferator-activated receptor-gamma ( PPAR -γ) rs1801282 mutation in CAD subjects. 108 The SNP of MCP-1 2518 A/G was also linked with blood pressure in asymptomatic patients with ischemic heart disease. 109 These hypertension-associated loci could also benefit our prediction of CMD risk SNPs. CMD refers to the structural and functional remodeling of the coronary microcirculation, and has a significant impact on the prognosis of concomitant diseases such as CAD. 1 , 110 Accordingly, CMD has become increasingly crucial to the diagnosis and treatment of coronary heart disease. 111 Currently, several obstacles exist in ensuring successful the prevention and treatment of CMD, including unclear pathogenic mechanisms, cumbersome diagnostic procedures, and lacking targeted interventions. Promisingly, it is particularly necessary to explore the pathogenesis and targeted intervention of CMD. 
A large number of epidemiological studies and GWAS have revealed that SNPs play an important role in the occurrence and development of a variety of cardiovascular diseases, and investigation of these SNPs seems to promise new insights into CMD pathogenesis and potential treatments. Consistently, predicting and screening CMD associated SNPs not only contributes to the early diagnosis of CMD-susceptible populations but also provides the possibility of targeted intervention for CMD. However, few studies have identified SNPs associated with CMD. Most extant studies focused on CMD pathogenesis such as coronary systolic function, diastolic function, and coronary microvascular obstruction, and some of which have been found to have a promising clinical application. To further explore more SNPs with a strong correlation with CMD risk, our review also illustrates potential CMD risk variants from cardiovascular diseases with similar mechanisms and risk factors. These loci could benefit the investigation of CMD-related SNPs and offer targeted interventions to be developed in the future. Z.H.Z. and F.D. conceived and designed the project; D.Y.T., J.L., Z.H.Z., and F.D. wrote the manuscript; D.Y.T., J.L., and Q.Y.Y. drew the figures and tables; X.Y.L. checked for spelling mistakes. These authors declared no conflict of interests. This work was supported by the Chongqing Key Project of Science and Technology Joint Medical Research (China) and the Chongqing Talent Program (China) . | Review | biomedical | en | 0.999997 |
PMC11696774
Extracellular matrix (ECM) remodeling occurs in tissue regeneration, repair, and degeneration. Proper ECM remodeling benefits the activation and migration of stem cells and facilitates tissue regeneration. On the other hand, excessive and uncontrolled ECM remodeling can cause fibrotic scar formation, fatty infiltration, and heterotopic ossification, leading to organ or tissue dysfunction. 1 , 2 , 3 , 4 , 5 Mesenchymal stromal cells are indispensable for ECM remodeling and actively participate in maintaining tissue homeostasis. In the recent decade, a group of mesenchymal stromal cells specifically expressing platelet-derived growth factor receptor α (Pdgfrα) has aroused researchers' interest. So far, Pdgfrα + stromal cells in skeletal muscles have been studied in detail and reviewed in a timely manner. 6 , 7 , 8 , 9 They present fascinating properties during muscle injury or atrophy. At first, muscle-resident Pdgfrα + stromal cells were found to have a “double-edged sword” effect on muscle regeneration and degeneration. These cells are indispensable for the activation of muscle stem cells (MuSCs, also called satellite cells) during muscle regeneration. On the other hand, they accumulate in degenerative muscles and differentiate into myofibroblasts or adipocytes, leading to fibro-fatty infiltration; thus, they are also called fibro-adipogenic progenitors (FAPs). In subsequent studies, muscle-resident Pdgfrα + stromal cells were found not only to participate in ECM remodeling but also to communicate actively with other cell types, especially immune cells and stem cells, in direct and indirect ways. These findings indicated that muscle-resident Pdgfrα + stromal cells exerted much more complex effects on regulating the microenvironment than expected. On the other hand, it has been well known that the fate of muscle-resident Pdgfrα + stromal cells is determined by their surroundings. These findings reveal that muscle-resident Pdgfrα + stromal cells play a central role in maintaining muscle homeostasis. Besides muscle, Pdgfrα + stromal cells have subsequently been identified in multiple organs and tissues, including arterial, cardiac, pulmonary, tendon, bone, and adipose tissues. Although these tissue-resident Pdgfrα + stromal cells have distinct names in these tissues, it is interesting that they share many common characteristics. All these groups of cells are involved in tissue fibrosis formation, and most of them have the adipogenic capacity or the ability to store lipids. In addition, these groups of cells participate in both tissue regeneration and degeneration and are indispensable for supporting the activation of stem cells. Finally, the biomarkers of these tissue-resident Pdgfrα + stromal cells are similar. In this review, we comprehensively introduce and discuss the properties of Pdgfrα + stromal cells in distinct tissues and the crosstalk between these cells and the microenvironment in various conditions. Muscle-resident Pdgfrα + stromal cells (also called FAPs) can be isolated using a fluorescence-activated cell sorter. Either Lin − (lineage-negative) Sca-1 + cells or Lin − Pdgfrα + cells can be used to refer to this group of stromal cells. Sca-1 can be used as a gating marker instead of Pdgfrα when sorting out muscle-resident FAPs, 10 because 85% of muscle-resident Pdgfrα-expressing cells were Lin − Integrinα7 − Sca-1 + . 11 , 12 FAPs in muscle may also express CD29 (∼90.0%), CD34 (∼30.0%), and CD90 (∼60.0%), but these markers are not specific for FAPs. 
It has been reported that these markers are also highly expressed in satellite cells. 11 Sca-1 cannot be used to define human FAPs since it is not expressed in human cells. Uezumi et al indicated that the CD56 − CD82 − CD318 − Pdgfrα + CD201 + marker could be used to identify human muscle-derived FAPs. 13 Furthermore, a recent study used CD34 + CD56 − CD45 − CD31 − as the strategy for isolating human FAPs ( Table 1 ). 14 Table 1 Biomarkers of Pdgfrα + stromal cells in different tissues in human or mice. Table 1 Tissues Species Gating Strategy Refs Muscle Mouse Lin(−):SM/C-2.6(+):PDGFRα(+) Uezumi et al, 2010 11 Lin(−):α7(−):Sca-1(+):CD34(+) Joe et al, 2010 12 ; Marcelin et al, 2020. 139 Lin(−):Ter119(−):α7(−):Sca-1(+):CD34(+) Malecova et al, 2018. 15 Lin(−):α7(−):Sca-1(+) Lemos et al, 2015 2 ; Heredia et al, 2013 77 ; Dong et al, 2014. 10 Lin(−):α7(−):PDGFRα(+) Saito et al, 2020 45 ; Dong et al, 2017. 95 Lin(−):Ter119(−):Sca-1(+) Contreras et al, 2019 & 2020. 24 , 25 Lin(−):Podoplanin(+):PDGFRα(+) Kuswanto et al, 2016. 81 Lin(−):PDGFRα(+) Uezumi et al, 2011. 4 Human CD56(−):CD82(−):CD318(−): PDGFRα(+):CD201(+) Uezumi et al, 2016. 13 CD15(+):PDGFRα(+):CD56(−) Arrighi et al, 2015. 31 CD34(+):CD56(−):CD45(−):CD31(−) Farup et al, 2021. 14 Adipose Mouse Lin(−):CD29(+):CD34(+):Sca-1(+) Rodeheffer et al, 2008. 129 Lin(−):Gp38(+):PDGFRα(+) Marcelin et al, 2017. 133 Lin(−):α7(−):Sca-1(+):CD34(+) Lemos et al, 2012. 132 Human Lin(−):CD34(+):CD44(+):PDGFRα(+) Marcelin et al, 2017. 133 Tendon Mouse TPPP3(−):Sca-1(+)PDGFRα(+) Harvey et al, 2019. 46 Lung Mouse Lin(−):Sca-1(+):CD34(+):Thy-1(+):PDGFRα(+) McQualter et al, 2009. 149 Heart Mouse Lin(−):Sca-1(+):PDGFRα(+) Soliman et al, 2020. 147 Bone marrow Mouse CD45(−):TER119(−):Sca-1(+):PDGFRα(+) Houlihan et al, 2012 158 ; Mashimo et al, 2019. 159 In uninjured muscles, FAPs locate in the interstitial space of muscle tissue. Although FAPs are close to vessels, they are proven to be distinct from pericytes and vascular smooth muscle cells. 11 In the context of muscle injury, FAPs undergo rapid proliferation, immigrate to injured sites, and rapidly accumulate circumferentially around injured muscle fibers. Interestingly, FAPs in glycerol-induced fatty degenerative muscle present with a round type, while they present with a typical elongated spindle shape in cardiotoxin-induced regenerative muscle. 11 The reasons causing this difference in morphology are still unclear, and we speculate the fate of differentiation may be one underlying reason. It has been identified that a single FAP possesses the ability to differentiate into a myofibroblast or an adipocyte in response to specific induction. 4 However, heterogeneity of FAPs has been noticed recently. Subgroups of FAPs play different roles in muscle development and injury. Malecova et al divided FAPs into four subgroups based on the expression of vascular cell adhesion molecule 1 (Vcam1) and Tie2 (encoded by gene Tek), i.e. , Tie2 high /Vcam1 − , Tie2 low /Vcam1 − , Vcam1 + , and double negative subgroup, of which Tie2 expresses low in Vcam1 + subgroup. Vcam1 + subgroup can only be observed in muscle injury, while Tie2 high /Vcam1 − subgroup and Tie2 low /Vcam1 − subgroup were likely present in all conditions. Moreover, the time to reach the peak of the Tie2 high /Vcam1 − subgroup and the Tie2 low /Vcam1 − subgroup was earlier and reduced more quickly than that of the Vcam1 + subgroup in the acute injury model. 
Interestingly, Vcam1 + FAPs appeared a high proliferative capacity and are closely associated with fibrotic phenotype in both acute muscle injury and muscle atrophy, implying this subgroup may be critical to fibrosis formation in muscles. 15 Liu et al compared the role of Tie2 + progenitors and Pdgfrα + progenitors in rotator cuff muscle injury using enhanced green fluorescent protein transgenic mice. They found that Pdgfrα + progenitors are prone to adipogenesis while Tie2 + progenitors are enriched in biomarkers of myofibroblast. 16 Because Tie2 is highly expressed both in FAPs and myofibroblasts, 11 it could not be used as a specific marker for FAPs. However, Tie2 + could be a useful indicator for the fibrotic phenotype ( Table 2 ). In another study, Giulio et al discovered that a subset of FAPs with higher expression of Sca-1 was more likely to differentiate into adipocytes. 17 Table 2 The function of major subgroups of Pdgfrα + stromal cells in varieties of diseases in each organ. Table 2 Tissues Biomarkers for subgroups Background Features Refs Muscle Vcam1(−)/Tie2 high , Vcam1(−)/Tie2 low and Tie2 low /Vcam1 Acute injury and dystrophy Vcam1 (−) /Tie2 high and Vcam1 (−) /Tie2 low expanding immediately after acute injury, while Vcam1 (−) /Tie2 high decrease firstly following by Vcam1 (−) /Tie2 low , the peak of Tie2 low /Vcam1 is later than the other two groups. Vcam1 (−) /Tie2 high :muscle growth, Vcam1 (−) /Tie2 low :neo-angiogenesis, Tie2 low /Vcam1: fibrosis Malecova et al, 2018. 15 Wisp1(+), Dlk1(+), Osr1(+) Dpp4(+), Osr1(+) Cxcl14(+) Regeneration Wisp1(+) sub for ECM remodeling, increases at early stage after injury, Osr1(+) sub is the dominant subgroup at late stage Oprescu et al, 2020. 19 CD90(+) and CD90(−) Fatty infiltration CD90(+) FAPs in human muscles are apt to adipogenesis Farup et al, 2021. 14 Adipose CD9 low and CD9 high High fat diet CD9 low sub representing adipogenic potential and CD9 high sub revealing a pro-fibrotic phenotype Marcelin et al, 2017. 133 CD55 and IL13ra1, VAP1 and Adam12, CD142 and ABCG1 Adipogenesis CD142 (+): ABCG1 (+) sub (Areg) showing an inhibitory effect on adipogenesis, Schwalie et al, 2018. 130 DPP4(+) Adipocyte development highly proliferative and multipotent progenitors, which can give rise to ICAM1+ and CD142+ preadipocytes Merrick et al, 2019. 134 Tendon TPPP3(+)Sca-1(−)Pdgfrα(+); TPPP3(−)Sca-1(+)Pdgfrα(+) Tendon regeneration and scar formation TPPP3+Sca-1-Pdgfrα+ subgroup is referred as tendon stem cells; TPPP3-Sca-1+Pdgfrα+ subgroup is tendon-resident FAPs, leading to scar formation after injury. Harvey et al, 2019. 46 Aorta Sca-1(+)PDGFRα(+)PDGFRβ(−) and Sca-1(+)PDGFRα(+)PDGFRβ(+) Aorta injury Sca-1(+)PDGFRα(+)PDGFRβ(−) giving rise to smooth muscle cells in severe injury Tang et al, 2020. 140 Heart Lin(−):CD29(+):mEF-SK4(+):PDGFRα(+):Sca1(+):periostin(+) Heart failure This subgroup secrets IL-17 to recruit immune cells and contributes to fibrosis Chen et al, 2018. 144 Since FAPs go through different stages during muscle injury, it is important to explore the dominant subgroup in each stage. Scott et al found that hypermethylated in cancer 1 (HIC1) enriched in mesenchymal progenitors (MPs) and regulated the MP quiescence. Depletion of HIC1 in MPs will cause impaired muscle regeneration. When analyzing the components of MPs, FAPs are found to be the major population. Then, the authors explored the trajectory of the proportion and the subgroups of HIC1 + MPs in injured muscles. 
The results showed that HIC1 + MPs reached the peak at the stage of injury, and returned to baseline at 28 days post-injury (DPI). Single-cell RNA-sequencing analysis showed that chemokines enriched in HIC1 + FAPs at 1 DPI, while cell proliferation associated biomarkers up-regulated at 2 DPI. After expansion, ECM proteins are highly expressed until 7 DPI, indicating an ECM remodeling process during muscle regeneration. 18 Oprescu et al traced the dynamics and heterogeneity of whole FAPs during muscle regeneration using single-cell RNA sequencing. They found that FAPs and muscle fibers were two major cell groups in normal muscles. FAPs in injured muscles at an early stage (before 2 DPI) expressed abundant chemokines, including chemokine (C–X–C motif) ligand families and chemokine (C–C motif) ligand families, indicating that activated FAPs might have potential roles in regulating immune cells. Then, WNT1 inducible signaling pathway protein 1 (WISP1) + FAPs increased at 3.5 and 5 DPI, and this subgroup enriched in fibrotic biomarkers. At 10 DPI, the regenerative stage of muscle repair, delta like non-canonical Notch ligand 1 (Dlk1) + FAPs became a major group. Odd-skipped related transcription factor 1 (Osr1) + FAPs were the dominant subgroup at 21 DPI. 19 Those findings indicate that FAP subgroups are intimately regulated in different stages of muscle injury ( Table 2 ). Another study showed that single-cell RNA sequencing of Pdgfrα + cells in muscle identified six subpopulations, and the other five clusters can all be differentiated from a common Osr1 + cluster. Adam12 + cluster and Gap43 + cluster express the genes encoding the interleukin (IL)-4 receptor alpha and IL-13 receptor subunit alpha 1 with the ability to respond to IL-4 or IL-13 signal. Clu + cluster is more likely to mineralization. Gli + cluster and Hsd11b1 + cluster manifest an unparalleled neuromuscular junction association in response to nerve injury. 20 According to research published by Hongchun Lin et al, FAPs from normal muscles could be divided into three subpopulations as C1, C2, and C3. However, in a mouse model of muscle atrophy caused by the denervation of the sciatic nerve, the three subpopulations from the denervated gastrocnemius muscle acted out transcriptional changes. Compared with normal FAPs, the denervated C1 subpopulation showed an apoptotic phenotype characterized by increased marks of apoptosis and the P53 pathway. Next, the denervated C2 subpopulation represented a pro-fibrotic phenotype with enriched denotes in epithelial–mesenchymal transition, transforming growth factor-β (TGF-β) signaling, and angiogenesis. Finally, the denervated C3 subpopulation unveiled pro-adipogenic features enriched with adipogenesis, MYC targets V1, and WNT/beta-catenin signaling. 21 It was reported that diabetes mellitus could promote fibro-fatty infiltration in muscles. Farup et al analyzed the subgroups of FAPs in patients with diabetes. The findings suggested that CD90 + FAPs are associated with muscle degeneration under the regulation of PDGF signaling. 14 Muscle-resident FAPs may develop from embryonic interstitial muscle connective tissue cells. 22 Uezumi et al and Joe et al examined the differentiation of FAPs and found that FAPs could commit to osteoblastic lineage, myofibroblasts, and adipocytes . Nevertheless, FAPs scarcely differentiate into myoblasts, indicating that FAPs belong to a lineage distinct from satellite cells. 
11 , 12 TGF-β and PDGF signaling are the two most important stimuli that commit FAPs to myofibroblasts. Treatment with TGF-β1 significantly enhances the expression of α-smooth muscle actin (α-SMA), the classic biomarker of myofibroblasts, as well as connective tissue growth factor (CCN2/CTGF), fibronectin, β1-integrin, and collagen I in FAPs. 2 , 11 , 12 , 23 , 24 , 25 On the other hand, Pdgfrα regulates the fibrotic phenotype of FAPs. Mueller et al found that intronic variants of Pdgfrα can be produced in FAPs through different polyadenylation sites; the protein isoform encoded by one variant contains a truncated kinase domain. This isoform is highly expressed in regenerative FAPs, restrains the overactivation of Pdgfrα, and attenuates fibrosis. 26 Recent studies have confirmed that skeletal muscles respond to the autotaxin/lysophosphatidic acid/lysophosphatidic acid receptor axis and trigger fibrosis. Mechanistically, lysophosphatidic acid may increase the number of FAPs in skeletal muscle through the extracellular signal-regulated kinase 1/2 signaling pathway and promote their phenotypic differentiation into myofibroblasts. After lysophosphatidic acid treatment, vinculin, vimentin, and α-SMA mRNA expression (markers of myofibroblast differentiation) were up-regulated, while the protein and mRNA levels of adipoq (an adipocyte marker) decreased. 27 Intracellular type II deiodinase and type III deiodinase may be associated with the differentiation of FAPs toward adipocytes versus fibroblasts. In a primary cell culture model, the two enzymes followed entirely different trajectories over time. When FAPs differentiated toward adipocytes, type II deiodinase reached its peak at the 50th hour and then declined sharply, while type III deiodinase climbed to its highest point just as type II deiodinase began to decrease. When FAPs differentiated into myofibroblasts, type II deiodinase expression peaked after two days, whereas type III deiodinase declined continuously until it was no longer detectable. 28 Figure 1 Multi-lineage differentiation of Pdgfrα + stromal cells in vitro and in vivo . Pdgfrα + stromal cells have multipotent capacity in different conditions. Representative signals regulating differentiation of Pdgfrα + stromal cells in vitro and in vivo are listed in the figure; "green line" indicates promoted, "red line" indicates inhibited, and "…" indicates that no signals have been identified to date. Figure 2 Representative milestone events leading to the discovery and development of Pdgfrα + stromal cells in various tissues and diseases. Cocktails consisting of insulin, dexamethasone, and 3-isobutyl-1-methylxanthine can induce the adipogenesis of FAPs in vitro . Among the inducers in the cocktails, insulin is the most important adipogenic factor. 11 In one study, IL-1β and IL-4 were used to induce M1 and M2 polarization of macrophages, respectively. IL-4-treated macrophages promoted, while IL-1β-stimulated macrophages inhibited, the adipogenesis of FAPs. 29 Preconditioning showed potential benefits in boosting muscle regeneration after ischemia-reperfusion injury. He Zhang et al demonstrated that preconditioning stimulated FAP differentiation into brown/beige-like adipocytes by modulating the β3AR signaling pathway, thereby expediting muscle regeneration, as evidenced by an increase in centrally nucleated regenerating myofibers after ischemia-reperfusion injury. 
30 Meanwhile, some other studies confirmed that FAP-derived adipose showed some features of brown adipose, which was sensitive to insulin-induced glucose uptake. 31 , 32 FAPs can commit to osteoblastic lineage. The member of bone morphogenetic protein (BMP) families can promote the osteoblastic induction of FAPs. 11 , 12 , 33 They can lead to heterotopic ossification in some conditions. Oishi et al detected the osteogenesis capacity in human muscle progenitors and human FAPs. They found these two progenitors had similar differentiative capacities in vitro . However, only FAPs can successfully form a bone-like tissue in vivo . The skeletal muscle resident Tie2 + FAPs were the main initiator of heterotopic ossification in mice. 34 Eisner and colleagues used BMP-2 treated acute injury model and demonstrated that FAPs, but not progenitors derived from circulation, were the main cellular source for heterotopic ossification. Inflammatory microenvironment perturbation may regulate the osteogenesis of FAPs in muscle injury. 35 Furthermore, miRNA-146b-5p and miRNA-424 boosted the osteogenesis of FAPs. 36 In addition to trauma, heterotopic ossification frequently occurs in nerve injuries. Sang et al revealed that calcitonin gene-related protein regulated heterotopic ossification after spinal cord injury . 37 In addition, IL-1 from activated monocytes can promote FAP mineralization demonstrated by up-regulated Runt-related transcription factor 2 expression in neurogenic heterotopic ossifications, which could be attenuated by supplementary anti-IL-1β neutralizing antibody. 38 Moreover, FAPs contributed to hereditary heterotopic ossification disease, named fibrodysplasia ossificans progressiva. ACVR1 in FAPs could lead to heterotopic ossification phenotype in patients with fibrodysplasia ossificans progressiva after binding to its ligand, activin A, 39 which we will discuss in a later section. Figure 3 Biofunctions of Pdgfrα + stromal cells in different diseases in various organs. Fig. 3 FAPs have a “double-edged sword” effect during muscle regeneration and degeneration. 40 Although FAPs are non-myogenic in nature, they can support the activation of muscle stem cells and facilitate muscle regeneration. 11 , 40 On the contrary, FAPs accumulate excessively in degenerative microenvironment and then differentiate into adipocytes or myofibroblasts, thus promoting muscle atrophy. 24 , 41 , 42 Unfortunately, it is impossible to prevent fatty infiltration and collagen deposition through diminishing FAPs, because ablation of FAPs can remarkedly impair muscle repair. 1 , 43 , 44 In our opinion, generally, FAPs have a “repairman” role. In regeneration, they facilitate the muscle fiber formation followed by going back to a quiescent status to maintain muscle homeostasis. In degeneration, the continuously high level of FAPs might result from the compensatory response to activation failure of muscle stem cells, which are required to muscle repair. Subsequently, the increased FAPs might undergo adipogenesis or fibrogenesis to occupy the space which occurred after fiber atrophy. In general, during muscle regeneration, FAPs undergo four important stages: proliferation, regeneration, senescence or apoptosis, and clearance and quiescence. 8 , 40 However, the senescence or apoptosis process can be prevented in a degenerative environment. Then, FAPs may differentiate into adipocytes or myofibroblasts, thus leading to fatty degeneration. 
45 Timely apoptosis of FAPs is a key event for the switch between muscle regeneration and degeneration. The proliferation of FAPs at early stage benefits for activation of muscle stem cells, so inhibiting the proliferation of FAPs is not the best choice for preventing degenerative changes. 8 , 40 , 45 Current strategies that prevent the progression of fibro-fatty infiltration and muscle degeneration often focus on inducing quiescence, senescence, or apoptosis to avoid abnormal accumulation of FAPs or preventing adipogenesis and fibrogenesis directly. Since the crosstalk between FAPs and the microenvironment determined the fate of FAPs as well as remodeling the microenvironment, we hereby detailly describe the factors that play key roles in this communication . Figure 4 The classical pathways that regulate the communication between Pdgfrα + stromal cells and microenvironment. Fig. 4 Pdgfrα is an indispensable biomarker for the identification of FAPs, so PDGF ligands, especially PDGFAA, certainly play an important role in regulating FAPs. PDGFAA-Pdgfrα signaling contributes to the phenotypic switch toward pro-fibrotic FAPs. 7 , 46 Intronic polyadenylation of Pdgfrα attenuates FAP activation and fibrosis. 26 In addition, PDGFAA stimulates the TGF-β signaling through binding to Pdgfrα, thus promoting scar formation or fibrosis. 7 Uezumi et al found that FAPs moderately expressed OB-Cadherin (Cadherin-11). 11 It was reported that OB-Cadherin interacted with Pdgfrα. 47 However, the effect of OB-Cadherin on FAPs is still unknown. It is interesting to explore the role of OB-Cadherin in regulating FAPs because OB-Cadherin regulates cell migration, 48 differentiation, and angiogenesis. 49 , 50 , 51 , 52 , 53 , 54 , 55 TGF-β is an important cytokine for FAP-mediated matrix remodeling and fibrosis. Unexpectedly, it can reduce the expression of Pdgfrα on FAPs. TGF-β promotes the myofibroblast differentiation, and then FAPs lose Pdgfrα expression during this process. 24 The polarization of macrophages determines the production of TGF-β and survival of FAPs in injured muscles. TGF-β1 which is derived from Ly6C low macrophages induces FAPs to a fibrogenic phenotype and results in collagen deposition in injured muscle. Furthermore, TGF-β can perturb TNF-α-induced FAP apoptosis in an adipogenic environment. Blockage of TGF-β by decorin can recover FAP apoptosis and reduce the fatty infiltration. 56 TGF-β was found to down-regulate the expression of transcription factor 4 in FAPs through the ubiquitin-proteasome system and canonical Wnt/wingless signaling cascades. 25 However, the activation of the TGF-β/Smad3 pathway in FAPs contributed to the fibrosis in amyotrophic lateral sclerosis-induced muscle atrophy. 57 Recently, Uezumi et al identified BMP3B (GDF-10), a member of growth/differentiation factor families, was indispensable for maintaining muscle mass as well as muscle–nerve interaction. In sarcopenia, BMP3B expression was decreased in FAPs. Administration of ectogenic BMP3B efficiently reversed the aging-related muscle atrophy. 44 The members of the fibroblast growth factor (FGF) family are critical to regulate fibrogenesis and adipogenesis of FAPs. Four FGF receptors (FGFR) can be identified in satellite cells and FAPs. FGF21-FGFR2-betaKlotho pathway promoted adipogenesis of FAPs. 
58 The elevated miR-214-3p had the potential to hasten FAP fibrogenesis by adjusting the FGF2/FGFR1/TGF-β axis, which shed light on new strategies for the treatment of fibrous degeneration of Duchenne muscular dystrophy (DMD) by interfering miR-214-3p. 59 A recent literature by Sebastian et al demonstrated that more aberrant FGF-2-dependent signaling promoted the formation of intramuscular adipose tissue in skeletal muscle donated by older people who are more than 75 years old when compared with that in people who are less than 55 years old. Mechanically, the elevated FGF-2 in aged skeletal muscle cells irritated the adipogenic differentiation of FAPs through the FGF-2/FGFR/FRA-1/miR-29a/SPARC axis. 60 Cellular communication network (CCN) factor families play a critical role in crosstalk between FAPs and surroundings. FAPs derived from young mice exhibit a higher potency in proliferation and adipogenesis but a lower ability for fibrotic differentiation when compared with those derived from elder mice. 61 Young FAPs can significantly activate muscle stem cells through WISP1, which is also called CCN4. Injection of WISP1 can improve the muscle regeneration. 61 Moreover, another study found that CTGF-CCN2 signals could promote denervation-induced fibrosis through a TGF-β-independent way. 23 In addition, it is well known that chronic kidney disease can bring about fatty infiltration in obese people. ECM protein CCN1 is up-regulated in chronic kidney disease and commits FAPs to an adipogenic fate. 62 It is still not fully understood how FAPs are activated after muscle injury. Joseph et al identified that the proteolytic process of hepatocyte growth factor could be activated after muscle injury. Hepatocyte growth factor activator can stimulate both FAPs and satellite cells to GAlert status for accelerating stem cell activation and tissue repair. 63 In addition, HIC1 may play an important role in quiescence–activation transition in FAPs. Deleting HIC1 can arouse FAPs and impair the muscle regeneration. 18 , 64 Currently, several transcriptional factors have been explored to regulate the proliferation and differentiation of FAPs. The key role of Osr1 in FAPs and the function of the Osr1 + FAP subgroup in muscle development and muscle injury have been increasingly emphasized. Osr1 can regulate the proliferation, apoptosis, adipogenesis, and quiescence of FAPs in injured muscles. 65 , 66 Two subgroups of Osr1 + FAPs, which additionally express dipeptidyl peptidase-4 or C–X–C motif chemokine ligand 14, can be found in uninjured muscles. 19 Another study identifies that Osr1 remains at a low level and frequency in adult FAPs. However, the expression of Osr1 was reactivated in FAPs after acute muscle injury. Osr1 + FAPs presented with an active phenotype. 65 Interestingly, Osr1 + muscle connective tissue cells may be embryonic FAPs. During muscle development, these cells can partially give rise to adult FAPs and are critical to provide a pro-myogenic niche for myogenic progenitors. Osr1 deficiency leads to limb muscle patterning defects. 22 These findings demonstrate that the Osr1 + subgroup of FAPs may be more primitive and active than Osr1 − FAPs. Krüppel-like factor families were reported to regulate the differentiation of FAPs. Contreras et al explored that the bulk protein level of transcription factor 4/TCF7L2 was up-regulated due to the expansion of transcription factor 4-positive FAPs in dystrophic muscles, denervated muscles, and chronically damaged muscles. 
67 However, at the single-cell level, TGF-β-TGFBR1 signaling repressed the expression of transcription factor 4/Tcf7l2 through histone deacetylase (HDAC)-mediated degradation in FAPs and promoted fibrogenic differentiation. 25 Krüppel-like factor 6 was reported to regulate the expression of matrix metalloproteinase 14, 68 a critical factor that positively regulates adipogenesis of FAPs. 69 , 70 Targeting Krüppel-like factor 6 using miR-22-3p could efficiently down-regulate the expression of matrix metalloproteinase 14. 68 Peroxisome proliferator-activated receptor γ (PPARγ) is a key regulator of adipogenesis. TGF-β1 and nitric oxide could inhibit the expression of PPARγ by up-regulating miRNA-27b to prevent adipogenesis of FAPs. 24 , 71 Furthermore, the immunosuppressant azathioprine can negatively regulate the adipogenesis of FAPs by inhibiting the expression of PPARγ through inactivation of the AKT-mTOR pathway. 72 Recently, glucocorticoid compounds were found to inhibit the adipogenesis of FAPs. They can induce the transcription of Gliz/Tsc22d3 and inhibit the expression of PPARγ. 73 In addition, the GSK3 inhibitor LY2090314 can inhibit adipogenesis of FAPs by repressing PPARγ via inhibition of the WNT/GSK/beta-catenin pathway. 74 These studies together demonstrated that PPARγ should be a critical therapeutic target for adipogenesis of FAPs. Microenergy acoustic pulses induced FAP brown/beige adipogenesis in vitro , featured by induction of uncoupling protein 1, a hallmark of brown/beige fat, and these adipocytes could further secrete several growth factors that accelerate muscle repair, as evidenced by other studies. 75 In one recent study, Wosczyna et al uncovered that the miR-206/Runt-related transcription factor 1 axis played an important role in regulating the adipogenesis of FAPs, 76 demonstrating that miRNAs participate in regulating the fate of FAPs, which should be further investigated. Inflammation takes place at the onset of muscle injury. The interaction between immune cells and FAPs is a key event in the early stage of acute muscle injury. 8 Immune cells and FAPs can regulate each other in an indirect way. 8 , 19 Eosinophils seem to play opposite roles in acutely and chronically injured muscles. In glycerol-induced acute muscle injury, type 2 innate signals derived from eosinophils, i.e. , IL-4/IL-13 signaling, regulate FAPs to facilitate muscle regeneration. IL-4 can promote the proliferation of FAPs by activating the IL-4 receptor/STAT6 pathway and can inhibit adipogenesis of FAPs, thus preventing fatty infiltration in muscle injury. 77 Moreover, IL-4 can enhance phagocytosis by FAPs. The phagocytic ability of endothelial cells, muscle progenitors (satellite cells), macrophages, and FAPs has been compared. Interestingly, FAPs are more efficient in phagocytizing necrotic debris compared with the other three cell types. Thus, IL-4 not only commits FAPs to a regenerative fate in muscle injury but also promotes the clearance of necrotic debris. 77 In another study, the researchers found that glucocorticoids could induce adipose accumulation in muscle injury; however, treatment with IL-4 antagonized this effect through IL-4 receptors. 10 These two studies together indicate that IL-4/IL-4 receptor signaling is critical to creating a regenerative environment by regulating the proliferation, adipogenesis, and phagocytosis of FAPs. However, Kastenschmidt et al reported that eosinophils are elevated in an IL-5-dependent manner driven by group 2 innate lymphoid cells (ILC2s). 
The expanded ILC2s and eosinophils could negatively impact muscle regeneration and promote fibrosis in DMD. 78 These findings implied that the function of eosinophils on muscle regeneration and degeneration is dependent on different circumstances. Type І inflammatory cytokines prevent the adipogenic differentiation of FAPs. It was observed that IL-1α and IL-1β intensely restrained FAP adipogenesis. On the other hand, betacellulin and epidermal growth factor conspicuously facilitate FAP proliferation. 79 TNF-α is a key cytokine that promotes apoptosis of FAPs in regenerative muscle. Ly6C high macrophages induce FAP apoptosis by producing TNF-α and prevent collagen deposition. Contrarily, TNF-α may exacerbate fibrosis in degenerative muscles. Anti-TNF treatment can attenuate fibrosis in degenerative muscles. 2 Mechanistically, TNF-α-stimulated fibrosis might be mediated partly through WNT/beta-catenin signaling. 25 FAPs can also produce and secrete inflammatory cytokines to regulate other cells and influence muscle repair. Sarcopenia is commonly seen in the elderly, characterized by muscle wasting, fatty infiltration, and fibrosis. 80 Impaired recruitment of regulatory T cells is an important reason for muscle regeneration failure in sarcopenia. Regulatory T cells marked by Foxp3 and CD4 are critical to tissue repair and regeneration. It was reported regulatory T cells significantly decreased in elder mice and the muscle regeneration was impaired. 81 Kuswanto et al found that IL-33 was indispensable for maintaining the regulatory T cell homeostasis after muscle injury. Furthermore, they proved that FAPs were the dominant source of IL-33 in muscles. Muscle injury can lead to a mass of dead FAPs that can release abundant IL-33. Then, regulatory T cells can be recruited through IL-33/suppression of tumorigenicity 2 signal to injured muscle and facilitate muscle regeneration. 81 Interestingly, in the study performed by Kastenschmidt et al which we discussed above, the expanded ILC2s and eosinophils were just regulated by IL-33 secreted by FAPs. 78 Denervation-associated muscle atrophy is a common complication after nerve system injury or diseases. Several studies proved that FAPs expand in denervated muscles. 57 , 67 , 82 Although denervation is less inflammatory muscle atrophy compared with acute muscle injury or inflammatory muscle diseases, Madaro et al revealed that denervation causes FAPs to secrete IL-6 through activation of the STAT3 pathway. Blockage of FAP-derived IL-6 can counter muscle atrophy and fibrosis in denervated muscles. 82 Emerging evidence revealed that epigenetic remodeling of histones can change the natural fate of FAPs and regulate them to commit to myogenic phenotype. Noticeably, the pharmacological treatment with HDAC inhibitor (HDACi) has been under clinical trials on DMD. HDACi can induce FAPs to gain promyogenic phenotype in a dystrophic muscle environment at an early stage of DMD. 83 HDAC/myomiRNAs/BAF60 axis is critical to this transition. The expression of myomiRNAs, such as miRNA-1.2, miRNA-133, and miRNA-206, up-regulate in FAPs after down-regulation of HDAC. These miRNAs favor the formation of BAF60C by targeting BAF60A and B. Furthermore, one study found that treatment with HDACi could promote the expression of miRNA-206 in extracellular vesicles, which promotes MuSC activation and muscle regeneration. 84 However, current studies also demonstrated that FAPs derived from late-stage of DMD are resistant to HDACi-induced chromatin remodeling. 
83 , 85 , 86 Consalvi et al recently revealed the possible underlying mechanism. In the late stage of DMD, FAPs exhibit aberrant HDAC activity, which leads to pan-hypoacetylation at the promoters of genes regulating the cell cycle; this process cannot be fully reversed by HDACi. On the other hand, the authors found that H3K9/14 hyperacetylation at promoters of senescence-associated secretory phenotype genes in FAPs at a later stage of DMD could be inhibited by HDACi, and fibrosis could thereby be attenuated. 87 In addition, several studies in recent years confirmed that HDACi could attenuate fibrosis in multiple organs and tissues. 88 , 89 In muscle, HDACi treatment could attenuate the fibro-adipogenic phenotype of FAPs. 84 , 90 Besides acetylation, methylation also regulates the myogenic fate of FAPs. The cooperation between PR domain-containing 16 and G9a/GLP (H3K9 methyltransferases) can silence myogenic genes of FAPs, thus repressing their myogenic fate, as evidenced by enriched H3K9me2 levels at regulatory genomic loci of myogenic genes (MyoD transcriptional start site, TSS; MyoD core enhancer, CE; Desmin, Des). 91 During muscle contraction, myofibers can produce a set of active factors called myokines in paracrine- and autocrine-dependent manners. 92 These myokines accumulate locally to form a special “myokine microenvironment”, which plays a critical role in regulating physiological and pathological processes. Only limited studies have explored how myokines affect FAPs. In our previous study, we compared the levels of several myokines in injured muscles with those in normal muscles. The results showed that IL-15 might participate in regulating the biological behavior of FAPs. Further investigation revealed that IL-15 can stimulate the proliferation of FAPs through the JAK/STAT pathway. Moreover, treatment with IL-15 can prevent fatty infiltration and promote muscle regeneration. 93 In another study, Steven and colleagues found that overexpression of leukemia inhibitory factor, another myokine, could suppress FAPs and attenuate fibrosis by abrogating TGF-β signaling. 94 Interestingly, myokines may also participate in muscle degeneration caused by systemic disease. A recent report showed that myostatin exacerbated collagen deposition in muscles in chronic kidney failure. Overexpression of myostatin promoted FAPs to differentiate into myofibroblasts through activation of Smad3 signaling. 95 Senescent cells permanently enter cell-cycle arrest and secrete abundant cytokines collectively termed the senescence-associated secretory phenotype. Cellular senescence occurs not only in older individuals but throughout life. In recent years, the effects of senescent FAPs on muscle regeneration and degeneration have been studied in detail. Recently, Emily Parker et al reported that FAP-derived extracellular vesicles showed a significant increase in miRNAs such as miR-124, miR-181a, miR-let-7b, and miR-let-7c after 14 days of single-hindlimb immobilization in mice; these miRNAs have previously been shown to play vital roles in cellular senescence and muscle atrophy. Notably, this increase was not observed in FAP-derived extracellular vesicles from IL-1β knockout mice. These data support the idea that IL-1β activated by muscle disuse can directly stimulate the release of atrophy- and senescence-associated miRNAs.
96 On the other hand, adenosine 5′-monophosphate (AMP)-activated protein kinase (AMPK) pathway is considered to be an important regulator for FAP senescence. Saito et al compared the senescence of FAPs between muscle acute injury and idiopathic inflammatory myopathies. The authors found that exercise could induce the senescent phenotype of FAPs and promote muscle regeneration. The astriction of exercise on FAPs also was observed by Valero et al. 97 However, the senescence of FAPs regulated by exercise was prevented in chronic myopathy. The AMPK pathway is a possible regulator that makes this discrepancy. Administering AICAR, an agonist of the AMPK pathway, could restore the senescent phenotype of FAPs and improve muscle function. 45 Consistently, Liu et al also found inhibiting AMPK in FAPs enhanced the expression of p65 and TGF-β1 and induced an apoptosis-resistance phenotype. Furthermore, conditional depletion of AMPKα1 in FAPs enhanced the fibrogenic phenotype of these cells. 98 On the contrary, some other important studies found senescent FAPs exerted a harmful effect on muscle regeneration. Using Z24 −/− mice, an accelerated aging mouse model, Liu et al found that nearly half of FAPs presented with a senescent phenotype, the proportion was much higher than that in wild-type muscles. These senescent FAPs could inhibit the proliferation and differentiation of MuSCs. Administration of senolytics could efficiently transfer the senescent FAPs to apoptotic phenotype and restore the number of MuSCs. 99 Another study recently published by Nature resolved an atlas of senescent cells in muscles. The authors first established a technology that could identify and isolate senescent cells in vivo . They found senescent cells existed in both cardiotoxin-injured muscles from young and old mice. Macrophages, FAPs, and MuSCs were three major groups in senescent cells. Clearance of senescent cells could improve muscle regeneration and attenuate fibrosis. 100 The paradox about the function of senescent FAPs on muscle regeneration is still unclear. The differences in disease models or interventions may be possible reasons to explain the discrepancy. However, the detailed underlying mechanism should further be explored and verified. Another aspect should be noticed is that the relationship between senescence and apoptosis. Although senescence and apoptosis are two distinct cellular processes, they share the same pathway and stimuli partially. Many genes can be involved in both senescence and apoptosis, such as TRP53. Furthermore, varieties of senescent inducers may have the ability to induce apoptosis based on the different doses. One therapeutical strategy to treat cellular senescence is inducing the senescent cells into apoptosis. Thus, we think senescence and apoptosis may often coexist in complex circumstances, such as injured or atrophied muscles. Previous studies demonstrated that cilia were important to FAPs. Kopinke et al found that fatty degeneration was largely prevented after removing cilia from FAPs. Desert hedgehog protein secreted by Schwan's cells could inhibit the expression of PTCH1 and activate Smoothened on cilia, thus promoting the production of tissue inhibitor of metalloproteinase 3. The overexpression of tissue inhibitor of metalloproteinase 3 restricted the adipogenesis of FAPs through a non-cell-autonomous mechanism, which dominantly reversed the adipogenic induction of matrix metalloproteinase 14. 
33 Yamakawa et al found that ablation of trichoplein keratin filament binding gene could induce ciliary elongation on FAPs in injured muscles, stimulate muscle regeneration, and inhibit adipogenesis. Mechanically, insulin/Akt and IL-33/suppression of tumorigenicity 2/JNK pathways regulated the dysfunction of cilia-dependent lipid raft dynamics and the expression of IL-13, which facilitated myoblast proliferation and M2 macrophage polarization. 101 Yao et al defined a subpopulation of muscle-resident FAPs characterized by heightened Hedgehog signaling, namely Gli1 + FAPs. Gli1 + FAPs with elevated heightened Hedgehog signal promoted regeneration of skeletal muscle by delivering trophic signals to support myogenesis while placing restrictions on adipogenic differentiation. 102 It is well known that FAPs and myocytes are overloaded with lipids in obesity. 103 Obesity commonly results from a long-term high-fat diet (HFD). In mouse models, HFD can regulate the proliferation of intradiaphragmatic FAPs through up-regulating serum thrombospondin 1. HFD can also promote the adipogenesis and fibrogenesis of FAPs. Those pathological changes of FAPs lead to diaphragm contractile deficits and finally induce respiratory dysfunction. 104 However, it was reported that short-term HFD could reprogram mitochondrial dysfunctions by regulating the beta-catenin/follistatin axis, thus ameliorating the pathological changes in dystrophic mice. 105 Mogi et al investigated the intramuscular fat deposition in wild-type mice and several diabetic mouse models, including KKAy mice, db/db mice, and diet-induced diabetic mice. They found that diabetes promoted aging-related obesity by inducing aberrant adipogenesis of FAPs. 106 Souza et al studied the effects of obesity and exercise on muscles exposed to radiation, and they found that HFD could increase fibrosis and fatty infiltration in muscles. Moreover, they noticed that the infiltration of FAPs in muscle was reduced in obese mice with treadmill exercise. 107 Sergio Perez-Díaz et al discovered that mice skeletal muscle FAPs could highly secrete nidogen-1 in response to HFD. Upward nidogen-1 from FAPs eroded the proliferation of muscle stem cells and triggered the fibrogenic fate of FAPs and sacrificed their adipogenic potential, accounting for an overaccumulation of ECM. 108 As described before, Farup et al analyzed the subgroup of FAPs in patients with type 2 diabetes mellitus. The authors isolated CD90 + FAPs as the major group to be responsible for fatty infiltration. PDGF was critical to induce CD90 + FAP proliferation and ECM remodeling during fatty infiltration. Treatment with metformin could reduce this process. 14 The communications between FAPs and MuSCs are core events during muscle regeneration and degeneration. They can regulate each other through a direct or indirect way. Uezumi et al unveiled that satellite cells and myotubes could inhibit the adipogenesis of FAPs. Further investigation indicated that it worked through direct cell–cell contact. 11 The mechanism of this interesting finding was explained in the following study. NOTCH signals were proven to be critical to this process. Delta-like canonical Notch ligand 1, the Notch ligand expressing on myofibers and muscle stem cells, was able to activate NOTCH signaling in FAPs through direct cell–cell contact and prevented adipogenesis. However, in a degenerative environment, FAPs were insensitive to NOTCH signals, which resulted in the accumulation of adipocytes. 
109 In another study, Moratal et al co-cultured human FAPs with myogenic progenitors and found that myogenic progenitors modulated FAPs through soluble factors. Activation of the PI3K-AKT pathway in MuSCs promoted the proliferation of FAPs. In addition, MuSCs regulate adipogenesis and fibrogenesis of FAPs through phosphorylation of Smad2 and up-regulation of Gli1. However, this mechanism was absent in FAPs and myogenic progenitors derived from aged people or patients diagnosed with DMD. 110 In addition, some studies showed that muscle contraction could impact fatty infiltration. It was reported that short-term limb disuse could lead to fatty infiltration. 111 However, another study reported that limb unloading could prevent fatty infiltration and muscle degeneration in injured muscles. 112 The detailed underlying mechanism should be further investigated. On the other hand, numerous studies demonstrated that FAPs could regulate MuSCs, positively or negatively, by secreting a variety of cytokines. Mechanical stress can change the secretome of FAPs to facilitate myoblast activation. 113 In an overloaded model, the expression of thrombospondin 1 in FAPs was enhanced under the regulation of Yap/Taz. The secreted thrombospondin 1 can promote the proliferation of MuSCs by binding to its receptor CD47. 114 Schuler et al identified that FAPs secrete SPARC-related modular calcium-binding protein 2, which accumulates with aging. Rising SPARC-related modular calcium-binding protein 2 contributed to aberrant integrin beta-1/mitogen-activated protein kinase signaling during aging, thus resulting in impaired MuSC functionality and muscle regeneration. 115 In other studies, the frequencies of FAPs in denervated muscles or chronic inflammatory myopathy were also much higher than those in acutely injured muscles. Senescent FAPs expressed higher levels of IL-33 and TNF-α-stimulated gene-6. These two cytokines participate in muscle regeneration by regulating macrophages and regulatory T cells. In addition, the “don't eat me” signals CD274 and CD47 were up-regulated in aberrantly activated FAPs, indicating that FAPs in degenerative muscles cannot be efficiently cleared. 45 In sarcopenia, the secretome of FAPs is significantly changed, and these cells lose their ability to support muscle homeostasis. Lukjanenko et al compared the properties of FAPs in young and old mice. They found that FAPs from old mice showed a tendency toward fibrogenesis but less capacity for adipogenesis. Moreover, the authors demonstrated that FAPs isolated from old mice showed an impaired ability to activate MuSCs. Mechanistically, WISP1 was decreased in FAPs from old mice. Administration of WISP1 could efficiently restore the myogenic capacity of MuSCs and promote muscle regeneration. 61 Moreover, denervation at neuromuscular junctions can be identified in sarcopenia. Uezumi et al found that Bmp3b expressed by FAPs was critical to maintaining myofiber mass and the muscle–nerve interaction. However, the level of Bmp3b decreased in sarcopenia. 44 The activin A-ACVR1 axis is a critical pathway regulating the osteogenesis of FAPs. Activin A excessively triggers canonical BMP signaling in FAPs through mutated ACVR1 receptors, which exacerbates the deterioration of fibrodysplasia ossificans progressiva. However, an antibody against the extracellular domain of ACVR1 unexpectedly aggravated heterotopic ossification, raising concern about the safety and effectiveness of anti-ACVR1 antibodies.
116 Overexpression of ACVR1 in Tie2 + FAPs protected fibrodysplasia ossificans progressiva mice from injury-induced heterotopic ossification by competing for essential signaling components and by converting ACVR1(R206H) into inactive or less active receptor complexes. 117 In addition, treatment with palovarotene was able to efficiently reduce the abnormal expansion of FAPs in fibrodysplasia ossificans progressiva and ultimately attenuated the progression of heterotopic ossification. 118 Beyond fibrodysplasia ossificans progressiva, Stanley et al showed that overexpression of ACVR1(R206H) negatively influenced muscle regeneration after injury. The myogenic potential of MuSCs in ACVR1 R206H knock-in mice was impaired, and FAPs from ACVR1 R206H/+ mice repressed myotube formation. 119 Several studies have tried to treat muscle atrophy by targeting FAPs. Fiore and colleagues treated muscle fibrosis by using nilotinib, a tyrosine kinase inhibitor. Unfortunately, muscle regeneration was simultaneously impaired when fibrosis was prevented. 1 The reason may be that nilotinib also disrupts myogenic progenitor differentiation. 120 On the other hand, imatinib showed a more favorable effect in treating fibrosis, since it prevented the proliferation and fibrogenesis of FAPs and had no impact on myoblast proliferation. 3 The TGF-β inhibitor SB431542 was found to reduce rotator cuff muscle fibrosis and fatty infiltration by inducing FAP apoptosis. 121 Batimastat, a broad matrix metalloproteinase inhibitor, can efficiently prevent adipogenesis of FAPs in muscle injury. 33 Annexin A2 is another potential therapeutic target in muscle atrophy diseases. Hogarth et al demonstrated that accumulation of annexin A2 in the myofiber matrix could favor adipogenic induction of FAPs; this is a main contributor to fatty degeneration in limb girdle muscular dystrophy 2B, which is caused by mutations in dysferlin. Using batimastat can attenuate the fatty infiltration and degeneration of dysferlin-deficient muscle. 70 Direct injection of FAPs may improve tissue repair. Recently, researchers injected beige FAPs labeled as UCP-1 + Sca1 + Pdgfrα + CD31 − CD45 − integrin α7 − into the torn rotator cuff. They found that this transplantation could suppress fibrosis and fatty infiltration, thus promoting vascularization and shoulder recovery. 122 , 123 In addition, adipose-resident FAPs can restore the regenerative function of muscle stem cells. 124 In chronic obstructive pulmonary disease, CD34 expressed on FAPs was critical to maintaining the normal function of muscles under hypoxic conditions. CD34 depletion can lead to a reduction of FAPs as well as impaired muscle strength. 125 , 126 The antioxidant compound tocotrienol (γ-tocotrienol, GT3) can reduce the production of reactive oxygen species in muscle stem cells of DMD mice, which promotes the functional recovery and differentiation of these cells. At the same time, the application of GT3 can significantly reduce the percentage of Pdgfrα + fibro-adipogenic progenitor cells in the tibialis anterior muscle of DMD mice, thus limiting the progression of fibrosis and relieving the pathological symptoms of DMD. 127 Adipose-resident Pdgfrα + stromal cells reside in the stromal vascular fraction. An increasing number of studies have revealed that these Pdgfrα + stromal cells are preadipocytes, the major cells that differentiate into adipocytes. 128 , 129 , 130 , 131 In addition, they share many properties with muscle-resident FAPs.
Many Pdgfrα + preadipocytes are found to be adjacent to blood vessels. 128 , 132 Rodeheffer et al screened potential cells that possessed differentiative ability through fluorescence activated cell sorter. They selected several classic stem cell biomarkers, including Sca-1, CD29, and CD34, to identify potential adipogenic progenitors in stromal vascular fraction. The results showed both Lin − CD34 + Sca-1 + CD29 + CD24 + and Lin − CD34 + Sca-1 + CD29 + CD24 − subgroups could differentiate into adipocytes in vitro . Nevertheless, only Lin − CD34 + Sca-1 + CD29 + CD24 + subgroup showed a potent capacity of adipogenesis in vivo , indicating that this subgroup was preadipocyte. 129 Interestingly, the Pdgfrα + CD24 + subgroup is the precursor of the CD24 − subgroup and accounts for a large population of adipogenic cells in the embryonic subcutaneous white adipose tissue. Meanwhile, the Pdgfrα + CD24 − subgroup can differentiate into mature adipocytes after birth. 128 Most Lin − Pdgfrα + cells are preadipocytes since they give rise to adipocytes. 131 Meanwhile, another combination of biomarkers for adipose-resident FAPs was considered, including Lin − Gp38 + Pdgfrα + . 133 Together, Lin − Sca-1 + Pdgfrα + could be definitely used to identify Pdgfrα + stromal cells in adipose tissues. In human adipose tissues, Lin − CD34 + CD44 + Pdgfrα + was used to sort out human adipose-derived FAPs ( Table 1 ). 133 CD9 distinguished the different fates of adipose-derived Pdgfrα + stromal cells. After HFD feeding, the Pdgfrα + CD9 high subgroup showed a pro-fibrotic phenotype, while the Pdgfrα + CD9 low subgroup was more likely to commit to adipogenic fate. In obesity, the Pdgfrα + CD9 low subgroup was obviously diminished, and white adipose tissue exerted a pro-inflammatory status. Consistently, Pdgfrα + preadipocytes derived from white adipose tissue in obese humans displayed pro-fibrotic features with high expression of α-SMA ( Table 2 ). 133 Interestingly, adipose-resident Pdgfrα + stromal cells presented with an obvious heterogeneity. Using the single-cell RNA-seq technique, Schwalie et al identified that three major subgroups could be distinguished by surface markers, CD55 & IL13RA1 (group 1), VAP1 & Adam12 (group 2), and CD142 & ABCG1 (group 3), respectively. 130 Notably, the CD142 + ABCG1 − subgroup was named as “Areg” because it displayed an inhibitory effect of adipogenesis through direct and paracrine manners. Rtp3, Spink2, FGF12, and Vit were involved in this adipogenic inhibition ( Table 2 ). 130 In another study, adipose-resident Pdgfrα + stromal cells with the expression of dipeptidyl peptidase-4 showed a higher proliferation capacity and gave rise to committed ICAM1 + and CD142 + preadipocytes, indicating that the dipeptidyl peptidase-4-positive subgroup had a strong stemness. 134 Interestingly, the aforementioned two studies presented a controversial function of the CD142 + subgroup in adipogenesis. This divergency is likely due to the heterogeneity in the CD142 + subgroup ( Table 2 ). Similar to the muscles, PDGF ligands also regulate the biological process of adipose. It determines adipocyte–myofibroblast transition in white adipose tissues. 135 , 136 In dermal adipose, the maintenance of adipose progenitors is controlled by the PDGFA/PI3K-AKT signaling pathway. 137 In addition, it is well known that there are several types of adipocytes, including beige adipocytes and white adipocytes, the former of which can provide energy and thermogenesis. 
The Pdgfrα/Pdgfrβ signal determines the balance between beige adipocytes and white adipocytes. Pdgfrβ + preadipocytes mainly contribute to white adipocytes and Pdgfrα + preadipocytes can generate beige adipocytes . 135 Figure 5 The signals that regulate Pdgfrα + preadipocytes in adipose. Fig. 5 Similar to muscle-resident FAPs, cilia play an important role in regulating Pdgfrα + preadipocytes. The proliferation and adipogenesis of Pdgfrα + preadipocytes and white adipose tissue expansion can be regulated by the cilia. It was reported that Pdgfrα + preadipocytes which were located along blood vessels were ciliated before differentiating into mature adipocytes. The preadipocytes with cilium, including 3T3-L1 preadipocyte cell lineage and primary Pdgfrα + preadipocytes, show sensitivity to proliferation. Consistently, loss of cilium contributes to a reduction of white adipose tissue. Further investigation has revealed that cilium is a sensor to varieties of signals. ω-3 fatty acids can accelerate the proliferation and differentiation of preadipocytes by mediating chromatin remodeling through FFAR4/cAMP/CTCF pathway. Finally, ω-3 fatty acids facilitate white adipose tissue expansion through enhancing Pdgfrα + preadipocyte proliferation, thus improving insulin sensitivity and tissue inflammation. 138 This result implies that adipogenesis of Pdgfrα + preadipocytes in adipose is critical to regulating metabolic homeostasis . HFD can also lead to fibrosis in adipose tissue by regulating the autophagic process in Pdgfrα + preadipocytes. Autophagy related 7 was critical to the autophagy-induced fibrotic phenotype of Pdgfrα + preadipocytes. Conditional autophagy related 7 knockout in Pdgfrα + preadipocytes obviously attenuates ECM gene expression in visceral, subcutaneous, and epicardia fats, exerting a general effect on fibrosis caused by Pdgfrα + preadipocytes. 139 Actually, the role of autophagy in cellular function and fates of Pdgfrα + preadipocytes are largely unknown, and further studies are needed . Tendons are dense connective tissues that connect muscles and bones, which are responsible for the mechanical load. After injury, tendons exhibit impaired healing potential with excessive scar formation. 46 Tendon matrix continuity and longitudinal alignment are essential to tendon regeneration. However, the cellular components regulating tendon regeneration and degeneration remain elusive. Harvey et al recently identified three major groups of tendon-resident progenitors through single-cell sequencing based on expression levels of Pdgfrα and TPPP3 ( Table 1 ). TPPP3 + Pdgfrα + progenitors could differentiate into chondrocytes and osteocytes. TPPP3 − Sca-1 + Pdgfrα + progenitors were tendon-derived FAPs, giving rise to adipocytes, chondrocytes, and osteocytes . The third subgroup TPPP3 + Pdgfrα − cells could only differentiate into chondrocytes. Then, they further investigated the functions of TPPP3 + Pdgfrα + and TPPP3 − Pdgfrα + subgroups and revealed that these two groups showed opposite functions in tendon repair. TPPP3 + Pdgfrα + progenitors were tenogenically predisposed and contributed to tendon regeneration. However, tendon-derived FAPs contribute to scar formation in tendon injury. In normal conditions, FAPs often locate in the sheath. However, these FAPs will migrate into tissue after injury. It should be noted that the three subgroups co-existed in the same niche in the tendon. PDGFAA signaling modulates regeneration and fibrosis simultaneously. 
46 Thus, it is difficult to promote tendon regeneration and prevent scar formation simultaneously through regulating PDGFAA signaling. Further investigation should be performed to look for some therapeutic strategies that specifically target tendon-resident FAPs but not TPPP3 + tendon stem cells to attenuate scar formation. Smooth muscle is essential to vascular repair, but the source of smooth muscle cells was unclear before. Although Sca-1 + cells were once identified to have the potential to differentiate into smooth muscle cells, the role of Sca-1 + cells in differentiating into smooth muscle cells in vivo was still unclear. To resolve this issue, Tang et al analyzed the subgroup of Sca-1 + cells in the femoral artery wall through single-cell RNA sequencing. Two subgroups were identified, which expressed Pdgfrα + or Pdgfrβ + , respectively. Using lineage tracing mice, they found Sca-1 + cells did not contribute to the generation of smooth muscle cells in normal or slight injury models. However, when the artery suffered a severe injury, a mass of Sca-1 + cells-derived smooth muscle cells appeared in the injured site to repair the artery injury. Further investigation revealed that only the Pdgfrα + subgroup participated in this process. In normal conditions, the Pdgfrα + subgroup often locates out of the artery wall, but they will immigrate into the arterial wall if severe injury occurs. Mechanistically, the Pdgfrα + subgroup can rapidly proliferate via activation of yes-associated protein 1 after severe injury. After eliminating this subgroup, the artery repair will be significantly impaired. 140 Interestingly, they found that this group of cells negatively expressed CD45 and could form adipose tissue around the artery. Although this Pdgfrα + subgroup was not named as FAPs in this study, the features were like FAPs ( Table 2 ). In congenital and acquired cardiac diseases, fibrosis, and fatty infiltration are two typical pathological signs. Pdgfrα seems to be critical to pathological changes in cardiac development and diseases. Kim and colleagues identified Pdgfrα + cardiac progenitors originated from multipotent germline stem cells. They found that Pdgfrα + multipotent germline stem cells expressed more cardiogenic biomarkers than Pdgfrα − multipotent germline stem cells. Transplantation of these Pdgfrα + multipotent germline stem cells into rat myocardial infarction models can facilitate them to differentiate into functional cardiomyocytes and reduce fibrosis. 141 Similarly, several studies distinguished subgroups of Sca-1 + cardiac stem/progenitor cells through Pdgfrα. 142 , 143 Pdgfrα + side population cells showed more exact cells enriched for cardiogenic transcripts. 143 On the contrary, Chen et al found Lin − CD29 + mEF-SK4 + Pdgfrα + Sca-1 + periostin + cardiac fibroblast subset highly shared common biomarkers with FAPs ( Table 2 ). This group of cardiac fibroblasts contributed to heart failure in the presence of IL-17 by producing abundant granulocyte macrophage-colony stimulating factor. 144 These studies implied that like in tendons, Pdgfrα + progenitors might have distinct subgroups, which played different roles in heart repair . Figure 6 The role of Pdgfrα + stromal cells in cardiovascular diseases. Fig. 6 In 2016, Raffaella and colleagues identified cardiac FAPs encoding Pdgfrα and negatively expressing CD31, CD45, thymocyte antigen 1 (Thy-1), and discoidin domain receptor tyrosine kinase 2. These cells were bipotential. Collagen1α1 expressed broadly in these cells. 
Furthermore, a subset specifically expressed CEBP/α. Further investigation revealed adipogenic subsets of FAPs mainly expressed desmosome proteins and differentiated into adipocytes through a Wnt-dependent manner in arrhythmogenic cardiomyopathy. 145 Cardiac FAPs are often set in epicardium. 145 , 146 Quiescence-associated factor HIC1 was critical to the homeostasis of cardiac FAPs. Deletion of HIC1 in FAPs can lead to fibrofatty infiltration and cause major pathological features in arrhythmogenic cardiomyopathy. 147 Contreras et al used PDGFRa-H2B:eGFP mice to isolate enhanced green fluorescent protein-labeled FAPs from the heart. They found TGF-β could inhibit the expression of Pdgfrα in heart-resident FAPs. 24 In humans, cardiac FAPs located in epicardial layer can differentiate into myofibroblasts in the presence of angiotensin II. In pathological conditions, such as atrial fibrillation, subsets of cardiac FAPs can be reprogrammed towards a specific fate, leading to fibrofatty infiltration . 148 In 2009, Lin − Sca-1 + CD34 + Thy-1 + Pdgfrα + mesenchymal cells were identified in the lung parenchyma ( Table 1 ). These cells emerged during neonatal lung development and possessed fibroblastic, adipogenic, osteoblastic, and chondroblastic abilities, which were highly similar to muscle-resident FAPs. 149 Pdgfrα + fibroblasts showed obviously distinct features in lung injury and alveolar regeneration. Endale et al analyzed transcriptomic profiling and described the characteristics of Pdgfrα + fibroblasts during lung development. They found Pdgfrα + fibroblasts could immigrate from proximal bronchiolar at embryonic day 16.5–17.5 to distal alveolar location at postnatal day 5–28. Transcriptomic profiling showed that cell migration-associated genes were enriched at embryonic day 16.5. At embryonic day 18.5, these cells switched from smooth muscle cell phenotype to matrix-producing cells and lipofibroblasts, which exhibited FAP features. At postnatal day 7, several pathways were enriched in Pdgfrα + fibroblasts, including ECM organization, angiogenesis, and epithelial development. 146 These data revealed that these cells with features of FAPs in the lung migrated during development and exhibited distinct features in fibrosis and alveolarization. Li et al found that Pdgfrα + progenitors contributed to fibrosis induced by bleomycin in the lung through differentiating into myofibroblasts, but had little effect on hyperoxia-induced fibrosis. This indicated that lung-resident Pdgfrα + progenitors had distinct lineage potential. 150 Interestingly, when using lineage tracing, they found Pdgfrα + cells could co-express early growth response and intercellular adhesion molecule-2, the biomarkers of endothelial cells. 150 As we know, FAPs do not belong to endothelial cells. It was not sure whether FAPs could differentiate into endothelial cells in some special conditions, or whether Pdgfrα + cells contained both endothelial cells and non-endothelial progenitors in the lungs. Zepp et al further investigated myofibroblast subgroups in the lungs. They identified three subgroups: Axin2 + Pdgfrα + , Axin2 + , and Wnt2 + . 151 Axin2 + Pdgfrα + mesenchymal cells located around alveolar type 2 progenitor cells form a mesenchymal alveolar niche, which promotes alveolar type 2 progenitor cell self-renewal and differentiation into alveolar type 1 progenitor cells. On the other hand, Axin2 + Pdgfrα − cells may generate pathologically deleterious myofibroblasts after injury. 
This group was also the main source of airway smooth muscle cells. Notably, IL-6/STAT3 and FGF7 signals can promote alveolar type 2 progenitor cell self-renewal, whereas BMP7 inhibits this process. 151 The lung is one of the organs most heavily attacked by the SARS-CoV-2 virus. 152 Pulmonary fibrosis and impaired alveolar regeneration are two classic features of COVID-19. 153 , 154 , 155 The inflammatory cytokine “storm” contributes to the lung injury, and IL-6 is one of the most important cytokines exacerbating it. Mesenchymal stem cell therapy has been considered a promising choice for COVID-19. 156 , 157 Considering the importance of Pdgfrα + fibroblasts in lung injury and regeneration, it is worth exploring their biological changes during SARS-CoV-2 infection. Bone marrow-derived mesenchymal stem cells (BMMSCs) are a group of MSCs located in bone and can be identified according to the markers Ter119 − CD45 − Sca-1 − Pdgfrα + ( Table 1 ). They highly express CD29, CD90, and CD44, and express CD34 at low levels. 158 To the best of our knowledge, only limited studies have used these markers in research on BMMSCs. One study showed that Sca-1 + Pdgfrα + BMMSCs could promote bone marrow regeneration. 159 Another study showed that high-mobility group box 1 protein could recruit Pdgfrα + BMMSCs to the peri-infarction site to promote re-vascularization and finally reduce fibrosis. 160 Pdgfrα + stromal cells in pancreatic tissue have their own tissue-specific and widely recognized name, pancreatic stellate cells (PSCs). PSCs can differentiate into fibroblasts and adipocytes. A recent article showed that the surface markers of PSCs (Pdgfra + CD31 − CD45 − ) are similar to those of muscle-resident FAPs. 161 PSCs have two classical states: a quiescent state and an activated state. Quiescent PSCs are characterized by numerous prominent lipid droplets in the cytoplasm, are α-SMA negative, and show limited proliferation and ECM production. When stimulated by endogenous and exogenous factors, PSCs adopt an activated, myofibroblast-like phenotype that is α-SMA positive with few or no lipid droplets and shows significantly enhanced proliferation and ECM production capacity. 162 , 163 , 164 Physiologically, activated PSCs have the potential to return to a quiescent state. However, under pathological conditions such as chronic pancreatitis and pancreatic cancer, continuously activated PSCs with a fibroblast-like phenotype can promote the malignant progression of these diseases through their strong fibrogenic capacity and become central to pancreatic fibrosis. Here, we list some factors that regulate PSC activation and fibrosis in chronic pancreatitis. Persistent activation of PSCs by cytokines released during acute pancreatitis, such as TNF-α, IL-1, IL-6, and IL-10, may be a factor involved in the progression from acute pancreatitis to chronic pancreatic injury and fibrosis. 165 Zheng et al described a possible mechanism whereby IL-6 contributes to PSC activation and collagen I production through up-regulation of the TGF-beta1/Smad2/3 pathway. 166 An impaired Rora/Nr1d1/Bmal1 loop, called the circadian stabilizing loop, can result in a deficiency of pancreatic Bmal1, which accounts for the fibrogenic properties of PSCs in a clock-TGF signaling-IL-11/IL-11RA axis-dependent manner. Thus, a protective pancreatic clock has the potential to counteract pancreatic fibrosis in chronic pancreatitis.
167 What's more, Ng B et al put forward that anti-IL11RA could reduce pathologic (extracellular signal-regulated kinase, STAT, NF-kappa B) signaling in PSCs, and inhibit subsequent pancreatic atrophy and fibrosis. 168 Recently, Xuguang Yang et al defined a subpopulation of PSCs, VLDLR + PSCs, which were comparatively enriched in inflammatory responses, growth factor activity, and lipid metabolism-related pathways, and closely related to pancreatic fibrosis. In mechanism, increased intake of very low-density lipoprotein (VLDL) through VLDLR could promote the release of IL-33 from VLDLR + PSCs via the LA-EBF2-IL-33 axis. Up-regulation of IL-33 aggravated the alcohol/pancreatic injury-induced pancreatitis fibrosis progression by activating the pancreatic ILC2s through its receptor suppression of tumorigenicity 2. On one hand, activated ILC2s recruited more type 2 immune cells, M2-like macrophages, and Th2 cells via IL-13/IL-4, which accounted for pancreatic fibrosis. On the other hand, activated ILC2s secreted IL-13/AMP/leukemia inhibitory factor, which resulted in fibroblast activation and proliferation of PSCs, eventually promoting fibrosis. 161 Autophagy seems to promote PSC activation and subsequent fibrosis. A novel lncRNA named lnc-PFAR was demonstrated highly presented in mouse and human chronic pancreatitis tissues. lnc-PFAR enhanced PSC activation and pancreatic fibrosis through trigging a miR-141-RB1CC1 (RB1-inducible coiled-coil 1) axis-dependent-autophagy. 169 Therefore, inhibition of autophagy may become one of the targets for the treatment of fibrosis. A study pointed out that, the knockdown of RB1CC1 could block autophagy-dependent activation of PSCs and impaired pancreatic fibrosis in chronic pancreatitis. 170 Similarly, milk fat globule epidermal growth factor 8 appeared to alleviate pancreatic fibrosis via inhibiting lysosome associated membrane protein type 2A in chaperone-mediated autophagy and subsequent activation of PSCs. 171 Also, vitamin E derivatives tocotrienols selectively trigger the autophagy of inactivated PSCs by targeting mitochondrial permeability transition pore and ameliorated fibrogenesis associated with chronic pancreatitis. 172 Alcohol could accelerate the progression of pancreatic fibrosis. Repeated lipopolysaccharide resulted in significantly greater pancreatic fibrosis in alcohol-fed rats compared with rats fed the control diet without alcohol. Notably, PSCs were activated by lipopolysaccharide. Lipopolysaccharide plus alcohol exerted a synergistic effect on PSC activation and pancreatic fibrosis. 173 Continued alcohol administration prevented PSC apoptosis and perpetuated pancreatic injury/fibrosis. Withdrawal of alcohol led to increased PSC apoptosis and resolution of pancreatic lesions including fibrosis. 174 In addition, smoking contributes to PSC fibrosis. Ah receptor ligands found in cigarette smoke increased the severity of pancreatic fibrosis. In mechanism, Ah receptor ligands promoted the release of IL-22 from pancreatic T cells, which further activated the fibrogenic potential through IL22RA1 in PSCs. 175 Meanwhile, Li Z et al indicated that nicotine facilitates pancreatic fibrosis by promoting the activation of PSCs via the alpha7nAChR-mediated JAK2/STAT3 signaling pathway. 176 It has been reported that intracellular oxidation levels can regulate fibrosis. 
Nicotinamide adenine dinucleotide phosphate oxidase 1-derived reactive oxygen species in PSCs accelerated the fibrotic process of chronic pancreatitis by activating the downstream pathways AKT and NF-kB, raising matrix metalloproteinase 9 and Twist, and producing alpha-smooth muscle actin and collagen I and III. 177 The antioxidant, mitoquinone (MitoQ) inhibited PSC activation as well as the transition of the profibrogenic phenotypes by balancing the levels of free radicals and the intracellular antioxidant system, meaning that MitoQ is a potential candidate treatment for chronic pancreatitis. 178 Non-coding RNA also participates in the regulation of the fate of PSCs. miR-301a is highly expressed in activated PSCs in mice, sustaining tissue fibrosis in caerulein-induced chronic pancreatitis via Tsc1/mTOR and Gadd45g/STAT3. 179 Acinar cell-derived exosomal miR-130a-3p promoted PSC activation and collagen formation through targeting of stellate cellular PPARγ. Thus, the knockdown of miR-130a-3p significantly provided a potential new target for the treatment of chronic pancreatic fibrosis. 180 Pdgfrα + stromal cells participate in degeneration and regeneration in varieties of tissues and organs. They share similarities and have unique and important features. Pdgfrα + stromal cells in most tissues play a “double-edged sword" effect. This feature indicates that depletion of Pdgfrα + stromal cells would not be a proper therapeutic strategy. Moreover, Pdgfrα + stromal cells are multipotent mesenchymal cells. Noticeably, Pdgfrα + stromal cells are susceptible to surroundings. They are easily modified by various cytokines and cells. Pdgfrα + stromal cells lead to fibro-fatty infiltration in most organs and tissues. In our opinion, adipocyte accumulation and fibrosis in degenerative diseases might be a passive defense to cope with unlimited muscle atrophy. In addition, Pdgfrα + stromal cell-derived adipocytes show the features of “good” adipocytes, i.e. , beige adipose. The role of proper adipogenesis in muscle degeneration should be further investigated. Senescence and apoptosis are critical to maintain tissue functions. Rapid reduction of Pdgfrα + stromal cells is helpful to tissue regeneration, to avoid forming excessive fibrosis and adipose. Trp53 is a key regulator for senescence and apoptosis in muscle-resident Pdgfrα + stromal cells. Overexpression of Trp53 can restrict the proliferation of muscle-resident Pdgfrα + stromal cells and reduce abnormal fatty infiltration and fibrosis. Moreover, they down-regulate the expression of CD47 and PD-L1 to facilitate apoptotic Pdgfrα + stromal cells being phagocytosed by macrophages. Pdgfrα + stromal cells in different tissues can be subdivided into several subgroups. Interestingly, the Osr1 + subgroup and CD142 + ABCG1 + population have been identified in muscle-resident FAPs and Pdgfrα + preadipocytes, respectively. The Osr1 + subgroup can give rise to Pdgfrα + stromal cells in limb development and is the main subgroup in the late stage of muscle repair in adults. The evidence implies that the Osr1 + subgroup may be a more primitive Pdgfrα + stromal cells. Adipose-derived CD142 + ABCG1 + Pdgfrα + preadipocytes are also named as “Areg”, which inhibit the adipogenesis of preadipocytes. However, sole CD142 cannot indicate Areg. The identification of Areg demonstrates that huge heterogeneity exists in subgroups of Pdgfrα + preadipocytes. 
It would be interesting to explore whether Areg transplantation or the Areg secretome can efficiently treat fatty degeneration in degenerative diseases. Cardiac FAPs are currently considered to predominantly contribute to fatty infiltration and fibrosis in cardiac diseases. In addition, artery-derived Pdgfrα + stromal cells can differentiate into smooth muscle cells in the injured artery. Thus, it is interesting to explore whether cardiac FAPs can turn into myocardial cells under some conditions. Taken together, studies on Pdgfrα + stromal cells have increased dramatically in the past several years. However, the roles of Pdgfrα + stromal cells in regeneration and degeneration are still far from clear. Pdgfrα + stromal cells are distributed in most tissues and organs because they are located in the interstitial mesenchyme and adjacent to vessels. Further studies on Pdgfrα + stromal cells in different locations might shed light on the management of associated diseases. This article does not contain any studies with human or animal subjects performed by any of the authors. X.K., X.Q., S.F., and H.M. wrote the manuscript with the assistance of K.Z. K.Z. and H.Z. performed the drawing work. The authors declare no competing interests. This work was supported by grants from the National Natural Science Foundation of China, the Sichuan Science and Technology Program (China), the Jinfeng Laboratory (Chongqing, China), the China Postdoctoral Science Foundation, the Chongqing Postdoctoral Science Special Foundation (China) (to X.K.), and the Chongqing Science and Technology Bureau (China).
Tooth extraction initiates a series of events that result in significant changes in the height and width of the alveolar ridge. Post-extraction socket preservation is a procedure that can be performed to reduce alveolar bone resorption. Without socket preservation, the ridge undergoes atrophy: the volume of the alveolar ridge crest decreases, particularly during the first 6 months and mostly on the buccal wall. 1 Socket preservation falls within the realm of bone tissue engineering (BTE). The gold standard is autografting, which involves taking donor tissue from the same individual and applying it to the existing bone defect. Through the combination of stem cells, scaffolds, and growth factors (GFs), BTE achieves biomimetic conditions to enhance tissue and cell regeneration and growth. 2 Socket preservation can help maintain the residual ridge, thus providing a high success rate for implant and fixed prosthesis treatments. Hydroxyapatite (HA) is a material commonly used as a bone graft in socket preservation. HA is the most stable calcium phosphate compound in terms of temperature, pH, and composition in the bloodstream, and it is a calcium phosphate derivative with a chemical formula and properties similar to the inorganic minerals found in bone and teeth. 3 HA has been shown to have good biocompatibility and osteoconductive properties, meaning it is well tolerated by human oral cavity tissues and can stimulate osteoblast differentiation. HA also has the ability to induce mesenchymal cells to differentiate into osteoblasts, making it a suitable scaffold material for bone tissue engineering. 4 Indonesia, as a tropical country, boasts a rich biodiversity. Through the Indonesian Food and Drug Monitoring Agency (BPOM), Indonesia has highlighted several indigenous medicinal plants. 5 Herbal medicine comprises numerous molecules that synergistically act on specific cellular targets. Purple leaf (Graptophyllum pictum Griff), belonging to the Acanthaceae family, was registered as a medicinal plant in the second edition of the pharmacopoeia in 2017 and has been used in traditional medicine to treat various diseases. 6 Purple leaf contains non-toxic alkaloids, flavonoids, steroids, saponins, and tannins; these constituents act as antimicrobial, immunomodulatory, antioxidant, anti-inflammatory, analgesic, and wound-healing agents, among others. 7 Flavonoids also influence immune cells and the mechanisms involved in inflammatory processes. In a previous study, purple leaf was found to stimulate ALP activity by 128 % in MC3T3-E1 osteoblast cells and also showed the potential to reduce the number of osteoclasts in Wistar rats induced with P. gingivalis, with optimal concentrations of 5 % and 10 %. 8 Alkaline phosphatase (ALP) is an ectoenzyme that hydrolyzes monophosphate esters; it is widely used in research as an early marker of osteoblast differentiation. The expression of ALP then decreases, followed by an increase in late markers such as osteocalcin (OCN). Meanwhile, HA simultaneously downregulates the ALP gene and upregulates osteopontin (OPN), OCN, and COL1. 9 This process is followed by calcium and phosphate deposition. 10 Cells such as osteoblasts and ASCs that have osteogenic properties usually produce calcified nodules that adhere to the culture plate. 11 Osteoblast differentiation is a complex process involving the transcription factor osterix (Osx), which is required for osteoblast differentiation and bone formation. 4
Osteopontin (OPN) is a sialoprotein expressed by several cell types, including osteoblasts, osteocytes, and odontoblasts. The expression and upregulation of osteopontin are influenced by transcription factors including Runt-related transcription factor 2 (Runx2) and Osterix. 12 Runx2 is a key transcription factor expressed by osteoblast lineage cells and chondrocytes. Precursor osteoblasts expressing Runx2 are called preosteoblasts. Runx2 is a major marker studied in osteoblast differentiation, particularly in the early differentiation phase. It has been shown that the use of biomaterials such as HA can enhance osteoblast differentiation by upregulating Runx2. 9 Combining purple leaf and HA, each with their respective constituents, is expected to yield a material suitable for bone tissue engineering. Nanobiotechnology has advanced, and nano-sized biomaterials have been widely applied in tissue engineering. 13 Various nano-structured matrices have been shown to stimulate cell differentiation while maintaining the structural, compositional, and biological features of bone tissue. The main constituent used so far in this regard is nano-hydroxyapatite. 14 This research demonstrates that the combination of a purple leaf nanosuspension and hydroxyapatite can enhance the bone remodeling process in vitro, suggesting that it is a candidate therapy for socket preservation in preparing supporting tissue for dental implants. This study was approved by the health research ethics committee of the Faculty of Dental Medicine, Universitas Airlangga (certificate No. 0673/HRECC.FODM/VII/2024, 1390/HRECC.FODM/XII/2023, and 1410/HRECC.FODM/XII/2023). The study used a laboratory experimental, post-test-only control group design. The purple leaf extraction was carried out in the Biology Department, Universitas Katolik Widya Mandala, Surabaya, Indonesia. Purple leaves (600 g) were macerated in 2 L of 96 % ethanol, kept at room temperature in a closed container for 3 days, and filtered with filter paper to obtain the macerate. The macerate was then evaporated to obtain a thick purple leaf extract. The nanosuspension was prepared by adding 10 ml of hot distilled water to a mortar, adding 1 g of CMC-Na, waiting for 15 min, and then stirring until a gel mass formed. Next, 1 g of nipagin dissolved in 10 ml of distilled water was added to the gel mass. The extract (1 g), dissolved in 20 ml of 96 % ethanol, was stirred in the mortar until homogeneous, and hydroxyapatite (nanoXIM®HAp200, FLUIDINOVA, Portugal) was added. Next, 100 g of distilled water was added and the mixture was homogenized (Turrax) for 10 min. The nanosuspension was then stirred at a speed of 1400 rpm for 90 min at 50 °C. The MTT assay is a sensitive, quantitative, and reliable colorimetric assay for assessing toxicity to cells in culture. The principle of the assay is that the yellow MTT reagent (3-(4,5-dimethylthiazolyl-2)-2,5-diphenyltetrazolium bromide) is reduced to a purple product by metabolically active cells. The reaction product is then quantified using a spectrophotometer. The MTT assay was performed on days 1, 3, 5, and 7. Trypsinization was performed to detach the ASC layer from the culture flask. Each well of a 32-well microtiter tissue plate was filled with ASC suspension at a cell density of 8 × cells per well, and the cultures were then incubated for 24 h. Four replicate wells were prepared for each treatment group.
The cells were incubated for 1, 3, 5, and 7 days in a 5 % CO2 incubator at 37 °C. At the end of the incubation, the medium in each well was discarded, the wells were washed with 100 μL of PBS, 100 μL of MTT was added to each well, and the plates were incubated at 37 °C for 4 h; then 200 μL of DMSO was added per well and the plates were incubated again at 37 °C for 30 min. Living cells react with MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) and turn blue through formazan formation, whereas dead cells do not produce a blue color. The absorbance of formazan was measured spectrophotometrically using an ELISA reader at a wavelength of 570 nm. The darker the color, the higher the absorbance value and the greater the number of living cells. In brief, the control and experimental groups were evaluated for calcium production on days 7, 14, and 21 of treatment by staining with alizarin red solution, a dye that binds calcium salts. Alizarin red is an anthraquinone derivative used to identify calcium-containing osteocytes in adipose-derived stem cell cultures. Cells were washed once with PBS, fixed in phosphate for 20 min, and the supernatant was discarded. Cells were then washed twice with PBS, fixed with neutral buffered formalin (10 %) for 30 min, and washed with PBS. Alizarin red stain (40 mM in deionized water, pH 4.2) was added to the cells and incubated for 45 min in the dark. The ASCs were washed four times and then PBS was added. For quantification, the stained cells were dissolved in 10 % acetic acid and the absorbance of the solution was measured at a wavelength of 405 nm using a microplate reader. Adipose stem cells (ASCs) were isolated from the visceral fat tissue of young male rabbits. ASCs were washed with phosphate-buffered saline (PBS) and incubated with 30 mL of collagenase I (1.5 mg/mL; Invitrogen) at 37 °C for 30–45 min. The enzyme was inactivated by adding α-MEM containing fetal bovine serum (FBS). Samples were centrifuged at 2000 rpm for 5 min, and the cell pellet was resuspended in complete medium consisting of Dulbecco's Modified Eagle's Medium (DMEM; Gibco), 10 % fetal bovine serum (FBS; Gibco), penicillin 100 units/mL, streptomycin 100 μg/mL, and Fungizone solution (Antibiotic-Antimycotic (100x), Gibco, USA) 0.25 μg/mL. The cells were seeded into 25 cm 2 tissue culture polystyrene dishes (TCPS) and incubated at 37 °C. After 72 h, the cells were washed with PBS and α-MEM medium was added according to the capacity of the culture plate. The cells were kept in an incubator at 37 °C with 5 % CO2. The medium was changed every two days until the cells reached 80–90 % confluence. Characterization of ASCs was carried out by immunocytochemical examination, namely staining with anti-CD45, anti-CD73, anti-CD90, and anti-CD105 antibodies (Sigma Aldrich®, USA). The monolayer cells were dissociated into single cells through trypsinization. Centrifugation was performed at 1600 rpm for 5 min. The cell pellet was resuspended in 1 ml of medium and seeded onto special glass slides at a volume of 20 μl. The glass slides were then placed in a box containing wet paper towels and incubated at 37 °C for 1 h. Fixation was carried out with 3 % formaldehyde for 15 min at room temperature. The slides were washed with PBS four times and allowed to dry. Blocking was performed with PBS containing 1 % serum for 15 min at room temperature. After washing with PBS four times, antibodies against Osterix, Osteopontin, RUNX2, and ALP were added and incubated at 37 °C for 45 min.
The slides were washed with PBS four times, and excess water around the glass slides was dried with tissue paper. 50 % glycerin was dropped onto the glass slides, and the results were immediately observed under a fluorescence microscope at 40× magnification. Positive results were indicated by fluorescence, while negative results showed no fluorescence. Data were analyzed using IBM SPSS. Data normality was tested with the Shapiro-Wilk test. Data were tabulated as mean ± standard deviation. One-way ANOVA was used to determine statistical significance; P ≤ 0.05 was considered significant. The results of the toxicity test of the combination of nanosuspension from purple leaf and hydroxyapatite showed an increase in cell proliferation at all concentrations. Descriptively, the analysis showed that the highest percentages of live cells in the proliferation test were at 12.5 nm on day 1, 100 nm on day 3, and 200 nm on days 5 and 7. ASC characterization was carried out using immunocytochemical examination, which confirmed that the ASCs were mesenchymal derivatives. Immunocytochemical examination using an inverted microscope and a fluorescence microscope showed that expression of the CD45 surface marker was negative, indicated by the absence of a green glow, while expression of the surface markers CD90 and CD105 was positive, indicated by the green glow of the ASCs. The ICC results for Runx2, ALP, Osterix, and Osteopontin in each group on days 7, 14, and 21 are presented in Fig. 1 and Fig. 2 (Fig. 1: ICC results for Runx2, ALP, Osterix, and Osteopontin at 400× magnification; Fig. 2: quantification of the ICC results). Mineral deposition by ADSCs was detected by alizarin red staining following 7, 14, and 21 days of osteogenic induction with the nanosuspension combination (Fig. 3: Alizarin Red S-stained extracellular calcium deposition in representative groups; Fig. 4: average calcium deposition on days 7, 14, and 21). Graptophyllum pictum (purple leaves) contains non-toxic alkaloids, flavonoids, steroids, saponins, and tannins. Among these compounds, some can stimulate the expression of osteogenic transcription factors and markers through various signaling pathways, such as the Wnt and MAPK pathways, to promote osteoblast differentiation. 15 Alkaloids and flavonoids in purple leaves possess anti-inflammatory and analgesic properties. Previous research indicates that purple leaves can stimulate ALP activity by 128 % in MC3T3-E osteoblast cells. Hydroxyapatite (HA) (Ca10(PO4)6(OH)2), an inorganic mineral constituent of human bone, can also be derived from animal bones. HA directly impacts bone defects by inducing a favorable immune response and promoting new blood vessel formation in damaged bone tissue. 16 HA activates osteoblasts and osteoclasts for bone remodeling by accelerating the differentiation of these cells. 17 In this study, a cell migration test was carried out on Adipose Stem Cells using the scratch assay method. Cell migration and proliferation are important processes that trigger the synthesis of new extracellular matrix and contribute to wound healing. Recruitment of mesenchymal stem cells to the wound area is necessary to prepare for osteoblast differentiation at the ossification stage.
18 Based on the research results, ASCs cultured with a combination of purple leaf extract nanosuspension and hydroxyapatite in osteogenic medium showed cell migration ability, as indicated by a significant increase in the percentage coverage of the scratch gap area over time. This may be caused by the presence of calcium ions in the nanosuspension, which support migration and proliferation. The EDX test showed a high number of Ca2+ ions in the combination of purple leaf extract nanosuspension and hydroxyapatite; these ions can help increase the pH of the surrounding area and have a positive influence on the healing process. 19 Based on the research results on days 7, 14, and 21, an increase in the mean expression of Runx2 was observed on day 21 in group 4. This group had the highest hydroxyapatite (HA) content. HA is a biomaterial that can enhance osteoblast differentiation by increasing Runx2 expression through the activation of extracellular signal-regulated kinase (ERK), p38, Wnt, and bone morphogenetic protein 2 (BMP-2) pathways. 9 Based on the post hoc analysis results for days 7, 14, and 21, significant differences were found between group 1 and the treatment groups 3 and 4. The administration of the group 3 and 4 treatments affected Runx2 expression on days 7, 14, and 21. Between treatment group 3 and treatment group 4, post hoc analysis did not show significant differences; however, descriptive results indicated an increase in the mean expression of Runx2 in the latter treatment group. The post hoc analysis results on days 7, 14, and 21 between treatment group 3 and group 4 showed no significant differences, indicating that the potential of each concentration, whether group 3 or group 4, is almost the same. However, post hoc analysis showed that group 3 on days 7 and 14 differed significantly from group 4 on day 21. Therefore, it can be said that the concentration of the purple leaf nanosuspension extract and bovine bone HA, as well as the treatment day, can influence the expression of the osterix marker. This is consistent with previous research, which found that the concentration of an extract could affect the number of osteoblast cells. 20 Over the observation period from days 7 and 14 to day 21, this study showed an increase in the average expression of osteopontin in group 3 from day 7 to day 21. However, in group 4, there was an increase in the average expression of osterix from day 7 to day 14, followed by a decrease on day 21. Nevertheless, post hoc analysis results on day 21 indicated no significant difference between group 3 and group 4. Thus, although the average expression of osteopontin in group 4 decreased on day 21, there was no significant difference compared to group 3. The expression of the osteopontin marker differed significantly between group 1 and group 3 on days 7, 14, and 21. Similarly, there were significant differences between group 1 and group 4 on days 7, 14, and 21. The combination of purple leaf nanosuspension extract and bovine bone HA helps to enhance the differentiation process of osteoblasts, as bovine bone HA can stimulate the proliferation of osteoblasts by activating mesenchymal cells. 21 Post hoc analysis results show that in group 3, there was a significant difference between days 7 and 21. The post hoc analysis also revealed a significant difference between group 3 on day 7 and group 4 on days 14 and 21.
In group 4, there was also a significant difference between days 14 and 21. In all treatment groups, the mean ALP expression increased on day 21. Based on the post hoc analysis, significant differences were found between group 1 and the treatment groups (groups 3 and 4) on days 7 and 14. These results indicate that although ALP is ideally expressed after day 14, the group 3 and 4 treatments can increase ALP expression on days 7 and 14. 16 The post hoc analysis results support that administration of the group 3 and 4 treatments can enhance ALP expression. Significant differences were observed when comparing the negative control group on days 7, 14, and 21 with the treatment groups on day 21. This is consistent with the theory that the flavonoid content in purple leaves exerts osteogenic effects through the estrogen (ER)-dependent ERK pathway. 22 Purple leaves have a stimulating effect on ALP (alkaline phosphatase) activity in osteoblast cells. The flavonoid content in purple leaves exerts osteogenic effects through the estrogen-dependent ERK pathway: estrogen receptors (ER) bind flavonoids, which subsequently activate the ERK signaling pathway, leading to the upregulation of osteogenic genes and proteins essential for bone formation and mineralization. 22 Previous research has shown a relationship between dose and biological response, where higher concentrations of active ingredients cause more pronounced biological effects. 23 This principle applies to the use of purple leaf extract in osteogenic differentiation, where a 2 % concentration produces a more significant response compared to a 1 % concentration. These findings are consistent with previous studies, which demonstrated that increasing the concentration of bioactive compounds generally enhances biological activity due to stronger activation of cellular pathways and receptor interactions. Overall, the amount of calcium deposition trended upward from day 7 through days 14 and 21, and calcium deposition in the groups given the nanosuspension was higher than in the untreated groups, with the highest levels in group 4. This is likely influenced by the presence of active compounds in purple leaf extract, such as polyphenols and alkaloids, which play a role in supporting the osteogenic differentiation of ASCs. 24 One of the phenolic compounds identified in the combination of purple leaf extract nanosuspension and hydroxyapatite is salicylic acid. Salicylic acid is a simple phenolic compound that is naturally found in plants. This compound is a precursor of acetylsalicylic acid, better known as aspirin. Acetylsalicylic acid itself has been shown to increase osteoblast differentiation in human Dental Pulp Mesenchymal Stem Cells (hDPMSC) by activating the MAPK signaling pathway. 25 The results obtained from this study confirm that the combination of purple leaf extract nanosuspension and hydroxyapatite, as a biomaterial for bone tissue engineering, can serve as a scaffold for bone tissue regeneration. The combination of purple leaf extract nanosuspension and hydroxyapatite has characteristics that can increase the osteogenic differentiation of ASCs in vitro. However, the more detailed mechanism by which the combination of purple leaf extract nanosuspension and hydroxyapatite regulates ASCs is still unknown, so further research is needed. This research did not involve patients.
This research was funded by Lembaga Penelitian dan Pengabdian Masyarakat, Universitas Airlangga under the Airlangga Research Fund Batch 2 scheme. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Study | biomedical | en | 0.999995
PMC11696793 | The World Health Organization (WHO) defines workplace violence as “incidents where staff is abused, threatened, or assaulted in the circumstances linked to their workplace, including commuting to and from the workplace, involving an explicit or implicit challenge to their safety, well-being or health.” Compared to all other professions, healthcare workers have a four times higher risk of violence in the workplace. It can be in the form of physical violence, psychological violence, or a combination of both . The earliest studies of violence against doctors are from the US, which dates back to the late 1980s. In these studies, 57% of the emergency care staff reported that they had been threatened with a weapon, whereas in the UK, 52% of doctors reported some kind of violence . In Asia, the highest rates of violence against medical professionals have been reported from China. Prevalence rates of violence against health professionals in countries like India, Israel, Pakistan, and Bangladesh have been higher when compared to the rates in Western countries . In India, nearly 75% of doctors report having dealt with one or the other form of violence during their practice. Almost 50% of these incidents were reported in intensive care units (ICUs), and in 70% of the cases, the relatives of patients were actively involved in such incidents . As per the Indian Medical Association (IMA), at least five cases of attacks on doctors are reported every month in Kerala, and more than 200 such attacks, including intimidation and threats, have been reported in the past three years . Recently, a young doctor was stabbed to death by an intoxicated patient who was brought to the emergency services by police officers for medical examination . Violence against doctors is a growing concern among the medical fraternity. Medical students are the clinicians of tomorrow. Hence, their perception regarding rising incidents of violence against medical professionals needs to be assessed. As per our literature review, although few studies were done among practicing doctors, there needed to be studies assessing the attitude of budding doctors toward such incidents in India. In this study, we aim to explore the perspective of medical students with regard to the increasing occurrence of violence against doctors and their concerns regarding the same. Hence, the objectives of the study include discussing the awareness of medical students toward rising violence against doctors, factors contributing to violence against doctors, and the attitude of medical students toward increasing violence against doctors. This was a cross-sectional study done among undergraduate medical students and interns in Ernakulam district in India during a period of three months . This study was approved by the Institutional Review Board of Amrita Institute of Medical Sciences and Research Centre, Kochi, with approval number ECASM-AIMS-2021-322, dated 27-07-2021. The study was carried out in accordance with the principles as enunciated in the Declaration of Helsinki. There are 30 medical colleges across Kerala, and Ernakulam has four medical colleges. We have selected three medical colleges at Ernakulam as a part of our convenience sampling, and these three medical colleges have around 1,500 students. All medical students from their first year till their internship studying in these medical colleges were included in the study. Participants who did not give consent were excluded from the study. 
The sample size was not calculated since we have included all medical students in these three medical colleges. As per our literature review, there is a scarcity of studies on this topic, and we could not find any validated questionnaire. A semi-structured questionnaire was prepared based on a questionnaire used in a previous study . We also conducted five focused group discussions (FDGs) on the students from the same institution. A total of 50 medical students participated. The points discussed were noted, and a few themes were identified, such as awareness, factors for violence, and the attitude of the students toward such incidents. Questions included in the questionnaire were based on these themes. The questionnaire had 21 questions and was divided into three sections. The first part included questions assessing sociodemographics and awareness regarding the incidents, the second part included variables assessing various factors that lead to such incidents, and the third part included the attitude of medical students toward rising violence (questionnaire in the Appendix). The questionnaire, prepared using Google Forms (Google LLC, Mountain View, California, United States), was circulated using platforms like WhatsApp and Facebook. Fifteen students from all three medical colleges were contacted, and the form was shared with their college mates. Informed online consent was obtained from all participants after providing details of the conduct of the study, assurance of participants' confidentiality, and voluntary participation. The data analysis was carried out using IBM SPSS Statistics for Windows, version 20.0 . Continuous variables were expressed using mean ± SD, while categorical variables were represented as frequency and percentages. Among 1,500 students, only 400 students (26.6%) attempted the questionnaire, and only 347 (23.1%) completed the questionnaire, among which the majority were females (258; 74.4%) and hailed from Kerala (306; 88.2%). The majority of the students who participated were doing their second year of medical training (108; 31.1%) and 243 (70.0%) did not have a medical background. The main reason for joining MBBS was out of self-interest (208; 59.9%) (Table 1 ). Furthermore, 344 (99.1%) reported being aware of violence against doctors, and 338 (97.4%) expressed concern about such incidents. The majority of them reported social media as the source of information regarding such incidents (303; 87.3%). Three hundred (86.5%) reported that doctors are at a higher risk of workplace violence than any other profession, and 284 (81.8%) believed that self-defense training should be included in the MBBS curriculum. A total of 248 (71.5%) reported that emergency medicine doctors were more prone to such violence than any other specialties and 195 (56.2%) believed that it would be safer to opt for a non-clinical or para-clinical specialty for post-graduation than a clinical specialty (Table 2 ). Among patient-related factors for violence, 232 (66.9%) considered “unrealistic expectation of family from doctors’’ as the main contributing factor. “Inadequate time spent by the doctors in explaining the prognosis” was considered the most common clinician-related factor contributing to workplace violence. Among policymaker-related factors, 254 (73.2%) believed that “not taking proper action against violators when an incident happens” was the major factor leading to rising incidents of violence against doctors. 
Among hospital-related factors, 150 (43.2%) believed that “giving unrealistic expectations to patients and relatives” was the main contributing factor to violence against doctors (Table 3 ). Meanwhile, 221 (63.7%) of the participants preferred to go abroad for higher studies and continue their careers there. Two hundred fourteen (61.7%) would suggest this profession to the younger generation, and 249 (71.8%) would not opt for an alternative career if given a choice. Some (210; 60.5%) of the students were not aware of any legal provision to safeguard doctors from workplace violence. In addition, 287 (82.7%) believed that incidents of violence against doctors escalated after the COVID-19 pandemic, and 308 (88.8%) believed that such incidents would lead to more and more doctors adopting defensive medical practices (Table 4 ). Being a doctor was once considered a noble profession, and a medical student would always dream about the respect attached to it. However, violence against doctors has now become a global pandemic. In our survey, although we circulated the questionnaire to all three medical colleges, participation was poor, and we had only 400 participants, of whom only 347 (23.1%) completed the questionnaire. The majority of the students who participated were aware of and concerned about incidents of violence against doctors in the workplace. In our study, most participants were female since, in Kerala, girls have outnumbered boys in MBBS admissions over the last five years . The majority, 208 (59.9%) of the participants, took this course out of self-interest, and 73 (21%) opted for this profession to serve humanity. A survey by the IMA reported that 82.7% of doctors felt stressed out by their profession, and 46.3% feared workplace violence. Many of those who are attacked or threatened reported experiencing anger, fear, anxiety, self-blame, and loss of confidence. Many incidents have been reported in India where doctors died by suicide after such allegations of medical negligence . The majority (71.5%) believed that emergency medicine physicians are more prone to being attacked by bystanders. Surgery (15%) and general medicine (5%) were also considered high-risk specialties. These findings highlight the fact that students will start their medical careers with a guarded approach toward their patients. An internal analysis of the National Eligibility cum Entrance Test for Post Graduates (NEET PG) 2022 first counseling results for the first 500 all-India ranks showed that students opt more for nonsurgical subjects. The majority of students opted for specialties like general medicine, radio-diagnosis, and dermatology, while far fewer students opted for obstetrics and general surgery. These findings are similar to a study done by Babu et al., which noted that the majority of students preferred radio-diagnosis and pediatrics as their specialty . We assume that this could be because these specialties assure a high salary and are considered safer compared to others. General medicine and surgery were less preferred because of more hectic training, the need for superspecialty degrees for better pay, and a higher risk of medical litigation. Contrary to this finding, general medicine was still the preferred specialty in NEET PG counseling; this might be because there is always a demand for general practitioners and it could pave the way to exploring many other areas of medicine. These assumptions require further empirical testing.
In our survey, the majority (56.2%) felt that taking non/para-clinical subjects would have less risk of being attacked or threatened. In our survey, to assess the reasons for violence, we have divided the questions into patient-related, hospital-related, doctor-related, policymaker-related, and administrator-related factors. Among patient-related factors, "having unrealistic expectations of family from doctors" was considered one of the main reasons. Most of the time, when a patient gets better, the family believes that it’s God’s miracle, and when a patient dies, they believe that it is due to the doctor’s fault. Media might have a role in this since most of the time, when they report such deaths of patients, they allege it as medical negligence . Recently, certain films have portrayed doctors as being involved in the "organ transplant mafia" and as being unconcerned about the well-being of patients. Such films and media reporting of deaths in the hospital as the result of conspiracies could give a wrong notion about doctors and hospitals . Among doctor-related factors, “inadequate time spent by the doctors in explaining the prognosis of the illness to the family” was considered the main reason for violence. Most doctors are stressed due to heavy workloads, long working hours, and lack of adequate infrastructure, and hence, they spend less time explaining the prognosis of the patient to family members . During medical training also, doctors are not taught how to break the bad news to the family members or how important is to spend time with the family in conveying the prognosis to the patient or family . Among policymaker-related factors, “not taking proper actions against the violators when an incident is reported” was the main factor. Although there have been increasing incidents of violence against hospital staff, the government has not initiated any central law (no common CRPC/IPC section) against violators. Even though 19 state laws exist protecting hospital and their staff, most of the time, officials do not charge the case or compromise such incidents . Recently, after the murder of a house surgeon on duty, the Kerala government has approved an ordinance aimed at protecting healthcare staff, which warrants up to seven years of jail imprisonment for attacking healthcare staff on duty and up to five lakhs rupees as a fine. The ordinance also prescribes that all cases registered under the law should be probed by an officer, not below the rank of inspector, and the probe should be completed within 60 days from the registration of the First Information Report (FIR) . Among hospital-related factors, “giving unrealistic expectations to the family” was one of the reasons for violence. Most of the time, there is a lack of communication from the doctors, and the hospitals always portray their success stories in their advertisements, which gives a false sense of hope. With the advent of modern medicine, the cost of healthcare has increased globally. Still, due to low literacy rates in India, there is an unrealistic expectation that paying more money should save one's life, i.e., better outcomes are expected even for risky procedures . Furthermore, 249 (71.8%) of the participants do not want to change their careers if given an alternate choice, and 214 (61.7%) will suggest the same profession to the younger generation. Of the participants, 221 (63.7%) would like to go abroad for higher education and a job. This could be because of better pay and less workload in Western countries . 
The majority, 210 (60.5%), of the participants reported that they were not aware of any laws regarding hospital violence. Many, 308 (88.7%), reported that an increase in such incidents might lead to the practice of defensive medicine. Defensive medicine, in simple words, is departing from normal medical practice as a safeguard against litigation. Practicing defensive medicine is not good for patients or physicians: doctors might advise certain diagnostic procedures merely to confirm a diagnosis, adding to the financial burden on patients, and at the same time they may avoid certain risky procedures to avoid medical litigation . Our study has various limitations. The number of students who participated was very low compared to the total number of medical students in the three medical colleges in Ernakulam; therefore, we cannot generalize our findings. The questionnaire used in our study was not a validated questionnaire. Due to rising rates of violence, there is an urgent need to make healthcare facilities safe for doctors, as only then can they work with complete dedication. This needs to be done at various levels by the government, media, and medical professionals alike. Every doctor should follow the cardinal principle “do not overreach,” i.e., do not treat beyond the scope of one's training and facilities, to prevent violence and litigation against themselves. Doctors cannot be held accountable, on grounds of negligence, for every death that occurs in the hospital. The government should enact a stringent central law that protects hospitals and healthcare staff. | Other | biomedical | en | 0.999997
PMC11696794 | Diabetes mellitus type 2 (T2DM), which represents around 90% of diabetes cases, is a chronic, persistent metabolic disorder characterized by hyperglycemia due to the body's impaired ability to regulate blood glucose through insulin. Chronic high blood glucose levels are associated with long-term damage, dysfunction, and failure of different organs. T2DM is a multifactorial disease, and some of the risk factors include genetic predisposition, aging, and obesity, which increases the risk of diabetes 80- to 100-fold. The epidemiology is largely variable around the world, although the highest rates are in the Middle East and Pacific Islands . In the Gulf Cooperation Council (GCC) countries, there has been a growing trend of T2DM. One of the reasons that can explain the growing trend is that Arabs are genetically susceptible to developing the disease . In Saudi Arabia, the prevalence of diabetes increased approximately ten-fold from 1982 to 2004 . Two large studies were done to assess the prevalence of diabetes in Saudi Arabia. The first one was published in 2004 and included 17,232 Saudi participants with a 98.2% response rate, revealing that 4004 (23.7%) subjects were diabetic . The other was done from 2007 to 2009 and involved 18,034 participants aged 30 years or older. It found that 25.4% were pre-diabetic and 25.5% were diagnosed with diabetes mellitus (DM) . This puts a significant economic burden on the government: in 2010, the Saudi Ministry of Health spent $0.87 billion to treat diabetes, and the cost was expected to reach $6.5 billion by 2020 . The management of T2DM primarily focuses on lifestyle modification and weight reduction, since excess weight plays a major role in insulin resistance and the development of T2DM; medications are added when needed to achieve target glycemic control. Another treatment modality is bariatric/metabolic surgery, which has undergone significant improvement, with many procedures succeeding in reducing weight and achieving T2DM remission in the short term (1-2 years) . However, in the long term, many patients experience T2DM relapse and weight regain. For example, in the Swedish Obese Subjects (SOS) trial, 50% of patients who achieved DM remission at 2 years relapsed after 10 years . In addition, a retrospective study of patients who underwent the surgery between 2008 and 2011 revealed that half of the subjects regained weight after 7 years . A local study done in King Saud University Hospital revealed that 53.3% regained 25% or more of their lowest weight within 6 years of the surgery . Another recent study, the only one of its kind in Saudi Arabia, assessing diabetes remission in Saudi patients following bariatric surgery found that 48.5% achieved diabetes remission according to the American Diabetes Association criteria published in 2019 . However, that study included participants who had undergone the surgery only 1 year earlier, which could have influenced the results. In conclusion, bariatric surgery could be a solution for the high T2DM rates and the obesity epidemic, which is a major risk factor for developing the disease in Saudi Arabia, and would result in a healthier population and lower expenditure. However, local data regarding the impact of the surgery on T2DM remission and maintenance of healthy body weight in the long term are lacking. Such research is needed since there are genetic and dietary differences between countries.
The aim of the study is to assess the long-term T2DM remission rate following bariatric surgery in Saudi patients. Ethical and study approval: This is a retrospective cohort study conducted at King Abdulaziz Medical City, Riyadh, Saudi Arabia. It was approved by the Institutional Review Board (IRB) of King Abdullah International Medical Research Center (KAIMRC) on 10 October 2023, and the study duration was 1 year. Patient data and inclusion criteria: The study included all patients who were 18-65 years old, diagnosed with T2DM, and who underwent bariatric surgery for weight reduction at least 3 years previously. Patients were excluded if they were i) younger than 18 or older than 65 years, ii) not diagnosed with diabetes or diagnosed with diabetes other than T2DM, or iii) underwent bariatric surgery less than 3 years ago. Patients were identified from the existing Electronic Medical Records in our hospital, and the collected data included date of birth, gender, height, weight before surgery and at each physical visit, BMI, date of T2DM diagnosis, type of surgery, hemoglobin A1c (HbA1c) levels, and the use of hypoglycemic agents. Diabetes remission criteria: The diabetes remission criteria used in this study were the American Diabetes Association (ADA) criteria . Diabetes remission was defined as an HbA1c level of <6.5% for at least three months without administration of any hypoglycemic medication. Additional considerations in the ADA criteria were: i) if HbA1c is found to be unreliable, fasting blood glucose is the alternative; ii) HbA1c should be measured prior to the intervention and repeated no sooner than three months afterward; and iii) subsequent testing to assess the longevity of remission is preferably done at least yearly. To align these criteria with our study and hospital follow-up, complicated cases, such as malignancy and various hematological diseases, that may affect HbA1c reliability were excluded; three HbA1c readings were obtained before the surgery, and all available readings were obtained post-surgery. Weight calculation formulas: For weight measurements, we calculated the percentage excess weight loss as %EWL = (pre-surgery weight − follow-up weight) / operative excess weight × 100. The percentage total weight loss was calculated as %TWL = (preceding year weight − current weight) / preceding year weight × 100. The percentage weight regain (%WR), defined as the weight regained after reaching the lowest weight, was calculated as %WR = (current weight − nadir weight) / (pre-surgery weight − nadir weight) × 100. Sample size: There were 2035 patients who underwent bariatric surgery at KAMC from 2016 until the date of data request in 2023; 1069 of them underwent the surgery in 2016-2020. Of these, 888 were duplicates, did not meet the criteria, or underwent the surgery for other purposes. Lack of follow-up was noted, so an additional criterion was added: any patient who missed follow-up for two consecutive years was excluded, leaving a final sample of 74. The flow chart of sample size selection is shown in Figure 1 . Statistical analysis: SPSS for Windows version 22 (IBM Corp., Armonk, USA) was used.
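For illustration, the three weight-change formulas defined above can be written as small helper functions. The sketch below is in R rather than SPSS (which the study used), the argument names are hypothetical, and the "operative excess weight" denominator is assumed here to be pre-surgery weight minus an ideal/reference weight supplied by the caller.

```r
# Direct translations of the three formulas above; argument names are
# assumptions, and "operative excess weight" is taken as pre-surgery
# weight minus an ideal/reference weight supplied by the caller.
pct_ewl <- function(pre_wt, follow_wt, ideal_wt) {
  100 * (pre_wt - follow_wt) / (pre_wt - ideal_wt)
}
pct_twl <- function(prev_year_wt, current_wt) {
  100 * (prev_year_wt - current_wt) / prev_year_wt
}
pct_wr <- function(current_wt, nadir_wt, pre_wt) {
  100 * (current_wt - nadir_wt) / (pre_wt - nadir_wt)
}

pct_ewl(120, 85, 70)  # e.g. 120 kg pre-surgery, 85 kg at follow-up, 70 kg ideal -> 70
```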
A repeated-measures design was utilized to evaluate the longitudinal impact of bariatric surgery on glycemic control and weight management among participants over five years and included some adjustments for missing data. The study tracked changes in HbA1c levels, percentage excess weight loss (%EWL), total weight loss (%TWL), and weight regain (%WR) at annual intervals post-surgery. Gender-specific comparisons were also conducted to examine potential variations in outcomes between male and female participants. Statistical analyses included ANOVA for repeated measures to assess temporal trends and interactions, alongside logistic regression to identify predictors of diabetic remission and relapse. Significance thresholds were set at p < 0.05 with a confidence interval of 95%. Data collection encompassed demographic and clinical variables, with baseline and follow-up measures standardized for consistency. This robust design allowed for the analysis of both individual and group-level patterns in post-surgical outcomes. Out of 74 participants, the majority were female (53, 71.62%) compared to males (21, 28.37%). The mean height of participants was 159.8 cm with a standard deviation of 9.07 cm, while the mean weight was 114.79 kg (± 19.18 kg). The mean BMI was recorded at 44.91 ± 6.68 kg/m 2 , indicating that participants generally fell within the obesity weight range. In terms of diabetic remission, 20 participants (27%) out of the 74 patients achieved diabetes remission. Furthermore, seven participants (35%) out of the diabetic remission group experienced a relapse post-intervention and one participant (14%) achieved a second diabetes remission. The mean HbA1c level prior to surgery was 8.70 (± 1.68), indicating suboptimal glycemic control at baseline. The average estimated weight loss (%EWL) was 60.25% (± 21.48%), while the average total weight loss (%TWL) was 30.04% (± 8.90%), suggesting significant weight loss following bariatric surgery. Lastly, the average weight regain was found to be 14.26 ± 26.39% as shown in Table 1 . The pre-surgery HbA1c level (HbA1cB) was measured at 8.70 (± 1.68). At the one-year mark post-surgery (HbA1c12M), the mean HbA1c level significantly decreased to 6.76 (± 1.42), with an F-value of 52.85 and a p-value of 0.00, indicating a highly significant change in HbA1c levels. Subsequent measurements, as summarized in Table 2 , at 2 years (HbA1c24M), 3 years (HbA1c36M), 4 years (HbA1c48M), and 5 years (HbA1c60M) showed HbA1c levels of 6.71 (± 1.27), 6.81 (± 1.32), 6.78 (± 1.27), and 6.82 (± 1.25), respectively, although no additional F-values or p-values are provided for these later time points. Table 3 summarizes the significant pairwise comparisons of HbA1c levels across different time points, highlighting the mean differences and associated p-values. The comparisons demonstrate a statistically significant reduction in HbA1c levels when comparing the pre-surgery measurement (HbA1cB) to those at the 1-year (HbA1c12M), 2-year (HbA1c24M), 3-year (HbA1c36M), 4-year (HbA1c48M), and 5-year (HbA1c60M) marks, with all p-values reported as 0.000. The mean differences ranged from 1.88 to 1.99, indicating substantial improvements in glycemic control over time. In contrast, there were no significant differences noted between subsequent years (e.g., HbA1c12M vs. HbA1c24M), with p-values all at 1.00, indicating stability in HbA1c levels after the initial postoperative year. 
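As a rough illustration of the analysis strategy described at the start of this section (repeated-measures ANOVA across follow-up time points and logistic regression for predictors of remission), a minimal R sketch is given below. The study itself used SPSS; the objects hba1c_long, patients and the column names are assumptions, not the authors' variables.

```r
# Long-format HbA1c data (one row per patient per time point, with
# patient_id and time coded as factors) for a repeated-measures ANOVA
summary(aov(hba1c ~ time + Error(patient_id / time), data = hba1c_long))

# Logistic regression screening predictors of diabetes remission,
# reported as odds ratios with Wald 95% confidence intervals
fit_rem <- glm(remission ~ gender + baseline_bmi,
               family = binomial(link = "logit"),
               data   = patients)
exp(cbind(OR = coef(fit_rem), confint.default(fit_rem)))
```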
The odds ratio (OR) for gender is 2.16, with a 95% confidence interval (CI) of (0.10, 45.05) and a p-value of 0.49. This result indicates that while the odds of diabetic remission are higher for males compared to females, the association is not statistically significant. The analysis showed that baseline BMI does not significantly indicate relapse after bariatric surgery in diabetic patients. The odds ratio of 1.02 suggests a minimal 2.5% increase in the odds of relapse for each unit increase in BMI, but this effect is not statistically significant since the p-value is 0.65 (Table 4 ). These findings suggest that baseline BMI is not a strong factor in determining whether a patient experiences diabetes relapse post-surgery, and other variables may play a more significant role. The mean %EWL values for the first five years were recorded as follows: 58.91% (± 29.67%), 61.55% (± 25.79%), 60.77% (± 26.47%), 62.73% (± 27.18%), and 57.27% (± 43.82%). The F-value for the %EWL across time points was 0.48, with a corresponding p-value of 0.71, indicating no statistically significant difference in %EWL over the observed years. For %TWL, the values were 28.44% (± 8.25%), 30.25% (± 9.03%), 29.82% (± 9.80%), 30.79% (± 10.51%), and 30.90% (± 18.83%). The F-value for %TWL was 0.93 with a p-value of 0.44, suggesting a lack of significant variation in %TWL over the five-year period. In contrast, %WR showed a statistically significant change across the time points, with values starting at 6.77 ± 11.8% at Year 2, increasing over the years, and peaking again at 26.64 ± 44.84% by Year 5. The F-value for %WR was 239.96 with a p-value of < 0.001, indicating a significant overall difference across years for %WR. Weight measurements are shown in Table 5 . The results of the study showed significant improvements in DM and sustained lower HbA1c levels post-surgery, as evidenced by the marked reduction in HbA1c levels at various time points (1 year, 2 years, etc.). The gender distribution in our cohort, with females comprising 71.62% and males 28.37%, aligns with broader trends observed in bariatric surgery populations worldwide. This consistency suggests that gender-specific factors, such as higher referral rates and patient-driven requests due to body image concerns, may play a significant role in the decision to pursue bariatric surgery, highlighting the need for targeted interventions to address potential disparities in access or healthcare-seeking behaviors . The most pronounced finding of this study is the markedly lower rates of DM remission following bariatric surgery compared to both local and global literature. While the average remission rate reported in previous global studies hovers around 50%, a local study reported a remission rate of 48.5% . Our cohort demonstrated a substantially reduced rate of only 27%. This discrepancy is not only statistically significant but also clinically relevant, warranting careful examination and interpretation. Several factors may contribute to this notable disparity. Firstly, with regard to patient demographics and preoperative characteristics, our study population may differ in key aspects such as age, duration of diabetes, severity of insulin resistance, or preoperative BMI. These factors might have an influence on surgical outcomes and could partially explain the lower remission rates observed. Secondly, some clinicians may still prescribe antidiabetic medication after the surgery even though the medication could be ceased. 
This could be a major cause and result in lower participants who achieved DR depending on ADA criteria. Another possible factor is the lower sample size than anticipated, which is caused by the lack of follow-up (for two consecutive years) that led to many participants being expelled from the cohort study. In this specific aspect, our research appears to have more stringent criteria compared to other research. Another considerable factor is the variation in the criteria across published papers. The substantial difference in remission rates emphasizes the need for population-specific research and tailored approaches to bariatric surgery. It challenges the generalizability of global averages to all populations and highlights the importance of managing patient expectations based on local data. This study may influence local clinical practice by emphasizing the necessity of follow-up after surgery. Limitations Firstly, a significant challenge encountered during data collection for our study on the rate of diabetic remission following bariatric surgery was the substantial loss of follow-up among participants. This attrition resulted in missing participants, which poses a considerable limitation in accurately assessing long-term outcomes. This might have an impact on the results of diabetes remission in this study. Some patients who lost follow-up may have developed diabetes remission afterward and vice versa. The discontinuation of follow-up care is not uncommon in bariatric surgery cohorts, as highlighted by previous studies indicating a marked decline in patient compliance over time. For instance, follow-up rates can drop drastically from 90% in the first year to less than 10% after 10 years . This trend underscores the necessity for strategies to enhance patient retention and ensure comprehensive data collection, thereby enabling more robust and reliable conclusions regarding the efficacy of bariatric surgery in achieving sustained diabetes remission. Secondly, the substantial loss to follow-up not only resulted in missing participants but also significantly reduced the number of patients eligible for inclusion in our study. This reduction in sample size poses a considerable challenge to the statistical power and generalizability of our findings. The stringent inclusion criteria, coupled with the high attrition rate, led to a smaller cohort than initially anticipated. Consequently, our analysis may be limited in its ability to capture the full spectrum of outcomes across diverse patient profiles. This underscores the critical need for innovative retention strategies and perhaps the implementation of multiple imputation techniques or sensitivity analyses to mitigate the impact of missing data on the validity of our conclusions. Thirdly, the relatively new Electronic Medical Records did not include patients who followed up before the year of the introduction of the system. This study is consistent with previous studies and fills a crucial gap in localized data, providing insights that could guide future healthcare strategies and improve the management of T2DM in the region. It provides compelling evidence that bariatric surgery leads to substantial improvements in glycemic control and weight reduction among Saudi patients with T2DM. However, it shows a lower percentage of remission in our participants compared to other local and global studies and raises an important concern for follow-up of the patients. 
Further local research is recommended to confirm our findings, analyze the reason for lower rates, and investigate the predictors of remission and relapse. | Review | biomedical | en | 0.999997 |
PMC11696836 | Hurricanes are often destructive and can lead to acute and longer-term adverse health outcomes. Beyond immediate traumatic injuries, hurricanes can aggravate existing environmental health issues, such as when heavy precipitation and flooding spread pathogens and chemicals from flooded hazardous waste sites, oil refineries, animal manure ponds, or other industrial sites [ 1 – 5 ]. Environmental contamination exacerbated by hurricanes varies by region. As North Carolina (NC) is the third top hog producer in the United States (US) with 9 million hogs and also the third most hurricane-prone US state , hurricanes that strike NC may inundate hog manure ponds and result in contamination of nearby waterways . Most NC hogs are housed, with thousands in a single building, at large concentrated animal feeding operations (CAFOs) in the eastern, hurricane-prone region of the state . NC industrial animal operations produce over nine billion gallons of fecal waste annually . Liquid fecal waste from hogs is collected in uncovered pits, or lagoons, which are regularly sprayed onto neighboring fields . During heavy rain and hurricanes, fecal bacteria from manure-applied fields or from flooded lagoons may be transported from CAFOs into nearby waterways . Surface water near hog and poultry CAFOs has been found to have elevated levels of fecal indicator bacteria, nitrogen, and phosphorus [ 12 – 14 ]. Contact with pathogens from hog manure (e.g. Escherichia coli, Salmonella, Campylobacter, Yersinia enterocolitica, Cryptosporidium, Giardia ) may cause diarrhea, vomiting, nausea, or other gastrointestinal distress in humans, collectively referred to as acute gastrointestinal illness (AGI) . AGI is painful and can be detrimental to health, especially in young children and older adults . Approximately 2330 000 waterborne enteric illnesses occurred in 2014 in the US, which incurred about $160 million in direct healthcare costs . Although news reporters regularly discuss the dangers of flooded hog CAFOs when large hurricanes strike NC, very few studies have examined the effect of flooded hog CAFOs in NC on AGI [ 19 – 21 ]. Communities near hog CAFOs have reported various health problems, including diarrhea, headaches, methicillin-resistant Staphylococcus aureus -related infections, impaired quality of life, and eye, nose, and throat irritation . Many residents near CAFOs use private wells, which have a higher risk of contamination than community water supplies . Hog CAFOs are densely concentrated in rural, eastern NC counties that typically have reduced healthcare access, have a higher percentage of people of color than the state average, and are also home to other detrimental industrial exposures like poultry CAFOs and landfills [ 24 – 27 ]. CAFO exposure in NC is an environmental justice issue. Multiple studies have found that vulnerable subpopulations have disparate exposure to CAFOs, including Black and Hispanic residents in Wisconsin and low-income communities in Delaware and North Carolina [ 28 – 31 ]. These vulnerable populations living near CAFOs may also be particularly vulnerable during natural disasters. Hurricane Matthew and Hurricane Florence were the two largest hurricanes to strike NC in the past decade and cost the state $1.5 billion and $22 billion, respectively . Hurricane Florence drenched NC with 8 trillion gallons of water in one week, making it the wettest hurricane on record in the state . 
Hurricane Matthew caused at least 14 hog manure lagoons to flood and 2 lagoons to breach , and at least 110 hog manure lagoons were breached or inundated in NC due to Hurricane Florence . Hurricane flooding in North Carolina has led to elevated fecal coliform levels, high nutrient concentrations, and severe dissolved oxygen deficits in surface water, some of these elevations may be due to CAFOs and sewage treatment plants [ 37 – 40 ]. This paper examines the combined effect of hurricane precipitation and hog CAFO exposure on AGI in NC and assesses this effect across two different hurricanes—Hurricanes Matthew and Florence. Previous studies have found hurricanes and high hog CAFO exposure to be associated with increased AGI rates , but this is the first study to examine how the rates of AGI emergency department (ED) visits in NC change after hurricanes in areas with heavy hurricane precipitation and varying exposure to hog CAFOs. Understanding the connection between flooding, hog CAFOs, and health is important in developing appropriate interventions, especially as climate change models predict that NC will continue to see an increase in heavy precipitation events . The study population comprises of NC residents who lived in areas that received heavy precipitation during Hurricanes Matthew or Florence, including residents who lived near many or no hog CAFOs. Cases include NC residents who visited a NC ED in 2016–2019 and had an AGI-related diagnosis code. The finest geographic resolution ED data was the ZIP code level; thus, all analyses were conducted at this level. Hurricane Matthew struck NC on 8 October 2016, and Hurricane Florence hit NC on 14 September 2018. We examined the change in AGI ED rate during the three weeks after the hurricanes by using 2016–2019 data trends to estimate the predicted AGI ED visit rate had the events not occurred. We were interested in a three-week post-hurricane period because there is likely a lag between water contamination and human exposure to contaminated water, because flooding from Hurricane Florence lasted a week or more in some areas, and because the AGI-causing pathogens in floodwater have up to a two-week incubation period . We obtained daily precipitation data as 4 km-by-4 km raster data from the Parameter-elevation Regressions on Independent Slopes Model Climate Group . We then subsampled this data into 1 km raster data and used the 1 km centroids to aggregate the precipitation data to the NC ZIP code polygons. We assigned ZIP codes the daily maximum precipitation recorded in the ZIP code. For each ZIP code, we summed the daily maximum precipitation during the week of Hurricanes Matthew and Florence to capture the total hurricane precipitation by area for each storm. ZIP codes in the top quartile of hurricane precipitation (Matthew: >9 inches/229.67 mm; Florence: >12.8 in/325.19 mm) were categorized as severely affected by the hurricane (‘heavy storm precipitation’). We used 2019 swine permit data from the NC Department of Environmental Quality (DEQ), which included the location, number of animals, and type/life stage of animals of each permitted animal facility . We counted the number of hog CAFOs contained within each ZIP code or a half mile of each ZIP code’s geographical boundary. We categorized areas with no hog CAFOs as ZIP codes that neither contain a hog CAFO nor have any hog CAFOs within a half mile of the ZIP code border. 
We categorized ZIP codes with hog CAFOs into low hog CAFO-exposed ZIP codes (1–10 hog CAFOs within the ZIP code or within a half mile of the border) and high hog CAFO-exposed ZIP codes (>10 hog CAFOs). Poultry CAFOs are often co-located near hog CAFOs in NC, and exposure to pathogens that may be found in poultry waste can also lead to AGI. Our main analyses focused on hog CAFOs because hog CAFOs produce mostly liquid waste collected in uncovered lagoons that can flood while poultry CAFOs produce mostly dry waste; however, we also examined poultry CAFOs and the co-location of both poultry and hog CAFOs. We obtained data on poultry CAFO locations from the Environmental Working Group and Waterkeepers Alliance. They identified these locations using high-resolution satellite data and aerial photographs; they also estimated the number of birds at each facility using the National Agriculture Imagery Program as well as the 2017 Census of Agriculture from the United States National Agricultural Statistics Service . We developed similar categories for poultry CAFOs as we did for hog CAFOs. ZIP codes with no poultry CAFOs within the ZIP code or a half a mile of the border were categorized as 0 poultry CAFOs. ZIP codes with poultry CAFOs were categorized into 1–10 poultry CAFOs and >10 poultry CAFOs within the ZIP code or a half a mile of the border. We also developed categories of ZIP codes with heavy storm precipitation and >10 hog CAFOs and >10 poultry CAFOs, heavy storm precipitation and no hog CAFOs or poultry CAFOs, and a middle category for the ZIP codes with heavy storm precipitation and some CAFOs. AGI was measured using data from the NC Disease Event Tracking and Epidemiologic Collection Tool (NC DETECT), a public health surveillance system of civilian ED visits in NC. AGI rates for 2016–2019 were calculated at the ZIP code level. We used diagnostic codes (International Classification of Diseases, Tenth Revision; ICD-10) to classify intestinal infectious illness (A00–A09), unspecified noninfectious gastroenteritis and colitis (K52.3, K52.89, K52.9), diarrhea (R19.7), and nausea and vomiting (R11.10-R11.12) as AGI ED visits. Similar diagnosis codes have been used in other studies of flooding and AGI . Our analyses examined all-cause AGI rates because specific pathogens are seldom tested for and are rarely included in hospital discharge reports. Data on the total number of residents and other demographics were available at the block group-level from the 2017 American Community Survey (ACS). We assigned these values to the centroids of each 2010 Census block based on the proportion of the block group population within that block and then aggregated these block centroid data to create ZIP code-level population estimates. We also used the 2018 CDC/ATSDR social vulnerability index (SVI) for NC to examine the other social and environmental exposures and vulnerabilities that residents living near hog CAFOs and hurricane flooding face . The SVI assesses Census tract-level vulnerability in terms of socioeconomic status (SES), household composition and disability, minority status and language, and housing type and transportation. The SVI ranges from 0 to 1, with 1 being the most vulnerable. We attributed the tract-level SVI scores to the ZIP code level by taking the mean of the scores inside each NC ZIP code. We first described the demographics of residents living in ZIP codes with heavy hurricane rain and the various hog CAFO categories, as well as statewide, to assess exposure disparities. 
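A simplified sketch of how the exposure and outcome variables described in the preceding paragraphs could be assembled in R is shown below. It is not the authors' exact pipeline (which subsampled the PRISM grids to 1 km centroids before aggregation), and the file names, raster layout, and column names (including the hypothetical ED-visit table ed with an icd10 column) are assumptions.

```r
library(terra)
library(sf)

zips  <- st_read("nc_zcta.shp")                               # ZIP-code (ZCTA) polygons
prism <- rast(list.files("prism_daily", full.names = TRUE))   # one layer per storm-week day, mm

# Maximum daily precipitation recorded in each ZIP code, summed over the storm week
daily_max <- extract(prism, vect(zips), fun = max, na.rm = TRUE)
zips$storm_precip <- rowSums(daily_max[, -1])                 # drop the ID column
zips$heavy_rain   <- zips$storm_precip > quantile(zips$storm_precip, 0.75)

# Hog CAFOs inside a ZIP code or within ~0.5 mile (805 m) of its border
# (buffer distance assumes a metric CRS or s2 geometry)
cafos <- read.csv("swine_permits.csv") |>
  st_as_sf(coords = c("longitude", "latitude"), crs = 4326) |>
  st_transform(st_crs(zips))
zips$n_hog_cafos <- lengths(st_intersects(st_buffer(zips, 805), cafos))

# Flag AGI-related ED visits from ICD-10 codes (A00-A09, K52.3/.89/.9,
# R19.7, R11.10-R11.12)
agi_pattern <- "^A0[0-9]|^K52\\.(3|89|9)$|^R19\\.7|^R11\\.1[0-2]"
ed$agi <- grepl(agi_pattern, ed$icd10)
```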
We assessed the change in AGI ED rate during the three weeks after Hurricanes Matthew and Florence in areas with heavy storm precipitation and 0 hog CAFOs, heavy storm precipitation and 1–10 hog CAFOs, and heavy storm precipitation and >10 hog CAFOs . We used interrupted time series, which allows every ZIP code to be compared to itself over time . This method uses the daily AGI ED rate in each ZIP code to predict the AGI ED rate after the hurricanes had the hurricane not occurred. We modeled Hurricanes Matthew and Florence separately. Because of potential over-dispersion of the outcome, we used quasi-Poisson models; the regression model included indicator variables for the post-hurricane flood period and time-control variables for the day of week, month, and year, and an interaction between month and year. To estimate the change in population-based AGI ED visit rate after a hurricane, the yearly ZIP code population (derived from ACS data) was included as an offset in the model. Robust standard errors were used to calculate 95% confidence intervals (95% CI) using the sandwich package in R. In sensitivity analyses, we also examined the change in AGI ED visit rate during the 1-, 2-, 4-, and 5-week periods after the hurricanes. Additionally, we examined the change in AGI ED visit rate during the three weeks after the hurricanes in ZIP codes with heavy storm precipitation and >20 hog CAFOs. Because CAFOs have differing numbers of animals, we also conducted sensitivity analyses based on the total number of hogs and birds in CAFOs within ZIP codes . Because Hurricane Florence dropped substantially more rain than Hurricane Matthew, we conducted a sensitivity analysis for Hurricane Matthew that only included ZIP codes that received >12.8 in/325.19 mm precipitation (Hurricane Florence's heavy precipitation threshold) during the week of Hurricane Matthew. We also conducted a sensitivity analysis that included sanitary sewer overflow (SSO) data, as hurricane precipitation can also cause sewer overflows, which can spread fecal pathogens and could lead to AGI. The SSO data were provided by the NC DEQ, Division of Water Resources. These data included all reported SSO incidents from 2016 to 2018 by county, with the date and estimated total volume in gallons. In this sensitivity analysis, we conducted interrupted time series analyses while adjusting for SSOs within the county within the past two weeks (see SI). Lastly, we examined the change in the rate of all ED visits (not just AGI visits) during the three weeks after the hurricanes to assess whether our results were driven by ED usage patterns after storms or were specific to AGI. All analyses were performed in R (Version 4.1.3) . There were a total of 2714 hog CAFOs in areas that received heavy rain (>75th percentile of storm precipitation) during Hurricane Matthew and 2964 hog CAFOs in areas that received heavy rain during Hurricane Florence. In ZIP codes with heavy storm rain and >10 hog CAFOs, there were 663 AGI ED visits during the 3 weeks after Hurricane Matthew and 1063 AGI ED visits during the 3 weeks after Hurricane Florence, with 670 AGI ED visits in the 3 weeks before Matthew and 927 AGI ED visits in the three weeks before Florence. ZIP codes that contained >10 hog CAFOs and received heavy rain during Hurricanes Matthew or Florence had a higher proportion of Black, American Indian, and Hispanic residents and uninsured residents than the state average (table 1 ).
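To make the modeling step concrete, a minimal sketch of the quasi-Poisson interrupted time series model described in the methods above, with a population offset and sandwich-type robust standard errors, might look like the following in R; the data frame zip_day and its column names are hypothetical.

```r
library(sandwich)
library(lmtest)

# Hypothetical ZIP-day panel: one row per ZIP code per day, with columns
# agi_visits, population, post_period (1 during the three weeks after the
# storm, 0 otherwise), and calendar factors dow, month, year.
fit <- glm(agi_visits ~ post_period + dow + month * year,
           offset = log(population),
           family = quasipoisson(link = "log"),
           data   = zip_day)

# Rate ratio and 95% CI for the post-hurricane period, using sandwich-type
# robust standard errors
ct <- coeftest(fit, vcov = vcovHC(fit, type = "HC0"))
b  <- ct["post_period", "Estimate"]
se <- ct["post_period", "Std. Error"]
exp(c(RR = b, lower = b - 1.96 * se, upper = b + 1.96 * se))
```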
ZIP codes with heavy storm precipitation and >10 hog CAFOs also had lower household annual median incomes and higher poultry density than both the state average and areas with heavy storm precipitation and no hog CAFOs. Additionally, areas with heavy storm precipitation and >10 hog CAFOs were more vulnerable according to every SVI scale—SES, disability, minority status, transportation, and total vulnerability. ZIP codes with many hog CAFOs and heavy storm precipitation were also more rural and geographically isolated and had lower overall AGI ED visit rates than ZIP codes with heavy storm precipitation and no hog CAFOs (table 1). We observed a 15% increase in AGI ED visit rate (rate ratio [RR] = 1.15, 95% confidence interval [CI]: 1.04, 1.27) during the three weeks after Hurricane Florence among ZIP codes with >10 hog CAFOs and heavy storm precipitation compared to the expected AGI ED rate at this time based on 2016–2019 trends (table 2). We did not observe a substantial increase in AGI ED visit rate (RR = 1.05, 95% CI: 0.86, 1.24) during the three weeks after Hurricane Matthew in ZIP codes with >10 hog CAFOs and heavy Hurricane Matthew precipitation. We did not observe an increase in AGI ED visit rate in ZIP codes with heavy storm precipitation and no hog CAFOs or in ZIP codes with heavy storm precipitation and 1–10 hog CAFOs during the three weeks after Hurricanes Matthew or Florence (table 2). We observed very small and imprecise increases in AGI ED visit rates in areas with heavy storm precipitation and >10 poultry CAFOs after the hurricanes (Matthew: RR = 1.08, 95% CI: 0.91, 1.26; Florence: RR = 1.06, 95% CI: 0.96, 1.16, table 2). During the three-week period after Hurricane Florence, there was a 13% increase in AGI ED visit rate in areas with 1–10 poultry CAFOs and heavy hurricane precipitation (RR = 1.13, 95% CI: 1.27). In areas with heavy storm rain, >10 hog CAFOs, and >10 poultry CAFOs, we observed a 15% increase in AGI ED visit rates during the three weeks after Hurricane Florence (RR = 1.15, 95% CI: 1.02, 1.28); we did not observe a substantial increase in these areas during the three-week period after Hurricane Matthew (RR = 1.08, 95% CI: 0.87, 1.29). In our sensitivity analyses, we observed relatively null results for the various time periods ranging between 1 and 5 weeks after Hurricane Matthew in areas with heavy storm precipitation and the various hog CAFO categories. While we observed increases in AGI ED visit rates during the three- and four-week periods after Hurricane Florence in ZIP codes with heavy storm precipitation and >10 hog CAFOs, we observed decreases in AGI ED visit rates in areas with heavy storm precipitation and >0 hog CAFOs during the one-week period during/after Hurricane Florence. Upon examining ZIP codes with >20 hog CAFOs and heavy storm precipitation, we observed very slight, imprecise increases in AGI ED visit rates during the three weeks after Hurricanes Matthew and Florence (Matthew: RR = 1.10, 95% CI: 0.83, 1.36, n = 300 AGI ED visits, 37 ZIP codes, mean precipitation: 323 ± 71 mm; Florence: RR = 1.13, 95% CI: 0.97, 1.29, n = 441 AGI ED visits, 43 ZIP codes, mean precipitation: 625 ± 120 mm). We observed a 34% increase in AGI ED visit rate during the one-week period after Hurricane Matthew in areas with heavy hurricane precipitation and >20 hog CAFOs, although this is based on only 130 AGI ED visits during this period (RR = 1.34, 95% CI: 1.06, 1.61, 37 ZIP codes, mean precipitation: 323 ± 71 mm).
When using Hurricane Florence's upper quartile of storm precipitation (325 mm) to designate heavy storm rain for Hurricane Matthew, we observed fairly similar results to our main analysis of Matthew (table S1). While we did not observe significant increases in AGI ED visit rates after Hurricane Matthew, we observed a suggestive, imprecise 17% increase in AGI ED visit rates during the two weeks after Matthew in areas with >10 hog CAFOs and >323 mm storm precipitation (RR = 1.17, 95% CI: 0.94, 1.41; table S1). Our sensitivity analyses based on the total number of animals yielded results similar to our main analyses based on the number of CAFOs (table S2). We observed a 16% increase in AGI ED visit rate during the three weeks during/after Florence in ZIP codes with >10 000 hogs and heavy hurricane precipitation (RR = 1.16, 95% CI: 1.05, 1.27) and no increase in areas with 0 hogs or 1–10 000 hogs. During this three-week post-Florence period, we also observed a 13% increase in ZIP codes with 1–1 000 000 birds (RR = 1.13, 95% CI: 0.99, 1.28) and a 10% increase in ZIP codes with >1 000 000 birds and >10 000 hogs (RR = 1.10, 95% CI: 0.98, 1.21, table S2). Additionally, the results from our sensitivity analyses that adjusted for SSOs were similar to our main analyses (table S3). Lastly, when examining the change in total ED visit rates after the hurricanes in the same ZIP codes with heavy hurricane precipitation and various levels of CAFOs, we observed no increase in total ED visit rate during the three weeks after Hurricane Florence in areas with heavy rain and 0 CAFOs, 1–10 hog CAFOs, or >10 hog CAFOs (table S2). In this paper, we found that areas with heavy hurricane precipitation and many hog CAFOs experienced an increased AGI ED visit rate during the three-week period after Hurricane Florence, compared to their expected AGI ED rates. We also observed an increase in AGI ED rates in areas with >20 hog CAFOs and heavy hurricane precipitation during the one-week period after Hurricane Matthew and a suggestive increase in areas with >10 hog CAFOs and extremely heavy storm precipitation during the two-week period after Matthew. This difference in the timing of the AGI ED visit rate increase is likely due to differences in intensity, duration, and antecedent conditions between the storms. We did not observe an increase in AGI ED visit rate during the one- to five-week periods after the hurricanes in ZIP codes without hog CAFOs and with heavy storm precipitation, suggesting that the increase we saw after Florence in areas with heavy storm rain and many CAFOs may not be attributable to the hurricane alone. We also observed no increase in overall ED visit rate in these areas during the three weeks after Hurricane Florence, further suggesting that the presence of hog CAFOs in these communities may have led to the increased AGI incidence. Areas with many hog CAFOs and heavy storm precipitation were more vulnerable in terms of SES, disability, and availability of transportation than areas with heavy storm precipitation and no hog CAFOs. Although Matthew and Florence struck fairly similar areas of NC, Florence dropped substantially more rain on NC. Also, several heavy rain events preceded Hurricane Matthew, while Hurricane Florence was preceded by a dry period. Hurricane Hermine struck NC five weeks before Hurricane Matthew, dropping up to 13 in. of rain, and severe heavy rain events, dropping up to 10 in. of rain, occurred just nine days before Matthew.
These heavy rain events prior to Hurricane Matthew may explain why we saw an immediate increase in AGI ED rate, as river levels were already relatively high and most of Matthew's precipitation fell on one day. Hurricane Florence was a slow-moving hurricane that stalled over NC, with parts of the state receiving up to 36 inches of rain. While many rivers crested within 4 d of Hurricane Florence's landfall, some crested 9 d later, which may explain why the increase in AGI ED visit rate was delayed until 3 weeks after the storm. The decrease in AGI ED visit rates in areas with heavy storm precipitation and >0 hog CAFOs during the one-week period during/after Hurricane Florence is somewhat unexpected; however, this decrease could reflect the difficulty rural residents had in traveling to EDs during the week of Hurricane Florence because of the weeklong flooding. The differences we observed during the first week during/after Hurricanes Matthew and Florence are likely because Florence caused much more destruction, was a slower and longer-lasting storm, and caused more people to evacuate than Hurricane Matthew. Previous studies have shown that all hurricanes affect the environment and water quality differently; these differences can also stem from the hurricanes' direction after landfall, which also differed between Matthew and Florence. While the strongest increase in AGI ED visit rates occurred in the areas with heavy storm precipitation and >10 hog CAFOs during the three weeks after Florence, we also observed a suggestive increase in areas with 1–10 and >10 poultry CAFOs and heavy Florence rain, although these areas also contain hog CAFOs. When examining ZIP codes with both poultry and hog CAFOs, the results were similar to what we observed when examining just hog CAFOs. It is difficult to disentangle the effects of poultry and hog CAFOs because they are so commonly co-located in flood-prone eastern NC. Central and western NC contain poultry CAFOs without nearby hog CAFOs, but these areas received less rain during the hurricanes, making direct comparison difficult. Previous analyses of this NC ED data found that ZIP codes with high hog CAFO exposure had an 11% higher AGI ED rate than control areas and that areas with both poultry and hog CAFO exposure had a 52% higher AGI ED rate. However, in the current paper, which considers both CAFOs and hurricane precipitation, we did not observe a higher AGI rate in areas with both poultry and hog CAFOs than in areas with just hog CAFOs. Several of the findings in this paper are supported by other studies. Heavy rain and flooding have been linked to an increase in gastrointestinal illness rate, even in areas without CAFOs, because sewer overflows, overwhelmed municipal water systems, and damaged septic systems increase the spread of pathogens. The results from our sensitivity analysis that incorporated SSO data were similar to our main results, highlighting that SSOs were not driving the increase in AGI ED visit rate we observed after Hurricane Florence. Additionally, areas with >10 hog CAFOs did not have a higher total volume of SSOs during the week after the hurricanes struck NC than areas with 1–10 hog CAFOs (table S3). While some studies have observed an increase in AGI rate during the 0–5 d after flooding, others have seen the increase in AGI rate occur 7–30 d after flooding.
The null result this study observed in ZIP codes with no hog CAFOs and heavy storm precipitation was somewhat unexpected, but some studies have also observed no association; our results indicate that nearby environmental exposures may play a large role in the relationship between heavy hurricane precipitation and AGI rate. Although prior analyses of this NC ED data found a small increase in AGI ED visit rate during the three weeks after Hurricanes Matthew and Florence in areas with severe flooding, those analyses included all heavily flooded areas and did not consider other environmental co-exposures such as hog CAFOs. A recent study found that ED visits decreased in flooded census tracts during the month following Hurricane Harvey and that the decrease was smaller in areas with moderate, high, and very high vulnerability. Their results suggest that flood survivors with inadequate housing and transportation used EDs for healthcare during and after the flooding more than they normally did. One study found that hurricane-related ED visits for medication refills in NC were higher during the weeks after Hurricane Florence than before Florence, indicating that many residents use EDs to obtain medication when pharmacies are closed after large hurricanes. Our study highlights how residents who experienced heavy storm rain and were proximate to many hog CAFOs had more underlying social vulnerability than the state average. Social vulnerabilities may affect ED usage and disaster vulnerability, and the socially vulnerable are often more likely to be exposed to harmful environmental exposures, including CAFOs. Our finding that areas with heavy hurricane rain and hog CAFOs have a higher proportion of Black and American Indian residents than the NC state average has also been shown in other studies over at least two decades. In 1999, Hurricane Floyd caused five hog lagoons to breach and at least 50 lagoons to flood in NC. Numerous lagoons suffered structural damage. Wing et al. found that, according to satellite images from Hurricane Floyd, African Americans were more likely than white people to live in areas with flooded hog CAFOs in NC. Another study estimated that flooding affected 303 hog lagoons after Hurricane Matthew and 287 hog lagoons after Hurricane Florence (with "affected by flooding" defined as hog lagoons that flooded or were within 60 m of detected flooding). These same analyses estimated that 299 permitted wastewater treatment plants (41% of wastewater treatment plants in the NC study area) were affected by Hurricane Florence flooding and 239 (33%) were affected by Matthew. Studies found elevated concentrations of E. coli, as well as both human and swine-associated fecal markers, in surface water after Hurricanes Matthew and Florence, suggesting that these hurricanes spread fecal waste. Researchers also observed Salmonella typhimurium in water samples near hog CAFOs after Hurricane Florence. Although there is a rich literature on the effects hurricanes have on water quality, few papers have investigated health outcomes associated with this flooding. Setzer and Domino examined the health effects of flooded hog CAFOs in NC using Medicaid outpatient data to assess whether Hurricane Floyd was associated with increased waterborne disease-related outpatient visits in eastern NC.
They examined counties with high concentrations of hogs and classified the counties by the impact of Hurricane Floyd, as measured by the Federal Emergency Management Agency's (FEMA) assessment of the storm's socioeconomic impact (severe, moderate, minor, not affected). The study is somewhat limited by these definitions, as FEMA's designation of hurricane impact applies to the entire county and does not assess which hog CAFOs were affected by the heaviest precipitation. Using difference-in-differences, they found an increase in visits for ill-defined intestinal infections in severely and moderately affected counties, compared to unaffected counties. However, the study did not draw any conclusions regarding the combined effect of hurricane flooding and hog CAFOs on gastrointestinal illness, partly because their study did not include any counties that were affected by Floyd but did not have a high concentration of hogs—possibly because most counties severely harmed by Floyd contained hog CAFOs. While other studies have not examined the health effects of hurricane precipitation in combination with hog CAFOs, several studies have found increased concentrations of E. coli, Clostridium, and Giardia (which can cause AGI) in surface water and wells after heavy rain events, with stronger associations in areas with swine manure. Similarly, Febriani et al. observed an association between high precipitation periods in the fall season and increased AGI risk three weeks later; they also found that industrial farming and season modified the association between cumulative precipitation and AGI four weeks later. These papers and others highlight that hog CAFOs are associated with increased AGI even during non-hurricane periods. Hog waste from lagoons is regularly sprayed onto nearby fields in NC, leading to elevated levels of nitrate, ammonium, phosphorus, and fecal coliform in surface water near poultry and hog CAFOs in NC. Runoff from fields with recent hog manure application has been found to have higher concentrations of E. coli compared to control fields; thus, hog CAFOs can pollute surface and groundwater even if manure lagoons do not spill. In a previous paper using the same NC ED data as this study, our study team found that the positive association between high hog exposure and AGI ED visit rate was stronger when a heavy precipitation event (>99th percentile of daily precipitation, >2.4 inches) had occurred within the previous week than when the previous week had been dry. That study supports this paper's conclusion that exposure to both heavy hurricane precipitation and many hog CAFOs appears to increase AGI ED rate. This study's strengths include using interrupted time series to compare ZIP codes to themselves over time as well as the examination of two hurricanes that struck the same general areas only two years apart. Comparing areas to themselves over time allows control for known and unknown time-invariant confounders, like demographics and constant environmental exposures. We also incorporated data on the total number of hogs and birds as well as SSOs in sensitivity analyses to highlight the robustness of our results. We observed elevated AGI ED visit rates after Florence in ZIP codes with >10 hog CAFOs as well as in ZIP codes with >10 000 hogs; our results were similar whether we measured CAFO exposure by number of CAFOs or by number of animals.
Our study was limited by our inability to obtain information as to how the heavy storm precipitation compromised hog CAFOs and hog lagoons, as some lagoons breached, others experienced significant structural damage, and others only flooded. These different impacts of heavy precipitation on hog lagoons are likely to have large effects on the amount of hog waste and fecal bacteria that subsequently contaminate waterways. Because this information was unavailable, we examined the effect of heavy precipitation as a surrogate measure. This study is also limited by the ZIP code-level ED data. However, this ZIP code-level analysis is an improvement in geographic granularity over other studies that examined this question at a county level. Our analyses were also limited because the demographics of the areas with many hog CAFOs and heavy rain during hurricanes were quite different from those of areas without hog CAFOs. We compared AGI ED rates in ZIP codes after hurricanes to their expected AGI ED rates had the hurricanes not occurred because appropriate control areas could not be created. Our prior efforts to make hog CAFO and hog CAFO-free control areas comparable via weighting were unsuccessful because of marked sociodemographic differences between these areas (not shown). Because of these limitations, we are unable to make causal statements from our results. The differences in demographics and social vulnerability between the categories of ZIP codes of heavy storm rain with 0, 1–10, and >10 hog CAFOs could affect their ED usage patterns and how these populations responded to and recovered from the hurricanes. Thus, caution is required when comparing results between these CAFO count categories. Nevertheless, our findings that there was no increase in total ED visit rate during the three weeks after Hurricane Florence in areas with heavy storm rain and >10 hog CAFOs support our conclusion that the observed increase in AGI ED visits was related to the presence of the hog CAFOs. The unequal distribution and simultaneous concentration of hurricane-prone areas, hog CAFOs, and communities of color in ‘sacrifice zones’ can cause structural confounding issues that make causal analysis difficult . The high storm precipitation ZIP codes with >10 hog CAFOs received more rain, on average, than high precipitation ZIP codes with 0 hog CAFOs (table 2 ), highlighting that hog CAFOs are located in areas that receive an especially large amount of rain during hurricanes. However, this also makes it difficult to identify hog CAFOs as the causal agent. Most areas in NC that experienced heavy precipitation and flooding from these hurricanes have many hog CAFOs (except for the coast, which has very different demographics) and most unflooded areas have few or no hog CAFOs. This highlights an important environmental justice and climate justice issue, that flooding and related environmental health problems disproportionately harm low-income residents and people of color, who are also disproportionately harmed by hog CAFOs in NC. Historically, several Black towns, like Princeville, NC, were established in flood plains, as this was some of the only land available to Black people . Additionally, a recent study found that the current legal NC floodplain underestimates the impacts of flooding on areas with high proportions of older adults, disabled individuals, unemployment, and mobile homes . Existing social vulnerabilities and environmental injustices often contribute to disaster vulnerabilities . 
Hurricanes will continue to hit NC and hog lagoons will continue to flood and spread pathogens despite wide discussion of the effects of flooded and damaged lagoons and the ban on building new lagoons in the 100 year floodplain . The co-occurrence of hog CAFOs in communities of color and climate change impacting those same communities through hurricanes doubly harms these communities now and in the future. Over the last few decades, NC’s regulation of hog CAFOs has changed very little about these disproportionate exposures; instead, risks have increased over time as the industrial poultry industry has expanded in many of the same areas, and hurricanes have become more frequent and intense . Although the NC Swine General Permit provides some protection to the environment and nearby communities under usual conditions, this study and others suggest that the protection may be inadequate at preventing health problems resulting from the spread of hog waste during hurricanes and other heavy precipitation events. In addition to the human health effects from flooding at CAFOs, tens of thousands of hogs and poultry drowned during Hurricanes Floyd, Matthew, and Florence, and lagoon breaches during these storms killed many fish and caused algae blooms . While this paper focuses on AGI possibly caused by fecal bacteria, hog manure also contains nitrates, heavy metals, and antibiotic residues that also harm the environment and may adversely affect human health [ 73 – 77 ]. Hurricanes and heavy precipitation events are expected to continue increasing in frequency and intensity in the coming years because of climate change . The intersection of CAFOs and flooding has created complex environmental and climate justice issues that are exacerbated during every hurricane. Areas with hog CAFOs and frequent hurricane flooding in NC contain vulnerable communities that may be at increased risk for AGI after hurricanes. Disaster preparedness and response must consider both environmental and social vulnerabilities to improve health and reduce health disparities in NC. | Study | biomedical | en | 0.999996 |
PMC11696847 | Sex hormones play an important role in adipose tissue distribution, the development of prediabetes and type 2 diabetes mellitus (T2DM), and other cardiovascular risk factors between cisgender men and cisgender women , , . Thus, it is rational to assume that changes in sex hormone profiles due to gender-affirming hormone therapy (GAHT) might also lead to changes in adipose tissue distribution and glucose tolerance. However, there is still a lack of conclusive evidence on the changes brought on by GAHT, especially concerning intraorgan lipid content. Cisgender women generally show less ectopic lipid accumulation than cisgender men , , , , , . To our knowledge, no studies to date have concerned themselves with changes in myocardial and pancreatic lipid content in transgender people under GAHT, and only one study observed a decrease in hepatic lipid content in transgender women and an increase in hepatic lipid content in transgender men after 1 year of GAHT including GnRH analogs . Regional adipose tissue distribution shows a typical dimorphism between cisgender women and cisgender men – visceral adipose tissue (VAT) accumulation is more pronounced in cisgender men, and is associated with a higher risk of cardiovascular and metabolic diseases compared to subcutaneous adipose tissue (SAT) accumulation in the lower body, which is more prominent in cisgender women , , , . Additionally, in cisgender men, abdominal subcutaneous and visceral adipose tissue both seem to be similarly associated with insulin resistance, whereas insulin resistance in cisgender women seems to be particularly associated with the accumulation of visceral adipose tissue in the abdominal region . During GAHT, transgender people tend to gain weight and experience a change in body composition − transgender women gain fat mass and lose lean mass, whereas the opposite happens in transgender men , , , , , , . These results are relatively consistent across studies with different measurement methods (bioimpedance, magnetic resonance, dual-energy X-ray absorptiometry (DXA)) and different treatment protocols , . However, it is important to note that previous studies often included medications that are no longer used in contemporary GAHT protocols, such as ethinylestradiol , , . GAHT also impacts lipid profiles and glucose metabolism. In transgender women, feminizing hormone therapy leads to generally favorable lipid profile changes, resembling premenopausal cisgender women, with lower LDL cholesterol and triglyceride levels compared to pre-GAHT , , , , , , . Additionally, GAHT is associated with no to only slightly detrimental effects on glucose homeostasis and insulin resistance in transgender women, generally observed as an increase of fasting insulin and the HOMA-IR, without significant glucose level changes , , , . In transgender men, the data is somewhat inconsistent − some studies report elevations of triglycerides and decreases of HDL cholesterol, while others also note an increase of LDL cholesterol compared to baseline , , , , , , . Studies into GAHT effects on glucose metabolism in transgender men note either no changes or slight improvements in insulin resistance without changes in glucose levels , , . This study aimed to investigate short-term changes in intraorgan lipid content, the subcutaneous and visceral adipose tissue distribution, as well as changes in lipid and glucose homeostasis in transgender individuals after 6 months of GAHT. 
We focused on localized MR-spectroscopy and imaging to determine abdominal VAT/SAT as well as the lipid content of the myocardium, liver, and pancreas, which has not been sufficiently investigated in people under GAHT so far. This monocentric longitudinal study was conducted at the Department of Endocrinology and Metabolism and the Highfield MR Centre of Excellence at the Medical University of Vienna between 2019 and 2022. The study adhered to the Declaration of Helsinki and was approved by the local ethics committee. Before participating in the study, all participants provided written informed consent after receiving thorough information. The study included 15 transgender women (assigned male at birth) and 20 transgender men (assigned female at birth). Two study visits were conducted for each participant: the first before the initiation of gender-affirming hormone treatment (GAHT), and the second visit 6 months after treatment started. GAHT was indicated and administered entirely separately from the study, with treatment regimens and duration of the treatment not influenced by participation in the study. Transgender women included in the study used either transdermal or oral estrogen medication, usually in combination with cyproterone acetate; transgender men received intramuscular or transdermal testosterone. Due to the limited sample size, we were unable to stratify for the mode of application. Such stratification was further complicated by patients not necessarily adhering to one mode of application during the treatment. The study included transgender participants diagnosed with “gender dysphoria in adolescents and adults”, classified under the number 302.85 in the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) who were yet to begin GAHT and were in generally good health. Exclusion criteria included severe diseases (neurological and/or internal), significant abnormalities upon routine screenings or physical examination, pregnancy, general contraindications for magnetic resonance exams (i.e., non-MR-conditional devices or implants), current substance abuse, and/or non-compliance with the study protocol. All measurements were performed at the outpatient clinic of the Department of Endocrinology and Metabolism and at the High Field MR Centre of Excellence after an overnight fast of at least 8 h, with all patients undergoing the same set of examinations on both study visits. First, the participants underwent a magnetic resonance imaging and spectroscopy exam (MRI/MRS) in a 3-Tesla magnetic resonance device Magnetom Prisma Fit (Siemens Healthineers, Erlangen, Germany) to measure the lipid content in the myocardial, hepatic, and pancreatic tissues, and the distribution of VAT and SAT. The MRI and MRS measurements were performed in accordance with established methods at the High Field MR Centre of the Medical University of Vienna. Myocardial lipid content was measured with electrocardiogram-gated 1 H-MR spectroscopy evaluating the spectral signals acquired from the interventricular septum, similar to the exams conducted in previous studies , . The hepatic lipid content was determined by short echo time single-voxel MRS, whereas lipid content in the pancreas was quantified by multi-echo Dixon imaging sequences delivering fat fraction images; both of the measurements were done following previously published protocols , , . 
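As a simplified illustration of how the pancreatic fat fraction is obtained from Dixon water and fat images (and, analogously, how MRS lipid content is expressed relative to the water signal), the following R sketch computes a voxel-wise signal fat fraction and averages it over a region of interest. This is a didactic simplification that ignores T1/T2* bias and noise corrections; the inputs (water_img, fat_img, roi_mask) are hypothetical, and this is not the study's actual reconstruction or analysis pipeline.

```r
# Didactic simplification with hypothetical inputs; not the study's analysis pipeline.
# water_img and fat_img: numeric matrices of water and fat signal from a Dixon reconstruction
# (or, analogously, water and lipid peak areas from localized MR spectroscopy).
fat_fraction <- function(water_img, fat_img) {
  100 * fat_img / (water_img + fat_img)  # percent signal fat fraction, computed voxel-wise
}

ff_map <- fat_fraction(water_img, fat_img)
mean(ff_map[roi_mask], na.rm = TRUE)     # mean fat fraction within a region-of-interest mask
```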
Additionally, the amounts of SAT and VAT were measured using magnetic resonance imaging at the height of the intervertebral disc between L2/L3 in an axial slice in T1-weighted images, enabling the calculation of the visceral to subcutaneous adipose tissue ratio. Following the MR exam, the patients underwent extensive bloodwork to assess the levels of hormones (luteinizing hormone, follicle-stimulating hormone, 17-β-estradiol, testosterone, sex hormone-binding globulin, androstenedione, dehydroepiandrosterone sulfate, 25-hydroxyvitamin D 3 ) and baseline metabolic parameters (fasting glucose, fasting insulin, parameters of lipid metabolism (total cholesterol, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, triglycerides, glycosylated hemoglobin (HbA 1c ) and a 75 g oral glucose tolerance test (OGTT). During the 2-hour OGTT, blood was drawn through an intravenous cannula placed in the cubital vein at 30-minute intervals (i.e., at baseline, after 30, 60, 90, and 120 min after glucose ingestion) to assess the time course of glucose, insulin, and c-peptide and to enable the subsequent calculation of surrogate markers of insulin sensitivity. We utilized the updated homeostasis model assessment for insulin sensitivity (HOMA2-%S) and the homeostasis model assessment for β-cell function (HOMA2-%β), which were calculated using the HOMA Calculator Version 2.2.3., available online at https://www.rdm.ox.ac.uk/about/our-clinical-facilities-and-mrc-units/DTU/software/homa to assess the insulin sensitivity and insulin secretion. The abdominal circumference was measured at the lower border of the rib cage, whereas weight was measured with an electronic scale (SECA 877/888) in light clothing. All laboratory assessments were conducted at the ISO 9001 central laboratory of the Clinical Institute for Laboratory Medicine at Vienna General Hospital, the exact methods can be found at the website www.kilm.at . The data was tested for normal distribution by the Shapiro-Wilk test. In the case of data following a normal distribution, the results are presented as mean (±standard deviation), or as median (interquartile range) if the data did not follow normal distribution. In order to assess the differences between the baseline values and values after 6 months of GAHT, the data was analyzed using the Wilcoxon signed-rank test. Possible correlations between the variables of interest were calculated using Spearman's rank correlation coefficient. Correlations were calculated for the values at follow-up. The level of significance was set at α = 0,05. The statistical analysis was done using IBM SPSS Statistics software version 29.0.0.0 (IBM, New York, USA). In 2 individuals, we were unable to complete the full MR examination due to physical proportions (1 TW) and claustrophobia (1 TM). The mean age of TW participating in our study at baseline was 35,33 years (±11,00), while for TM, the mean age at the beginning of the study was 23,80 years (±5,16). As expected, the hormone profiles of transgender persons after 6 months of GAHT changed to resemble the hormone profiles of the cisgender people of their affirmed gender. Detailed hormone profiles at baseline and after 6 months of GAHT can be found in Table 1 . Table 1 Hormone panels in transgender men and transgender women at baseline and after 6 months of gender-affirming hormone therapy (GAHT). The data is given as mean ± standard deviation (SD) if normally distributed, or as median and interquartile range (IQR) if not normally distributed. 
LH = luteinizing hormone, FSH = follicle-stimulating hormone, bioavailable T. = bioavailable testosterone, SHBG = sex hormone-binding globulin, DHEAS = dehydroepiandrosterone sulfate, 25-OH-Vitamin D = 25-hydroxy-Vitamin D. Mean (±standard deviation) or Median (interquartile range) Transgender women Transgender men Hormone panel Baseline After 6 months Baseline After 6 months LH (mIU/ml) 4,1 (2,8–5,9) 0,3 (0,2–0,7) 6,5 (5,5–9,0) 3,8 (2,6–9,4) FSH (mIU/ml) 3,1 (2,0–4,2) 0,3 (0,2–0,9) 4,3 (3,7–5,9) 4,7 (2,3–6,3) Prolactin (ng/ml) 9,9 (8,0–16,5) 26,1 (16,8–45,7) 16,4 (11,0––25,3) 14,0 (9,6–19,3) Progesterone (ng/ml) 0,15 (0,04–0,27) 0,19 (0,07–3,26) 0,34 (0,27–5,16) 0,26 (0,11–0,92) 17-β-estradiol (pg/ml) 33,0 (21,0–53,0) 124,0 (68,0–204,0) 68,5 (45,8– 135,0) 52,0 (39,8–98,8) Testosterone (ng/ml) 4,40 (4,22–6,00) 0,20 (0,1–0,41) 0,36 (0,29–0,50) 7,05 (3,52–9,36) Bioavailable T. (ng/ml) 2,47 (1,73–2,71) 0,06 (0,04–0,14) 0,12 (0,10–0,18) 3,38 (1,14–5,69) SHBG (nmol/l) 38,0 (±16,1) 48,9 (±20,1) 48,5 (32,6–65,8) 26,7 (22,6–44,5) Androstenedione (ng/ml) 1,26 (±0,56) 0,88 (±0,43) 1,80 (±0,69) 2,47 (±0,97) DHEAS (μg/ml) 2,76 (±0,93) 2,63 (±1,29) 2,92 (±1,07) 3,11 (±1,16) 25-OH-Vitamin D (nmol/l) 46,7 (±21,2) 56,2 (±21,2) 49,4 (±25,0) 54,4 (±25,9) In transgender women, we did not detect statistically significant differences in organ lipid percentages in the MR-based measurements after 6 months of feminizing GAHT compared to baseline, despite the median hepatic and myocardial lipid contents decreasing numerically and the mean pancreatic lipid content increasing numerically. The mean subcutaneous fat area increased and the mean visceral fat area decreased numerically, which was in line with the observed significant decrease in the VAT/SAT ratio . However, we did not observe any significant changes in weight, BMI, or abdominal circumference after 6 months of feminizing GAHT. Fig. 1 Boxplots of the visceral to subcutaneous adipose tissue ratio (VAT/SAT ratio) at baseline and after 6 months. The asterisk above the solid line symbolizes a significant change in the VAT/SAT ratio in transgender women after 6 months of GAHT at the significance level α = 0,05. In transgender women, mean concentrations of total and LDL cholesterol decreased significantly, while the concentrations of triglycerides increased minimally. In addition, we observed a significant increase in the HOMA2-%β index accompanied by a significant decrease in the HOMA2-%S after 6 months of feminizing GAHT. The detailed results for the cohort of transgender women can be found in Table 2 . Table 2 Outcome measures in transgender women at baseline and after 6 months of gender-affirming hormone therapy. The data is given as mean ± standard deviation (SD) if normally distributed, or as median and interquartile range (IQR) if not normally distributed. MR = magnetic resonance. VAT/SAT ratio = ratio of visceral adipose tissue to subcutaneous adipose tissue. SAT = subcutaneous adipose tissue. VAT = visceral adipose tissue. LDL-C = low-density lipoprotein cholesterol. HDL-C = high-density lipoprotein cholesterol. HbA 1c = glycated hemoglobin. HOMA2-%S = updated homeostatic model assessment for insulin sensitivity. HOMA2-%β = updated homeostatic model assessment for β-cell function. BMI = body mass index. P-values marked with an asterisk indicate statistical significance at the level of <0,05. 
Transgender women Primary endpoints MR parameters Baseline After 6 months p-value Myocardial lipid content (%) 0,45 (0,19–0,83) 0,40 (0,29–0,62) 0,754 Hepatic lipid content (%) 0,81 (0,40–1,86) 0,67 (0,41–1,44) 0,126 Pancreatic lipid content (%) 5,54 (±2,80) 6,66 (±3,11) 0,071 VAT/SAT ratio 0,930 (0,649–1,287) 0,758 (0,424–0,900) 0,011* SAT area (mm 2 ) 13,730 15,038 0,096 VAT area (mm 2 ) 13,983 10,830 0,064 Secondary endpoints Metabolic parameters Total cholesterol (mg/dl) 161 (±48) 149 (±38) 0,020* LDL-C (mg/dl) 93 (±41) 83 (±34) 0,020* HDL-C (mg/dl) 47 (42–60) 43 (39–49) 0,462 Triglycerides (mg/dl) 82 (60–117) 85 (57–97) 0,035* Fasting glucose (mg/dl) 87 (±7) 85 (±6) 0,054 Glucose after 120 min (mg/dl) 109 (±21) 106 (±23) 0,532 Fasting insulin (μIU/ml) 10,8 (±4,2) 13,3 (±4,0) 0,069 Insulin after 120 min (μIU/ml) 59,3 (30,3–85,2) 65,6 (50,3–82,4) 0,427 HbA 1c (%) 4,9 (±0,4) 4,9 (±0,3) 0,318 HOMA2-%S 83,03 (±31,11) 64,27 (±18,01) 0,047 HOMA2-%β 128,11 (±35,80) 156,80 (±39,49) 0,020* Secondary endpoints Anthropometric parameters Weight (kg) 80,0 (67,0–87,5) 79,5 (70,7–91,5) 0,529 BMI (kg/m2) 23,60 (22,60–27,13) 24,36 (22,57–26,17) 0,463 Abdominal circumference (cm) 85,0 (78,0–94,0) 87,5 (81,0–93,0) 0,209 In this cohort, levels of estradiol at follow-up did not correlate with any of the MR-based or metabolic parameters. On the other hand, testosterone at follow-up significantly negatively correlated with the amount of VAT at follow-up (r ρ = -0,566, p = 0,035). Additionally, the level of testosterone was also associated with myocardial lipid content (r ρ = 0,696, p = 0,006). In transgender men, the MR-based measurements of intraorgan lipid percentages did not show any significant changes after 6 months of testosterone treatment, albeit myocardial lipid content showed a decreasing tendency. In transgender men, we observed a tendency towards an increase of VAT, which did not reach statistical significance. There were no significant changes in SAT or the VAT/SAT ratio after 6 months of GAHT in this cohort. However, we observed significant increases in weight, BMI, and abdominal circumference after 6 months of masculinizing hormone treatment with testosterone. We did not detect any significant changes in lipid profiles after 6 months of masculinizing GAHT. No changes in fasting or postprandial glucose and insulin, or in the calculated indices for insulin sensitivity and β-cell function, were detected in transgender men. Nonetheless, transgender men exhibited a significant increase in HbA 1c after 6 months of testosterone treatment. The detailed results for the cohort of transgender men can be found in Table 3. Table 3 Outcome measures in transgender men at baseline and after 6 months of gender-affirming hormone therapy. The data is given as mean ± standard deviation (SD) if normally distributed, or as median and interquartile range (IQR) if not normally distributed. MR = magnetic resonance. VAT/SAT ratio = ratio of visceral adipose tissue to subcutaneous adipose tissue. SAT = subcutaneous adipose tissue. VAT = visceral adipose tissue. LDL-C = low-density lipoprotein cholesterol. HDL-C = high-density lipoprotein cholesterol. HbA1c = glycated hemoglobin. HOMA2-%S = updated homeostatic model assessment for insulin sensitivity. HOMA2-%β = updated homeostatic model assessment for β-cell function. BMI = body mass index. P-values marked with an asterisk indicate statistical significance at the level of <0,05.
Transgender men Primary endpoints MR parameters Baseline After 6 months p-value Myocardial lipid content (%) 0,61 (0,30–0,99) 0,29 (0,22–0,51) 0,084 Hepatic lipid content (%) 0,42 (0,25–1,10) 0,53 (0,34–0,78) 0,965 Pancreatic lipid content (%) 5,60 (3,88–7,54) 5,97 (3,94–7,16) 0,795 VAT/SAT ratio 0,367 (0,325–0,512) 0,409 (0,298–0,505) 0,433 SAT area (mm 2 ) 16,714 16,390 0,658 VAT area (mm 2 ) 5999 6840 0,171 Secondary endpoints Metabolic parameters Total cholesterol (mg/dl) 163 (±27) 156 (±34) 0,888 LDL-C (mg/dl) 92 (±26) 98 (±35) 0,365 HDL-C (mg/dl) 52 (±13) 48 (±12) 0,121 Triglycerides (mg/dl) 78 (54–140) 80 (63–142) 0,380 Fasting glucose (mg/dl) 82 (±8) 81 (±10) 0,506 Glucose after 120 min (mg/dl) 102 (85–131) 115 (91–129) 0,538 Fasting insulin (μIU/ml) 10,3 (±4,7) 10,3 (±6,0) 0,970 Insulin after 120 min (μIU/ml) 73,9 (40,6–103,2) 94,4 (49,0–114,8) 0,156 HbA 1c (%) 5,1 (±0,3) 5,3 (±0,4) 0,001* HOMA2-%S 81,40 (60,10–122,48) 84,35 (59,60–146,45) 0,940 HOMA2-%β 140,29 (±48,32) 140,29 (±54,07) 0,940 Secondary endpoints Anthropometric parameters Weight (kg) 67,90 (57,15–76,83) 73,65 (61,25–82,28) 0,015* BMI (kg/m2) 25,60 (±5,35) 26,45 (±4,96) 0,015* Abdominal circumference (cm) 85,5 (74,0–92,8) 88,7 (75,1–100,0) 0,021* In transgender men, we observed a negative correlation between estradiol levels and pancreatic lipid content at follow-up (r ρ = −0,528, p = 0,024), while testosterone did not significantly correlate with any of the MR-based parameters. In the case of metabolic parameters, testosterone was associated with the HOMA2-%S values (r ρ = 0,486, p = 0,030). Our study focused on examining the changes in MR-measured lipid content in the liver, pancreas and myocardium, adipose tissue distribution, and other cardiometabolic risk factors in transgender women and transgender men before initiating GAHT and after 6 months of the treatment. Regarding changes in intraorgan lipid content, we did not detect any statistically significant changes in any organs in either cohort. Despite the results not being statistically significant, there was a trend towards an increase in pancreatic lipid content and towards a decrease in hepatic lipid content in transgender women, while transgender men exhibited a tendency towards a decrease in myocardial lipid content. In transgender women, we observed significant changes in abdominal adipose tissue distribution − in our study, the VAT/SAT ratio measured at a cross-sectional slice at the level of the L2/L3 vertebrae decreased significantly. This significant decrease of the VAT/SAT ratio seems to be the consequence of a non-significant increase in SAT area and a non-significant decrease in the VAT area in the transgender female individuals, illustrated by Fig. 2. In cisgender persons, more pronounced adipose tissue storage in the subcutaneous than in the visceral compartment is more common in premenopausal cisgender women than cisgender men, and seems to be less detrimental for cardiometabolic health than the opposite , , . In contrast, previously published literature noted increases in both SAT and VAT in transgender women using GAHT , , , . Additionally, we did not observe the changes in weight or abdominal circumference noted in previous reports, which might be attributed to the reciprocal changes of VAT and SAT , . However, the results of these studies are not directly comparable due to different follow-up durations and differing GAHT protocols used , . Fig.
2 Differences in VAT and SAT distribution at the level of L2/L3 in a transgender woman before (A) and after (B) 6 months of gender affirming hormone therapy. Overall, in transgender women, we observed a statistically significant decrease in the VAT/SAT ratio after 6 months of feminizing GAHT. Additionally, we observed a tendency towards an increase of SAT and a decrease of VAT in this cohort. In transgender men, we observed a tendency towards an increase in abdominal visceral adipose tissue , which was not statistically significant. Such tendency has also been reported in previous studies , , , . However, this was not accompanied by significant changes neither in the VAT/SAT ratio, nor in the SAT area, which is in contrast with earlier research , , , . The results presented by Tebbens et al. demonstrated a decrease of the ratio of subcutaneous to visceral adipose tissue accompanied by an increase in VAT after 12 months, while other MR-based studies with a 1-year follow up period also note a decrease of subcutaneous abdominal fat , , , . A direct comparison between the results of those studies is not fully feasible due to the differing follow up length and marked differences in the treatment protocols , , . Nevertheless, these results suggest that changes in adipose tissue distribution in transgender men might require a longer time to develop. Fig. 3 Differences in VAT and SAT distribution at the level of L2/L3 in a transgender man before (A) and after (B) 6 months of gender affirming hormone therapy. In transgender men, we observed a tendency towards an increase of VAT, whereas we did not observe any changes of SAT or the VAT/SAT ratio after 6 months of GAHT. When discussing changes in body composition and fat distribution under GAHT, it is important to consider that the variability of the changes and interindividual differences in their extent is high, with a non-negligible amount of individuals not experiencing any changes at all , . We observed a decreasing tendency of hepatic lipid content after 6 months of feminizing GAHT which was not statistically significant, whereas Tebbens et al. observed a decrease of 1,55 % after 12 months in this group. Additionally, we observed a minimal numerical increase in hepatic lipid content in transgender men which was also not statistically significant, whereas Tebbens et al. report a significant increase thereof by 0.83 %. The differences might be due to the shorter duration of our study and differing treatment protocols, since Tebbens et al. also utilized GnRH-analogues. To our knowledge, our study is the first concerning itself with myocardial and pancreatic lipid content in transgender individuals under GAHT. We observed an increasing tendency in pancreatic lipid content and a minimal numerical decrease in myocardial lipid content in transgender women, whereas in transgender men, we noted a tendency towards a decrease in the myocardial lipid content. Despite the interesting trends in both groups, none of the results reached statistical significance, highlighting a need for further exploration after a longer therapy duration. We also examined the possible correlations between the sex hormone levels and the changes in MR-based and metabolic parameters at follow up. In transgender women, levels of estradiol at follow up did not correlate with any of the MR-based or metabolic parameters, while testosterone levels after 6 months of GAHT were moderately negatively associated with VAT at follow up. 
Additionally, testosterone in transgender women also correlated positively with myocardial lipid content, indicating that insufficiently suppressed testosterone concentrations in TW might be associated with higher levels of myocardial lipid content. In transgender men, there was a moderate negative correlation between estradiol levels and pancreatic lipid content, indicating that higher levels of estradiol in TM might be associated with lower levels of pancreatic adiposity. In transgender men, the levels of testosterone did not significantly correlate with any of the MR-based parameters. Regarding metabolic parameters in transgender men, higher testosterone levels were moderately associated with higher insulin sensitivity quantified by the HOMA2-%S index. The changes in lipid profiles in transgender women are in line with the expected changes with a decrease in total cholesterol and LDL-C concentrations , , , , , . Furthermore, we also observed significant changes in the calculated indices for assessing insulin sensitivity and β-cell function. While the HOMA2-%S − quantifying the insulin sensitivity − decreased after 6 months of GAHT, the HOMA2-%β – providing an estimate of the β-cell function − increased, which indicates a worsening of insulin sensitivity with an accompanying increase of insulin production. The previously published literature on the effects of GAHT on the glucose metabolism in transgender women is somewhat conflicting, and notes changes which range from no to slight worsening of glucose metabolism status , . We also explored the possible relationships between visceral and subcutaneous adipose tissue and the HOMA-2 indices. In transgender women, HOMA2-%β was positively associated with the amount of both SAT and VAT. On the other hand, HOMA2-%S was negatively correlated with adipose tissue amount in both compartments, with a stronger correlation with VAT. This is in line with the findings made in cisgender women, in which visceral adiposity seems to be more strongly associated with insulin resistance than subcutaneous adipose tissue . In transgender men, we did not observe any significant changes in the blood lipid profiles. Despite no significant changes in glucose and insulin levels, we detected a statistically significant, albeit small, increase of HbA 1c value, reflecting a slightly worse long-term glucose tolerance after 6 months of testosterone GAHT. This is a finding that is in contrast with the generally no or slightly beneficial changes in glucose tolerance and insulin sensitivity that are observed in transgender men under GAHT , . Previous studies on adipose tissue distribution often utilized ethinylestradiol (EE) as the main estrogen medication in feminizing GAHT, which is associated with significant adverse effects and therefore no longer in use , , . Given contemporary treatment protocols, our study, which excludes EE, offers an updated insight into the changes in adipose tissue distribution measured by MR imaging and cardiometabolic risk factors. Furthermore, we provide a novel insight into the changes in organ lipid content of transgender individuals after 6 months of GAHT using magnetic resonance imaging and spectroscopy, which, to our knowledge, has not been explored so far apart from a single study into hepatic lipid content with a vastly different treatment protocol . Follow-up studies might provide more insight into the long-term changes in organ lipid content and body fat distribution, as well as the possible clinical consequences thereof. 
The differentiation between the effects of different administration methods (oral vs. transdermal estrogen, or transdermal vs. intramuscular testosterone) was impossible due to the small sample size. Additionally, in many cases, patients switch between routes of administration, further complicating a clear stratification. In all cases, the GAHT was titrated to cisgender reference ranges. Previous research into this topic has not reported any significant differences between the effects of different testosterone and estrogen formulations regarding adipose tissue distribution and body composition , . Regarding metabolic parameters, oral estrogen seems to be connected to less favorable changes in lipid profiles compared to transdermal estrogen; in contrast, there seems to be no significant difference between testosterone formulations , , , , . Generally, studies stratifying by different routes of administration are scarce; further studies with larger populations enabling a clear stratification might contribute to more precise insights into the effects of GAHT. While our study detected interesting trends in the changes of intraorgan lipid content and adipose tissue distribution, the results must be interpreted cautiously due to the limited size of the study population and the duration of the intervention. Furthermore, we were not able to account for the psychosocial, dietary, and lifestyle circumstances accompanying the transition process, which should be more closely considered in future research. The lack of statistical significance in some of our findings could potentially be attributed to the limited sample size, implying that our study might have been underpowered to detect more subtle changes, in the sense that a Type 2 error cannot entirely be ruled out. Further research with larger populations and longer follow-up periods is already being conducted and may reveal more definitive findings. Nonetheless, our study provides novel insights into the short-term effects of GAHT on intraorgan lipid content and adipose tissue distribution, though the immediate influence on clinical practice is limited. Future studies elucidating the effects of GAHT stratified by the mode of application could help inform the decisions taken when choosing the most suitable GAHT regime for each individual, taking into account their baseline metabolic profile and adipose tissue distribution, as well as inform clinical practice highlighting possible areas requiring closer observation during GAHT in order to provide the most suitable care for individual patients. The findings provided by our study highlight important areas for further investigation − given the typically long-term and individualized nature of GAHT, future research should involve longer study durations and broader populations also encompassing non-binary individuals and those with less conventional GAHT requirements. Dorota Sluková: Writing – original draft, Project administration, Investigation, Formal analysis, Data curation. Carola Deischinger: Writing – original draft, Project administration, Investigation, Formal analysis, Data curation, Conceptualization. Ivica Just: Writing – review & editing, Methodology, Investigation, Data curation. Ulrike Kaufmann: Writing – review & editing, Supervision. Siegfried Trattnig: Writing – review & editing, Supervision. Martin Krššák: Writing – review & editing, Supervision, Methodology. Lana Kosi-Trebotic: Writing – review & editing, Supervision, Methodology, Funding acquisition, Conceptualization. 
Juergen Harreiter: Writing – review & editing, Supervision. Alexandra Kautzky-Willer: Writing – review & editing, Supervision, Conceptualization. The study is funded by the “Medical Scientific Fund of the Mayor of the City of Vienna”, project number 18036, awarded to Lana Kosi-Trebotic. The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Lana Kosi-Trebotic reports financial support was provided by Medical Scientific Fund of the Mayor of the City of Vienna. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Study | biomedical | en | 0.999995 |
PMC11696849 | Chinese materia medica (CMM), the indispensable marrow of traditional Chinese medicine (TCM), has occupied a dominant position in combating and mitigating the global challenge of the corona virus disease 2019 (COVID-19). A total of 2711 varieties of CMMs were listed in the Pharmacopoeia of the People's Republic of China , representing indispensable constituents of the world's medical arsenal. Each variety harbors a diverse array of potent pharmaceutical compounds . Notably, CMMs exhibit reduced toxicity and fewer adverse effects compared with their synthetic chemical counterparts when employed in disease treatment . Most CMMs come from herbal origins, and many of them are served as daily vegetable supplements on the dining table. Their quality is the primary medical basis for effective prescriptions and clinical efficacy. Genuine CMM quality is intricately influenced by multiple factors, such as germplasm, ecological environment, and harvesting time, eventually reflecting in active metabolites as well as morphological structures . Indeed, disparate environmental conditions precipitate the development of distinct structures and the synthesis of specific bioactive constituents, thereby underpinning the pronounced quality and therapeutic efficacy of CMMs. Consequently, setting up a holistic quality visualization of CMMs will be of great significance for ensuring their clinical efficacy. Visualization of the quality of CMMs means intuitively showing the various external and internal factors that collectively reflect their species, quality, and efficacy in a holistic manner. These factors encompass, but are not limited to, the morphological structures of the organs and tissues, the distribution of active metabolites, and the overall dynamic changes elicited by temporal and environmental fluctuations, eventually allowing the quality of CMMs to be visualized holistically. Both morphology and spatiotemporal distribution of active metabolites in heterogeneous CMMs can be visualized to provide valuable support for its quality assessment. Conventional detection techniques, reliant solely on simplistic qualitative and quantitative analyses targeting specific active compounds, fall short of meeting the exigencies of TCM's modernization. New strategies integrating multidisciplinary approaches, such as optics and biochemistry, have thus been advanced to address this challenge . In pursuit of the most efficacious means of non-destructive quality visualization of the intricate and multifaceted system of CMMs, access to information in series must rely on the combination of advanced scientific instruments and computer tomography; therefore, it requires the new technology from a holistic point of view to investigate and evaluate the quality of CMMs. This review raised several aspects influencing the quality of the CMM: external and internal morphology, active metabolites, and their dynamic changes. What we need is to holistically visualize and measure these relevant factors underlying the CMM quality. In the current study, the high-field magnetic resonance imaging (MRI) combined with data from magnetic resonance spectroscopy (MRS) simultaneously is proposed for the first time as a strategy to monitor structures and metabolites in several CMMs. Furthermore, this study endeavors to point out the application potential of MRI in the quality formation and evaluation of CMMs through the holistic visualization and to provide a forward-looking outlook for future development of the pioneering technique. 
MRI images are obtained from spatially resolved signals in nuclear magnetic resonance (NMR) experiments. MRI possesses a key advantage in performing both static and dynamic measurements. Particularly noteworthy is its capacity to collect data from within samples in a noninvasive way. By this means, the morphology of the internal tissues of any form of opaque sample can be imaged while a range of chemical parameters is evaluated. This sets it apart from conventional analytical methods such as liquid chromatography (LC), gas chromatography (GC), and mass spectrometry (MS), which are burdened by complicated pretreatment processes that may precipitate target compound degradation. Several alternative methods for sample visualization exist, including positron emission tomography (PET), confocal laser scanning microscopy, optical coherence microscopy, optical projection tomography, X-rays, and mass spectrometry imaging (MSI). However, none of them can study samples in such a holistic and non-destructive way as MRI. The common limitation of all optical techniques is that thick specimens must be processed with organic solvents, causing damage to the samples and rendering them incompatible with metabolite analysis. MSI, on the other hand, is constrained to surface imaging or tissue sections. In contrast, MRI transcends such limitations, enabling information acquisition regardless of sample thickness and under non-destructive conditions. A summary of these techniques, alongside their respective strengths and shortcomings, is presented in Table 1.
Table 1 A summary of strengths and limitations of the techniques mentioned above.
Conventional analysis methods (LC, MS, etc.). Strengths: accurate analysis with good reproducibility and low LODs. Limitations: complicated pretreatment, homogenized tissues, and large sample volumes.
Near infrared. Strengths: simple operation, low cost of detection, and non-destructive samples. Limitations: low sensitivity, poor resolution, and a narrow detection range of compounds.
PET. Strengths: high sensitivity, on-site analysis, and real-time detection. Limitations: requires radiolabeling.
MSI. Strengths: wide detection range of compounds simultaneously and high resolution (1−100 μm). Limitations: detection of tissue slices, limited by slicing techniques.
MRI. Strengths: holistic detection of the whole body in situ, non-destructive samples, and fast analysis speed. Limitations: high cost of equipment and moderate resolution (0.05−200 mm).
LC: liquid chromatography; MS: mass spectrometry; LODs: limits of detection; PET: positron emission tomography; MSI: mass spectrometry imaging; MRI: magnetic resonance imaging.
Since the advent of MRI, a large number of applications in plant sciences have come to light. MRI techniques are now available that allow the study of root, stem, and leaf water content, root anatomy, and (radial and axial) transport in these organs in an integrative way. For instance, the quantification of fruit composition in oil palm carried out by Shaarani et al. identified a tissue-specific pattern of oil and water distribution. Similarly, Windt et al. were able to demonstrate that the majority of water translocated into the tomato fruit moves through the xylem rather than the phloem, thus resolving a longstanding challenge in fruit growth modeling. Additionally, MRI has also found applications in the study of certain parameters of fruit quality. In summary, MRI offers the capability to image the whole plant and simultaneously monitor water and metabolite dynamics in the plant.
This function of MRI is consistent with the holistic research theory of TCM. The overall quality of a CMM is reflected in multiple dimensions, which cannot be comprehensively understood by the determination of only one indicator. The difficulty of locating the intricate distribution and corresponding structures of chemical molecules in the whole plant hinders the acquisition of data for exploring the quality of CMMs. Data generated from MRI can make it possible to investigate the localization of metabolites in heterogeneous tissues of CMMs in a sustainable manner. Attempts made by researchers in this field will advance the observation of natural compounds in CMMs by MRI. The MRI analysis of multi-constituent CMMs can simplify the time-consuming and complicated pretreatment process for samples. The analysis offers a new strategy for deeper exploration of the quality of CMMs from a holistic perspective. While MRI has been adopted to image structures or measure the composition distribution of certain plants (such as food and crops), only water and some substances related to growth and development, such as sugars, free amino acids, and lipids, are currently monitored; the distribution of more efficacy-related and quality-responsive compounds in plants has not been reported. Given MRI's capacity to visualize plants from macroscopic to microscopic scales, this study advocates for leveraging MRI to assess the quality of CMMs from a comprehensive perspective encompassing whole body-tissue-metabolite dynamics, thereby achieving holistic visualization of CMM quality. The majority of CMMs are derived from various parts of plants, including roots, stems, leaves, flowers, fruits, and seeds. Therefore, this review examines the applications of MRI in plants and assesses its ability to non-destructively visualize the quality of CMMs, involving monitoring different parts of a single CMM, differences between similar species, as well as changes throughout the processing stages. The objective is to demonstrate the feasibility of MRI technology for the comprehensive visualization of CMM quality. Quality evaluation by morphological identification is a noteworthy method for evaluating the quality of CMMs, as evidenced by the optimal external morphology and internal structure. However, simultaneously acquiring information about both external morphology and internal structure poses a challenge. On one hand, the external morphology is one of the criteria for the quality of CMMs, which is traditionally assessed by experience and microscopic identification. Distinguishing between easily confused and similar species becomes difficult. Visualization of morphology can aid in distinguishing similar parts or species and can provide timely insights into ongoing physiological processes, supporting better accumulation of active compounds. On the other hand, imaging internal structures is hard to achieve by optical microscopy due to their opacity. Although thick opaque samples can now be treated with clearing protocols such as solvent-based clearing, the depth of optical imaging is still limited to a few hundred microns. Proton (1H) NMR images are topological representations of the binding and proportions of mobile water in soft-tissue samples. The images offer a potentially unique means of gaining access to structural, growth, and hydrodynamic information in situ.
Since MRI allows three-dimensional (3D) imaging of mesoscopic structures regardless of sample thickness, it compensates for the limitations of optical microscopes in displaying non-destructive internal images of CMMs. In this study, we use MRI to image several typical CMMs, capturing both the overall morphology and the internal structures at the same time. The morphological structure of living plants reflects their growth conditions and is influenced by physiological ecology and the environment. The overall morphology of plants at any given moment determines the physiological processes that are currently taking place, such as photosynthesis. In addition, the products of these physiological processes are embodied in the accumulation and distribution of metabolites in CMMs, which are ultimately the cornerstone of pharmacological functions. It has been reported that the same species of CMM with different morphological forms possesses different active components, leading to different functions. With the aim of improving photosynthetic distribution, the branches and leaf shapes of herbal medicines have been pruned and altered to better accumulate active metabolites. Real-time collection of intact morphological information on a CMM, and adjustment toward the optimal morphology for nutrient absorption, can benefit healthier growth and better accumulation of active compounds. The morphology reconstructed by MRI, associated with different growing conditions, can promote functional studies of plant quality. However, in view of the constraints of small coils and conventional probe heads, visualization of the whole body of a CMM is not as easy as for specific organs. Another question worth considering is whether the inherent insensitivity of NMR should be compensated for by placing as much material as possible in the coil to obtain an adequate signal-to-noise ratio (SNR). Improvements in equipment have made visualizing the whole plant possible. Whole plants of the small hydrophyte Chamaegigas intrepidus and the larger tropical liana Ancistrocladus heyneanus have been successfully recorded in high-field magnets. The most commonly used approach was constructing an appropriately built coil, as well as a modified probe head for specific species, if needed. In the present study, the microstructure of the whole body and the capsule of the well-known CMM plant Dendrobium huoshanense (D. huoshanense) were observed with a specific coil and magnet that allowed enough tissue to generate adequate signals. In addition, information about the compounds contained in specific regions of the imaged object can be obtained simultaneously. The specific regions of root and stem in Fig. 1A were marked out separately as representative displays. Conventional T1-weighted pulse-sequence MRI was applied here to highlight the free water signal and image internal structures. In addition to providing conventional T1-, T2-, or proton density-weighted structural images, MRI can also provide richer tissue information through more complex imaging methods, such as diffusion tensor imaging (DTI). It can provide more compound information based on the signal of water molecules within the tissue, which can be well reflected in tissue images through the fitting of physical and mathematical models. The distribution and quantification of several proteins were calculated according to whole-tissue DTI parameters.
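For readers unfamiliar with the contrast mechanisms referred to above, two standard textbook relations (generic expressions, not models or parameter values taken from this study) summarize how T1/T2 weighting and diffusion weighting shape the measured signal:

```latex
% Spin-echo signal: proton density \rho, repetition time TR, echo time TE
S \;\propto\; \rho\left(1 - e^{-\mathrm{TR}/T_1}\right)e^{-\mathrm{TE}/T_2}

% Diffusion weighting: b-value b, apparent diffusion coefficient D
S(b) \;=\; S_0\,e^{-bD}
```

Tissues with a long T1 relative to TR return a weaker signal in T1-weighted images, and fitting S(b) across several b-values is the basic step behind the diffusion-based parameter maps mentioned here.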
Other processing algorithms combined with DTI have shed light on additional measures of the structure and compounds of the detected object. The noninvasiveness of MRI makes it possible to visualize the architecture and anatomy of the target CMM in ways that cannot be achieved by conventional microscopy. Such systems have promising potential for providing information on the specific morphology of a great many CMMs in high-resolution NMR imaging studies.
Fig. 1 The holistic visualization of Dendrobium huoshanense (D. huoshanense). (A) High-resolution three-dimensional projection view of the whole plant of D. huoshanense by magnetic resonance imaging (MRI). (B, C) Metabolic profiling in the stem (red) (B) and root (blue) (C) of D. huoshanense by magnetic resonance spectroscopy.
The organs of CMMs have evolved as medicinal parts, and their quality can be reflected by a range of changes in their internal structures during development. In the visualization of the whole-body morphology of the CMM mentioned above, internal structural imaging of individual medicinal organs can also be acquired with MRI simultaneously. Many of the roots and rhizomes of medicinal plants, where photosynthetic products are stored, can be multifunctional in clinical treatments. The typical Chinese medicine "Gan Cao" is the root and rhizome of the herbal plant Glycyrrhiza uralensis. Furthermore, Ginseng Radix et Rhizoma (Ren Shen), Rhizoma Coptidis (Huang Lian), and Salviae Miltiorrhizae Radix et Rhizoma (Dan Shen) are also well-known Chinese medicines derived from roots and rhizomes. The architecture of roots and their microenvironment can regulate the growth of a CMM. Roots absorb nutrients and water to ensure normal development of the CMM and the biosynthesis of multiple active components. Plants can adapt to the external environment, which is partly characterized by changes in root morphology. The spatial distribution, surface area, and architecture of roots may affect the nutrient absorption and compound accumulation of a CMM. Accurate clarification of root morphology can help a CMM to accumulate active compounds. However, the configuration, sampling, and cleaning of roots may damage their structures, which makes clarifying root morphology difficult. In addition to architecture, growth conditions or diseases inside the root are also hard to observe and deal with. Reconstruction of the internal structure of various tissues within plants using MRI can facilitate studies of compound accumulation and distribution. Technological advancements in MRI have made it possible to model the 3D geometry of the rhizosphere not only in liquid or transparent media but also in soil or sand. Poorter et al. followed the real-time development of the more complicated horizontal distributions of the root of Hordeum vulgare. The morphological and physiological components observed by MRI might explain the detected growth patterns and suggest a suitable environment for plant development. Previous research obtained the external and internal structures of radishes, with distinct visibility of vessels in the xylem and phloem of Raphanus sativus, by MRI. Afterwards, MRI was utilized as a technique to clarify the quality of structures and functional properties during subsequent developmental stages. The roots of D.
huoshanense are aerial roots, which are exposed to the moist air to absorb water and nutrients and then transport them through the xylem and phloem to various parts of the plant. There is a hydrophobic barrier between the aerial parts of higher plants and their environment, called the cuticle, which contains both epicuticular waxes and intracuticular waxes. The epicuticular waxes form crystals of different morphological types, such as massive crusts, plates, granules, and tubules with a hollow center. These various structures not only prevent water loss and exogenous attack in plants, but also play a relevant role in building epidermal structures. When attempting to observe the structure of the root of D. huoshanense by MRI, we discovered tiny air bubbles on the epidermis. We speculated that these could correspond to the epicuticular waxes that D. huoshanense deposits during growth. More importantly, the detailed structure inside the root was uncovered simultaneously when the whole body of D. huoshanense was imaged by MRI.
Fig. 2 The simultaneous visualization of structure and metabolite profiles of Dendrobium huoshanense (D. huoshanense). (A) Three-dimensional FLASH magnetic resonance imaging (MRI) with 40-μm isotropic ultra-high resolution. The arrows point to tiny bubbles inside the roots. (B) Internal images of a very dry capsule of D. huoshanense acquired by MRI in axial and coronal slices. (C) Chemical structures of differential chemical constituents in the stem and root of D. huoshanense. (D) Distribution of chemical constituents in the roots of D. huoshanense and its similar species Dendrobium moniliforme (D. moniliforme).
In addition, the internal structures of medicinal parts (roots, fruits, seeds, stems, and leaves) need to be evaluated when they are exploited as raw materials for TCM prescriptions. Whether their growth conditions are healthy or not exerts a strong influence on the clinical functions of CMMs. The same challenge exists for fruits used as medicinal organs. Detecting the internal structure and quality of fruits in time is a crucial strategy to enhance their functions. Attempts have been made to develop a noninvasive method of evaluation for the internal quality of agricultural products, which can also be exploited as Chinese medicinal materials. Moreover, application modes in agricultural products can provide new ideas for detecting the quality of CMMs. MRI is one of the most potent strategies to evaluate the internal quality of CMMs based on the accumulated data. Microstructure determines the mechanical and transport properties of fruit tissues and the internal quality of horticultural products. The fruit of the herbal medicine Malus pumila is widely known for its antioxidant, anti-inflammatory, and anti-cancer activities. MRI-based algorithms can efficiently help identify apples of better quality, since discriminating between bruised and non-bruised apples requires only two image scans and simple computations (a minimal sketch of this kind of comparison follows below). In 2015, Mazhar et al. made good use of 1H-MRI to monitor bruise expression and internal quality over time in Persea americana fruit. 1H-MRI has also been applied in the determination of the degree of maturity in, for instance, jujube. Data provided by MRI reflected the water status and migration dynamics of jujube during the blackening process, which improved the internal quality of blackened jujube and has practical significance for promoting deep processing and industrial development.
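The exact two-scan computation used in the apple study is not reproduced in the text; the following is a minimal, illustrative Python sketch (the array contents, the mean normalization, and the 20 % threshold are assumptions, not the published algorithm) of how two co-registered scans could be compared with simple arithmetic to flag candidate bruised regions.

```python
import numpy as np

def bruise_mask(scan_a: np.ndarray, scan_b: np.ndarray, rel_change: float = 0.2) -> np.ndarray:
    """Flag voxels whose signal differs strongly between two co-registered MRI scans.

    scan_a, scan_b: arrays of the same shape (e.g., proton-density images taken
    before and after storage). rel_change is an arbitrary illustrative threshold
    on the relative signal difference.
    """
    a = scan_a.astype(float)
    b = scan_b.astype(float)
    # Normalize each scan by its own mean so that global intensity scaling cancels out.
    a /= a.mean()
    b /= b.mean()
    # Relative difference map; the small epsilon avoids division by zero in dark background.
    diff = np.abs(a - b) / (np.abs(a) + 1e-9)
    return diff > rel_change

# Hypothetical usage with synthetic data standing in for two MRI scans:
rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.05, size=(64, 64))
bruised = healthy.copy()
bruised[20:30, 20:30] *= 0.6          # simulate a darker, water-disturbed region
mask = bruise_mask(healthy, bruised)
print("suspect voxels:", int(mask.sum()))
```

The point of the sketch is only that a pair of scans plus elementwise arithmetic is enough to localize a suspect region; real pipelines would add registration and noise handling.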
When there are changes inside the fruit, MRI can also be used to immediately detect quality in the living organism, including Chaenomeles sinensis. This provides an answer to the vital issue of detecting the internal conditions of fruits noninvasively. Under the premise of ensuring the integrity of the fruit, the measurements obtained by MRI on the dynamics of nutrients inside the organ during storage can provide scientific guidance on quality improvement. According to the studies above, it may be possible to use MRI to directly visualize whether the internal structure and conditions of medicinal parts of CMMs are healthy or not. Moreover, the MRI view of a very dry capsule of D. huoshanense is presented here to support this hypothesis. The images revealed the free liquid water content and distribution within the capsule as bright areas; the higher the free liquid water content, the brighter the region appeared. The content of water in the marginal area was higher than that in the central area inside the fruit of D. huoshanense. This may be caused by the presence of more seeds with high lipid content in the central region. This provided intuitive feedback for us to observe the distribution of seeds and the migration of water. Non-destructive imaging by MRI can help us monitor the ever-changing states inside the capsule in real time as it grows, further contributing to better quality control of medicinal organs. The corresponding author, Professor Kai Zhong, applied MRI combined with MRS to examine the internal microstructures of plant organs and reflect their quality more than two decades ago. This demonstrates the feasibility of detecting, in real time, the internal quality conditions of CMMs whose roots or stems serve as the medicinal parts. There have also been numerous publications focusing on the applications of MRI in investigating the morphological structure of other tissues and tracking activities, including growth and ripening. The structures of whole pea (Pisum sativum) seeds were digitized for visualization of seed anatomy, which made it feasible to measure how volume and proportional sizes changed over time. The development of barley grains from anthesis to maturity and the formation of distinctive phenotypes of rice seeds exposed to specific conditions could also be captured by MRI. The tissue architectures of stems and leaves were likewise ideal for the collection of high-resolution images because they suit the MRI probe well. In summary, MRI can image the morphology of the whole plant of a CMM while also obtaining the internal structure and information of individual organs or tissues non-destructively. Quality control of organs used as traditional medicinal parts, such as roots and storage fruits, can assure the therapeutic effects of CMMs. MRI bridges a gap by giving access to data that traditional means can obtain only after the sample has been destroyed. Thus, fewer samples are required and the analysis is more efficient when MRI is used to measure the quality of CMMs. CMMs' quality guarantees their clinical efficacy and is closely related to the pharmacodynamic material basis, i.e., active metabolites. Based on the characteristics of multiple components, CMMs can contribute greatly to the medical field by transforming the condition from an abnormal to a normal state.
The content and distribution of secondary metabolites differ among CMMs due to environmental and genetic factors, and they are also distributed differently among the medicinal parts of the same CMM. This will have a major impact on their quality and directly affect their efficacy. However, the traditional quality evaluation and component detection methods for CMMs generally require complex pretreatment of the sample (e.g., extraction, separation, and enrichment), followed by the use of LC, GC, LC-MS, or GC-MS methods to analyze the chemical composition. MRI relaxation-time curves reflecting the contents of water, starch, lipid, and other components in samples can be obtained in a short time, which is superior to other techniques. It is possible to observe not only the dominant 1H resonance of water but also the considerable resonance lines emerging from protons in other active compounds. Resonance lines from protons in different chemical compounds are usually differentiated by their distinctive chemical shifts and mapped spatially by chemical shift imaging (CSI). What leads to the various chemical shifts is the distinct shielding effect of the surrounding electrons and nuclei against the external magnetic field. The spatial distribution, transport, and conversion of metabolites in a plant can be mapped well by a series of high-resolution spectra obtained by MRI. The spatial biosynthesis, transport, and metabolism of active metabolites are subject to specific regulation by complex metabolic networks. Phytochemicals in specific regions further lead to the various pharmaceutical functions of different botanical parts of medicines. More importantly, there are similarities in the major metabolites between different medicinal parts, suggesting that they can be replaced by each other in some circumstances. To provide references for the development and utilization of a CMM when it is considered as a whole, a visual strategy of comprehensively mapping metabolites in medicinal parts and other underutilized tissues will be of high value. Structural images of plants are usually acquired at the end of CSI, aiming to topographically associate the metabolite spectroscopic data with the matching tissue structure by MRI. Higher concentrations of sucrose or amino acids in plants give resonance lines that are especially favorable for spectroscopic MRI measurements. This can be dated back to 1994, when MRI was established as a method for noninvasive measurement and localization of sucrose distribution in Ricinus seedlings. A combination of the longitudinal section from a 3D MRI model and the metabolite distribution within the embryo sac of growing pea seeds has also been reported. The content and distribution of sucrose were subtly distinguished from those of glutamine and alanine as the legume embryo went through the storage process and responded to environmental signals. In addition to sugars and amino acids, the distribution and flow of metabolites such as lipids in plants have been another target of MRI applications. Various lipid maps and related 3D models of seeds and fruits have already been elaborated. Several studies have explored the gradients of lipid storage in developing and living soybean seeds and correlated them with photosynthesis and plastid differentiation in soybeans. Quantitative profiling of the grain in vivo demonstrated that lipid deposition was mainly compartmentalized in the embryo and the aleurone layer. The same analyses were also conducted for quantitative imaging of lipids in oilseed rape and fennel mericarps.
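The chemical shift underlying the CSI approach described above is conventionally defined as follows (a standard NMR definition, not a formula specific to this study):

```latex
% Chemical shift in ppm, relative to a reference resonance frequency \nu_{ref}
\delta \;=\; \frac{\nu_{\text{sample}} - \nu_{\text{ref}}}{\nu_{\text{ref}}} \times 10^{6}\ \text{ppm}
```

Because the shielding by surrounding electrons differs between chemical environments, each metabolite contributes resonance lines at characteristic δ values, which is what allows the spatial maps discussed here to be attributed to individual compounds.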
Tissue-specific images of a particular metabolite in the plant can be mapped by MRI with reasonable spatial resolution and acquisition time. It has also been possible to screen various bioactive compounds that are hard to distinguish by hyphenating MRI with other techniques for metabolite quantification. Assimilation of nitrate and ammonium, in the form of the accumulation pattern of free NH4+ in Picea abies, was revealed by 14N- or 15N-MRI. The extent to which the nitrogen source influenced the composition of the free amino acid pool in roots, stems, and needles was examined. A combination of metabolite spectroscopy and morphology imaging of whole plants can provide a chance to holistically visualize both their active compound accumulation and functional tissue information. In the MRI experiments conducted on the whole body of D. huoshanense, we monitored the metabolite profiles of the root and stem. The MRS data were processed in jMRUI; the raw data were then read into MATLAB, and the water signal peaks were calibrated. The readings of all peaks were normalized to obtain the relative intensities and spectra displayed in Figs. 1B and C. In addition, the D. huoshanense samples tested by MRI were also analyzed on a 600 MHz NMR instrument to validate the acquired data, with reference to a previous study. In detail, the freeze-dried powder of the whole plant of D. huoshanense and biological replicates dissolved in D2O were examined with 1H NMR spectroscopy. The 1H NMR spectra were processed, auto-phased, chemical-shift referenced, and baseline corrected using the TopSpin data processing software. The Natural Products Magnetic Resonance Database (NP-MRD) and the Human Metabolome Database (HMDB) were used to search for metabolites in D. huoshanense. The characteristic peaks in the 1H NMR spectra were assigned to these constituents based on the databases and the reported chemical shifts for the metabolites in D. huoshanense, which are listed in Table S1. An NMR database of the constituents in D. huoshanense was then built up, and total correlation spectroscopy (TOCSY) 1H–1H and heteronuclear single-quantum coherence (HSQC) 1H–13C 2D spectra were acquired for validation. Since the MRS data of the root and stem were obtained under the same conditions and in the same plant, the normalized integral values of these characteristic peaks represented the relative contents of the chemical components. These compounds, with their different contents and related pharmacological effects, were also annotated by alignment with the peaks acquired by 1H NMR metabolomics (Table 2). All structures of the compounds in Table 2 are drawn in Fig. 2C. A greater variety of chemical constituents was obtained in the stem than in the root. Furthermore, the integral values of most characteristic peaks were also significantly higher in the stem than in the root. This may be the reason why the stem of D. huoshanense is usually the main medicinal part used in clinical treatments.
Table 2 The chemical constituents with different contents in the stem and root of Dendrobium huoshanense (D. huoshanense) detected by magnetic resonance imaging (MRI). Columns: number; chemical shift (ppm); compound; pharmacological effects; references.
1 | 0.98 | (3R,6R)-3-Hydroxyl-α-ionone | Anti-inflammatory activity
2 | 1.00 | (S)-4-Isobutyl-3-oxo-3,4-dihydro-1H-pyrrolo[2,1-c]oxazine-6-carbaldehyde | Antivirus activity
3 | 1.40 | Grasshopper ketone | Anti-inflammatory, antitumor, and neuroprotective activities
4 | 2.59 | Dihydroconiferyl dihydro-p-coumarate | Acetylcholinesterase inhibitory and antioxidant activities
5 | 2.63 | Dendronbibisline B | Anti-tumor activity
6 | 2.68 | Naringenin | Antivirus, anti-inflammatory, anti-cancer, and neuroprotective activities
7 | 2.70 | Dihydroresveratrol | Anti-inflammatory, anti-cancer, and intestinal protective activities
8 | 2.75 | Batatasin III | Anti-inflammatory and antitumor activities
9 | 2.80 | Coniferyl p-coumarate | Antioxidant activity
10 | 3.18 | (+)-Syringaresinol | Neuroprotective, anti-inflammatory, antioxidative, and α-glucosidase inhibitory activities
11 | 3.44 | Lirioresinol A | Antivirus and anti-inflammatory activities
12 | 3.45 | N-trans-Coumaroyltyramine | Anti-inflammatory, acetylcholinesterase inhibitory, and antimycobacterial activities
13 | 3.47 | trans-N-Feruloyltyramine | Anti-inflammatory, anti-tumor, and antioxidant activities
14 | 3.50 | (+)-Lyoniresinol | Tyrosinase inhibitory, neuroprotective, and anti-cancer activities
15 | 3.62 | p-Methoxycinnamic acid | Anti-inflammatory, anticancer, anti-atherosclerotic, and neuroprotective activities
16 | 3.82 | 4,4′-Dihydroxy-3,5-dimethoxybibenzyl | Anti-cancer activity
17 | 4.17 | Medioresinol | Anti-inflammatory and ischemic stroke-preventing activities
18 | 4.58 | Sinapyl p-coumarate | No pharmacological reports
Not only the different parts of one CMM but also the differences between it and similar species can be visualized by MRI. Parallel analysis of the sites of lipid deposition and images of the layer anatomy offered the possibility of linking lipid accumulation to seed development. This approach has been used in two similar oat cultivars, with different colors used to represent different concentrations of lipids. Differences between them in the principal location of lipid deposition and in lipid content were discovered, further highlighting the difference in the relationship between energy storage and carbon allocation. This approach can also be applied to distinguishing very similar species of CMMs and seeking quality markers. Similar species of CMMs are easily confused and sometimes cannot be distinguished clearly on the basis of morphology. If the distribution of the active constituents can also be taken into account, CMM identification becomes easier. D. huoshanense and its similar species Dendrobium moniliforme (D. moniliforme) are hard to distinguish due to their similar morphology. MRI performed on these two species and their biological replicates can be applied to clarify their differences in active metabolites. The main metabolites distributed in the roots of these two species were found to differ. There were significant differences in the contents of the chemical components at 1.00 and 3.50 ppm. 1H NMR measurements of D. huoshanense and D. moniliforme were also obtained to verify the data from the MRI. As displayed in Fig. S4, the relative intensities of the two compounds with chemical shifts at 3.5 and 1.0 ppm in these two species were consistent between the MRI and NMR measurements. In this way, comprehensive changes between different parts of one variety, or between similar varieties, can be detected by measuring multiple indicators noninvasively.
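The original spectra were processed in jMRUI and MATLAB; as a minimal illustrative sketch of the peak-normalization and comparison step described above (the peak windows, synthetic spectra, and sum-to-one normalization are assumptions, not the authors' exact pipeline), relative intensities could be computed as follows.

```python
import numpy as np

def relative_peak_intensities(ppm, spectrum, peak_windows):
    """Integrate a 1D spectrum over ppm windows and normalize to the summed area.

    ppm          : 1D array of chemical-shift values (ppm), increasing order
    spectrum     : 1D array of real spectral intensities, same length as ppm
    peak_windows : dict mapping a label to a (low_ppm, high_ppm) integration window
    Returns a dict of relative intensities that sum to 1.
    """
    areas = {}
    for name, (lo, hi) in peak_windows.items():
        sel = (ppm >= lo) & (ppm <= hi)
        areas[name] = np.trapz(spectrum[sel], ppm[sel])
    total = sum(areas.values())
    return {name: area / total for name, area in areas.items()}

# Hypothetical windows around the two diagnostic shifts discussed in the text.
windows = {"peak_1.00_ppm": (0.95, 1.05), "peak_3.50_ppm": (3.45, 3.55)}

# Synthetic stand-ins for root and stem spectra (real data would come from jMRUI/MATLAB exports).
ppm = np.linspace(0.5, 4.5, 4000)
root = np.exp(-((ppm - 1.00) / 0.02) ** 2) + 0.4 * np.exp(-((ppm - 3.50) / 0.02) ** 2)
stem = 0.6 * np.exp(-((ppm - 1.00) / 0.02) ** 2) + 1.2 * np.exp(-((ppm - 3.50) / 0.02) ** 2)

print("root:", relative_peak_intensities(ppm, root, windows))
print("stem:", relative_peak_intensities(ppm, stem, windows))
```

Comparing the resulting relative intensities between root and stem, or between D. huoshanense and D. moniliforme, mirrors the comparisons reported in Figs. 1B-C and S4.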
Such noninvasive, multi-indicator comparisons can help us determine the material basis of CMMs in conjunction with the related medicinal parts mentioned above and obtain more comprehensive data. As an imaging technology, MRI can not only visualize the overall microstructure of CMMs but also obtain NMR data on metabolites to understand the differences in the types and contents of compounds between different tissues or plants. Therefore, MRI has the potential to evaluate and inspect the quality of CMMs through direct qualitative and quantitative analyses of the marker components of medicinal materials and, at the same time, enable the identification of potential active substances. The quality of a CMM is closely related to its phenological period, special processing, and environment. The accumulation and mutual transformation of active components show a dynamic trend with the growth and processing of the CMM. As reviewed above, MRI enables non-destructive imaging of the whole body and tissues and real-time mapping of the spatial distribution of compounds, colocalizing them with their botanical structures. The prominent effects of CMMs are based on their optimal character and high quality, which require real-time detection of dynamic changes by MRI. The quality of CMMs is influenced by various conditions, such as the harvesting period and the year of growth. The contents of soluble saccharides and amino acids have been reported to change significantly with the age of Coptis chinensis. The harvest period (including the harvest year, month, or even day) of medicinal parts is one of the critical factors affecting the quality of TCM. It follows certain rules, namely, an overall evaluation balancing the dynamics of active component accumulation against the yield of medicinal parts. However, these two indicators are sometimes inconsistent, which makes real-time detection of the dynamic changes of both in a CMM urgent. MRI enables visualization of metabolite distribution throughout plant growth. The development of barley grains from flowering to maturity was visualized in terms of changes in the distribution of lipids and other metabolites in organs and tissues. Verscht et al. displayed the accumulation pattern of sucrose in the phloem of Ricinus seedlings and the changes between normal and starvation conditions (sucrose deprivation or a cotyledon petiole break-off). The data offered by MRI enable a direct connection between structural imaging and metabolite imaging of dynamic chemical compounds, precisely identifying and localizing metabolites within plant tissues during the whole growth and development process. This can be used to determine the optimal harvesting period and medicinal parts. The medicinal plant needs to be concocted and processed before being applied to TCM preparations, during which active compounds change dynamically. Traditionally, raw and steamed Panax notoginseng possess distinct pharmacological functions, and there are spatiotemporal changes of metabolites in particular parts during the steaming process. Accurately and noninvasively visualizing the spatiotemporal variation of metabolites during CMM processing is of great significance for clarifying the pharmacological effects of medicines. The moistening process of Rehmanniae Radix was detected quantitatively by low-field nuclear magnetic resonance and imaging (LF-NMR/MRI) technology. That work attempted to elucidate the scientific indications of the moistening process by investigating changes in water absorption and expansion kinetics.
In our review, the morphological and material-basis changes of the roots of Salvia miltiorrhiza during sweating processing were also revealed by MRI. This further shed light on the dynamic metabolite transport and exchange of medicinal plants in situ during certain unique processes through straightforward co-registration of the MRI technique with isotope-labeling tracing. Structural imaging and metabolite analysis before and after concoction can help clarify the special quality formation of CMMs.
Fig. 3 Differences in Salvia miltiorrhiza during sweating processing revealed by magnetic resonance imaging (MRI). (A, B) Axial (A) and sagittal (B) images of the root before sweating processing. (C, D) Axial (C) and sagittal (D) images of the root after sweating processing.
The quality of CMMs can be largely influenced by the environment. Imaging techniques may help visualize plant changes under environmental factors, whether these involve the accumulation of active constituents and their effects on function, or structural alterations. MRI can also be applied in clarifying the quality formation of plants in specific environments. It can holistically elucidate the real-time responses of plants to specific stresses, whereas conventional physiological trials tend to emphasize the response of a specific organ or tissue. Another problem is that only one stress is generally used to simulate the environment of a CMM, whereas the real growth environment may involve the interaction of a multitude of stresses. MRI of plant physiology in situ may well be the solution to this problem. MRI can help clarify water transport and dynamics when various species are exposed to drought. It allowed visualization of the changes taking place in Quercus ilex plants that experienced varying degrees of drought. MRI can be employed to uncover processes and mechanisms in CMMs that are already highly adapted to drought environments. Inspired by these studies, MRI can also be exploited as a better alternative for identifying individuals with superior water use efficiency and quality. In addition, the detailed responses of plants and their individual organs to subzero temperatures have also been examined by MRI. Their tolerance to cold was primarily reflected in the fact that water or sucrose was transported slowly. Furthermore, procedures have also been developed based on MRI to observe how various tree species deal with cold stress in situ. This implies a potential application of MRI in exploring how the special quality of a CMM forms under a specific environment. Although MRI shows promise for the future task of evaluating the quality of CMMs, several technical challenges remain. On the one hand, the expense of the equipment has hindered the prevalence of MRI. Low-cost MRI systems have been developed experimentally, but their resolution is far lower than that of other devices. On the other hand, in general circumstances, the imaging time significantly increases as the resolution requirement rises (a rule-of-thumb relation is sketched below). To overcome these challenges, it is crucial to primarily focus on developing high-resolution and effective MRI devices based on preliminary theoretical research. Specific coils, probes, and equipment can be created or improved for a particular CMM plant to achieve functional visualization with higher resolution and less time. CMMs of high quality are often accompanied by the optimal morphology.
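The resolution-time tension mentioned above can be summarized by two standard rules of thumb for 3D acquisitions (generic relations, not figures from this review): scan time grows with the number of phase-encoding steps and averages, while signal-to-noise ratio scales with voxel volume and the square root of the total sampling time.

```latex
T_{\text{scan}} \;\approx\; \mathrm{TR}\times N_{y}\times N_{z}\times N_{\text{avg}},
\qquad
\mathrm{SNR} \;\propto\; \Delta x\,\Delta y\,\Delta z\,\sqrt{T_{\text{acq}}}
```

Halving the voxel size in all three dimensions therefore cuts the SNR roughly eightfold unless the measurement time is increased substantially, which is why dedicated coils that boost sensitivity are attractive for small CMM samples.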
Using more specific devices and coils for a particular CMM can make it possible to better identify its unique morphological characteristics. Additionally, MRI can be applied to further studies of CMMs given its ability to image CMM structures and analyze dynamic distributions of metabolites. For example: 1) Revealing the dynamic transport of specific compounds in a single plant during the whole growth and development process. The MRI technique, combined with stable isotope labeling, can track the uptake and exchange of specific metabolite molecules in plants in real time, in situ, and non-destructively. 13C and 23Na can be applied to acquire NMR metabolic information and imaging associated with specific molecules or their metabolic derivatives. As CMMs are rich in multiple constituents, stable-isotope labeling combined with MRI may offer a new perspective on proteins and polysaccharides with known structures. Thus, based on its non-destructiveness, MRI may help reveal the morphological and compositional changes of typical metabolites in a single living CMM during growth and development. 2) Clinical use of CMMs is not limited to direct use. Many CMMs are processed into various forms of pharmaceutical preparations to facilitate administration or to increase efficacy and reduce toxicity. The quality inspection of such Chinese patent medicines prepared as capsules and tablets is also an essential part of the safety and effectiveness of clinical medication. The current monitoring methods need to destroy these samples and then use various chromatographic or MS techniques for evaluation, which are cumbersome, time-consuming, and laborious. Given the characteristics of MRI, it may be possible to directly visualize the internal composition of these drugs to achieve rapid detection of drug quality. 3) Establishment of a visual database of the 3D structures and chemical compositions of CMMs in different environments and at different ages would have far-reaching significance in the fields of Chinese medicine identification, Chinese medicine chemistry, and Chinese medicine quality evaluation and inspection. MRI technology exhibits great potential in the quality evaluation of CMMs, specifically focusing on visualizing the structures and distribution of metabolites during dynamic processes. Nonetheless, little research has been conducted by MRI on the absorption, distribution, metabolism, and excretion (ADME) dynamics of CMM metabolites in animals. Combining MRI applications in animals with the comprehensive database can contribute to elucidating the quality and efficacy of CMMs. 4) Multimodal imaging combinations can help visualize the quality of CMMs more comprehensively. Optical imaging is a practical technique for directly observing the microstructure of matter but is limited by penetration depth. It is possible to couple MRI with optical imaging to integrate their advantages, such as a holistic view of visualization and intuitive characteristics, which may bring an unexpected breakthrough in the field of imaging. Moreover, imaging techniques with high spatial resolution (e.g., MRI and computed tomography (CT)) are often integrated with others with high sensitivity (e.g., PET and fluorescence imaging) to provide more detailed information about several diseases.
The proper selection of multimodal imaging combinations will provide more powerful approaches to visualizing the quality of CMMs and more innovative applications of MRI. This review discusses the potential of MRI technology for studying CMMs. It highlights the capability of MRI to model the internal and external structures of CMMs, visualize the compartmentalization and transport of metabolites, track the dynamics of metabolism, and detect the quality of CMMs in specific environments. Overall, MRI opens up an entirely new perspective for non-destructively accessing CMMs' quality in situ over time. Although CMMs differ in shape and size, observing a different CMM only requires switching to the corresponding coils and measurement parameters. The same set of MRI devices can be used across different fields, leading to lower costs and wider applications. As a newly introduced technology, the non-destructive nature of MRI enables repeated imaging experiments throughout the lifetime of a CMM, allowing comprehensive monitoring of its morphology, active constituents, and responses to various processes. Multiple pieces of information can be monitored in situ simultaneously using this method, laying the cornerstone for comprehensive analysis and precise quality evaluation. This appears to be an efficient and accurate method urgently needed in the modernization and development of TCM. Moreover, the factors mentioned above are directly responsible for the unique quality of the CMM. The key to holistic quality visualization of traditional CMMs lies in accurately measuring and systematically combining their typical traits, which can be accomplished by MRI as reviewed here. Potential application modes of MRI for visualizing the quality formation and evaluation of CMMs have been put forward. Based on existing research, MRI is expected to find expanded use in the future, and the development of a complete MRI-based detection methodology will require collaboration among multiple disciplines and organizations. By adopting a cooperative strategy, the adoption and utilization of MRI technology can be accelerated, further enhancing its role in visualizing the quality of CMMs. Jing Wu: Validation, Writing – original draft. Kai Zhong: Conceptualization, Writing – review & editing. Hongyi Yang: Methodology. Peiliang Zhang: Data curation. Nianjun Yu: Resources. Weidong Chen: Resources. Na Zhang: Data curation. Shuangying Gui: Supervision. Lan Han: Supervision. Daiyin Peng: Conceptualization, Funding acquisition, Project administration. The authors declare that there are no conflicts of interest.
PMC11696855 | The plant cell is surrounded by a complex and dynamic structure known as the cell wall, which plays a crucial role in various processes throughout plant growth, including cell elongation, plant defense, and stress tolerance. Plants have two distinct types of cell walls, the primary and the secondary cell wall, each with a unique composition influencing its stiffness and function. In young cells, the primary cell wall, located between the middle lamella and the cell membrane, facilitates cell elongation and differentiation due to its flexibility and porosity. In contrast, the secondary cell wall is synthesized in specialized cells within the vascular system, fibers, and other sclerenchymatous cells. This layer, deposited between the primary cell wall and the cell membrane after the cell has finished its expansion, is more rigid and less porous. The primary cell wall is primarily composed of cellulose, pectins, hemicelluloses, and proteins, while the secondary cell wall consists mainly of cellulose, hemicelluloses, and lignin. The composition and proportion of each polysaccharide, as well as their interactions, vary based on the plant species, tissue, and cell type, making it challenging to understand the intricate role of the cell wall in regulating diverse physiological processes. Due to the complexity and numerous interactions among cell wall polysaccharides, most current methods for analyzing plant cell walls are destructive, requiring polysaccharide hydrolysis, which often results in the loss of structural information and of the spatial distribution of each component within the wall. This limitation has driven the need for simpler cell wall models to help elucidate the synthesis and organization of cell wall structures. Approximately 20 years ago, Arabidopsis seed coat mucilage was characterized for the first time, and it has since become a model system for studying polysaccharide synthesis, modification, and organization. The mucilage in the Arabidopsis seed coat is a gel-like structure produced by specialized epidermal cells called mucilage secretory cells (MSCs) between 6 and 12 days after pollination (DAP). This process encompasses several well-described phases: synthesis, deposition, maturation, and desiccation of mucilage, all occurring within MSCs. Upon hydration of mature dried seeds, the polysaccharides present in the mucilage expand, exerting pressure on the MSC radial walls. This pressure causes rupture of the radial cell wall, releasing both the soluble and the adherent mucilage layers, which together form the characteristic Arabidopsis seed coat mucilage.
Fig. 1 Structure and composition of Arabidopsis seed coat mucilage. A. Mucilage release from mature dry seeds. This panel illustrates the process of mucilage release, providing an overview of the seed coat epidermal cells. m, mucilage; c, columella; rw, radial wall; dw, distal wall; SM, soluble mucilage; AM, adherent mucilage. B. Composition of Arabidopsis mucilage. Arabidopsis mucilage primarily consists of pectins, specifically RG-I and HG. These pectins are anchored to the seed surface through interactions with RG-I xylan-cellulose fibers. Minor components, such as galactoglucomannans, AGPs, and RG-II, are not represented here. c, columella; rw, radial wall; dw, distal wall; SM, soluble mucilage; AM, adherent mucilage. C. Pectin sugar content of the mucilage layers. This panel shows the average sugar content (expressed in mg/g of dry seeds) in both mucilage layers.
The values are averaged from data obtained in studies by Macquet et al., 2007a, Macquet et al., 2007b, Saez-Aguayo et al., Fabrissin et al., and Parra-Rojas et al.
Arabidopsis mucilage contains all the major groups of polysaccharides found in plant cell walls, predominantly pectins (90–95 %) along with smaller amounts of additional components. The mucilage primarily consists of rhamnogalacturonan-I (RG-I), with lesser quantities of homogalacturonan (HG), cellulose, galactoglucomannans, xylans, xyloglucan, and arabinogalactan proteins (AGPs). Although rhamnogalacturonan-II has been suggested to play a role in Arabidopsis mucilage organization, its presence within the mucilage layers remains unclear. Arabidopsis mucilage, though a jelly-like structure, is highly organized and consists of two distinct layers: Soluble Mucilage (SM) and Adherent Mucilage (AM). As indicated by their names, the SM layer is readily extracted by incubating mature seeds in water; gentle shaking removes most of the SM, exposing the AM layer, which remains tightly bound to the seed coat integument. Both layers share a similar composition, primarily rich in rhamnogalacturonan I (RG-I); however, the AM contains higher levels of other components, such as HG, cellulose, and hemicellulose. The SM fraction is a water-soluble, primarily RG-I-based layer, with the advantage of being extractable without harsh treatments, allowing direct access to the native rheological structure of its polysaccharides. This feature represents a notable advantage in studying cell wall components, as many traditional methods are destructive and may lead to the loss of critical information about the polymer's native structure and ionic interactions within the matrix. RG-I in mucilage is characterized as a flexible polysaccharide with a backbone of galacturonic acid (GalA) linked to rhamnose (Rha) and sparse side-chains of arabinan, galactan, or arabinogalactan. In wild-type Arabidopsis Col-0 seeds, RG-I from SM has an average mass of 600 kDa, consisting of approximately 1845 [Rha-GalA] dimer subunits, which represents an exceptionally long polymer. Notably, while most RG-I in SM has this 600 kDa mass, a small fraction (about 10 %) has a very high molecular weight (> 40,000 kDa), corresponding to self-assembled RG-I structures similar to cellulose or amyloid polymers. RG-I synthesis occurs in the Golgi apparatus and is mediated by enzymes such as rhamnosyltransferase 1 (RRT1) and the recently identified rhamnogalacturonan galacturonosyltransferase 1 (MUCI70/RGGAT1). Additionally, galacturonosyltransferase-like 5 (GALT5) has been implicated in RG-I formation, acting as a terminator of RG-I polymer length. It is proposed that RG-I is initially synthesized with arabinan and galactan side-chains, which are subsequently degraded during mucilage maturation by α-arabinofuranosidases like BXL1 and β-galactosidases like MUM2/BGAL6, yielding a smooth RG-I polymer. However, the glycosyltransferases responsible for synthesizing these mucilage side-chains have not yet been identified. The adherence of the AM layer to the seed surface is driven by pectin attachment to cellulose microfibrils. Unlike pectins and hemicelluloses, which are synthesized in the Golgi apparatus, cellulose microfibrils are produced at the plasma membrane by cellulose synthase complexes (CSCs), membrane-bound protein complexes formed by cellulose synthases (CESAs) and other associated proteins.
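Returning to the RG-I size estimate above, a rough consistency check can be made from approximate anhydro-residue masses (values assumed here for illustration, not taken from the cited studies):

```latex
M_{[\text{Rha-GalA}]} \approx 146\ \text{Da} + 176\ \text{Da} \approx 322\ \text{Da},
\qquad
n \approx \frac{6.0\times10^{5}\ \text{Da}}{322\ \text{Da}} \approx 1.9\times10^{3}\ \text{dimers}
```

which is of the same order as the roughly 1845 [Rha-GalA] subunits reported for the 600 kDa SM polymer.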
The intermolecular interactions between cellulose and pectin in composite hydrogels occur only when pectin is present during cellulose synthesis and depend on its degree of methylesterification. In mucilage, CESA1, CESA2, CESA3, CESA5, CESA9, and CESA10 are thought to be involved in cellulose biosynthesis. Among these, CESA3 and CESA5 are key for cellulose synthesis in mucilage and the formation of the cellulosic ray, while CESA2 and CESA9 contribute to radial wall thickening. Although CESA1 and CESA10 are likely candidates based on their expression during mucilage synthesis, their roles remain to be demonstrated. Live-cell imaging of GFP-CESA movement upon hydration strongly suggests that cellulose is deposited in a coil-like structure around the cytoplasmic column by CSCs, unwinding during mucilage extrusion to form the cellulose ray in MSCs. Mutants affecting cellulose synthesis in MSCs, such as cesa3, cesa5, and cesa10, exhibit altered cellulose assembly, leading to complete solubilization of the AM layer in water. A similar loss of AM adherence occurs in mum5–1 xylosyltransferase mutants, which display a redistribution of pectin from the AM to the soluble layer. Linkage analysis of the RG-I-enriched fraction in mum5–1 reveals reduced xylan and cellulose linkages compared to the wild type, underscoring that xylan linked to RG-I is essential for mucilage adherence to the seed surface. While the precise positioning of these RG-I side chains remains unclear, recent studies on urgt246 and muci70 mutants show that xylan content correlates with RG-I polymer production, emphasizing xylan's importance for mucilage structure and seed adhesion. Homogalacturonan (HG) is the second main pectin domain in mucilage, comprising approximately 5 % of the total non-cellulosic polysaccharides in mucilage. HG consists of a backbone of galacturonic acid, which can be methylesterified. Synthesized in the Golgi apparatus in a highly methylesterified form, HG is subsequently secreted into the apoplast, where it is demethylesterified by pectin methylesterases (PMEs), which catalyze the release of methanol from the methylesterified carboxyl groups of GalA. PME activity is, in turn, regulated by specific proteins called PME inhibitors (PMEIs). In Arabidopsis mucilage, HG has a low degree of methylesterification (DM) of around 8.6 %, indicating precise regulation during mucilage formation. Upon seed imbibition, immunolabeling reveals a structured pattern of HG within the AM, with higher DM in the outer AM and more demethylesterified HG in the inner AM. Studies on Arabidopsis mucilage have identified specific PMEs (e.g., HMS and PME58) and PMEIs (e.g., PMEI6, PMEI13, PMEI14, and PMEI15) that further validate the utility of this model (Table 3). Notably, the strong phenotype of pmei6 mutants, which exhibit delayed mucilage release and a lack of epidermal cell wall fragmentation, raises questions about the origin of HG domains in the AM. Researchers suggest that HG may migrate through the mucilage matrix upon seed imbibition, possibly originating from distal cell wall breakage. Regardless of its precise origin, subtle modifications in HG methylesterification significantly impact mucilage release, as shown in studies of the pmei6–1 mutant. Moreover, research has shown that PMEI6 interacts with PER36 (a class-III peroxidase), forming microdomains in the cell wall that establish optimal properties for effective mucilage release.
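The degree of methylesterification referred to throughout this passage is conventionally expressed as the molar fraction of esterified GalA residues (a standard definition, not a formula specific to the cited studies):

```latex
\mathrm{DM}\,(\%) \;=\; \frac{\text{mol of methylesterified GalA (methanol released on saponification)}}{\text{mol of total GalA}} \times 100
```

On this definition, the roughly 8.6 % value quoted above means that fewer than one in ten GalA residues of mucilage HG carries a methyl ester.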
Additionally, O-acetyl esterification, another key feature of HG structure, is under investigation. Although the specific acetyltransferase remains unconfirmed, studies suggest that trichome birefringence-like (TBL) family proteins may play a role in O-acetylation. Recent work on TBL38, and its role alongside PMEI6 and PER36, has underscored the significance of HG acetylation in ensuring proper mucilage release. Future research is needed to elucidate the balance between HG methylation and acetylation in Arabidopsis mucilage.
Fig. 2 Comparison of the pmei6–1 mutant phenotype with WT Col-0 using Ruthenium Red staining across five different procedures. Seeds were sown on stained agarose and subjected to direct imbibition with RR, which reveals the mucilage released from both mucilage layers. Incubation with different solvents (water, EDTA, CaCl2) removes soluble mucilage, offering insights into the AM structure and potential changes in HG methylesterification and/or mucilage density. Scale bar = 100 μm.
In mucilage, galactoglucomannan (GGM) follows a structural pattern of the repeating disaccharide [4)-β-Glc-(1,4)-β-Man-(1]. Despite being a minor component (approximately 5 % of total sugars) in mucilage, GGM content is higher here than in other tissues. The structure of GGM in mucilage is influenced by glycosyltransferases such as CSLA2 and MAGT1/MUCI10, along with the GDP-Man pyrophosphorylase VTC1 and its regulator KJC1. Despite these insights into GGM structure, its interactions with other mucilage polysaccharides remain unclear. Similarly, arabinogalactan proteins (AGPs) are minor components of mucilage. They consist of a core-protein backbone O-glycosylated by complex carbohydrates, mainly galactose and arabinose, with variable length and domain complexity, and often feature arabinogalactan (AG) type II side chains and a glycosylphosphatidylinositol (GPI) lipid anchor. AGPs have been observed forming direct links with pectins and hemicelluloses, creating a complex network. Although limited information exists on AGP roles in mucilage, studies indicate that SOS5 and FEI2 are part of a pathway responsible for synthesizing seed coat mucilage, which includes pectin and cellulose. Additionally, three AGP glucuronosyltransferases (GLCATa, GLCATb, and GLCATc) and two AGP galacturonosyltransferases (GALT2 and GALT5) have been implicated in seed coat mucilage synthesis; however, their mutants exhibited only mild phenotypes, affecting mucilage solubility and decreasing AM adherence. Recent work by Tan et al. indicates that AGPs are associated with pectins and, particularly in siliques, with HG. This observation raises questions about AGP involvement in mucilage structure: are AGPs linked to HG or to other components? Regardless of the specific linkages, AGPs appear to play a role in organizing pectin components around cellulose microfibrils, an arrangement essential for cellulose ray formation and mucilage adherence to the seed surface. For instance, disruptions in glycosyltransferase family 14 members, responsible for adding Me-GlcA to AG glycans, led to loss of adherent mucilage, altered cellulose ray formation, and changes in seed coat morphology. Variation in AGP content can be effectively detected in the adherent and soluble mucilage layers using anti-AGP antibodies in dot-blot assays. Arabidopsis mucilage is rapidly released upon seed imbibition, making it possible to assess structural mutations by quantifying the total mucilage released.
A rapid method for evaluating total mucilage release is to sow seeds on 0.05 % agarose, with or without Ruthenium Red (RR) staining, or to directly immerse mature seeds in an RR staining solution (typically 0.005 % to 0.02 % w/v). This method is well-suited to identify strong mucilage phenotypes, such as defects in mucilage release or loss of mucilage adherence. For detecting subtler phenotypes, we recommend sowing mature seeds on agarose plates and measuring the halo of mucilage release, while accounting for seed size. Though this approach is slower, it enables more specific screening of subtle mucilage release phenotypes. Another technique to quantify mucilage release involves creating a kinetic profile of mucilage extrusion in RR by visualizing the appearance of the mucilage halo over several minutes, as described by Arsovski et al., Saez-Aguayo et al. (2013) and Parra-Rojas et al. To visualize changes in the AM structure, RR staining can be applied by shaking mature seeds in water to remove the SM and subsequently staining the AM. Enhanced visualization of AM structural changes with RR staining can be achieved by incubation with EDTA or CaCl2 to relax or densify the AM layer. EDTA chelates the calcium ions, relaxing the mucilage matrix by disrupting egg-box structures formed by demethylesterified HGs. In contrast, CaCl2 favors the formation of these egg-box structures, densifying the AM. These imbibition techniques help describe subtle changes in mucilage structure, typically linked to HG methylesterification alterations, as described in the pmei6 mutant line. A more precise technique than RR staining for detecting AM structure changes involves whole-mount assays of AM from mature seeds using antibodies specific to pectin epitopes present in mucilage. To detect changes in HG structure, specific antibodies such as 2F4, JIM5, LM19, JIM7, and LM20 are commonly utilized, as these recognize egg-box structures and HG regions with varying methylesterification degrees and patterns. For detecting RG-I, antibodies such as INRA-RU1, INRA-RU2, and CCRC-M36 are effective. Although detecting RG-I lateral chains is challenging, it may be feasible using LM5 (galactan) and LM6 (arabinan) antibodies. For HC components in mucilage, one can take advantage of the fact that LM25 recognizes three forms of xyloglucans; LM21 labels hetero-mannans; and CCRC-M139 and INRA-AXI detect xylan in AM. Though these antibody signals can be quantified, changes in labeling patterns often provide more information than strict quantification. To visualize cellulose in mucilage, fluorescent dyes such as Calcofluor White and Scarlet Fast Red (S4B or Direct Red) are effective. Changes in amorphous and crystalline cellulose can be observed using CBM28 and CBM3a, respectively. To assess AM density, a method involving Dextran-FITC penetration into the mucilage capsule is valuable. Dextran-FITC, a fluorescently tagged molecule, reveals structural changes in mucilage matrix polysaccharides. It can be diluted in water, EDTA, or CaCl2 to study these changes, and the signal can be quantified using ImageJ. This approach is particularly useful for phenotyping mutants with subtle structural variations. During seed imbibition, mucilage hydrates and expands, exerting pressure on the distal cell wall. This pressure leads to cell wall rupture and mucilage extrusion. Epidermal cells typically break at the corners where the distal and radial cell walls intersect.
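For the halo-based screening and Dextran-FITC density measurements described above, the key step is normalizing the measured mucilage area to seed size so that larger seeds do not inflate the phenotype. A minimal sketch of that normalization is shown below, assuming per-seed areas have already been traced (for example in ImageJ); the genotype names and area values are invented for illustration and are not data from any cited study.

```python
# Minimal sketch (not a published protocol): express the stained mucilage halo
# relative to seed size. Areas would normally come from images traced in ImageJ.
measurements = [
    # (genotype, seed_area_mm2, seed_plus_halo_area_mm2) -- hypothetical values
    ("Col-0",    0.12, 0.55),
    ("Col-0",    0.11, 0.52),
    ("mutant-x", 0.12, 0.31),   # hypothetical mutant with reduced mucilage release
]

for genotype, seed_area, outer_area in measurements:
    halo = outer_area - seed_area          # area of the mucilage halo alone
    ratio = halo / seed_area               # normalization that accounts for seed size
    print(f"{genotype}: halo = {halo:.2f} mm2, halo/seed = {ratio:.2f}")
```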
Certain mutations can affect the stiffness of these cell walls, altering their susceptibility to breakage. To characterize this phenotype, scanning electron microscopy (SEM) is effective for determining the shape and size of the seed coat epidermal cells and columellae in dry mature seeds. Additionally, confocal microscopy, utilizing stains such as Calcofluor or Direct Red 23 to visualize cellulose, can reveal changes in size and shape resulting from altered mucilage release and distal cell wall breakage. This approach has been used to study several mutants, including gosamt and cesa5 mutant lines. Methods for mucilage extraction have evolved significantly. Water has proven effective for extracting SM, as shown by Macquet et al., and traditional pectin extraction with ammonium oxalate did not show substantial differences in SM extraction compared to water in Col-0 lines. It is estimated that SM contains approximately 20 mg of non-cellulosic sugars, primarily GalA and Rha. While the compositions of AM and SM are similar, AM has higher levels of Gal, Ara, and mannose, whereas SM contains more Xyl and cellulose. The proportion of sugars in SM is generally higher, but variations can occur based on plant cultivation conditions during seed maturation. Compared to SM, extracting AM is more challenging due to its properties; AM cannot be easily separated from the seed and requires harsher extraction methods or specific enzymatic actions. In 2007, Macquet et al. developed a method using rhamnogalacturonan hydrolase (RGH) to extract AM by breaking down smooth RG-I attached to the cellulose fibers of the seed surface. Other enzymes, such as endopolygalacturonase (endoPG) or cellulases, were less effective than RGH. Despite its efficacy, RGH is not commercially available. To overcome this limitation, Zhao and Qiao developed a sonication-based method for AM extraction, which is simpler but harsh, potentially causing contamination of sugars from the seed surface into the mucilage. Table 1 summarizes AM sugar content extracted using RGH digestion and sonication.

Table 1 Comparisons of sugars measured in the adherent mucilage with enzymatic digestion and sonication. The table shows the concentrations of sugars forming pectins in AM, represented in mg/g of dry seeds and as percentage. The sonication method yields higher sugar extraction from the AM layer. Data were sourced from Fabrissin et al., 2019; Parra-Rojas et al., 2019 and Parra-Rojas et al., 2023; Saez-Aguayo et al., 2021.

                    ENZYMATIC DIGESTION                                    SONICATION
                    Saez-Aguayo et al., 2021   Fabrissin et al., 2019      Parra-Rojas et al., 2019   Parra-Rojas et al., 2023
Sugars              mg g-1 seed     %          mg g-1 seed     %           mg g-1 seed     %          mg g-1 seed     %
GalA                5.62            44.11      4.85            53.31       5.36            39.97      6.42            46.35
Rha                 4.14            32.5       3.75            41.99       6.06            45.19      4.62            33.36
Ara                 0.25            1.96       0.03            0.34        0.14            1.04       0.42            3.03
Xyl                 0.14            1.1        ND              0           0.62            4.62       0.57            4.12
Man                 0.29            2.28       ND              0           0.23            1.72       0.26            1.88
Gal                 2.3             18.05      0.3             3.36        1               7.46       1.56            11.26
Total sugars (AM)   12.74                      8.93                        13.41                      13.85

The main sugars identified include GalA, Rha, and Glc, though variability among studies complicates direct comparison. Sonication generally yielded higher levels of total sugars, total mucilage, and AM specifically. For individual monosaccharides, sonication resulted in a 29.7 % increase in total sugars compared to AM obtained without sonication (Table 1). Different methodologies did not show significant variation in the detection range of GalA; however, Rha levels varied from 3.75 to 6.06 mg/g of seed.
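To make the relationship between the mg/g values and the percentages in Table 1 explicit, the short sketch below recomputes the percent composition for one study and the average difference in total AM sugars between the two extraction methods. The exact basis used for the reported 29.7 % figure is not stated in the text, so the simple averaging shown here is only one plausible reading of the table, not a reproduction of that calculation.

```python
# Illustrative recalculation in the spirit of Table 1 (not the authors' script).
# Values are taken directly from Table 1; the averaging choice is an assumption.
mg_per_g = {"GalA": 5.62, "Rha": 4.14, "Ara": 0.25,
            "Xyl": 0.14, "Man": 0.29, "Gal": 2.30}        # Saez-Aguayo et al., 2021
total = sum(mg_per_g.values())
percent = {sugar: 100 * amount / total for sugar, amount in mg_per_g.items()}
print(round(total, 2), {s: round(p, 2) for s, p in percent.items()})  # GalA ~44.1 %, Rha ~32.5 %

enzymatic_totals  = [12.74, 8.93]    # Saez-Aguayo 2021, Fabrissin 2019
sonication_totals = [13.41, 13.85]   # Parra-Rojas 2019, Parra-Rojas 2023

def mean(values):
    return sum(values) / len(values)

relative_increase = 100 * (mean(sonication_totals) - mean(enzymatic_totals)) / mean(enzymatic_totals)
print(round(relative_increase, 1), "% more total AM sugar recovered by sonication (on this averaging)")
```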
A similar trend is observed for Gal, with enzymatic digestion yielding amounts between 0.3 and 2.3 mg, while sonication results ranged from 1 to 1.56 mg per gram of seed (Table 1). Glucose data were excluded from Table 1 due to substantial variability, which would skew the relative sugar proportions. With sonication, it is probable that distal and radial cell wall components were inadvertently included in the AM, possibly explaining the higher sugar content. Extraction duration could also influence the detected AM sugar content; for instance, Saez-Aguayo et al. extracted AM by incubating seeds with RGH at 40 °C overnight. Uronic acids were quantified using the m-hydroxydiphenyl method, whereas neutral sugars were analyzed as alditol acetate derivatives by gas-liquid chromatography after hydrolysis with 2 M trifluoroacetic acid at 121 °C for 2.5 h. Fabrissin et al. reported a decrease in total monosaccharides by over 50 %, possibly due to reduced incubation time with RGH, from overnight to 1.5 h. In addition to differences between mucilage extraction methods, quantifying monosaccharides in each layer involves hydrolyzing polysaccharides to break the linkages between sugars. After hydrolysis, individual acidic and neutral sugars are typically quantified using High-Performance Anion-Exchange Chromatography with pulsed amperometric detection (HPAEC-PAD) or Gas Chromatography-Mass Spectrometry (GC–MS). Trifluoroacetic acid (TFA) hydrolysis is often the preferred method due to its rapid reaction kinetics and volatility, which eliminates the need for neutralization. Typically, TFA is used at a concentration of 2 M and heated to 121 °C, with hydrolysis times varying across studies from 30 min to 2 h. It is important to note that TFA effectively cleaves pectin and hemicellulose polysaccharides while leaving cellulose unaffected. However, the monosaccharide yield can vary significantly depending on structural modifications and interactions among polysaccharides. To eliminate non-cellulosic glucose, amylase may be added post-hydrolysis. In mucilage samples, cellulose can be recovered through centrifugation before proceeding with hydrolysis. Besides HPAEC-PAD and GC–MS, other techniques like colorimetric methods can be used to measure GalA and Rha content in mucilage layers. These colorimetric techniques, such as the m-hydroxybiphenyl and orcinol assays, are currently employed for quantifying GalA and total sugars, providing an overview of the primary mucilage components. Cost-effective and rapid, these methods facilitate high-throughput screening of mucilage phenotypes, especially advantageous for small sample sizes typical of dry seeds. Colorimetric methods for GalA measurement are not only cheaper and faster but also require minimal sample quantities. When comparing GalA levels detected via colorimetric assays to HPAEC, colorimetric techniques often yield higher GalA values. This difference occurs because colorimetric methods detect GalA independently of polymer breakdown, unlike HPAEC, which requires full monomerization of GalA and may underestimate GalA content, potentially affecting the Rha/GalA ratio during mutant phenotyping. To improve accuracy, some studies combine neutral sugar quantification through TFA hydrolysis with colorimetric assays for GalA. For Rha quantification, the orcinol method is similarly inexpensive and fast, though it detects both neutral and acidic sugars with different specificities.
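As a concrete illustration of how the colorimetric readouts mentioned above are usually converted into GalA amounts, the sketch below fits a linear standard curve and interpolates an unknown sample. The standard concentrations, absorbance readings and seed mass are invented for the example and are not taken from any of the cited studies.

```python
# Hedged sketch of standard-curve interpolation for a colorimetric uronic-acid
# assay (e.g., m-hydroxydiphenyl). All numbers below are illustrative assumptions.
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a standard curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

standards_ug = [0, 10, 20, 40, 80]              # GalA standards (ug per tube), hypothetical
absorbance   = [0.02, 0.11, 0.21, 0.40, 0.79]   # A520 readings, hypothetical
slope, intercept = linear_fit(standards_ug, absorbance)

sample_abs = 0.33
galA_ug = (sample_abs - intercept) / slope       # interpolate the unknown tube
galA_mg_per_g_seed = galA_ug / 1000 / 0.005      # e.g., if the tube came from 5 mg of seed
print(round(galA_ug, 1), "ug GalA;", round(galA_mg_per_g_seed, 2), "mg GalA per g seed")
```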
Additionally, colorimetric assays can be effective for kinetic studies on mucilage release, as they allow determination of the Vmax of mucilage release, quantified as GalA, in mutants like pmei6. To assess the methanol content of HG domains, a commonly used method involves a colorimetric assay. This assay includes saponifying pectins with 0.2 M NaOH for 1 h (optimized for mucilage), after which the reaction is halted by neutralizing with hydrochloric acid (HCl). The methanol content is then determined using alcohol oxidase activity, following the protocol described in Saez-Aguayo et al. However, this method has challenges, as methanol's volatility necessitates conducting the process at lower temperatures to prevent evaporation. Additionally, determining the degree of methylation requires calculating the methanol-to-GalA ratio. This calculation is complicated because the GalA content includes contributions from both RG-I and HG, which can lead to significant underestimation of the actual degree of HG methylation in the mucilage. For greater accuracy, isolating HG is essential. Saez-Aguayo et al. employed ethanol precipitation of HG after digesting RG-I, but alternative methods are needed given that RG hydrolases are currently not commercially available. Another alternative is to measure mucilage methylation using HPLC methods, as described in Levigne et al. (2002). HG methylation analysis of mucilage by Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry (MALDI-TOF) was used by Saez-Aguayo et al. to elegantly determine structural changes in HG. This technique involves digesting mucilage with polygalacturonases to hydrolyze HG into oligogalacturonides with different degrees of polymerization (DP). By detecting specific signals corresponding to unsaturated GalA oligomers, this method enabled the precise determination of changes in HG methylation. For detailed structural insights into RG-I and hemicelluloses, linkage analysis has also proven effective. This method can detect alterations in RG-I branching, as well as minor changes in xylan, galactan, arabinan and GGM structures within the SM layer. Solid-state NMR (Nuclear Magnetic Resonance) has become a valuable method for investigating the structure and dynamics of cell wall polysaccharides, including mucilage in A. thaliana seeds. Solid-state NMR allows for the analysis of cellulose, hemicellulose, and pectin networks in their natural hydrated state, which is crucial for understanding the polymer interactions in the cell wall matrix. For example, NMR techniques such as cross-polarization and magic-angle spinning (CP-MAS) help reveal how polysaccharide components like rhamnogalacturonan I (RG-I) in mucilage interact with cellulose, a primary structural component, contributing to the mucilage's functional properties. Research utilizing multidimensional solid-state NMR has provided insights into the structural nuances of pectic polysaccharides in Arabidopsis cell walls, which are central to mucilage's physical characteristics. Specific labeling techniques in solid-state NMR, like those with 13 C isotopes, allow researchers to resolve fine structural details in pectins, enabling a better understanding of how modifications to these polysaccharides affect mucilage adhesiveness and hydration properties.
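Returning to the methanol-based DM assay described above, the calculation itself is a simple molar ratio, and the sketch below also illustrates why leaving RG-I-derived GalA in the denominator deflates the estimate. All amounts are invented purely for illustration and are not measurements from the cited work.

```python
# Worked sketch of the degree of methylesterification (DM) calculation:
# DM (%) = mol methanol released / mol GalA x 100. Numbers are hypothetical.
MW_METHANOL = 32.04    # g/mol
MW_GALA     = 194.14   # g/mol (galacturonic acid)

methanol_ug   = 8.0    # methanol released by saponification, hypothetical
total_galA_ug = 600.0  # GalA from HG plus the RG-I backbone, hypothetical
hg_galA_ug    = 180.0  # GalA attributable to HG only, hypothetical

def dm_percent(methanol_ug, galA_ug):
    return 100 * (methanol_ug / MW_METHANOL) / (galA_ug / MW_GALA)

print(round(dm_percent(methanol_ug, total_galA_ug), 1), "% apparent DM (diluted by RG-I GalA)")
print(round(dm_percent(methanol_ug, hg_galA_ug), 1), "% DM after isolating HG-derived GalA")
```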
Laboratories may encounter limitations due to restricted access to advanced technologies (e.g., HPSEC-MALL) and the limited availability of specific cell wall hydrolytic enzymes (e.g., RGH), which hinders comprehensive analysis of mucilage composition, particularly in studies of mutants with subtle phenotypes. To address these challenges, we applied a traditional size exclusion chromatography (SEC) method recently adapted for analyzing Arabidopsis seed mucilage, based on protocols developed for pectin domains in Chilean papaya mucilage. Using 50 mg of dry seeds, the SM was extracted via water imbibition, while AM extraction involved sonication. Mucilage layers were treated directly and loaded onto a Bio-Gel P-30 column for pectin domain separation as described in Sanhueza et al. Uronic acid content was assessed in fractions to identify those containing RG-I, which were then dried, reconstituted in water, hydrolyzed with TFA, and analyzed by HPAEC. To evaluate the method's robustness, we compared WT Col-0 mucilage with that of the bxl1–1 mutant, which presents ramified RG-I. Consistent with Arsovski et al., we observed a 48 % increase in Ara content in bxl1–1 mucilage from SM and AM (0.17 mg/g of dry seed compared to 0.11 mg of Ara/g of dry seed measured in WT Col-0). We observed a similar increase in Ara content in the bxl1–1 mutant in RG-I purified from SM and AM, validating this methodology as a reliable approach for analyzing distinct pectin domains. SEC enabled the isolation of RG-I but also allowed the isolation of oligogalacturonides and the extremely low-abundance rhamnogalacturonan-II, providing material that can be collected for subsequent analyses. Fig. 3 Exploring old-fashioned methods of separating pectins by exclusion chromatography to purify mucilage RG-I. A. Pipeline for the purification of RG-I mucilage from WT Col-0 seeds and bxl1–1 mutant lines, which have been described to have more arabinan side chains. The bxl1–1 mutant shows an over 48 % increase in arabinose content in the SM compared to WT Col-0. B. Elution profile of pectins present in SM from WT Col-0 and bxl1–1 mutant lines. The collected RG-I fractions are indicated, represented schematically as RG-I. C. RG-I from bxl1–1 mutant mucilage contains higher arabinose levels. The separation of RG-I fractions was published in Sanhueza et al. Mucilage layers were extracted from mature seeds, digested with endoPG, and separated on a Bio-Gel P30 column. Fractions 14 to 19 were pooled, and sugar was analyzed by HPAEC-PAD following TFA hydrolysis. Another simple way to isolate RG-I involves ethanol precipitation. Following enzymatic digestion, acidic polysaccharides can be precipitated with a divalent cation salt (e.g., CaCl2) and at least 30 % ethanol. This method allows the collection of larger fragments like RG-I via centrifugation, while leaving oligogalacturonides in solution. Additionally, RG-II can be eliminated by dialysis using a 12,000 MWCO (molecular weight cut-off) membrane, sufficient to exclude dimeric RG-II, estimated at 10 kDa. In this section, we have organized mutants based on their roles in the synthesis and/or modification of RG-I, HG and hemicelluloses. We also compile the observed phenotypes using different techniques (Table 1, Table 2, Table 3). Based on this information, we outline the methods used and provide a roadmap to facilitate phenotyping mucilage mutants according to anticipated mucilage changes.
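A small illustration of the fraction-selection step in the SEC workflow described above: the fractions whose uronic-acid signal rises clearly above baseline are the ones pooled for RG-I analysis. The elution values and threshold below are invented and do not reproduce the published profile in Fig. 3.

```python
# Rough sketch (assumed workflow, not the published script): pick Bio-Gel P-30
# fractions to pool for RG-I based on their uronic-acid absorbance exceeding a
# multiple of the baseline. Fraction numbers and readings are hypothetical.
uronic_acid_a520 = {frac: a for frac, a in enumerate(
    [0.02, 0.02, 0.03, 0.02, 0.03, 0.04, 0.03, 0.04, 0.05, 0.06,
     0.08, 0.12, 0.25, 0.48, 0.66, 0.71, 0.63, 0.44, 0.21, 0.09, 0.04], start=1)}

baseline = 0.05
pooled = [frac for frac, a in sorted(uronic_acid_a520.items()) if a > 3 * baseline]
print(pooled)   # contiguous high-signal fractions would be pooled, dried and hydrolyzed
```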
Table 2 Mucilage mutants altered in RG-I polymer and their corresponding phenotypic traits are outlined. The table includes mutants with altered RG-I structure and associated phenotypes. The genetic functions are indicated as well as their main cytological and biochemical traits. n.r. (not reported). Table 2 Mutant Protein function Mucilage release (SM + AM) RR (AM) phenotype Immunolabeling with anti-CW antibodies Biochemical changes and techniques used Extras Refs. RG-I backbone urgt2 / urgt4 / urgt6 UDP-Rha transporter (to Golgi lumen) No obvious phenotype No obvious phenotype Less labeling with INRA-RU1 (RG-I). Reduced Rha and GalA amounts (Colorimetric and HPAEC-PAD). Increased number of shorter RG-I molecules (HP-SEC). Increased amount of xylans (linkage analysis). Reduced mucilage density with dextran FITC Rautengarten et al. Saez-Aguayo et al. uuat1 UDP-GlcA and UDP-GalA transporter (to Golgi lumen) No obvious phenotype Less staining with EDTA Decreased labeling with CCRC-M36 (RG-I) and LM6 (arabinan). Increased LM20 (high DM HG) labeling. Less Rutenium Red staining of AM after EDTA treatment Reduced GalA, Rha and Xyl content (Colorimetric and HPAEC-PAD). Increased DM of HG. n.r. Saez-Aguayo et al. uuat3 Putative UDP-uronic acids transporter (to Golgi lumen) No obvious phenotype Higher staining with EDTA Increase RG-I labeling with INRA-RU1 Reduced GalA, and Rha in SM and increased GalA in AM (Colorimetric and HPAEC-PAD). n.r. Parra-Rojas et al. mum4/rhm2 Converts UDP-Glc to UDP-Rha No mucilage release n.r. n.r. Decreased Rha and uronic acid amounts (Colorimetric and HPAEC-PAD). Reduced RG-I amount and molecular weight (HP-SEC). Flattened columela (MEB). Western et al., 2001 , Western et al., 2004 Usadel et al., Oka et al. rrt1 RG-I rhamnosyltransferase n.r. Lower halo/volume of extrusion in water n.r. Reduced Rha and GalA amounts (HPAEC-PAD). Increased columella width (MEB) Takenaka et al. gatl5 Putative galacturonosyltransferase n.r. Lower halo/volume of extrusion in water n.r. Reduced GalA and Rha amounts (HPAEC-PAD). Longer RG-I molecules (HP-SEC and linkage). radial cell wall columella width affected (MEB) Kong et al. muci70 / rggat1 RG-I galacturonosyltransferase Lower halo Lower halo/volume of extrusion in water Less labeling or RG-I with CCRC-M36 and INRA-RU1, more labeling of xylan with CCRC- M139 and INRA-AX1 Decreased Rha and GalA content and an Increase in Xyl content (colorimetric and HPAEC-PAD). RG-I polymer with higher and length shorter of RG-I polymer (HP-SEC) Collumelae shape less detectable (MEB), AM less dense (Dextran FITC) Voiniciuc et al. ; Fabrissin et al., 2019 cuaoa1 Cupper amine oxidase. Polyamine metabolism. Putative role in RG-I synthesis No obvious phenotype No obvious phenotype No obvious phenotype Reduced amounts of Rha and GalA in SM and slight increase of GalA in AM (Colorimetric and GC–MS) n.r. Fabrissin et al. RG-I side chains mum2 / bgal6 β-galactosidase no mucilage is released no mucilage is released no mucilage is released after alkali treatment to release mum2 mucilage, there is change in LM5 and JIM7 labeling in walls breakage. LM6 labeling is higher in the mum2 mutant. Increased Gal content in AM. Reduced GalA and Rha content in SM and increased of both sugars in AM. n.r. Dean et al. Macquet et al. ruby Galactose oxidase Mucilage has a WT release With water treatment the halo of AM is larger and more disheveled with stained dark particles n.r. 
Increase in Ara, Rha Gal and GalA content in mucilage extracted with Na 2 CO 3 (HPAEC-PAD, colorimetric assay). Increase in RG-I branching (Linkage). The presence of ruby particles corresponds to mucilage secretory cells detached from the seed coat Šola and Dean rgp1 / rgp2 UDP-arabinose mutase Quick solubilization of mucilage layers. Low AM staining with RR n.r. no data n.r. Rautengarten et al. uaft2 UDP-Arabinofuranose transporter (to Golgi lumen) No visible phenotype Less RR staining after EDTA treatment Reduced INRA-RU1 (RG-I) labeling. Reduced Ara content n.r. Parra-Rojas et al. bxl1 β-xylosidase / α-arabinofuranisidase Patchy and delayed mucilage release No differences with EDTA treatment Increased LM6 (arabinan) labeling in MSC primary cell wall Increased Ara Xyl and Fuc in SM (HPAEC-PAD) and reduced Xyl and Fuc content (HPAEC-PAD). More Ara ramification in SM (linkage) RG-I more ramified with Arabinan side chains (AFM) Arsovski et al. ; Williams et al. Table 3 Mucilage mutants altered in HGs domains and their corresponding phenotypic traits are outlined . The Table includes mutants with altered HG synthesis and modifications, along with their associated phenotypes. Genetic functions are indicated, detailing their cytological and biochemical traits. n.r. (not reported). Table 3 Mutant /AGI number Protein Function Mucilage release (SM + AM) RR Adherent Mucilage (AM) phenotype Immunolabeling with anti-CW antibodies Biochemical changes and techniques used Extras References HG synthesis gaut11 HG galactoturonosyltransferase Patchy and delayed mucilage release with less staining AM thinner and more stained n.r. Reduced Rha, GalA and Xyl. Increased Glc, Man and Gal (GC–MS). Changed in RG-I, HG, arabinan and galactan structure (linkage) Reduced mucilage area with lower density (Dextran FITC). Flattened columella (MEB) Caffall et al. Voiniciuc et al. gosamt1 / 2 / 3 Putative SAM transporters (to Golgi lumen) Delayed mucilage release with less staining Adherent mucilage has WT phenotype in water Decrease JIM7 (high DM HG) AM labeling. Changes in S4B (cellulose) AM labeling. Decrease in JIM7 labeling and increase of LM30 in SM (Dot blots) Reduced DM of HG (colorimetric assay). Reduced Rha, GalA and Xyl in SM and increased amounts of these sugars in AM (HPAEC-PAD). Gaps in radial wall thickness and changes in distal wall length (Calcofluor) Parra-Rojas et al. qua2 / tfa2 / tsd2 HG methyltransferase Reduced mucilage area n.r. Changes in JIM5 (low DM HG) and JIM7 (high DM HG) labeling. Reduced S4B (cellulose) labeling Reduced GalA. Decrease DM of HG. Reduced cellulose content Increased columella area and decreased radial wall thickness (MEB) Du et al. nks1/elmo4 Putative integral protein of a pectin synthesis protein complex. Phenocopy of qua2 n.r. Reduced AM mucilage area n.r. Less GalA content in MS (HPAEC-PAD) n.r. Lathe et al. HG modification pme6/hms Pectin methylesterase n.r. Reduced AM mucilage area after water shaking n.r. Not differences in mucilage chemotype The mucilage phenotype appears to result from alterations in embryo development Levesque-Tremblay et al., 2015 pme58 Pectin methylesterase n.r. Less AM staining after EDTA treatment Changes in LM19 (Low DM HG) labeling after EDTA treatment Increased DM of HG (colorimetric assay). Increased SM and reduced AM sugars in mucilage extracted with EDTA (HPAEC-PAD) Decreased MSC surface area (MEB) Turbant et al. pme31 Pectin methylesterase Less mucilage staining and weak release phenotype AM less stained with EDTA and CaCl 2 n.r. n.r. n.r. 
Zhang et al. pmei6 Pectin methylesterase inhibitor No mucilage release “Snake ski” residue attached to the seed after EDTA treatment and long imbibition in water and RR staining Strong decrease in JIM5 (low DM HG) and JIM7 (high DM HG) labeling Decreased DM of HG. Reduced sugar amounts in SM and increased in AM layers. GalA from HG is strongly reduced in both mucilage layers (HPAEC-PAD and colorimetric assays) Delayed mucilage release. Saez-Aguayo et al. pmei13 Pectin methylesterase inhibitor n.r. Halo of AM mucilage after water treatment is thinner n.r. n.r. n.r. Ding et al. pmei14 Pectin methylesterase inhibitor n.r. Bigger mucilage halo after NaOH treatment Increased 2F4 (HG egg boxes) and LM19 (Low DM HG) labeling. Decrease in LM20 labeling (high DM HG) Reduced methanol content (colorimetric assay). Thickener cell wall (MEB). Increased calcium in mucilage Shi et al. ; Ding et al., 2021 ; Allen et al. pmei15 Pectin methylesterase inhibitor n.r. Double mutant pmei15erf4 have a subtle phenotype with a bigger halo n.r. Double mutant pmei15erf4 have a subtle phenotype with a slight increase in the DM n.r. Ding et al. pmei18 Pectin methylesterase inhibitor Less mucilage staining and weak release phenotype AM less stained with EDTA and CaCl 2 n.r. No changes in GalA composition (Colorimetric assay) n.r. Zhang et al. sbt1.7 / ara12 Subtilase (Subtilisin-like serine proteases). Putative role in PME maturation Delay in mucilage release “Snake skin” residue attached to the seed after EDTA treatment a No evident differences with JIM5 (low DM HG) and JIM7 (high DM HG) but abnormal breakage of the distal wall. No changes in total amount of sugars (HPAEC-PAD). Less methanol content in MS and AM (colorimetric assay) Delayed mucilage release. Mucilage extrusion is better using EDTA, but abnormal breakage of the distal wall. Rautengarten et al. fly1 / fly2 RING E3 ubiquitin ligase. Putative role in PME recycling Reduced mucilage release with capsule formation, particles with a disk structure shape More AM labeling after EDTA treatment. CaCl 2 increased a not mucilage release Changes in JIM5 (low DM HG), JIM7 (high DM HG) and 2F4 (HG egg boxes) labeling Less sugars in mucilage due to mucilage release. No changes in total amount of sugars (HPAEC-PAD) In imbibed seeds cells are detached after imbibition (cryoSEM) Voiniciuc et al. ; Kunieda et al. per36 Peroxidase No mucilage release, phenocopy of pmei6 “Snake skin” residue attached to the seed after EDTA treatment and long imbibition in water and RR staining n.r. n.r. n.r. Kunieda et al., 2013 tbl38 Atypical homogalacturonan acetylesterase No mucilage release No adherent mucilage Changes in LM20 (High DM HG) labeling on developing seed section No data Reduction in LM20 labeling in the surface of MSCs Dauphin et al. Fig. 4 “Muci map-guide” for phenotyping mucilage mutants. This table was created to simplify mucilage mutant phenotyping. It summarizes techniques used for mucilage analysis, indicating the preferred methods recommended methods for mutant phenotyping with alterations in different pectin, hemicellulose, and cellulose polysaccharides. Fig. 4 Over the past two decades, various techniques have led to the characterization of approximately 90 genes involved in mucilage synthesis, deposition, and modification . 
Focusing on genes associated with RG-I synthesis and modification (Table 2), RR staining assays reveal that mutants in UDP-sugar conversion and Golgi nucleotide sugar transporters mum4, uuat1, uuat3, and uaft2 exhibit mucilage release or non-adherent mucilage phenotypes (Table 2). These mutants could also exhibit methylation defects in their AM, potentially explaining the RR staining phenotype observed following EDTA treatment (Table 2). This indicates that when RG-I content does not show a significant reduction in mucilage, cytological assays, such as RR staining, are less effective in detecting mucilage phenotypes. In such cases, sugar quantification using colorimetric methods (e.g., the m-hydroxybiphenyl and orcinol methods) and/or HPAEC-PAD analysis following TFA hydrolysis is recommended for identifying mucilage changes. Immunolabeling with anti-RG-I and anti-galactan antibodies, like the LM5 antibody, also helps detect changes in RG-I distribution within the AM. Further, determining RG-I structure through High-Performance SEC (HP-SEC) and linkage analysis is fundamental for identifying structural changes in RG-I. If HP-SEC and/or linkage analyses are unavailable, traditional SEC, as used for bxl1–1, may be employed. Also, dot-blot analysis using antibodies that target minor mucilage components, such as LM6 (anti-arabinan) and LM5 (anti-galactan), which recognize RG-I side chains, can effectively detect structural changes in fractions collected through SEC and/or extracted mucilage. Genes involved in the synthesis and modification of HG, including methylation and acetylation processes, have been extensively characterized, largely due to mucilage studies (Table 3). A range of mucilage phenotypes has been identified, from strong phenotypes, such as pmei6, sbt1.7 or per36, which do not release their mucilage, to more discrete phenotypes like the delayed mucilage release observed in gosamt mutants. Despite the diversity, mutants affecting HG typically exhibit alterations in AM structure, which are easily observable by RR staining following seed imbibition in water, EDTA, and CaCl2 (Table 3). Therefore, assessing AM structure is often the first step in characterizing HG mutants. Characterizing HG mutants can be challenging, as HG represents only a small fraction of mucilage components, and mutations in HG metabolism sometimes result in minimal changes to the HG methylation pattern. To detect changes more thoroughly, immunolabeling using a panel of anti-HG antibodies (Table S1; JIM5, JIM7, LM20, LM19, and 2F4) has proven effective, revealing both significant and subtle changes in HG methylesterification patterns. For these mutants, it is advisable to quantify methanol and GalA released from pectin to calculate the DM. Before doing so, however, it is essential to separate HG domains by precipitation with ethanol after RG-I hydrolysis or by using SEC, as shown by Saez-Aguayo et al. and Sanhueza et al. This ensures accurate DM assessment by excluding GalA from the RG-I backbone, which would otherwise underestimate the true DM of HG in mucilage. Due to the low content of hemicelluloses in mucilage, mutants typically exhibit mild mucilage phenotypes, with total mucilage release resembling that of WT lines (Table 4). Interestingly, for the AM, all mutants except muci10 exhibit a reduced halo or changes in AM staining when incubated in water, but not in EDTA or CaCl2 (Table 4).
Generally, hemicellulose mutants have a more soluble mucilage layer and reduced AM, as hemicelluloses contribute to AM adherence ( Table 4 ). Immunolabeling provides more detailed insights for these mutants, with positive labeling seen using AX1, CCRC-M139, LM21, and CBM3a antibodies. Additionally, oriented crystalline cellulose can be detected based on light birefringence, while changes in cellulose organization and structure are generally assessed using calcofluor, pontamine, and CBM staining . All mutants display changes not only in glucose content but also in other sugars, highlighting the importance of performing sugar determination using HPAEC-PAD or GC–MS. Table 4 Mucilage mutants altered in cellulose and hemicellulose polymer and their corresponding phenotypic traits are outlined. The table includes mutants with altered hemicellulose and cellulose structure and associated phenotypes. Genetic functions are indicated, detailing their main cytological and biochemical traits. n.r. (not reported). Table 4 Hemicellulose mutants Mutant /AGI number Protein function Mucilage release (SM + AM) RR (AM) phenotype Immunolabeling with anti-CW antibodies Biochemical changes and techniques used Extras Refs. Xylan synthesis irx14 Putative Xylan β-1,4-xylosyltransferase (backbone) n.r. Reduced AM due to loss of adherence. Reduced labeling with CCRC-M139 (xylan) and LM11 (highly branched xylan) Reduced Xyl content. Increased Man content in SM and Ara in AM. Reduced crystalline cellulose and structure. Voiniciuc et al. Hu et al. muci21 / mum5 Putative Xylan β-1,2-xylosyltransferase (side chains) n.r. Reduced AM due to loss of adherence. Different CCRC-M36 (RG-I), CCRC-M139 (xylan) and AX1 (xylan) distribution Reduced Xyl content. Increased GalA in SM and reduced in AM Impaired cellulose structure by S4B labeling. Western et al. Voiniciuc et al. Ralet et al. irx7 Putative xylan xylosyltransferase n.r. n.r. Reduced labeling with CCRC-M139 (xylan) and LM11 (highly branched xylan). Reduced Xyl content. Increased Rha and GalA in SM and decreased in AM Reduced CBM3a (crystalline cellulose) labeling. Cell adhesion between MSCs is affected. Hu et al. Galactoglucomannan synthesis vtc1 GDP-Mannose pyrophosphorilase Slight reduction in mucilage area. n.r. Reduction in LM19 (unesterified HG) and calcofluor (b-glucans) labeling. Reduced Man in AM. n.r. Nishigaki et al. csla2 GGM mannosyltransferase n.r. Reduced AM area and lower density. Changes in LM21 (heteromannans) labeling distribution. Changes in distribution of CCRC-M14 (unsubstituted RG-I), JIM5 (low DM HG) and JIM7 (high DM HG) labeling Reduced Man and Glc in SM and after 2N NaOH extraction. Reduced crystalline cellulose with altered distribution Yu et al. , Yu et al. muci10 / magt1 GGM galactosyltransferase Reduced mucilage area and density AM is less adherent. n.r. Reduced Gal, Glc and Man. Reduced S4B (cellulose) and CBM3a (crystalline cellulose) labeling distribution. Voiniciuc et al. Cellulose mutants Mutant /AGI number Protein function Mucilage release (SM + AM) RR (AM) phenotype Immunolabeling with anti-CW antibodies Biochemical changes and techniques used Extras References Cellulose synthesis cesa5 / mum3 Cellulose synthase More soluble mucilage du to less AM adherence Reduced mucilage area by loss of AM adherence. Abnormal distribution of JIM5 (low DM HG), JIM7 (high DM HG), CCRC-M36 (unsubstituted RG-I) labeling Sugars more easily extractable, therefore, reduced sugar quantities in AM and increases in SM Decrease in radial wall thickness. 
Loss of organization of cellulosic rays and cellulose distribution using calcofluor (b-glucans), S4B (cellulose), CBM28 (amorphous cellulose) and CBM3a (crystalline cellulose). Western et al. Harpaz-Saad et al. Mendu et al. Sullivan et al. Griffiths et al. cesa3 / irx1 Cellulose synthase Reduced mucilage area. n.r. Changes in CCRC-M36 (unsubstituted RG-I) and JIM5 (low DM HG) labeling Reduced crystalline cellulose. Less Rha in SM. Reduced cellulosic rays and abnormal CBM28 (amorphous cellulose) and CBM3a (crystalline cellulose) distribution. Griffiths et al. cobl2 GPI-anchored COBRA-LIKE protein. Synthesis and assembly of crystalline cellulose More soluble mucilage due to less AM adherence Reduced mucilage area by loss of AM adherence. n.r. Reduced crystalline cellulose. Reduced sugars in AM and increases in SM. Decrease in crystalline cellulose deposition and distribution by calcofluor (b-glucans) and S4B (cellulose) Ben-Tov et al. Ben-Tov et al. Linkage analysis is crucial for detecting subtle changes in HC structure in mucilage. If this is not feasible, PACE analysis of GGM extracted from mucilage, as demonstrated by Nishigaki et al., can be used for GGM structure analysis, highlighting the need to develop traditional techniques for studying HC in mucilage. Given the close interaction between cellulose and HC, their mutants often exhibit similar phenotypes. The cesa5 mutant exhibits a pronounced phenotype, characterized by high solubility of nearly all AM. This results from impaired adhesion of RG-I to beta-glucan chains of cellulose, leading to an increased SM layer. It has been shown that CESA5 works in conjunction with CESA3 to form the AM cellulose matrix, alongside the GPI-anchored COBRA-LIKE protein (COBL2). Mutants of these genes exhibit similar phenotypes, with varying degrees of reduced AM and increased SM solubilization, which can be easily identified using RR staining. CESA5, along with CESA2 and CESA9, contributes to cellulose formation in the secondary cell wall of the radial and distal walls of epidermal cells. Triple cesa5 cesa2 cesa9 mutants exhibit changes in cellulose organization in AM due to the mutation of CESA5, resulting in alterations to the radial thickness of epidermal cells and the morphology of the distal cell wall. In this review, we summarized the importance of developing simplified models to study cell wall metabolism in plants, given the inherent complexity and current limitations of such research. Mucilage serves as an excellent model system for investigating the synthesis and modification of primary cell wall components, allowing the identification of key factors involved in the synthesis of HG, RG-I, and HC. We discussed various techniques used to phenotype mucilage mutants, outlining their advantages and limitations, and proposed a workflow for comprehensive mutant characterization. Despite substantial progress over the past two decades, the metabolism of pectin acetylation remains completely unknown, and our understanding of RG-II synthesis and modification is still limited. This highlights the urgent need for the development of additional simplified systems to advance our knowledge. Additionally, we emphasize the importance of more easily accessible techniques to facilitate cell wall research, particularly in laboratories with limited funding. Finally, the proposed pipeline aims to streamline the study of mucilage mutants, enhancing our understanding of plant cell wall biology.
SSA designed the research; SSA, DS and AL-G wrote the article; DS, VJ, BG and AL-G created the tables; SS-A, VJ and BG created the figures; DS performed the experiments; AGL, AM and AGR revised the manuscript. During the preparation of this work, the author(s) used ChatGPT AI to correct the language. After using this tool/service, the author(s) reviewed and edited the content as needed and assume full responsibility for the publication's content. The work has received financial support from the ANID-Anillo ACT210025 project, Fondecyt 1201467 (to SS-A), ECOS 210032 (to SS-A), and MNSAP (to SS-A). Susana Saez-Aguayo: Writing – review & editing, Writing – original draft, Supervision, Methodology, Investigation, Funding acquisition, Data curation, Conceptualization. Dayan Sanhueza: Writing – review & editing, Writing – original draft, Methodology, Formal analysis, Data curation, Conceptualization. Vicente Jara: Writing – original draft. Benjamin Galleguillos: Writing – original draft, Conceptualization. Alfonso Gonzalo de la Rubia: Writing – original draft, Conceptualization. Asier Largo-Gosens: Writing – original draft, Conceptualization. Adrian Moreno: Writing – review & editing. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
This was a double-blind, cross-over study comparing Biotène and HydraSmile among patients with radiation-induced xerostomia. Patients were recruited from UPMC's Head and Neck Cancer Survivorship Clinic between January 2021 and September 2022. Adult patients with subjective complaints of xerostomia were selected if they were previously diagnosed with squamous cell carcinoma (oral cavity, oropharynx, or larynx) and treated with radiotherapy (between 50 and 70 gray) at least 6 months prior to randomization. Exclusion criteria included any treatment for cancer in the last 6 months (including surgery, radiation, and chemotherapy), recurrence of cancer, other medical conditions associated with xerostomia such as Sjogren's Syndrome, and use of pilocarpine or anticholinergic drugs. A project member explained the study to each participant, who read and signed an informed consent form. See Figure 1 for the CONSORT flow diagram. This study was approved by the University of Pittsburgh Institutional Review Board. An authorized third party repackaged the 2 products into opaque bottles labeled "A" or "B." The contents of each type of spray bottle were not revealed to the research team or the study participants to preserve blinding. At the conclusion of the study, the research team was given the key revealing that Biotène was in bottle A and HydraSmile was in bottle B. Each patient was provided with both oral hydrating sprays (A and B) for use at home. The study was divided into 4 periods: 1 week with only water and no salivary substitute (washout period 1), followed by 2 weeks using one of the provided mouth sprays (mouth spray period 1), followed by 1 week with only water and no salivary substitute (washout period 2), followed by 2 weeks using the other provided mouth spray (mouth spray period 2). Study period length was adapted from previous randomized trials evaluating exogenous xerostomia products. 10 , 12 Computer-generated simple randomization was used to assign patients to 1 of 2 groups: Group 1. Assigned to use product A (Biotène) in mouth spray period 1, followed by product B (HydraSmile) in mouth spray period 2. Group 2. Assigned to use product B (HydraSmile) in mouth spray period 1, followed by product A (Biotène) in mouth spray period 2. Patients were not permitted to use any other products to treat xerostomia, including chewing gum, hard candy, and lozenges, for the entirety of the study. During washout weeks, participants were only permitted to use water for xerostomia relief. During mouth spray periods, participants were instructed to only use the sprays provided and water for xerostomia relief. Participants were permitted to use mouth sprays up to 4 times a day and 4 sprays with each use. Patients were sent a unique link via email to complete an online questionnaire at the end of each of the 4 study periods. The questionnaire included continuous variables derived from the 100 mm visual analog scale (VAS), the gold standard of symptomatic xerostomia evaluation. 10 , 11 , 12 , 13 , 14 Higher scores indicate better symptomatic control. At the end of the study, patients were asked "Overall, do you prefer mouth spray A, mouth spray B, or neither mouth spray?". This analysis aimed to compare the relative treatment effect of HydraSmile versus Biotène, as well as evaluate each product's individual benefit compared to water. The primary outcome was change in overall xerostomia score with respect to baseline.
The secondary outcomes were change in daytime xerostomia, sleep, speech, swallowing, and taste. This study followed a modified intention‐to‐treat design. Patients were required to report which product they used (product A or B) during each mouth spray period in the online questionnaire. During the analysis phase, participants who inadvertently used the products in the wrong order, were reassigned to the appropriate study group based on the protocol they completed. Assuming 1 − β = 0.9 and α = 0.05, a sample size of n = 96 was required to demonstrate a 5‐point change in 100 VAS score. Allowing for a dropout rate of approximately 10%, we aimed to recruit 110 patients. All statistical analyses were performed using STATA SE 17.0 for Mac OS. Descriptive statistics, including proportions, means, and standard deviations (SD), were used to compare demographic and clinical features between treatment groups. The primary and secondary outcomes were reported xerostomia scores derived from the 100‐mm VAS. Washout period scores were used as the baseline comparison for the mouth spray period that directly followed. Carryover effect was tested by unpaired t ‐test of the sum of outcomes after both treatments, with sequence as the grouping variable. 15 Period effect was tested by unpaired t ‐test of the difference in outcomes between Biotène and HydraSmile after both treatments, with sequence as the grouping variable. To evaluate the treatment effect of Biotène and HydraSmile, we used paired t ‐test to compare the outcome after treatment compared to the corresponding baseline measurements. To investigate the treatment effect of HydraSmile versus Biotène, we followed the recent recommendation for analysis of 2*2 cross‐over trials with 2 baseline measurements by Metcalfe and Mehrotra and implemented the analysis of covariance (ANCOVA) model to regress the difference in after‐treatment measurement between HydraSmile and Biotène over the difference of baseline between HydraSmile and Biotène. 16 , 17 The intercept term would be the treatment effect of HydraSmile compared to Biotène. In the exit survey, patients indicated which mouth spray (Biotène or HydraSmile) they preferred. A planned subgroup analysis was completed within each preference group to determine the effect of each mouth spray and the difference between them. Secondary end points were not adjusted for multiplicity, and therefore should be interpreted as exploratory hypothesis generating data. A total of 129 patients were enrolled in the study, of which 38 withdrew. Five patients withdrew after experiencing an adverse effect from HydraSmile (oral burning sensation and/or subjective lingual/labial swelling), 11 patients reported no longer having the capacity to participate due to social or medical factors, and 22 patients were lost to follow‐up. No patients experienced anaphylaxis from either product. The remaining 91 participants completed all intervention activities and were included in the final analysis (mean age 63.0 years [SD 9.7]; 85.7% male [n = 78]; 97.8% White [n = 89]). Thirteen patients who were randomized to Group 2 (Product B [HydraSmile], followed by Product A [Biotène]) inadvertently completed the Group 1 protocol (Product A, followed by Product B). These patients were re‐assigned to Group 1 per our modified intention‐to‐treat study design. The final analysis included 61 patients in Group 1 and 30 patients in Group 2 . 
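The cross-over analysis described above can be summarized in a few lines: each product is compared to its own preceding washout by a paired t-test, and the HydraSmile-versus-Biotène effect is the intercept of a regression of the between-product difference in post-treatment scores on the between-product difference in baselines. The sketch below (written in Python rather than the STATA used by the study) illustrates this with simulated placeholder VAS scores, not the trial data.

```python
# Hedged sketch of the described analysis, using simulated 100-mm VAS scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 91
base_biotene    = rng.normal(40, 15, n)                  # washout preceding the Biotene period
post_biotene    = base_biotene + rng.normal(8, 12, n)
base_hydrasmile = rng.normal(40, 15, n)                  # washout preceding the HydraSmile period
post_hydrasmile = base_hydrasmile + rng.normal(7, 12, n)

# Each product's effect versus its own baseline (paired t-test)
print(stats.ttest_rel(post_biotene, base_biotene))
print(stats.ttest_rel(post_hydrasmile, base_hydrasmile))

# ANCOVA-style model for the 2x2 cross-over with two baseline measurements:
# regress the after-treatment difference on the baseline difference; the
# intercept estimates the HydraSmile-vs-Biotene treatment effect.
y = post_hydrasmile - post_biotene
x = base_hydrasmile - base_biotene
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HydraSmile vs Biotene treatment effect:", round(beta[0], 2))
```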
The majority of patients were previously irradiated for cancer of the oropharynx (72.5% [n = 66]), while a smaller proportion were treated for cancer of the oral cavity (16.5% [n = 15]) or larynx (11% [n = 10]). The mean radiation dose received was 64.9gy (SD 6.2). See Table 1 for summary of demographics and clinical characteristics. Following the protocol outlined in the methods section, we found that there was no significant carryover effect or period effect for any of the 6 parameters evaluated ( Supplemental Table S1, available online ). At the conclusion of the study, there was no difference in overall treatment effect between HydraSmile and Biotène, with respect to baseline . Both products, however, were individually effective when compared to use of water alone. Participants achieved clinically significant improvements in overall xerostomia score with use of HydraSmile (mean difference 7.45, 95% CI 3.61‐11.29) and Biotène . Across our 5 secondary outcomes, the treatment effects of HydraSmile and Biotène were not statistically distinguishable . In comparison to use of water alone, participants using HydraSmile achieved statistically significant improvement in VAS score for daytime xerostomia (mean difference 4.52, 95% CI 0.96‐8.08) and clinically significant improvement in VAS score for swallow (mean difference 5.80, 95% CI 1.90‐9.70). With the use of Biotène, participants achieved clinically significant improvement in daytime xerostomia (mean difference 7.37, 95% CI 3.12‐11.63), sleep (mean difference 10.53, 95% CI 6.40‐14.66), speech (mean difference 8.73, 95% CI 4.25‐13.21), and swallow (mean difference 8.96, 95% CI 4.42‐13.50). Neither product allowed for improvement in taste compared to water . In our exit survey, 44% (n = 40) of patients reported a preference for Biotène, 50.5% (n = 46) preferred HydraSmile, and 5.5% (n = 5) had no preference. A subgroup analysis was conducted, in which participants were stratified by product preference. Within the Biotène preference cohort (n = 40), Biotène significantly improved overall xerostomia score (mean difference 9.80, 95% CI 3.66‐15.94, P = .003), while HydraSmile did not . There was no difference in treatment effect between HydraSmile and Biotène with respect to baseline . Within the HydraSmile preference cohort (n = 46), HydraSmile significantly improved overall xerostomia score (mean difference 9.37, 95% CI 4.09‐14.65, P < .001), while Biotène did not . HydraSmile displayed a greater improvement in overall xerostomia score, with respect to baseline, compared to Biotène . In this study, we found that Biotène and HydraSmile effectively improved symptoms of radiation‐induced xerostomia. While the treatment effects of Biotène and HydraSmile did not significantly differ, our exploratory analysis suggests that Biotène may provide a more comprehensive coverage of the subdomains evaluated. Ultimately, patient preference appeared to be the most important factor in predicting the effectiveness of a given product. Patients who preferred Biotène did not significantly benefit from HydraSmile, whereas those who preferred HydraSmile did not significantly benefit from Biotène. These data emphasize that patients with radiation‐induced xerostomia should be provided with multiple artificial saliva options to determine which works best for them. Multiple studies have explored the ways in which radiation‐induced xerostomia can reduce patient quality of life. 
3 , 4 In addition to the lingering oral discomfort, these findings can be understood by considering how xerostomia broadly interferes with activities of daily life such as speech, taste, swallowing, and sleep. Several randomized trials have demonstrated that Biotène, as well as other artificial saliva substitutes, can effectively improve symptoms of xerostomia overall. 7 , 13 , 14 A few of these studies further delineated which components of xerostomia were improved with use of Biotène. Shahdad et al found that, in addition to overall xerostomia relief, Biotène improved swallowing and taste. The authors did not identify a significant improvement in chewing or speech with the use of Biotène. 10 In a similar study, Warde et al found that Biotène helped improve all domains evaluated, which included oral dryness, oral discomfort, sleep, speech, and swallowing. 11 In the current study, we found that in addition to improving overall oral dryness, Biotène significantly improved daytime xerostomia, sleep, speech, and swallowing. HydraSmile significantly improved overall oral dryness, daytime xerostomia, and swallowing. HydraSmile was found to provide a positive but nonsignificant treatment effect for the remaining subdomains evaluated. Perhaps with a larger sample size, HydraSmile would provide a significant benefit with regard to sleep, speech, and taste. While not statistically significant, Biotène tended to outperform HydraSmile within the subdomains tested. In contrast, HydraSmile showed a nonsignificant trend towards outperforming Biotène with regard to overall symptomatic relief. Biotène may be more effective for patients who primarily suffer from disturbances in sleep and speech due to xerostomia; however, the current study does not demonstrate superiority of one product over the other. We found that 44% of patients preferred Biotène, 50.5% of patients preferred HydraSmile, and 5.5% of patients had no preference. Interestingly, patients who preferred Biotène did not significantly benefit from HydraSmile, whereas those who preferred HydraSmile did not significantly benefit from Biotène. Additionally, within the HydraSmile preference group, HydraSmile displayed a significantly greater treatment effect compared to Biotène. Currently, there are no other studies that evaluate how product preference relates to effectiveness in the setting of xerostomia. While not quantified, many patients reported a preference based on product taste. This could have modified the treatment effect by influencing a participant's willingness to use a given xerostomia spray. There are likely additional unmeasured interactions between product ingredients and patient clinical features that have also contributed to this finding. Overall, these results highlight that there is no easy way to predict whether Biotène or HydraSmile will work best for a given patient. If possible, patients should try multiple products to determine which is most effective for them. While lubricants and saliva substitutes have been shown to reduce symptoms of xerostomia, it has also been reported that these effects are generally short-lived. 10 , 11 , 12 , 18 In a study published in 2021, Lung et al measured the mean duration of effect of Biotène spray to be 27 ± 25 min. 19 Gil-Montoya et al in a systematic review previously noted that these types of products may not last long enough to improve quality of life meaningfully.
20 Therefore, rather than testing for the immediate effect of Biotène and HydraSmile after each use, we tailored our study to evaluate how routine use of each product influenced the overall symptomatic burden of xerostomia. Consequently, this study design likely underestimated the immediate treatment effect with each use. We found that both Biotène and HydraSmile provide longitudinal xerostomia relief in addition to immediate relief. We predict that use of these products can improve overall quality of life, however, this hypothesis will require testing in a future study. Five participants experienced mild adverse effects with use of HydraSmile. While none of the participants experienced anaphylaxis, these adverse reactions may have been allergy related. HydraSmile differs from Biotène in large part due to inclusion of several natural oils (avocado, peppermint, tea leaf, grapefruit, eucalyptus, wintergreen). This stands as another potential benefit of Biotène over HydraSmile. We recommend that patients avoid using HydraSmile if they have known allergies to any of these ingredients and discontinue use if they develop any sort of adverse reaction. This study is not without limitations. There may be a degree of attrition bias, given that 38 patients were unable to complete the study. Our sample size of 91 fell short of the 96 patients required to have 90% power to detect a 5‐point change in response. Overall, this increases our chance of type II error. Our modified intention‐to‐treat design, in which 13 patients from group 2 were reassigned after completing the group 1 protocol, may have also added bias to our analysis. Many participants found the product labeling (“A” and “B”) confusing and assumed that “A” was intended to be used during the first mouth spray period, and “B” was intended to be used during the second mouth spray period. Given that we found no evidence of a sequence or period effect in this cross‐over study, we feel any bias from this reassignment is minimal. While this study is much larger than similar studies (Warde et al, n = 28; Lopez‐Jornet et al, n = 30), our sample size was not sufficient to adjust for confounding variables in the final analysis. Additionally, we were unable to report objective measurements of salivary function to support our subjective survey data. Finally, the treatment effect of Biotène and HydraSmile was calculated in reference to use of water, which is known to improve oral dryness. 13 , 14 , 19 Therefore, the magnitude of the treatment effect may be underestimated, however, we would expect both products to be affected equally. In conclusion, we found that Biotène and HydraSmile effectively improved oral dryness among patients with radiation‐induced xerostomia. Direct comparison of the 2 products revealed a non‐significant difference in treatment effect across all domains evaluated. Therefore, this study did not find one product to be superior to the other. Through subgroup analysis we found that patients who preferred Biotène did not significantly benefit from HydraSmile, whereas those who preferred HydraSmile did not significantly benefit from Biotène. While Biotène and HydraSmile both have the potential to improve oral dryness, we recommend that patients try multiple products to determine which works best for them. Randall J. 
Harley , conceptualization, data curation, investigation, formal analysis, writing—original draft preparation, writing—review and editing; Eve Bowers , conceptualization, data curation, investigation, writing—review and editing; Jinhong Li , data curation, formal analysis, writing—review and editing; Mikayla Bisignani , data curation, writing—review and editing; Marci L. Nilsen , conceptualization, data curation, investigation, writing—review and editing; Jonas T. Johnson , conceptualization, data curation, funding acquisition, writing—review and editing. TJA Health, LLC, the producer of HydraSmile, provided both products free of charge and covered participant payments. They were not involved in the study design, data collection, data analysis, data interpretation, data reporting, or manuscript production. None. TJA Health, LLC, the producer of HydraSmile, provided both products free of charge and covered participant payments.
PMC11696981 | The complement cascade is an essential part of the innate immune system, comprising over 50 soluble and membrane-bound proteins that work together to destroy pathogens and maintain tissue homeostasis by removing dying cells ( 1 ). It involves three distinct pathways: the classical, alternative and lectin pathway. All rely on different molecules for initial cascade activation, yet they converge to a central step where the C3 convertase cleaves complement component C3 into the anaphylatoxin C3a and the opsonin (i)C3b. Invading microorganisms and dead cells become opsonized by (i)C3b, which enables phagocytic cells to recognize and internalize them. Depending on the nature of the dying cell, diverse molecules may facilitate clearance. For instance, apoptotic bodies unveil ‘eat me’ signals on their membranes, in particular phosphatidylserine, which is recognized by a multitude of phagocytic receptors on leukocytes ( 2 ). Complement also contributes to this removal via binding of C1q to the apoptotic cell, thereby opsonizing it and activating the complement pathway to facilitate clearance ( 3 , 4 ). Even though the general role of (i)C3b in phagocytosis is well-established, so far nearly all research on complement-mediated phagocytosis focused on microorganism clearance and efferocytosis (of apoptotic cells) ( 5 – 8 ). The removal of cellular corpses resulting from necrosis, referred to as necrotic cell debris, has been largely overlooked both in terms of mechanism description and physiological impact in vivo . Necrotic cell death diverges morphologically and immunologically from apoptosis due to plasma membrane rupture, the spilling of intracellular contents into the surrounding tissue, and the subsequent inflammatory response ( 9 ). These contents serve as damage-associated molecular patterns (DAMPs), which include ATP ( 10 ), high-mobility group box 1 ( 11 ), actin ( 12 ), mitochondria-derived molecules ( 13 ) and DNA ( 14 ), among others, and will interact with cognate pattern-recognition receptors (PRRs) on immune cells. For example, during drug-induced liver injury, widespread hepatocyte necrosis results in substantial DNA release from necrotic cells and causes intense TLR9-dependent inflammation ( 15 , 16 ). Other DAMPs such as histones and F-actin released from necrotic cells also contribute heavily to immune responses through Clec2d and Clec9a recognition, respectively ( 17 , 18 ). These highlight the importance of limiting the accumulation of debris/DAMPs in conditions where necrosis is prominent, such as drug-induced liver injury, atherosclerosis, stroke, severe trauma, and burn injuries. Phagocytosis is a cellular process of recognition and ingestion of particles larger than 0.5 µm, which promotes tissue homeostasis and elimination of microorganisms. Phagocytes recognize targets through specialized surface receptors, including non-opsonic receptors such as Dectin-1, Mincle, CD14 and CD36, which detect conserved molecular patterns, as well as various opsonic receptors. Complement receptors [e.g. CR1, CR3 (CD11b/CD18), CR4] are typical phagocytic receptors recognizing particles bound by complement opsonins ( 19 ). In general, complement has been implicated in the processing and removal of self-antigens, since the clearance of apoptotic cells is dependent on opsonization by C1q and C3 ( 20 , 21 ) and complement deficiencies increase the susceptibility to autoimmune disorders ( 22 ). 
Even though the role of complement in the clearance of apoptotic cells is clear, its contribution to the clearance of necrotic cells is poorly understood, with in vivo evidence lacking. Our group has recently shown that natural IgM and IgG antibodies are essential for the clearance of necrotic debris in vivo ( 23 ). Considering the substantial capacity of antibodies to initiate complement, the contribution of complement activation to necrotic cell debris clearance may be central. Therefore, we investigated complement activation in response to necrotic injury in mouse models of drug-induced liver injury and focal thermal injury (FTI) of the liver. We used intravital microscopy (IVM) to unveil the participation of complement in the clearance of necrotic debris in vivo and assessed its impact on the recovery from liver injury. 8-12 weeks old male and female C57BL/6J and C57BL/6NRj mice were purchased from Janvier Labs. Rag2 -/- mice (C57BL/6N-Rag2Tm1/CipheRj) were bred in specific pathogen-free (SPF) conditions at the Animal Facility of the Rega Institute (KU Leuven). C3 -/- mice (B6.129S4-C3tm1Crr/J) and Itgam -/- mice (B6.129S4-Itgamtm1Myd/J) were purchased from The Jackson Laboratory. Mice were housed in acrylic filtertop cages with an enriched environment (bedding, toys and small houses) and kept under a controlled light/dark cycle (12/12h) at 21°C with water and food provided ad libitum . All experiments were approved and performed following the guidelines of the Animal Ethics Committee from KU Leuven . Mice were starved for 15h and given a single oral gavage of 600 mg/kg APAP (Sigma-Aldrich) dissolved in warm PBS. Administration via oral gavage reflects the typical route of APAP-induced liver injury in patients. After 24, 48 or 72h, mice were sacrificed under anesthesia containing 80 mg/kg ketamine and 4 mg/kg xylazine, whereafter liver and blood were harvested. ALT in serum was determined with a kinetic enzymatic kit (Infinity, Thermo Fisher Scientific) according to the manufacturer’s instructions. Serum levels of mouse C3 were determined by a commercially available C3 ELISA kit according to the manufacturer’s instructions. Human neutrophils were purified from whole blood of healthy volunteers by immunomagnetic negative selection (EasySep™ Direct Human Neutrophil Isolation Kit, StemCell Technologies) according to the manufacturer’s instructions. Ethical permission for use of human blood-derived leukocytes was obtained with the ethical committee from the University Hospital Leuven . Mouse bone marrow neutrophils were extracted from femurs and tibias of C57BL/6J mice by flushing the bones with 5 ml cold RPMI-1640 medium using a 26-gauge needle. Cells were filtered through a 70 µm nylon strainer and further purified with the EasySep™ mouse neutrophil enrichment kit (StemCell Technologies), following the manufacturer’s instructions. Liver sections were stained with hematoxylin and eosin (H&E) and used to estimate hepatic necrosis via measurement of the necrotic area in the images. The livers were washed with 0.9% NaCl and fixed in 4% buffered formalin. Subsequently, the samples were dehydrated in ethanol solutions, bathed in xylol and included in histological paraffin blocks. Tissue sections of 5 μm were obtained using a microtome and stained with H&E. Sections were visualized using a BX41 optical microscope (Olympus) and images were obtained using the Moticam 2500 camera (Motic) and Motic Image Plus 2.0ML software. The left liver lobes of mice were harvested, embedded in Tissue-Tek O.C.T. 
Compound (Sakura Finetek Europe) and snap frozen in liquid nitrogen. 10 µm sections were cut using a Cryostat Microm CryoStar and subsequently fixed, permeabilized and blocked. Sections were incubated overnight at 4°C with 10 µg/ml polyclonal rabbit anti-human/mouse fibrin(ogen) (Dako), 5 µg/ml rat anti-mouse C3b/iC3b (clone 3/26, Hycult Biotec) and 5 µg/ml rabbit anti-mouse C1q (clone 4.8, Abcam). Secondary antibodies were added for 3h at RT: Alexa Fluor 647 donkey anti-rabbit, Rhodamine RED-X (RRX) donkey anti-mouse IgM, Alexa Fluor 488 donkey anti-rat, Alexa Fluor 560 donkey anti-rabbit (all at 10 µg/mL, Jackson ImmunoResearch). 10 µg/ml of Hoechst was added for 30 min at RT to stain nuclei. Finally, slides were mounted with ProLong Diamond Antifade Mountant. Images were captured using the Andor Dragonfly High-Speed Confocal Microscope (Oxford Instruments) or a Zeiss Axiovert 200M fluorescence microscope, and analyzed with FIJI. 8 images were acquired per liver with a 25X objective. Stained areas were selected using the threshold tool in FIJI, from which the percentage area of staining was determined. Pearson’s coefficient was calculated in FIJI using the JACoP plugin. Comparisons between WT and Rag2 -/- mice were normalized to the degree of injury [% of fibrin(ogen) labeling]. All images can be provided in different colors upon request. Mice were anaesthetized with a subcutaneous injection of 80 mg/kg ketamine and 4 mg/kg xylazine. For the experiments with APAP-induced liver injury, fluorescent antibodies (4 µg/mouse) and dyes (2 µl of a 10 mM Sytox Green solution; Thermo Fisher Scientific) were dissolved in 100 µl sterile PBS and injected intravenously 10 minutes before the surgery. The surgical procedure is described in detail in Marques et al. ( 24 ). For the FTI experiments, 1 mm 3 burns were made with a cauterizer and the injury site was then stained with 10 µl of pHrodo Red succinimidyl ester (SE) (4 µM; Thermo Fisher Scientific). The incision was stitched, and after 6h, mice were again anaesthetized with ketamine and xylazine to image the burn injury site. Images were taken every 30 sec for at least 30 min with the Dragonfly Spinning-Disk Confocal Microscope (Oxford Instruments) using the 25X objective. Quantification of phagocytosis was done in a blind manner by two individuals and counted manually. The percentage of Sytox Green labeling was determined from 2 mosaic images, each composed of 16 images per mouse, with FIJI software using thresholding. The % of CD11b + cells containing DNA was determined using Imaris software. Surfaces overlaying live cells and DNA debris were generated from 3D images and counted manually. Necrotic debris was generated from HepG2 cells by inducing mechanical disruption with a pellet mixer for 5 min in 0.1 M sodium bicarbonate (pH 8.5). The necrotic debris was labeled by adding 2 µl of 10 mM pHrodo Red SE (Thermo Fisher Scientific) solution per 10x10 6 cells. The debris was opsonized with 20% normal human serum, C1q-depleted serum (Complement Technology) or C3-depleted serum (Complement Technology) in PBS for 1h at 37°C. Opsonized debris was added to purified neutrophils in a 1:10 cell/debris ratio. Neutrophils were stimulated with 10 -7 M N-formyl-Met-Leu-Phe (fMLF; Sigma-Aldrich) for human neutrophils or 1 µM WKYMVM (Phoenix Pharmaceuticals) for mouse neutrophils, and labeled with 1 µM calcein AM viability dye (Invitrogen). 
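The image-based readouts described above (threshold-selected percentage of stained area in FIJI and Pearson's colocalization coefficient via the JACoP plugin) reduce to simple pixel arithmetic. The sketch below is a minimal Python illustration of those two calculations, not the authors' FIJI/JACoP workflow; the arrays, threshold value, and channel names are hypothetical placeholders rather than study data.

```python
# Minimal sketch: threshold-based "% area of staining" and pixel-wise Pearson
# colocalization for two single-channel images stored as NumPy arrays.
import numpy as np

def percent_stained_area(channel: np.ndarray, threshold: float) -> float:
    """Fraction of pixels above the threshold, expressed as a percentage."""
    mask = channel > threshold
    return 100.0 * mask.sum() / mask.size

def pearson_colocalization(ch_a: np.ndarray, ch_b: np.ndarray) -> float:
    """Pixel-wise Pearson correlation between two channels (JACoP-style)."""
    a = ch_a.ravel().astype(float) - ch_a.mean()
    b = ch_b.ravel().astype(float) - ch_b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Synthetic data standing in for, e.g., (i)C3b and fibrin(ogen) channels.
rng = np.random.default_rng(0)
c3b = rng.random((512, 512))
fibrin = 0.7 * c3b + 0.3 * rng.random((512, 512))   # partially colocalized
print(percent_stained_area(c3b, threshold=0.8))
print(pearson_colocalization(c3b, fibrin))
```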
Two 3D mosaics were captured per well, each comprising 9 overlapping images taken after 3h incubation at 37 °C with the 25X objective of the Dragonfly Confocal Microscope (Oxford Instruments). Each condition was plated in duplicate and replicated at least 3 times. 3D reconstructions were generated using Imaris software. Additionally, with Imaris, surfaces were overlaid onto live cells and necrotic debris through thresholding, after which the volume of overlap was calculated in µm 3 . Liver lobes were surgically removed, put in MACS tubes with RPMI-1640 (Biowest Riverside) and minced with a gentleMACS Dissociator (Miltenyi Biotec). To this suspension, 2.5 mg collagenase D (Roche) and 1 mg DNAse I were added per liver for 1h at 37°C. The cell suspension was washed with PBS (300 g, 5 min, 4°C). Non-parenchymal cells were separated by density gradient centrifugation at 60 g for 3 min at 4°C. Supernatant was collected and filtered through a 70 µm nylon cell strainer. After centrifugation (300 g, 5 min, 4°C), ACK Lysing buffer (Gibco) was added for 10 min to the pellet to lyse red blood cells. 1x10 6 cells were collected in FACS tubes and washed with PBS (300 g, 5 min, 4°C). Zombie Aqua Fixable Viability dye (Biolegend) together with mouse FcR blocking Reagent (Miltenyi Biotec) were incubated for 15 min in the dark. Then, cells were washed with PBS supplemented with 0.5% bovine serum albumin (BSA) and 2 mM EDTA and the fluorescently labeled antibodies were incubated for 25 min at 4°C in the dark. After a final washing step, cells were read in a Fortessa X20 (BD Biosciences). Data were analyzed using FlowJo 10.8.1 software. HepG2 cells were mechanically lysed using a 22G syringe in PBS with 20 µg/ml RNAse A (Sigma-Aldrich) to generate RNA-free necrotic debris. The debris was incubated for 30 min at 37°C to allow enzyme activity. Then, the debris was washed twice with PBS containing 2 mM EDTA and 0.1% BSA, and centrifuged at 60g for 3 min to remove intact cells. The debris was opsonized with 20% serum or C3-depleted serum (Complement Technology) and then incubated for 1h at 37°C. Neutrophils from healthy donors were purified by immunomagnetic negative selection with an EasySep kit and stimulated with 10 -7 M fMLF. Cells and debris were co-incubated in a 6-well plate at a 1:10 cell/debris ratio and centrifuged at 300g for 5 min before being incubated for 3h at 37°C. Cells were harvested and total RNA was extracted by lysing the cells with β-mercaptoethanol and an RNeasy Plus Mini Kit (Qiagen) following the manufacturer's instructions. After extraction, total RNA quality and quantity were determined using a Nanodrop. cDNA was obtained by reverse transcription using the high-capacity cDNA Reverse Transcriptase kit (Applied Biosystems). mRNA levels were analyzed by quantitative PCR using a TaqMan Gene Expression Master Mix (Applied Biosystems) and a 7500 Real-Time PCR System apparatus. Expression levels of genes of interest were normalized for the average RNA expression of three housekeeping genes (CDKN1A, 18S and GAPDH) using the 2 −ΔΔCT method ( 25 ). Data were analyzed using GraphPad Prism v9.3.1. All data are expressed as mean ± standard error of the mean (SEM). A Shapiro-Wilk test was performed to check for normality. Normally distributed data were analyzed with a Student's t test or One-way ANOVA. Non-parametric data were analyzed with a Mann-Whitney test or Kruskal-Wallis test.
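The 2^−ΔΔCT normalization cited above can be made concrete with a short worked example. The following is a minimal sketch, assuming Ct values are averaged across the three housekeeping genes (CDKN1A, 18S, GAPDH) before computing ΔCt; the Ct numbers are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the 2^-ΔΔCt relative-expression calculation.
import numpy as np

def relative_expression(ct_target, ct_housekeeping, ct_target_ref, ct_housekeeping_ref):
    """Fold change of a target gene versus a reference (control) condition."""
    delta_ct_sample = ct_target - np.mean(ct_housekeeping)        # ΔCt, treated sample
    delta_ct_ref = ct_target_ref - np.mean(ct_housekeeping_ref)   # ΔCt, reference sample
    delta_delta_ct = delta_ct_sample - delta_ct_ref               # ΔΔCt
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values, e.g. PTGS2 in opsonized-debris vs. unopsonized-debris wells.
fold = relative_expression(
    ct_target=24.1, ct_housekeeping=[18.0, 12.5, 17.2],
    ct_target_ref=26.3, ct_housekeeping_ref=[18.1, 12.6, 17.0],
)
print(round(fold, 2))   # ~4.6-fold upregulation in this toy example
```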
Grubbs' test (extreme studentized deviate) was applied to determine significant outliers, which are identified as red dots in the graphs and removed from statistical analysis. A p-value equal to or lower than 0.05 was considered significant. To assess complement activation at necrotic injury sites, a mouse model of paracetamol/acetaminophen (APAP)-induced liver injury was used, as it is characterized by extensive death of hepatocytes through necrosis ( 16 ). A sublethal dose of 600 mg/kg APAP was administered via oral gavage, causing significant liver damage as early as 12h after administration. In this acute model, hepatocellular necrosis was evidenced by elevated levels of serum alanine aminotransferase (ALT), an enzyme primarily found in hepatocytes that serves as a biomarker for liver damage. Using histopathology, necrotic lesions were detected around the centrilobular veins, a typical pattern for APAP-induced injury, with the highest severity 12h after APAP administration. At 48h, the injury decreased significantly, as depicted by lower serum ALT levels and reduced necrotic areas. Fibrin deposition, known for its specific accumulation at necrotic sites after the activation of the coagulation cascade ( 26 ), was evaluated over time on liver cryosections to estimate the area of necrosis. Significant fibrin(ogen) staining was observed after 24h, whereafter it gradually decreased. A similar pattern was observed for IgM, with the highest deposition occurring after 24h and diminishing at later timepoints. Remarkably, C1q and (i)C3b deposition remained high up to 48h, indicating that the deposition of antibodies preceded complement activation through C1q binding and C3 cleavage. The staining in necrotic regions was not due to autofluorescence or nonspecific labeling, as confirmed in cryosections stained with secondary antibodies only. Pearson's correlation coefficient between (i)C3b and fibrin was significantly higher in comparison to (i)C3b and intact cell nuclei in the liver. In addition, the other components of the classical complement pathway, IgM and C1q, also had high colocalization with (i)C3b. Together, these data show that complement proteins are deposited specifically at sites of necrotic injury in the liver. These data were supported by significantly lower C3 levels in the serum of APAP-treated mice, which confirm complement activation in response to injury. To investigate the contribution of C3 to the resolution of necrotic liver damage, C3 -/- mice were subjected to APAP overdose and evaluated at 2 timepoints: a) after 24h, to assess the peak of injury and b) after 48h, to observe the degree of tissue repair. No differences in serum ALT levels were observed after 24h, while significantly higher levels of ALT were found after 48h in C3 -/- mice compared to WT mice. In addition, fibrin staining of liver cryosections revealed no differences at the peak of injury, whereas more fibrin staining was found in C3 -/- mice at the later timepoint, indicating that C3 deficiency leads to larger, unresolving necrotic areas in the liver. During the liver regeneration phase, cellular proliferation can be estimated by the expression of Ki67 in liver cryosections. Using this approach, we observed a significant decrease in cell proliferation in C3 -/- mice 48h after APAP overdose, confirming that the absence of C3 impairs liver regeneration and recovery from injury.
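The group comparisons reported here and throughout the study follow the statistical workflow given in the Methods: outlier screening with Grubbs' test, a normality check, and then an unpaired t test or a Mann-Whitney test. Below is a minimal Python/SciPy sketch of that decision flow; the per-mouse values are illustrative placeholders, and the Grubbs step is a simplified single-pass implementation rather than GraphPad Prism's exact routine.

```python
# Minimal sketch of the two-group comparison workflow described in the Methods.
import numpy as np
from scipy import stats

def grubbs_outlier_index(x: np.ndarray, alpha: float = 0.05):
    """Return the index of the most extreme value if it fails Grubbs' test, else None."""
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    g = abs(x[idx] - mean) / sd
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return idx if g > g_crit else None

def compare_two_groups(a: np.ndarray, b: np.ndarray, alpha: float = 0.05) -> float:
    """Shapiro-Wilk on each group, then unpaired t test or Mann-Whitney U."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return stats.ttest_ind(a, b).pvalue
    return stats.mannwhitneyu(a, b).pvalue

wt = np.array([3.1, 2.8, 3.4, 3.0, 2.9])   # placeholder, e.g. % fibrin area, WT mice
ko = np.array([4.6, 5.1, 4.9, 5.4, 4.7])   # placeholder, e.g. % fibrin area, C3-/- mice
print(grubbs_outlier_index(ko))            # None: no significant outlier in this toy set
print(compare_two_groups(wt, ko))          # p-value of the selected test
```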
To directly assess whether C3 deficiency affects the amount of necrotic debris in injured tissues, we performed confocal IVM of mouse livers. Considering that DNA is abundantly released by necrotic hepatocytes ( 16 ) and based on the observed differences during the resolution phase , we chose to measure the amount of DNA exposed in the liver 48h after the APAP challenge using the membrane-impermeable DNA dye Sytox Green. Interestingly, WT mice presented minimal extracellular DNA in the liver at the 48h timepoint, which is consistent with the removal of necrotic debris and tissue recovery at that phase . Moreover, the vast majority of the fluorescent signal observed in the images of WT mice consisted of background fluorescence from healthy hepatocyte nuclei . In contrast, C3 -/- mice had significantly more extracellular DNA debris, demonstrating that these mice have a clear defect in the removal of necrotic debris from the liver . These results link poor recovery from liver injury in C3 -/- mice to impaired clearance of necrotic cell debris. To explore how debris persisted in injury sites, an analysis of the recruited leukocytes and their ability to take up debris was performed. The inflammatory response triggered by liver injury led to the recruitment of CD11b + leukocytes to necrotic areas identified by the extracellular DNA staining . These cells consisted primarily of inflammatory monocytes (CCR2 + ), neutrophils (Ly6G + ) and macrophages (F4/80 + ) . Of interest, CD11b, the α M subunit of the complement receptor CR3 (CD11b/CD18), which is known for its involvement in complement-mediated phagocytosis, was increased in neutrophils and monocytes during liver injury . Besides CD11b+ leukocytes, other immune cells (DCs, T and B cells) are present in lower percentages in the injured liver . However, their role in debris phagocytosis is less anticipated and therefore not examined in this study. We then inquired whether CD11b + cells were able to internalize extracellular DNA debris using IVM. Numerous CD11b + leukocytes were visualized deep within necrotic areas using Z-stacks, and multiple cells contained Sytox Green + particles . In total, 12% of the CD11b + cells contained DNA particles, with an average of 2 particles per cell . Internalization of DNA debris was confirmed upon intravenous administration of DNAse to remove the bulk of extracellular DNA in necrotic areas. DNA-positive vesicles in CD11b + leukocytes remained after DNAse injection, indicating that the DNA particles were located intracellularly, likely in phagosomes, which are not accessible to the circulating DNAse treatment . After observing impaired necrotic DNA removal in C3 -/- mice , flow cytometry was performed to quantify DNA uptake by leukocytes in WT and C3 -/- mice . Again, the membrane-impermeable DNA dye Sytox Green was injected intravenously 2h before sacrificing mice that received an APAP overdose 24h prior. This allowed sufficient time for the fluorescently-labeled necrotic DNA to be phagocytosed. We observed that a significantly lower percentage of neutrophils and macrophages were able to internalize DNA debris in the absence of C3, demonstrating that debris clearance depends at least partially on complement opsonization . 
Importantly, no differences were observed in the number of neutrophils (Ly6G + ) and macrophages (Ly6G - /Ly6C - /F4/80 + ) present in the injured liver between WT and C3 -/- mice, suggesting that the impaired DNA removal observed in C3 -/- mice may be due to the reduced phagocytic capacity of these cells. In contrast, classical and non-classical monocytes did not require C3 to take up DNA debris in the injured liver, suggesting that different leukocyte populations utilize distinct mechanisms for debris clearance. However, the number of classical (Ly6G - /Ly6C + ) and non-classical monocytes (Ly6C - /CX 3 CR1 + ) recruited to the injured liver in C3 -/- mice was significantly reduced, showing that fewer monocytes reached the injured liver to perform debris phagocytosis. Overall, these data show that necrotic DNA debris is cleared by leukocytes in the liver and that neutrophils and macrophages require complement opsonization of debris for its uptake. Reduced clearance of DNA debris in the absence of C3 was observed in a model of acute liver injury induced by APAP overdose. To validate whether this finding applies to other types of injuries, we investigated debris clearance in a model of focal thermal injury (FTI) of the liver. In this model, necrotic lesions are induced locally with a hot needle, facilitating the observation of phagocytosis in vivo. This was challenging in the APAP model due to widespread necrosis throughout the liver causing an abundance of debris. The localized nature of the thermal injury allowed us to label necrotic debris by applying a droplet of the pH-sensitive dye pHRodo Red succinimidyl ester on top of the lesion. This dye binds covalently to proteins and exhibits increased fluorescence when the ingested material is processed in the acidic environment of a phagolysosome. Our previous work demonstrated that neutrophils predominated in the injured area 6h after FTI, with monocytes being attracted after 12h ( 23 ). The entire image of the FTI shows distinct burn injury zones, with neutrophils mostly accumulating around the injury core. Using IVM, neutrophils carrying pHRodo-labeled debris were observed crawling from the injury border into the necrotic core. Approximately 75% of neutrophils at the injury site had pHRodo-containing phagosomes, whereas less than 5% of neutrophils in healthy areas of the liver phagocytosed debris. This finding was confirmed by flow cytometry, which showed a significantly increased MFI of pHRodo in neutrophils and monocytes at the burn injury site compared to leukocytes in healthy areas. Similar to the APAP model, complement proteins C1q and (i)C3b were specifically deposited on necrotic lesions 6h after FTI. Both components colocalized with each other and fibrin. Quantification of necrotic debris clearance by flow cytometry demonstrated a significant decrease in the percentage of neutrophils and non-classical monocytes phagocytosing debris in C3 -/- mice compared to WT. Interestingly, this phenomenon was not observed in classical monocytes or macrophages. Of note, the number of neutrophils migrating in the burn injury site, and the percentage of neutrophils, macrophages and non-classical monocytes attracted to the injured liver did not differ between WT and C3 -/- mice, while the percentage of classical monocytes was significantly reduced in C3 -/- mice.
Due to the increase in CD11b + cells in response to liver injury and the known role of CD11b in complement-mediated phagocytosis, the role of CR3 in debris uptake and liver resolution was investigated. In the FTI, using CD11b -/- mice, we found a significant decrease in debris clearance by neutrophils and non-classical monocytes, while no difference was observed in classical monocytes and macrophages. Moreover, the attracted phagocyte populations at the injury site were not affected by CD11b deficiency, indicating that the defective clearance is not connected to inhibition of leukocyte recruitment. Conversely, no significant effect was observed on the progression of liver injury in CD11b -/- mice or in WT mice that received a CD11b blocking antibody, as evidenced by similar ALT values and fibrin staining after APAP overdose. These data show that in the FTI, neutrophils and monocytes migrate to the necrotic lesion to phagocytose necrotic debris, a process which depends on C3 and CD11b/CD18 for neutrophils and non-classical CX 3 CR1 + monocytes, whereas classical CCR2 + monocytes rely on other unidentified receptors. Investigating the factors driving debris phagocytosis in vivo is challenging due to the necessity of multiple knock-out strains and the technical limitations associated with observing cellular events in living mice. To overcome this, we developed an in vitro phagocytosis assay, enabling us to study the role of specific complement proteins in necrotic debris clearance. This approach also allowed us to verify our findings in human neutrophils and human necrotic debris. In this assay, necrosis was induced by mechanically disrupting HepG2 cells, whereafter sera lacking specific complement components were used to opsonize the debris. Images were taken with a confocal microscope 3h after combining the opsonized necrotic debris with human or mouse neutrophils. Importantly, the HepG2 cell debris itself did not contain detectable levels of C3/(i)C3b, as demonstrated by immunostaining HepG2 debris in vitro. However, when the debris came in contact with whole serum, it became clearly opsonized by C3/(i)C3b, confirming the capacity of debris to induce complement activation. In vitro engulfment of pHRodo-labeled necrotic debris by live neutrophils was visualized by 3D reconstruction. Interestingly, the percentage of bone marrow-derived mouse neutrophils that performed phagocytosis did not differ when presented with debris opsonized with serum, serum lacking C3 (from C3 -/- mice) or serum lacking antibodies (from Rag2 -/- mice). However, the volume of the necrotic debris internalized by neutrophils was significantly reduced when the debris was opsonized with serum lacking C3. Likewise, freshly isolated neutrophils from healthy donors showed no difference in phagocytosis rates when debris was opsonized with serum, C1q-depleted serum or C3-depleted serum. Nevertheless, a significant decrease in the volume of debris uptake was observed when it was opsonized with serum lacking C3. Latrunculin served as a positive control, as it inhibits phagocytosis globally by disrupting actin polymerization. These findings underscore the ability of both human and mouse neutrophils to internalize necrotic debris, while highlighting the role of complement in the amount of debris taken up through phagocytosis. To investigate the impact of debris phagocytosis on gene expression, qPCR was performed on human neutrophils incubated with necrotic debris from HepG2 cells.
Neutrophils were exposed to pure unopsonized debris, debris opsonized with normal serum, or debris opsonized with C3-depleted serum for 3 hours. The data were normalized to unopsonized debris in order to account for stimulation by DAMPs present in necrotic cell debris. Interestingly, uptake of serum-opsonized debris induced the upregulation of PTGS2 (encoding COX2) in neutrophils. COX2 is an enzyme with a dual role in inflammation, catalyzing the production of pro-inflammatory prostaglandins from arachidonic acid, such as PGE2, but also participating in the synthesis of numerous pro-resolving lipid mediators ( 27 ). PTGS2 upregulation was reversed when the debris was opsonized with serum lacking C3, indicating a direct effect of complement on the upregulation of COX2, which was already observed for monocytes but not neutrophils ( 28 ). CXCR2 , coding for a major chemokine receptor in neutrophils that promotes both chemotaxis and reverse migration ( 29 , 30 ), was also upregulated by incubation with opsonized debris. Moreover, in the absence of C3, CXCR2 expression levels returned to baseline. In addition, incubation with opsonized debris led to the expression of other immunoregulatory and pro-resolving genes in neutrophils, namely, CXCR4 , encoding a chemokine receptor associated with homing of neutrophils to the bone marrow for apoptosis and removal ( 31 , 32 ); the immunoregulatory cytokine IL10 ; and ANXA1 , encoding the protein annexin A1 that dampens leukocyte chemotaxis, respiratory burst and phagocytosis ( 33 ). The absence of C3 in the serum significantly reduced the expression of CXCR4 , IL10 , and ANXA1 , indicating the essential role of C3 opsonization in the induction of pro-resolving genes in neutrophils. Moreover, alterations in gene expression in neutrophils are specific, since multiple genes were unaffected by stimulation with opsonized debris, including ALOX5, CYBB, CASP3, ARG1, FPR1 and FPR2 . Overall, the gene expression induced by the clearance of opsonized necrotic debris reflects a pro-resolving response in neutrophils, which is C3-dependent and plays a central role in promoting tissue repair. Lastly, we investigated whether the classical complement pathway contributed to (i)C3b opsonization of necrotic debris. For this, Rag2 -/- mice, which lack mature T and B cells and therefore also antibodies, were subjected to the APAP overdose. Activation of the classical complement pathway requires the association of C1q with target-bound IgM or multiple IgGs; thus, this complement pathway cannot be activated in Rag2 -/- mice. First, the absence of IgM was confirmed by immunostaining, which revealed that IgM labeling in the injured Rag2 -/- liver was essentially absent. Interestingly, 24h after APAP overdose, liver cryosections showed significantly increased fibrin deposition in Rag2 -/- mice compared to WT mice. In our previous work, the absence of antibodies was shown to be responsible for a delayed liver resolution due to impaired necrotic debris clearance ( 23 ). Because the degree of necrotic injury also affects the degree of C1q and (i)C3b deposition, complement immunostaining was normalized to the area of fibrin staining. With this approach, we observed no difference in C1q binding to necrotic areas in Rag2 -/- mice compared to WT. Similarly, the mean fluorescence intensity (MFI) of C1q was not different between WT and Rag2 -/- mice, indicating that C1q binding does not rely on antibody opsonization of necrotic debris.
However, the area of (i)C3b deposition was significantly decreased in Rag2 -/- mice, along with significantly lower MFI in (i)C3b-stained areas, even though it has been shown that Rag2 -/- mice have elevated C3 levels in blood ( 34 ). These data indicate that loss of IgM and IgG antibodies diminishes the level of complement activation on necrotic sites, even though C1q binding remains unaffected. These data also demonstrate that the classical complement pathway is activated in response to necrotic debris, leading to (i)C3b deposition in injury sites. We observed specific deposition of C1q and (i)C3b at necrotic lesions in the liver, in line with previous literature ( 35 – 37 ). We also demonstrated that in the absence of IgM and IgG antibodies, complement protein C1q still bound necrotic debris, likely due to its interaction with various ligands which include histones, DNA, C-reactive protein, pentraxin 3 and serum amyloid P component ( 4 , 38 – 40 ). Interestingly, the absence of antibodies significantly reduced C3b deposition, even though Rag2 -/- mice have elevated C3 levels compared to WT mice ( 34 ). This suggests that C3b deposition on necrotic lesions relies, at least partially, on the classical complement pathway, which is hampered in the absence of antibodies despite C1q presence. An in vitro study corroborated this, since adding C1q to sera lacking IgM and C1q did not affect C3 deposition on apoptotic cells ( 41 ). Of course, the activation of the alternative and lectin complement pathways in response to necrotic cells should not be overlooked, as properdin, a positive regulator of the complement system, has been proven to bind to necrotic cells and activate the alternative pathway ( 42 ). Also, mannose-binding lectin was shown to interact with apoptotic and necrotic cells and to facilitate uptake by macrophages in vitro ( 43 ). The realization that the classical pathway is activated reveals additional opportunities to ameliorate debris clearance. Patients with severe necrotic injuries might benefit from intravenous immunoglobulin (IVIG) supplementation and blood transfusions. The administered natural antibodies may bind necrotic debris, triggering C3 cleavage via the classical pathway, and aiding in the clearance of debris. Moreover, our previous work showed that the supplementation of natural antibodies directly enhanced Fc receptor-mediated phagocytosis, meaning that debris would be cleared via both complement- and Ab-mediated phagocytosis ( 23 ). Evaluating liver injury in C3 -/- mice following APAP overdose showed us a delayed recovery and the prolonged accumulation of necrotic debris. Roth et al. observed lower serum ALT levels in C3 -/- mice 6 and 12h post-APAP, possibly due to the administration of a lower dose of 300 mg/kg APAP intraperitoneally and the shorter evaluation time ( 35 ). However, other studies have similarly noted impaired liver regeneration in C3 -/- mice after toxic injury induced by CCl 4 and partial hepatectomy ( 44 , 45 ). Although impaired liver resolution and debris accumulation in C3 -/- mice could be explained by impaired debris phagocytosis, other factors also contribute to this worsened phenotype. The absence of anaphylatoxins C3a and C5a impacts liver regeneration by affecting hepatocyte priming and inhibiting neutrophil and monocyte chemotaxis ( 41 , 46 , 47 ).
In addition to the decreased presence of monocytes at the injury site to perform debris phagocytosis, the reduced differentiation into monocyte-derived macrophages also contributes to delayed injury resolution, as observed in CCR2 -/- mice ( 48 , 49 ). The absence of the C3a/C3aR axis may also influence CCL2 expression in leukocytes, potentially affecting monocyte infiltration into the injured liver in C3 -/- mice. This is supported by studies showing that C3a upregulates CCL2 expression in human keratinocytes and mast cells ( 50 , 51 ). This emphasizes that liver resolution is a complex process involving multiple factors, with debris phagocytosis being just one of the contributing events. We showed that neutrophils, along with macrophages, rely on complement to phagocytose debris in a model of APAP-induced liver injury. This phenomenon is not limited to the liver, as similar uptake of debris occurred in the lungs of mice with acid-induced lung injury ( 52 ). Using confocal microscopy, we observed an average of two DNA-containing phagosomes per cell, consistent with previous findings of macrophages ingesting one or more small cytosolic particles from necrotic cells ( 53 ). Interestingly, cells that relied on complement for recruitment did not rely on it for phagocytosis, and vice versa, highlighting the existence of multiple pathways for leukocyte recruitment and debris clearance which may compensate for each other. Moreover, in a model of FTI of the liver, the phagocytosis of pHRodo-labeled protein-rich necrotic debris was observed, complementing our findings and those of Wang et al., where neutrophils engulfed nuclear debris ( 54 ). Our results showed that in this model, phagocytosis in neutrophils and non-classical monocytes is complement-dependent, adding a mechanistic layer to the role of CX 3 CR1 + monocytes in sterile injury resolution. Classical monocytes (CCR2 hi , CX 3 CR1 lo ) surround the FTI site and transition into non-classical monocytes (CX 3 CR1 hi , CCR2 lo ) essential for injury repair, a process dependent on IL-10 and IL-4 ( 55 ). Human neutrophils exhibited a pro-resolving phenotype after phagocytosis of necrotic debris, marked by the gene upregulation of IL10, PTGS2, CXCR4, CXCR2 and ANXA1 . These gene expression shifts depend on the type of meal ingested, as illustrated in macrophages, where efferocytosis of apoptotic cells triggers an anti-inflammatory response ( 56 ). Research on efferocytosis revealed that the uptake of lipids from apoptotic bodies stimulates sterol receptors (PPARs and Liver X receptors), triggering an anti-inflammatory response via IL-10 and TGF-β production ( 57 , 58 ). Due to the lipid-rich nature of the debris, these insights could apply to necrotic cells as well. The upregulated expression of CXCR4 is associated with reverse migration of the neutrophils to the bone marrow, as similarly shown in vivo by Wang et al. ( 54 ). This process would in theory help alleviate the burden of dead cells to be cleared at the injury site when neutrophils undergo apoptosis. The opsonization of debris in the absence of C3 significantly impacted gene expression levels, reducing them to levels comparable to control. This suggests that the specific opsonization process, rather than phagocytosis itself, plays a key role in regulating the expression of genes associated with resolution. The in vitro phagocytosis assay showed that both mouse and human neutrophils ingested necrotic debris; however, the phagocytosed volume was reduced in the absence of C1q and C3.
Opsonins affect hydrophobicity and surface charge, consequently influencing receptor interactions ( 59 ). This could potentially impact debris removal and explain the larger area of necrotic debris observed in C3 -/- mice 48h post-APAP. Also, mechanosensing of the target, which drives actin-based protrusions to mediate particle internalization, might be impaired, possibly due to the reduced stiffness/rigidity of the debris in the absence of C3 ( 60 ). In conclusion, our study demonstrates the crucial role complement proteins play in the opsonization and subsequent phagocytosis of necrotic debris. This mechanism was confirmed in both mouse and human neutrophils, irrespective of the nature of injury (chemical or thermal). This highlights a general complement-dependent pathway for debris clearance specifically for neutrophils. Consequently, individuals with complement deficiencies, whether due to genetic factors or autoimmune diseases, might exhibit impaired clearance of necrotic debris, causing prolonged inflammation and poor tissue regeneration. These individuals could potentially benefit from C3 or plasma supplementation to enhance debris clearance and hasten injury recovery. | Study | biomedical | en | 0.999997
PMC11697044 | A 68-year-old man was referred to the vascular surgery department for evaluation of an EIA aneurysm incidentally found on screening magnetic resonance imaging after an elevated prostate-specific antigen on routine screening laboratory tests. Medical history was significant for type 2 diabetes mellitus, hypertension, hyperlipidemia, and coronary artery disease without a history of myocardial infarction or preventative intervention. He presented to the office and was evaluated for symptoms; he reported none. There was no history of trauma, infectious etiology, or prior vascular access. The patient denied any history of smoking, cycling or extreme sporting, or family history of aneurysms. Blood cultures were negative, and leukocytes were within normal limits. Computed tomography angiogram of the chest, abdomen, and pelvis with runoff revealed an isolated right-sided saccular 2.6-cm EIA aneurysm above the inguinal ligament, with no extension proximally or distally. There was no aneurysmal or major atherosclerotic disease in the abdominal aorta or distal arterial vessels and there were no signs of disease in the contralateral iliac arteries. Based on discussion with the vascular surgery team, the patient was given the options of endovascular vs open surgery. Using shared decision-making, an endovascular approach was chosen to treat the isolated EIA aneurysm, and the patient provided consent. Fig 1 Preoperative computed tomography angiography showing an isolated aneurysm of the suprainguinal right external iliac artery ( EIA ). ( A ) Sagittal. ( B ) Axial. ( C ) Coronal with measurement. The operation was performed with the patient under general anesthesia. An 8F short 25 cm sheath (Terumo Medical Co, Tokyo, Japan) was placed percutaneously at the left femoral artery, and the Omni Flush Soft-Vu Angiographic Catheter (Angiodynamics, Latham, NY) was advanced and positioned into the infrarenal abdominal aorta. An aortogram was captured to visualize the iliac arteries with an oblique view to visualize the right hypogastric takeoff. A widely patent bilateral iliac system was visualized, and a large 2.6-cm aneurysm was identified 4 cm above the femoral bifurcation. The Omni Flush over a floppy Glidewire (Terumo Medical Corp., Somerset, NJ) was advanced across the iliac bifurcation, beyond the aneurysm sac, and further down into the right superficial femoral artery. The short 8F sheath was exchanged for an 8F 45 cm Ansel Sheath (Cook Medical, Bloomington, IN) over a J-tipped Stiff Amplatz Wire (Boston Scientific, Natick, MA), and advanced to the mid-right EIA just proximal to the aneurysm. After heparinizing the patient and measuring the native vessel for optimal graft selection, the 9 mm × 10 cm Viabahn stent graft (W. L. Gore & Associates, Flagstaff, AZ) was ultimately selected and carefully deployed in a distal to proximal fashion. The entirety of the aneurysmal sac was covered while maintaining patency of the common femoral artery distally and the hypogastric artery proximally. Final angiography demonstrated a widely patent right EIA and widely patent femoral bifurcation with complete exclusion of the EIA aneurysm. Given that the stent graft appeared to have an excellent seal, it was decided not to post-dilate with balloon angioplasty. Fig 2 Intraoperative angiography. ( A ) Preoperative aortogram. ( B ) Preoperative iliac angiogram. ( C ) Completion angiogram after stent graft deployment.
( D ) Reconstructed three-dimensional image of 2-week follow-up computed tomography scan showing completely excluded aneurysm with patent stent graft in suprainguinal right external iliac artery ( EIA ). The patient was discharged home on postoperative day 1 without complication on a regimen of aspirin only, indefinitely. On outpatient review at 2 weeks, the patient was well. No difference in peripheral pulse examination was found. The patient was asymptomatic preoperatively and remained symptom free at the 2-week follow-up. Postoperative computed tomography angiography demonstrated an excluded aneurysm sac with good apposition of the stent graft and no evidence of endoleaks or stent graft-related complications. We plan to perform annual ultrasound surveillance of the stent graft. Common and internal iliac artery aneurysms have multifactorial pathogeneses that are nearly identical to those of abdominal aortic aneurysms, as seen by their histological similarities. The particular rarity of aneurysms involving the EIA can be attributed to the unique lamellar architecture and biomechanical properties of the external iliac arterial walls, particularly in the tunica media. 7 Distinct from the more proximal aortoiliac segments, external iliac arteries possess a more structured and layered lamellar architecture, as well as a higher elastin-to-collagen ratio, 8 which allows them to withstand higher hemodynamic stresses and accommodate higher pressures, thus reducing the susceptibility to wall weakening and aneurysmal dilation. Isolated iliac artery aneurysms, without any other identifiable aortoiliac or peripheral vascular disease, have been described in multiple investigations to be a rare pathology. Silver et al 9 in 1967 performed a chart review of patients with arterial aneurysms affecting the aortic or iliac artery systems and found 571 patients with abdominal aortic aneurysms and only 11 patients with isolated iliac artery aneurysms, a relative frequency of 1.9%. Later, in 1983, McCready et al 10 reported a frequency of isolated iliac artery aneurysms of 0.9% and provided one of the only anatomical frequency distributions amongst isolated aneurysms of the iliac artery system: 90% of all isolated iliac artery aneurysms affect the common iliac artery solely, whereas <1% affect the EIA solely. Finally, in 1989 Brunkwall et al 5 performed the largest investigation on isolated iliac artery aneurysms, reporting 13 cases during a 15-year compilation of autopsy and operating records in Malmö, Sweden (population 230,000). They found only one isolated EIA aneurysm in that same study. 5 Regardless of how rare they are, many investigations have demonstrated that isolated iliac artery aneurysms are associated with a high risk of rupture and mortality, with rates of rupture between 14% and 75%. 2 , 11 , 12 The high mortality rate is postulated to be due to the lack of inclusion of iliac artery aneurysms in a differential diagnosis of pelvic conditions, partly owing to their rarity. Also, owing to the nature and location of the pathology, they are difficult to detect on physical exam until they are at a size when they are at risk for a morbid rupture. 3 McCready et al described that 78% of the patients in their study presented asymptomatically with their iliac artery aneurysm, similar to the patient described in this case study. Few cases have been reported describing isolated aneurysms of the EIA ( Table ).
The first case report, in 1952, described a patient who presented symptomatically with abdominal and lower limb pain. Surgical exploration revealed an EIA aneurysm that had ruptured. 13 The next three case reports, published between 1986 and 2009, described iliac artery aneurysms involving the EIA that were found histologically to be due to cystic medial necrosis. 14 , 15 , 16 More recently, in 2019, two cases were reported: a 65-year-old symptomatic patient with left lower limb edema and a 55-year-old symptomatic patient with intermittent left thigh pain associated with paresthesias, both ipsilateral to the isolated EIA aneurysm. 17 , 18 Finally, in 2020, there was another case report, similar to Crivello's, describing a symptomatic isolated EIA aneurysm associated with cystic medial necrosis. 19 In the present report, we have presented the case of a 68-year-old man who was completely asymptomatic, with an incidental finding of an EIA aneurysm. Until the presentation of our case, a search of PubMed found only seven reported cases of isolated EIA aneurysm, none of which were completely asymptomatic or repaired endovascularly.
Table. Reported cases of isolated external iliac artery (EIA) aneurysm (Author; Age, years; Sex; Size, cm; Year):
Priddle et al 13 : 29; Female; 4.0; 1952
Crivello et al 14 : 27; Male; Not available; 1986
Mohan et al 15 : 66; Male; 11.0; 1997
Kato et al 16 : 78; Female; 4.0; 2009
Van de Luijtgaarden et al 17 : 65; Male; 3.5; 2019
Hussain et al 18 : 55; Male; 7.0; 2019
Chatzantonis et al 19 : 51; Male; 2.0; 2020
Current case: 68; Male; 2.6; 2023
With this case report, we hope to stimulate a discussion on when to intervene on these rare pathologies. Although we have guidelines, based on large sample studies, for when to fix common or internal iliac artery aneurysms, the rarity of isolated EIA aneurysms leaves vascular surgeons without clear directions on when to fix them, especially when they are asymptomatic. Of the different publications reporting sizes of isolated EIA aneurysms, we found that most were repaired between 2 and 4 cm, mostly in men approximately 60 years old. Until clearer, large sample studies are performed, we recommend early repair of these aneurysms (diameter ≥2 cm) owing to the risk of rupture or symptoms seen in prior publications. Additionally, we now have the availability of minimally invasive interventions, with endografts that are able to be surveilled postoperatively with ultrasound examination. None. None. | Clinical case | clinical | en | 0.999996
PMC11697048 | Colorectal cancer (CRC) is one of the most malignant diseases that easily metastasizes to important organs such as the liver, lung, and ovary. 1 Many strategies have been employed for clinical CRC therapy such as chemotherapy, targeted therapy, and immunotherapy. 2 , 3 , 4 Although chemotherapy is still the preferred strategy for CRC treatment, most patients only show a good response at the first treatment, and long-term administration is always not effective in reducing tumor recurrence. 5 Besides, the well-known acute toxicity also confines the application of chemotherapy in some patients. 6 The emergence of targeted drugs such as cetuximab has greatly reduced drug toxicity and improved the effectiveness of CRC therapy, but the frequent gene mutations such as P53 and KRAS in CRC cells always lead to clinical resistance to targeted drugs. 7 , 8 Immune checkpoint therapy has brought a breakthrough in cancer therapy, and the programmed cell death 1 (PD1) antibody has been approved for the treatment of CRCs with high levels of microsatellite instability. 3 However, CRCs with low levels of microsatellite instability exhibit a conventional morphology with minimal tumor-infiltrating lymphocytes, resulting in limited response rates among patients. 9 Therefore, there is an urgent need for the development of novel anti-tumor drugs that can effectively treat CRC patients with different gene statuses. In contrast to traditional targeted drugs, photodynamic therapy (PDT) destroys cancer cells immediately without considering their genetic status, making it a promising approach for treating various types of CRC. 10 PDT contains two individual non-toxic components, namely a laser device and a photosensitizer, 11 among which the laser device is used to generate a specific laser with an appropriate wavelength to activate the photosensitizer, 12 and the photosensitizer is used to target tumors and generates reactive oxygen species (ROS), especially singlet oxygen ( 1 O 2 ) upon laser irradiation to induce cell death. 13 PDT-induced cell death always promotes the release of tumor-associated antigens into the microenvironment, these released tumor-associated antigens can be presented by antigen-presenting cells to activate cytotoxic T cells for specific tumor killing. 14 Besides, several damage-associated molecular patterns related to immunogenic cell death (ICD) have been demonstrated, including the release of large amounts of ATP and high-mobility group box 1 (HMGB1) into the extracellular milieu, and the translocation of calreticulin (CRT) from the endoplasmic reticulum to the cell surface. 15 , 16 Despite being approved for clinical use 30 years ago, PDT is not widely utilized in cancer therapy due to the lack of suitable photosensitizers. Although first-generation photosensitizers such as hematoporphyrin and their derivatives show excellent photophysical and electrochemical properties, the self-quenching behavior, complex composition, and poor photochemical stability significantly impede their practical applications. 17 , 18 Second-generation photosensitizers such as chlorin E6 (Ce6) possess specific molecular structures, enhanced tumor-targeting capabilities, increased ROS production, and augmented cell-killing potential. 19 Ce6 is a degradation product of chlorophyll and generates numerous ROS upon the irradiation of 650–700 nm, and its derivatives such as talaporfin sodium and temoporfin have been approved for clinical cancer therapy. 
20 , 21 The third-generation photosensitizers feature the highest targeting ability and suitable half-lives achieved through the conjugation of second-generation photosensitizers with targeting molecules such as antibodies, peptides, or nanoparticles. 22 Polypeptide is a kind of natural material with protein homology, good biocompatibility, and low toxicity. As a representative of the new generation of biological materials, polypeptide structure is simple and easy to synthesize, indicating a potential use in encapsulating and transporting small molecule drugs. 23 , 24 Gly-Phe-Phe-Tyr (GFFY) peptide was reported to possess the capability of self-assembling and was used for the development of different drugs or biomaterials. A highly sensitive aggregation-induced emission (AIE) fluorescent light-up probe TPE-GFFYK (DVEDEE-Ac) was designed based on the peptide GFFY, which can induce the ordered self-assembly of AIE luminogen (AIEgen), yielding close and tight intermolecular steric interactions to restrict the intramolecular motions of AIEgens for excellent signal output. 25 The naphthylacetic acid-modified D-enantiomeric GFFY (D-Nap-GFFY) can form a nanofiber hydrogel which is selectively taken up by antigen-presenting cells, and D-Nap-GFFY-encapsulated T317 (D-Nap-GFFY-T317) enhances dendritic cell maturation and infiltration into tumors, increases CD3 + /CD8 + cells in tumors, and inhibits tumor angiogenesis. 26 The naphthylacetic acid-modified GFFY (Nap-GFFY) also is a novel vaccine adjuvant, antigens can be easily incorporated into the hydrogel by a vortex or by gently shaking before injection, and the vaccines can stimulate strong CD8 + T-cell responses, which significantly inhibits tumor growth. 27 In addition, a naproxen acid-modified tetra peptide of GFFY (Npx-GFFY) hydrogels enhances the protection of the H7N9 vaccine and is a promising adjuvant for H7N9 vaccines against highly pathogenic H7N9 virus. 28 Besides, a previous study confirmed that the hydrogel formed by GFFY peptides has good stability in terms of both humoral immunity and anti-tumor cellular immunity. 29 The diameter of self-assembled particles formed by GFFY peptides varies depending on the coupling molecules used, but most macroparticles show a size of approximately 100 nm. 30 This size ensures that the macroparticles can target and penetrate tumor tissues through the enhanced permeability and retention (EPR) effect, a well-established mechanism by which macroparticles ranging from 100 nm to 800 nm in size enter solid tumors. 31 In this study, we have developed a third-generation photosensitizer, namely Ce6-GFFY, by combining the peptide GFFY with the Ce6 molecule. A series of experiments were conducted to investigate the functional mechanism of Ce6-GFFY in CRC therapy. Our findings indicate that Ce6-GFFY forms macroparticles, effectively targets and accumulates in tumor tissues, and induces significant ROS production in cancer cells upon the irradiation of a 660 nm laser. Additionally, Ce6-GFFY effectively inhibits the growth of both primary and metastatic tumors through the induction of ICD, demonstrating a promising application for the clinical treatment of CRC. HCT116 (human) and CT26 (mouse) CRC cell lines were purchased from ATCC (Rockville, MD, USA) and maintained in DMEM or 1640 medium at 37 °C in 5% CO 2 . The medium was supplemented with 10% fetal bovine serum (FBS; Thermo Fisher Scientific, Waltham, MA, USA). 
All cell lines were authenticated by short tandem repeat profiling and were tested for mycoplasma contamination. GFFY was synthesized by Synpeptide (Shanghai, China), chlorin e6 was purchased from Macklin Biochemical (Shanghai, China), 4′,6-diamidino-2-phenylindole and 2′,7′-dichlorodihydrofluorescein diacetate were purchased from Beyotime Biotechnology (Shanghai, China), propidium iodide and annexin V-FITC apoptosis detection kit (#BMS500FI-300) was purchased from Thermo Fisher Scientific (Waltham, MA, USA), and rabbit anti-CRT antibody was purchased from Abcam (Boston, MA). Ce6-GFFY was synthesized through a dehydration condensation reaction between the carboxyl of Ce6 and the amino group of GFFY, and purified using high-performance liquid chromatography to obtain the compound binding to only one GFFY peptide, then the compound was further identified using mass spectrometry. Ce6-GFFY was suspended in phosphate-buffered saline (PBS) (0.1 mg/mL) at room temperature for 10 min, then the particle size, zeta potential, and polydispersity index were measured by dynamic light scattering according to the manufacturer’s protocol. The morphological feature of Ce6-GFFY was examined using transmission electron microscopy (Tecnai Spirit T12) according to the manufacturer’s protocol. For stability examination, Ce6-GFFY macroparticles were incubated at 37 °C for 24 h, 48 h, 72 h, and 7 d, respectively, then the particle size and polydispersity index were detected by dynamic light scattering. Besides, Ce6-GFFY macroparticles were frozen at −80 °C for 1 h, and thawed at 37 °C, and then the particle size was examined by dynamic light scattering. Cell viability was measured using a CCK-8 cell counting kit (Beyotime Biotechnology, Shanghai, China). 7000 cells were seeded in 96-well plates and treated with drugs at various concentrations for 1 h and then treated with or without laser irradiation for 1 min (660 nm, 0.02 W/cm 2 ), followed by incubation at 37 °C for 24 h. After the addition of the CCK-8 reagent, the cells were continually incubated at 37 °C for 3 h before being detected by a microplate reader (TECAN, Victoria, Austria). For cell endocytosis detection, CT26 and HCT116 cells were treated with Ce6 molecules (5 μM) or Ce6-GFFY (5 μM) at 37 °C for 1 h. For ROS detection, cells were treated with Ce6 molecules (5 μM), GFFY peptide (5 μM), or Ce6-GFFY (5 μM) at 37 °C for 1 h and then treated with or without laser irradiation for 1 min (660 nm, 0.02 W/cm 2 ). Afterward, the cells were collected and stained with DCFH-DA at 37 °C for 20 min. For cell death analysis, cells were treated with Ce6 molecules, GFFY peptide, or Ce6-GFFY (CT26, 10 μM; HCT116, 5 μM) at 37 °C for 1 h and then treated with or without laser irradiation for 1 min (660 nm, 0.02 W/cm 2 ). After incubation at 37 °C for 24 h, the cells were collected and stained with propidium iodide (1:100) and annexin V-FITC (1:200). For examination of CRT expression, cells were treated with Ce6 molecules, GFFY peptide, or Ce6-GFFY (CT26, 10 μM; HCT116, 5 μM) at 37 °C for 1 h and then treated with or without laser irradiation for 1 min (660 nm, 0.02 W/cm 2 ). After incubation at 37 °C for 8 h, the cells were collected and blocked with 5% bull serum albumin for 10 min and then incubated with a primary anti-calreticulin antibody , followed by the incubation of an Alexa Fluor 488-conjugated secondary antibody . 
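For the CCK-8 viability assay described above, microplate absorbance readings are commonly converted to percent viability relative to untreated, blank-subtracted control wells. That conversion is not spelled out in the Methods, so the formula below is an assumption based on standard practice; the optical density (OD) values and well labels are hypothetical placeholders, not study data.

```python
# Minimal sketch of a conventional CCK-8 percent-viability calculation
# (assumed formula: 100 * (OD_treated - OD_blank) / (OD_control - OD_blank)).
import numpy as np

def percent_viability(od_treated, od_control, od_blank):
    treated = np.asarray(od_treated, dtype=float) - od_blank
    control = np.mean(od_control) - od_blank
    return 100.0 * treated / control

od_blank = 0.08                          # medium + CCK-8 reagent, no cells
od_control = [1.21, 1.18, 1.25]          # untreated wells
od_treated_laser = [0.42, 0.39, 0.45]    # hypothetical Ce6-GFFY + laser wells
print(percent_viability(od_treated_laser, od_control, od_blank))
```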
For tissues, primary and metastatic tumors were digested into single cells using the KeyGEN tissue dissociation kit (#KGA829, KeyGEN BioTECH) following the standard protocol. Digested tumors were mashed through 40 μm filters into PBS and centrifuged at 300 × g and 4 °C for 5 min; the obtained cells were blocked with 5% bovine serum albumin for 10 min and incubated with a surface antibody mixture at room temperature for 2 h. Antibodies against CD45, CD3, CD8a, CD11b, and Gr-1 were used. The treated cells were then acquired by flow cytometry (Beckman–Coulter) and analyzed with FlowJo v.10.8.1 software. For cell endocytosis detection, CT26 and HCT116 cells were treated with Ce6 molecules (5 μM) or Ce6-GFFY (5 μM) at 37 °C for 1 h, and the cells were then fixed with 4% paraformaldehyde for 15 min and stained with DAPI for 20 min at room temperature. For ROS detection, cells were treated with Ce6 molecules (5 μM), GFFY peptide (5 μM), or Ce6-GFFY (5 μM) at 37 °C for 1 h and treated with or without laser irradiation for 1 min (660 nm, 0.02 W/cm 2 ); afterward, the cells were stained with DCFH-DA at 37 °C for 20 min. For examination of CRT expression, cells were treated with Ce6 molecules, GFFY peptide, or Ce6-GFFY (CT26, 10 μM; HCT116, 5 μM) at 37 °C for 1 h and treated with or without laser irradiation for 1 min (660 nm, 0.02 W/cm 2 ); afterward, the cells were incubated at 37 °C for 8 h. After being fixed with 4% paraformaldehyde for 10 min and blocked with 5% bovine serum albumin at 4 °C overnight, the cells were incubated with a primary anti-calreticulin antibody, followed by incubation with an Alexa Fluor 488-conjugated secondary antibody. Then, the cells were stained with DAPI at room temperature for 10 min. For tissues, paraffin-embedded samples were sectioned at 4 μm thickness. Antigen retrieval was performed in a pressure cooker (at 95 °C for 10 min) in citrate antigen retrieval solution. The sections were then blocked in PBS containing 2% goat serum albumin at room temperature for 1 h. Then, the sections were incubated in a mixture of two primary antibodies at 4 °C overnight. The following primary antibodies were used: rat anti-Gr-1, mouse anti-pan-cytokeratin, and rabbit anti-CD8. The sections were washed with cold PBS and incubated with a mixture of two secondary antibodies raised in different species at room temperature in the dark for 2 h. The following secondary antibodies were used: Alexa Fluor 488-labeled anti-rabbit, Alexa Fluor 594-labeled anti-rat, Alexa Fluor 488-labeled anti-mouse, and Alexa Fluor 594-labeled anti-mouse. Then, the sections were counter-stained with DAPI at room temperature for 10 min. The treated samples were examined by confocal laser scanning fluorescence microscopy and analyzed using Zeiss v.3.1 software. Four-to-six-week-old BALB/c mice and BALB/c nude mice were purchased from Guangdong Medical Laboratory Animal Center (Guangzhou, China). All mice were maintained under standard conditions and treated according to institutional guidelines for animal care. For primary tumor therapy, 2 × 10 5 CT26 cells were suspended in a 1:1 mixture of PBS and Matrigel and subcutaneously injected into the flanks of the mice. When the volume of the tumors reached 70 mm 3 , the mice were randomized into treatment and control groups.
The treatment groups received tail intravenous injections of GFFY (2.5 mg/kg), Ce6 (2.5 mg/kg), or Ce6-GFFY (5 mg/kg), and the control group received PBS; both groups then received a single 660 nm laser irradiation for 8 min (1 min on, 1 min off; 4 cycles) on the tumor region at a power of 0.2 W/cm 2 . Tumor volume and mouse body weight were recorded every three days, and tumor tissues were collected and weighed at the end of treatment. Main organs such as the heart, liver, spleen, lung, and kidney were collected for pathological analysis, and blood was collected for routine blood examination. For metastatic tumor therapy, 2 × 10 5 CT26 cells were subcutaneously injected into the right flank of the mice (primary tumor), and 1 × 10 5 CT26 cells into the left flank (metastatic tumor). When the volume of the primary tumors reached 200 mm 3 , the mice were randomized into treatment and control groups. The treatment groups received tail intravenous injections of Ce6-GFFY (5 mg/kg) and the control groups received PBS; both groups then received a single 660 nm laser irradiation for 8 min (1 min on, 1 min off; 4 cycles) on the primary tumor region at a power of 0.2 W/cm 2 . Tumor volumes were recorded every two days, and tumor tissues were weighed and collected at the end of treatment for further analysis such as immunofluorescence and flow cytometry detection. For in vivo imaging, Ce6-GFFY (5 mg/kg) and Ce6 molecules (2.5 mg/kg) were injected through the tail vein (100 μL/mouse) into mice bearing or not bearing xenograft tumors, and the fluorescence intensity of the mice or of main tissues such as the brain, heart, liver, spleen, lung, kidney, intestine, and stomach was detected and analyzed with an IVIS spectrum imaging system (PerkinElmer, MA, USA). All animal experiments were approved by The Institutional Animal Care and Use Committee at Sun Yat-sen Cancer Center. Statistical analyses were performed using GraphPad Prism 8. Experiments were performed with 3 biological replicates; the data from three independent experiments are presented as mean ± standard deviation and were compared using an unpaired t-test (two groups) or ordinary one-way ANOVA (three or more groups), and data with two independent variables were analyzed using two-way ANOVA. P < 0.05 was considered statistically significant. Ce6-GFFY was synthesized by coupling chlorin e6 with the peptide GFFY. To ensure relative homogeneity of the Ce6-GFFY molecules used in subsequent experiments, we further performed high-performance liquid chromatography purification after the chemical synthesis reaction and obtained a compound bound to only one GFFY peptide. The Ce6-GFFY molecules were further confirmed using mass spectrometry. We also carried out proton nuclear magnetic resonance analysis to characterize the molecular features of Ce6-GFFY. The nuclear magnetic resonance data identified the distribution of 1 H on different functional groups, indicating that the molecular structure of Ce6-GFFY is relatively complex. However, the information provided by nuclear magnetic resonance is limited, making it difficult to determine the specific coupling site of the peptide GFFY on the Ce6 molecule. In fact, almost every photosensitizer developed based on Ce6 has encountered structural confirmation challenges.
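As a brief, hypothetical aside to the statistical workflow described in the methods above (unpaired t-test for two groups, ordinary one-way ANOVA for three or more groups, two-way ANOVA when two independent variables are involved), the sketch below shows how equivalent comparisons could be run in Python with SciPy; the replicate values are invented for illustration and are not the study's data, which were analyzed in GraphPad Prism 8.

```python
# Minimal sketch of the statistical comparisons described above, run on
# invented replicate values (the study itself used GraphPad Prism 8).
import numpy as np
from scipy import stats

# Two groups (e.g., control vs. treated readout): unpaired t-test.
control = np.array([1.02, 0.95, 1.08])
treated = np.array([0.41, 0.48, 0.39])
t_stat, p_ttest = stats.ttest_ind(control, treated)

# Three or more groups: ordinary one-way ANOVA.
group_a = np.array([0.98, 1.05, 1.01])
group_b = np.array([0.72, 0.69, 0.75])
group_c = np.array([0.40, 0.44, 0.38])
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)

print(f"unpaired t-test p = {p_ttest:.4f}; one-way ANOVA p = {p_anova:.4f}")
# Data with two independent variables (e.g., treatment x time) would instead
# be handled with a two-way ANOVA, e.g., via statsmodels' ols/anova_lm.
# P < 0.05 is read as statistically significant, as in the article.
```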
For example, talaporfin is a photosensitizer synthesized by coupling a single aspartic acid to the carboxyl group of Ce6 and has been approved for clinical use, but for a long time it was impossible to determine the coupling position of this amino acid. However, based on the chemical synthesis processes, we can draw reasonable conclusions about the molecular structure of the photosensitizer. According to previous studies, an anhydride is first formed between the Ce6 15 2 and 13 1 carboxylic acid groups during the synthesis of Ce6-based photosensitizers, and this is more likely than a larger ring anhydride between the 17 3 and 15 2 acids. 32 This phenomenon has been verified with a wide variety of nucleophiles such as ethoxide, propylamine, isopropylamine, ethanolamine, p-tolylthiolate, phenoxide, isobutoxide, and benzyloxide; all of them yield the 15 2 -conjugates, and several of these structures have been confirmed by single-crystal X-ray structures. 33 For talaporfin, the aspartic acid nitrogen atom undergoes nucleophilic attack upon the aliphatic side of the anhydride to produce the 15 2 conjugate, and this structure has also been demonstrated using single-crystal X-ray diffraction. 34 In this study, the synthesis processes of Ce6-GFFY are the same as those of talaporfin, so the coupling position of the peptide GFFY is most likely the 15 2 carboxylic acid group of Ce6. Figure 1 Synthesis and characterization of Ce6-GFFY. (A) Schematic diagram of Ce6-GFFY synthesis. DCC, N,N′-dicyclohexylcarbodiimide; DMAP, 4-dimethylaminopyridine; CH 2 Cl 2 , dichloromethane. (B) The size distribution of Ce6-GFFY macroparticles was analyzed using DLS. d, diameter; PDI, polydispersity index; DLS, dynamic light scattering. The data are representative of five independent experiments. (C) The zeta potential of Ce6-GFFY macroparticles was analyzed using DLS. Blank, PBS. The data are representative of five independent experiments. (D) Ce6-GFFY macroparticle image was photographed by transmission electron microscopy. Scale bar, 100 nm. (E, F) Particle size (E) and polydispersity index (F) of Ce6-GFFY macroparticles incubated at 37 °C for the indicated times were detected by DLS. The data are representative of five independent experiments. (G) Particle size of Ce6-GFFY incubated at room temperature (normal) or subjected to −80 °C/37 °C freezing-thawing (Freeze-Melt) was detected by DLS. The data are representative of five independent experiments. Figure 1 Then, we examined the molecular characteristics of Ce6-GFFY from various perspectives. Dynamic light scattering analysis showed that Ce6-GFFY formed macroparticles when suspended in PBS; the average diameter of the particles was 158.7 ± 2.8 nm, and the zeta potential was −23.1 ± 0.9 mV. We further confirmed the aggregation of Ce6-GFFY molecules using transmission electron microscopy, and the data showed that Ce6-GFFY formed irregular particles with a uniform size distribution. Then, we explored the stability of Ce6-GFFY under different conditions. The Ce6-GFFY solution was incubated at 37 °C for different times, and the particle size and average polydispersity index were then detected. Our results showed that there were almost no changes in the particle size during the incubation, even after seven days, indicating that Ce6-GFFY macroparticles have high stability under normal storage and transport conditions.
Moreover, the size of Ce6-GFFY macroparticles also remained stable after repeated freezing (−80 °C) and thawing (37 °C), which further identified the high stability of Ce6-GFFY macroparticles . Above all, Ce6-GFFY molecules form a uniform macroparticle aggregation in solution, and the particles remain stable under extreme conditions. Successfully entering cells through endocytosis is the prerequisite for photosensitizers to exert anti-tumor effects, thus we first focused on exploring the uptake of Ce6-GFFY by CRC cells. CRC cells derived from mouse (CT26) and human (HCT116) were treated with Ce6-GFFY or Ce6 molecules, respectively. The uptake of the agents was determined through confocal laser scanning microscopy due to the specific fluorescence produced by Ce6 molecules. Flow cytometry analysis showed that the Ce6-GFFY uptake of CT26 and HCT116 cells was much higher than Ce6 molecules, which enter the cells via free diffusion . Confocal laser scanning microscopy also demonstrated that cells treated with Ce6-GFFY had a noticeable aggregation of Ce6 fluorescence in contrast to cells treated with Ce6 molecules, indicating that Ce6-GFFY has an optimal cellular endocytic activity . Figure 2 Ce6-GFFY penetrates colorectal cancer cells and generates ROS. CT26 and HCT116 cells were treated with indicated agents at 37 °C for 1 h and treated with or without 660 nm laser irradiation for 1 min at a power of 0.02 W/cm 2 . The data are representative of three independent experiments. (A) Cells were treated with 5 μM Ce6-GFFY or Ce6 molecules and then subjected to flow cytometry determination and the Ce6 positive cells were analyzed. (B, C) Cells were treated with 5 μM Ce6-GFFY or Ce6 molecules; the cellular fluorescence was determined by confocal laser scanning microscopy (B) and the mean fluorescence intensity was analyzed (C). Red, Ce6; Blue, 4′,6-diamidino-2-phenylindole (DAPI); MFI, mean fluorescence intensity. Scale bar: 40 μm. (D) Cells were treated with agents as indicated and then treated with or without laser irradiation; the cellular ROS levels were detected using flow cytometry assays and the ROS positive cells were analyzed. ROS, reactive oxygen species. (E, F) 5 μM GFFY peptide, Ce6-GFFY, Ce6 molecules, or PBS treated cells with or without laser irradiation were stained with ROS probe DCFH-DA at 37 °C for 20 min; the cellular fluorescence was determined by confocal laser scanning microscopy (E) and the mean fluorescence intensity was analyzed (F). DCFH-DA, 2′,7′-dichlorodihydrofluorescein diacetate. “L” in “PBS + L”, “GFFY + L”, “Ce6+L”, “Ce6-GFFY + L”: laser irradiation. MFI, mean fluorescence intensity; green, DCFH-DA; BF, bright field. Scale bar in CT26: 40 μm; HCT116: 50 μm. Statistical analyses were performed using one-way ANOVA except for (C), which was performed using unpaired t -test; bars, standard deviation; ∗∗ P < 0.01; ∗∗∗ P < 0.001; ∗∗∗∗ P < 0.0001. Figure 2 Generally, PDT exerts its tumor cell-killing ability through ROS generated by photosensitizers under laser irradiation with specific wavelength. 35 To confirm the ability of Ce6-GFFY to generate ROS in tumor cells, we treated CT26 and HCT116 cells with Ce6-GFFY, and subsequently monitored the intracellular ROS levels using a molecular probe, namely 2′,7′-dichlorodihydrofluorescein diacetate (DCFH-DA). 
Flow cytometry analysis revealed that, upon laser irradiation, Ce6-GFFY induced a higher level of ROS than treatment with either GFFY peptides or Ce6 molecules alone, and minimal ROS generation was observed in cells treated with Ce6-GFFY without laser activation. Confocal laser scanning microscopy analysis also revealed that Ce6-GFFY-treated cells exhibited a substantial increase in ROS production upon 660 nm laser irradiation, whereas the levels of ROS were minimal in cells treated with GFFY peptides or Ce6 molecules, and negligible ROS generation was observed in non-irradiated cells. In summary, Ce6-GFFY macroparticles can effectively penetrate CRC cells and induce substantial production of ROS. The cellular metabolism of ROS is tightly regulated, and excessive ROS production within a short time can result in cellular dysfunction and eventual cell death. To confirm the efficacy of the ROS generated by Ce6-GFFY in suppressing CRC cells, we exposed CT26 and HCT116 cells treated with Ce6-GFFY to brief laser irradiation and assessed their proliferation status. Our results demonstrated that the ROS generated by a low concentration of Ce6-GFFY upon laser irradiation is sufficient to significantly impede the proliferation of CT26 (IC50 = 6.268 μM) and HCT116 (IC50 = 5.299 μM) cells. Figure 3 Ce6-GFFY suppresses the proliferation of colorectal cancer cells. CT26 and HCT116 cells were treated with the indicated agents at 37 °C for 1 h and then treated with or without 660 nm laser irradiation for 1 min at a power of 0.02 W/cm 2 . The data are representative of three independent experiments. (A) Cells were incubated at 37 °C for 24 h after being treated with laser irradiation and the indicated dose of GFFY peptide, Ce6, or Ce6-GFFY, and cell proliferation was then determined using CCK-8 assays. The IC50 of Ce6-GFFY under laser irradiation was analyzed. IC50, 50% inhibitory concentration. (B) Cells (CT26, 10 μM; HCT116, 5 μM) were stained with annexin V and propidium iodide dye, and the ratio of dead cells was then analyzed using flow cytometry. (C) The ratios of necrotic and apoptotic cells after the combined treatment of Ce6-GFFY and laser irradiation were analyzed. (D–F) After treatment with the indicated agents (CT26, 10 μM; HCT116, 5 μM), cells were incubated at 37 °C for 8 h and stained with a CRT antibody; the expression of CRT was then analyzed using flow cytometry (D) and confocal laser scanning microscopy (E), and the CRT fluorescence was analyzed (F). CRT, calreticulin; green, CRT; blue, DAPI; MFI, mean fluorescence intensity. Scale bar, 40 μm. “L” in “CT26+L”, “HCT116+L”, “PBS + L”, “GFFY + L”, “Ce6+L”, “Ce6-GFFY + L”: laser irradiation. Statistical analyses were performed using one-way ANOVA; bars, standard deviation; ∗∗ P < 0.01; ∗∗∗ P < 0.001; ∗∗∗∗ P < 0.0001. Figure 3 Next, we investigated the mechanisms underlying the inhibitory effects of Ce6-GFFY on tumor cell proliferation. Flow cytometry analysis was carried out using propidium iodide and annexin V staining to explore the status of Ce6-GFFY-treated CT26 and HCT116 cells after brief laser irradiation. The results demonstrated that the combined use of Ce6-GFFY and laser irradiation induced a mortality rate of 73% in CT26 cells and 48% in HCT116 cells, while most cells in other groups remained viable.
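The IC50 values quoted above summarize the CCK-8 dose-response data; as a purely illustrative sketch (using invented viability numbers and SciPy rather than the actual measurements or software of this study), an IC50 of this kind can be estimated by fitting a four-parameter logistic curve to viability versus dose.

```python
# Hypothetical four-parameter logistic (4PL) fit to estimate an IC50 from
# dose-response viability data; the values below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(dose, bottom, top, ic50, hill):
    """Fraction of viable cells as a sigmoidal function of drug dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

doses_um = np.array([0.5, 1, 2, 4, 8, 16, 32])                    # μM, illustrative
viability = np.array([0.98, 0.95, 0.86, 0.67, 0.42, 0.21, 0.10])  # fraction of control

params, _ = curve_fit(four_param_logistic, doses_um, viability,
                      p0=[0.05, 1.0, 5.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ≈ {ic50:.2f} μM (Hill slope {hill:.2f})")
```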
Further analysis revealed that part of the dead cells derived from the treatment of Ce6-GFFY and laser irradiation were necrotic (52% in CT26 cells and 21% in HCT116 cells) and apoptotic (21% in CT26 cells and 27% in HCT116 cells) . ROS-induced cell death always induces the alteration of damage-associated molecular patterns, which plays an important role in ICD. Damage-associated molecular patterns can be detected by hallmarks such as HMGB1, ATP, and surface-exposed CRT, 15 , 36 among which CRT is a classical hallmark that acts as an “eat-me” signal to stimulate dendritic cells maturation and promote T cell-mediated antitumor immunity. 37 , 38 Therefore, we further examined the expression of CRT in CRC cells induced by Ce6-GFFY. CT26 and HCT116 cells were treated with the combination of laser irradiation and Ce6-GFFY or other agents, and then the expression of CRT was evaluated through flow cytometry analysis using a CRT antibody. The results showed that the expression of CRT in Ce6-GFFY treated CT26 and HCT116 cells was much higher than other groups . Moreover, the immunofluorescence assays performed using confocal laser scanning microscopy further demonstrated that the CRT expression in Ce6-GFFY group was significantly up-regulated in both cells, while no significant changes were observed in other groups . These findings suggest that Ce6-GFFY can effectively induce ICD in CRC cells. In general, the combination of Ce6-GFFY and laser irradiation induces ICD in CRC cells, indicating a promising application of Ce6-GFFY for CRC therapy. Ce6-GFFY induced ICD indicates that Ce6-GFFY could be a potent anti-tumor drug candidate, we thus explored its metabolism and tumor-targeting ability in mice before investigating its potential therapeutic effect. The kinetics of Ce6-GFFY metabolism were determined in mice through tail vein injection. Living imaging analysis revealed that Ce6-GFFY exhibited a prolonged retention time in the mice for over 48 h, while Ce6 control molecules were almost cleared within 12 h after injection . The Ce6 fluorescence statistics indicate that the half-life of Ce6-GFFY was 10 h in mice, whereas that of Ce6 molecules was only 3 h, indicating that the macroparticles formed by Ce6-GFFY effectively extended the retention time of the drug in vivo . We also collected the main organs of mice treated with Ce6-GFFY for further imaging analysis and found that the Ce6-GFFY accumulation mainly occurred in the liver, stomach, and intestine, showing a typical metabolism process of protein drugs . Then, we explored the tumor-targeting ability of Ce6-GFFY in mice bearing CT26-derived tumors. Living imaging analysis showed that Ce6-GFFY accumulated rapidly in the tumor regions after tail vein injection . Remarkably, Ce6-GFFY exhibited stable aggregation in tumor tissues even after 24 h of administration, whereas it was almost entirely cleared from normal tissues except for the liver, which serves as a metabolic organ for large particles . In summary, previous studies and our data both demonstrate that macroparticles exhibit a prolonged half-life in vivo , thus enhancing the drug uptake by tumors and extending the therapeutic window of drugs. 39 , 40 Besides, the tumor targeting ability of Ce6-GFFY indicates that it is a promising agent for clinical CRC therapy. Figure 4 Pharmacokinetics and tumor-targeting ability of Ce6-GFFY. 
(A) Mice were treated with 2.5 mg/kg Ce6 control or 5 mg/kg Ce6-GFFY through tail vein injection, and the Ce6 luminescence was detected at the indicated times after the injection. n = 5. (B) The half-lives of Ce6-GFFY and Ce6 molecules were analyzed based on the changes in Ce6 luminescence. T 1/2 , half-life. (C) Main organs from the mice in (A) (12 h) were collected for imaging. n = 3. (D) 2.5 mg/kg Ce6 control and 5 mg/kg Ce6-GFFY were injected through the tail vein into mice bearing CT26-derived tumors, and the Ce6 fluorescence intensity was measured and analyzed at the indicated times after the injection ( n = 3). Red circle, tumor region. (E) Main organs, along with the tumors, from the mice in (D) (24 h) were collected for imaging and analysis. n = 3. Figure 4 Considering the significant advantages of Ce6-GFFY in terms of metabolism and tumor targeting, we subsequently investigated its potential anti-tumor activity. A subcutaneous tumor mouse model was established using CT26 cells, and the mice were treated with Ce6-GFFY (5 mg/kg) once, followed by 8 min of laser irradiation 6 h after injection; tumor growth was assessed every three days. The data demonstrated that the combination of Ce6-GFFY and laser irradiation significantly inhibited tumor growth after a single treatment, while no significant change was observed in the groups treated with other agents combined with laser irradiation. Moreover, the tumor growth curve statistics also confirmed the potent inhibition of tumor growth induced by the combined use of Ce6-GFFY and laser irradiation. Importantly, there was no decrease in mouse body weight during the treatment, indicating minimal side effects. Figure 5 Ce6-GFFY inhibits colorectal cancer growth with few side effects. Agents were injected through the tail vein of the CT26-derived subcutaneous tumor mouse model, and 660 nm, 0.2 W/cm 2 laser irradiation (1 min on, 1 min off; 4 cycles) was performed 6 h after the injection. Only a single dose was administered during the entire treatment cycle. (A) Schematic diagram of the PDT strategy. PDT, photodynamic therapy. (B–D) Mice were treated with PBS, GFFY (2.5 mg/kg), Ce6 (2.5 mg/kg), or Ce6-GFFY (5 mg/kg); tumor tissues were collected (B) and weighed (C) after treatment, and the tumor growth curves were analyzed during treatment (D). n = 4. (E) Mouse body weight was analyzed during treatment. n = 4. (F) Pathological analysis of hearts, livers, spleens, lungs, and kidneys from mice treated with the indicated agents using hematoxylin-eosin (H&E) staining. n = 4. “L” in “PBS + L”, “GFFY + L”, “Ce6+L”, “Ce6-GFFY + L”: laser irradiation. Scale bar, 200 μm. Statistical analyses were performed using two-way ANOVA; bars, standard deviation; n.s., not significant; ∗∗∗ P < 0.001; ∗∗∗∗ P < 0.0001. Figure 5 We further evaluated the toxicity of Ce6-GFFY in mice using pathologic analysis. At the end of the treatment, we performed routine blood analysis of the mice, and the data showed that the combined administration of Ce6-GFFY and laser irradiation did not elicit any significant inflammatory responses. In addition, the molecular indices indicated that Ce6-GFFY did not affect the hepatic or renal function of the mice. Histopathological analysis of organs such as the heart, liver, spleen, lung, and kidney of mice co-treated with Ce6-GFFY and laser irradiation showed no apparent toxicity.
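The ~10 h and ~3 h half-lives discussed above were derived from the decay of whole-body Ce6 fluorescence over time; one minimal, hypothetical way to obtain such an estimate is to fit a single-exponential decay to the normalized fluorescence-time curve, as sketched below with invented readings (the study's own analysis may have used a different model or software).

```python
# Hypothetical single-exponential fit of whole-body fluorescence decay to
# estimate an elimination half-life; the readings below are invented.
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, intensity_0, k_elim):
    """First-order decay of fluorescence intensity over time."""
    return intensity_0 * np.exp(-k_elim * t)

time_h = np.array([1, 3, 6, 12, 24, 48])                  # hours after injection
signal = np.array([1.00, 0.88, 0.71, 0.45, 0.20, 0.04])   # normalized intensity

(intensity_0, k_elim), _ = curve_fit(mono_exponential, time_h, signal, p0=[1.0, 0.1])
half_life_h = np.log(2) / k_elim
print(f"Estimated half-life ≈ {half_life_h:.1f} h")
```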
Therefore, our data demonstrate that the combined use of Ce6-GFFY and laser irradiation can effectively suppress CRC growth through a single treatment with no obvious side effects, indicating that Ce6-GFFY has good drug properties. Activating the anti-tumor immunity is an effective way to suppress cancer recurrence and metastasis. 41 , 42 Considering that our cellular-level results demonstrated that the combination of Ce6-GFFY and laser irradiation induced significant immunogenic cell death, and in vivo experiments confirmed the drug ability of Ce6-GFFY. Therefore, we conducted a comprehensive investigation to determine whether the PDT of Ce6-GFFY could enhance the anti-tumor immune responses in mice. We first constructed a mouse model using CT26 cells, which were transplanted subcutaneously in the left and right flanks of the BALB/c mouse, respectively, to mimic the primary and metastasis tumors. Then, the mouse was administered with Ce6-GFFY via tail vein injection, followed by laser irradiation on the primary tumor area while the metastatic tumor remained unirradiated . The data showed that the growth of primary tumors was significantly suppressed by the combined use of Ce6-GFFY and laser irradiation . Interestingly, the growth of metastasis tumors was also inhibited, even in the absence of irradiation . Tumor weight analysis further substantiated the inhibitory effect on both primary and metastatic tumor growth, thereby suggesting a potential induction of anti-tumor immunity through photodynamic treatment mediated by Ce6-GFFY . Figure 6 Ce6-GFFY activates anti-tumor immunity and suppresses metastatic tumor growth. Primary and metastasis tumor model was constructed by subcutaneously transplanting CT26 cells in the left (metastasis tumor) and right flanks (primary tumor) of BALB/c mouse. Then Ce6-GFFY (5 mg/kg) was injected through the tail vein of the mouse, and the 660 nm, 0.2 W/cm 2 laser irradiation (1 min on, 1 min off; 4 cycles) was performed on the primary tumor 6 h after the injection. Only a single dose was administered during the entire treatment cycle. (A) Schematic diagram of the mouse model construction and PDT strategy. (B) Primary tumor tissues were collected and tumor growth was analyzed after treatment. n = 6. (C) Metastasis tumor tissues were collected and tumor growth was analyzed after treatment. n = 6. (D) Primary and metastasis tumors were weighed and analyzed. n = 6. (E) Primary and metastasis tumors were collected and dispersed into single cells for flow cytometry analysis, and the amount of cytotoxic T cells (CD45 + CD3 + CD8 + ) and myeloid-derived suppressor cells (MDSCs, CD45 + CD11b + Gr-1 + ) were analyzed. n = 3. (F, G) IF staining for CD8 + T cells (CD8) and MDSCs (Gr-1) in primary and metastasis tumors (F), and the number of positive cells per mm 2 was analyzed (G). n = 3. IF, immunofluorescence. “(L)” in “Primary (+L)”: laser irradiation. Scale bar, 20 μm. The data are representative of three independent experiments. Statistical analyses were performed using unpaired t -test; bars, standard deviation; ∗ P < 0.05; ∗∗ P < 0.01; ∗∗∗ P < 0.001; ∗∗∗∗ P < 0.0001. Figure 6 Cytotoxic T cells eliminate tumor cells by recognizing tumor-associated antigens, and thus their extensive infiltration into tumor microenvironment is essential for the induction of anti-tumor immunity. 
43 In addition, myeloid-derived suppressor cells exert immunosuppressive effects by producing arginase-1, inducible nitric oxide synthase, and other inhibitory substances, thereby playing an important role in reshaping the tumor immune microenvironment. 44 , 45 Therefore, we subsequently focused on exploring the changes in cytotoxic T cells and myeloid-derived suppressor cells in tumors with or without photodynamic treatment using Ce6-GFFY. Flow cytometry analysis demonstrated that the number of cytotoxic T cells (CD45 + CD3 + CD8 + ) was increased and that of myeloid-derived suppressor cells (CD45 + CD11b + Gr-1 + ) was decreased in both primary and metastatic tumors, despite only the primary tumor being subjected to laser irradiation. Moreover, immunofluorescence assays further confirmed that cytotoxic T cells (CD8 + ) accumulated whereas the number of myeloid-derived suppressor cells (Gr-1 + ) decreased in both primary and metastatic tumors. Together, our results demonstrate that Ce6-GFFY is a promising agent for activating anti-tumor immunity and treating metastatic CRC. Currently, early CRC is usually treated with surgical excision, whereas advanced CRC is treated with chemoradiotherapy, targeted therapy, or immunotherapy according to genetic status, such as RAS/BRAF mutations and microsatellite instability/deficient mismatch repair. 46 , 47 However, current therapeutic strategies have certain limitations; for example, only about 15% of CRC patients have deficient mismatch repair with high levels of microsatellite instability, the proportion among stage III and IV CRC patients is even lower at 11% and 5%, respectively, 48 and only 30%–50% of these CRC patients respond to immunotherapy. 9 , 49 , 50 Therefore, targeted drugs developed through novel strategies are urgently needed. Unlike traditional targeted drugs, PDT requires a combination of a drug and an instrument (laser) to work. 51 PDT consists of three necessary elements: photosensitizer, laser, and oxygen, among which the photosensitizer determines the tumor-targeting ability and therapeutic effect of PDT. 52 PDT is theoretically characterized by reduced toxicity and a rapid effect compared with traditional drugs, which led to its FDA approval for clinical use 30 years ago. 11 However, the large-scale clinical application of PDT in cancer therapy has been limited by the lack of safe and effective photosensitizers. In this study, a novel photosensitizer, Ce6-GFFY, was synthesized through the conjugation of the photosensitive molecule Ce6 with the self-assembling peptide GFFY. 53 Ce6-GFFY forms stable macroparticles with a diameter of approximately 160 nm in solution, and our data demonstrate that these particles possess excellent targeting ability for CRC and exhibit potent anti-cancer effects while causing minimal side effects. The novel photosensitizer Ce6-GFFY developed in this study can induce rapid and efficient ICD of tumor cells under laser irradiation; the systemic anti-tumor immune response is thus activated after irradiation of a specific tumor site, and tumors at various metastatic sites can be eliminated via immune-mediated killing. Different from current anti-tumor drugs, Ce6-GFFY kills tumor cells in a physical manner that is independent of the genetic status of CRC, and thus it has great potential for CRC patients, especially those, such as patients with clinical drug resistance, who cannot be treated with existing therapeutics.
54 The tumor-targeting mechanism of Ce6-GFFY macroparticles remains uncertain; however, the EPR effect may be the underlying mechanism. The intervascular spaces in tumors contain pores ranging in size from 100 nm to 780 nm, which allow the infiltration of macroparticles. 55 Previous studies have shown that the EPR effect primarily occurs in solid tumors because of their disorganized and abnormal vasculature compared with healthy tissues, along with impaired lymphatic clearance from the tumor stroma, which together facilitate the penetration and retention of macroparticles in tumors. 31 , 56 In addition, the shape and softness of macroparticles also have a potential impact on tumor accumulation through the EPR effect. 57 Some studies have shown that the EPR effect is more potent when the surface of the macroparticles carries a negative charge. 58 Our data demonstrated that Ce6-GFFY macroparticles have an irregular shape and a negative surface charge (derived from the Ce6 molecule), indicating that Ce6-GFFY macroparticles benefit from a good EPR effect, which makes them effective in tumor targeting and penetration. Photosensitizers are activated by laser light at wavelengths that are also present in sunlight. Therefore, patients need to avoid exposure to sunlight for several days after receiving PDT, which interferes with their everyday lives. 59 To address this issue, the half-life of the photosensitizer needs to be appropriate; a half-life of several hours appears suitable for the clinical application of PDT, as patients can return to their normal lives within hours of the end of the treatment. In addition to a suitable photosensitizer half-life, intra-tumoral injection is another effective way to reduce the side effects of PDT: the photosensitizer is injected into the tumor tissue through an endoscope or a drainage tube, and an optical fiber is then guided to the injected tumor site for laser irradiation. 60 Compared with intravenous injection, the dose of photosensitizer used for intra-tumoral injection is very low, and laser irradiation is performed within minutes of the injection; thus, little normal tissue is exposed to the drug during the treatment, and few side effects arise in patients. Traditional PDT strategies seem to be more suitable for superficial tumors (such as skin cancer) than for internal tumors. 61 However, benefiting from the improved tumor-targeting ability and half-life of novel photosensitizers, interventional PDT will play an important role in the treatment of a variety of tumors in the future. The Ce6-GFFY macroparticle has an ideal tumor-targeting ability and a half-life of about 10 h in mice, indicating that Ce6-GFFY is a promising agent for CRC therapy. In this study, we developed a novel photosensitizer termed Ce6-GFFY by covalently combining a photo-responsive Ce6 molecule with the GFFY peptide. Ce6-GFFY forms stable macroparticles with an average size of 160 nm in solution, and these macroparticles have an ideal tumor-targeting ability and a suitable half-life in mice. Ce6-GFFY macroparticles induce ICD through ROS when treated with 660 nm laser irradiation. The combined use of Ce6-GFFY and laser irradiation significantly activates anti-tumor immunity by promoting the infiltration of cytotoxic T cells and inhibiting the accumulation of myeloid-derived suppressor cells in tumors, thus suppressing the growth of both primary and metastatic CRCs.
Our data indicate that Ce6-GFFY is a promising agent for CRC therapy with little side effects. This work was supported by grants from the 10.13039/501100001809 National Natural Science Foundation of China . Wei Qiao: Data curation, Formal analysis, Methodology, Resources, Software, Supervision, Validation, Visualization, Writing – original draft. Shuxin Li: Data curation, Formal analysis, Methodology, Resources, Software, Validation, Visualization. Linna Luo: Data curation, Formal analysis, Resources, Software, Validation. Meiling Chen: Data curation, Methodology, Software. Xiaobin Zheng: Methodology, Resources. Jiacong Ye: Conceptualization, Resources. Zhaohui Liang: Validation. Qiaoli Wang: Validation. Ting Hu: Validation. Ling Zhou: Resources. Jing Wang: Resources. Xiaosong Ge: Resources. Guokai Feng: Resources. Fang Hu: Resources. Rongbin Liu: Funding acquisition, Resources, Supervision. Jianjun Li: Conceptualization, Project administration, Resources, Supervision. Jie Yang: Conceptualization, Funding acquisition, Investigation, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing. The authors declared no competing interests. | Review | biomedical | en | 0.999997 |
PMC11697063 | Tumors are formed because of many reasons. For example, when cells in the cell cycle lose their regulation, control of cell proliferation is lost. Under normal circumstances, such cells are eliminated by the immune recognition function or may escape from immune cell monitoring by changing their surface antigens. Tumor cells compete with each other and eventually develop into malignant tumors, leading to cancer occurrence. 1 , 2 In addition to changing their surface antigen as mentioned above, tumor cells can affect immune cells or tissue components, thereby creating an environment conducive to tumor growth. 3 Such an environment with hypoxia, 4 poor nutrients, 5 high acidity, 6 and an immunosuppressive microenvironment 7 contains tumor cells as well as immune cells such as T cells, dendritic cells (DCs), macrophages, myeloid-derived suppressor cells (MDSCs), and regulatory T cells . Other non-immune cells and cytokines also occupy crucial positions in the tumor microenvironment (TME). 8 , 9 Figure 1 The UPS regulates the TME. (A) E1 ubiquitin-activating enzyme activates the carboxyl group of the C terminus of ubiquitin in an ATP-dependent manner through the formation of high-energy thioester bonds. Next, the ubiquitin molecule, which binds to E2, is moved to the targeting protein with the help of E3 ligase. Then, substrate protein labeled with ubiquitin enters into 26s proteasome for degradation, broken down into polypeptides and small-molecule amino acids. Deubiquitinase is to avoid degradation of substrate proteins by removing the ubiquitin tag from the substrate proteins. (B) UPS regulates tumor cells in the TME by regulating the levels of proteins related to the cell cycle, energy metabolism, and angiogenesis. (C) UPS regulates the anti-tumor immunity of T cells by regulating the protein levels of PD-1/PD-L1 and some inflammation-related cytokines. It also regulates the anti-tumor immunity of immune cells in the TME by regulating the maturation and anti-inflammatory presentation ability of DCs, the differentiation and their cytokines secretion of MDSCs, and the polarization of macrophages. (D) UPS modulates their role in tumor progression by regulating the lipid metabolism of adipocytes and the transformation and protein formation of CAFs. (E) UPS regulates tumor progression by regulating the levels of metalloproteinases and collagen in the TME. UPS, ubiquitin-proteasome system; TME, tumor microenvironment; PD-1/PD-L1, programmed death-1/ligand-1; DC, dendritic cell; PMN/M-MDSC, polymorphonuclear/monocytic-myeloid-derived suppressor cells; CAFs, cancer-associated fibroblasts; ICAM1, intercellular adhesion molecule 1. Fig. 1 Being a part of post-translational protein modification, ubiquitination is closely related to various physiological cellular activities, including regulation of protein transcription and interactions, DNA replication, cell growth response, immune responses, and signal transduction. 10 , 11 , 12 Ubiquitin modification is a reversible enzymatic cascade wherein ubiquitin ligases and deubiquitinating enzymes precisely regulate substrates. The ubiquitin molecule is a 76-amino acid-long protein, where adjacent amino acids directly form proteins through covalent binding. This molecule includes seven lysines (K6, K11, K27, K29, K33, K48, and K63). 13 Ubiquitin is modified through monoubiquitination and polyubiquitination. When a single ubiquitin molecule is added to a substrate's lysine residue, monoubiquitination occurs. 
In polyubiquitination, additional ubiquitin molecules are added to the first ubiquitin molecule to form polyubiquitin chains. 14 , 15 K48- and K11-linked polyubiquitin chains mainly mediate proteasomal degradation. However, K63-linked polyubiquitination, which is typically less common in tumors, is usually not involved in proteasomal degradation but is associated with cellular signal assembly and transduction and with the repair of damaged cells. 16 , 17 The substrate protein is labeled with ubiquitin molecules and then degraded in the 26S proteasome. The ubiquitin-proteasome system (UPS) involves three classes of ubiquitination enzymes: E1 ubiquitin-activating enzymes (E1s), E2 ubiquitin-conjugating enzymes (E2s), and E3 ubiquitin ligases (E3s). Using the energy of ATP, E1 activates ubiquitin and transfers the activated ubiquitin molecule to E2; the ubiquitin-charged E2 then cooperates with an E3 to deliver the ubiquitin molecule to the substrate. 18 In this process, E3 ubiquitin ligases play a substantial role. E3 ubiquitin ligases can be categorized into three families based on their structural characteristics and operational mechanism: RING (really interesting new gene) E3s, HECT (homologous to E6-AP carboxyl terminus) E3s, and RBR (RING-between-RING) E3s. 19 RING E3s catalyze the direct transfer of ubiquitin from E2 to a lysine of the substrate, whereas HECT and RBR E3 ligases first accept ubiquitin from E2 onto a catalytic cysteine of the E3 before transferring it to the substrate. 20 In deubiquitination, ubiquitin molecules are removed from the substrate by deubiquitinating enzymes. The deubiquitinase protein family removes ubiquitin molecules from substrate proteins by hydrolyzing the peptide or isopeptide bonds at the carboxyl-terminal end of ubiquitin, which is opposite to the function of E3 ubiquitin ligases. 21 Based on their sequence and structural domain characteristics, these deubiquitinating enzymes can be categorized into five families: the UCH (ubiquitin carboxy-terminal hydrolases) family, USP/UBP (ubiquitin-specific protease and ubiquitin-binding protein) family, OTU (ovarian tumor proteases) family, MJD (Machado-Joseph domain) family, and JAMM (JAB1/MPN/MOV34) family. 22 Ubiquitination, as a critical post-translational protein modification, can regulate tumor progression by targeting various TME-related cells and proteins. Tumor cells undergo uncontrolled rapid proliferation and metastasis, characteristics that distinguish them from normal cells. The entry of cells into the cell cycle for mitosis to generate new cells requires the involvement of various cyclins/cyclin-dependent kinases to ensure normal cell proliferation. Ubiquitination, as a post-translational modification, is crucial for regulating the stability of various cyclins and cyclin-dependent kinases during cell cycle progression. Two important E3 ligases, the anaphase-promoting complex/cyclosome (APC/C) and the Skp1-Cul1-F-box (SCF) complex, play crucial roles in regulating cell cycle proteins. 23 , 24 These two ubiquitin ligases are Cullin RING E3 ligase family members. APC/C is a multi-subunit assembly built around the scaffold Cullin-like protein APC2 and a coactivator subunit. This ubiquitin ligase regulates G1-phase activity by binding to the coactivators cell division cycle 20 homolog (CDC20) and CDH1 and subsequently regulates mitotic progression.
24 SCF contains an adaptor protein SKP1, scaffold protein CUL1, and a RING finger protein 1 (RBX1/RNF1) recruiting E2, which together address DNA damage in the cell cycle by binding to other ubiquitin ligases including FBXW7 (WD repeat domain containing 7), β-Trcp (β-transducin repeat-containing proteins), and SKP2. 25 Two ubiquitin ligases also interact. For example, the SCF/SKP 2 axis regulates APC/CDH1-mediated C-terminal binding protein interacting protein degradation to regulate p-RB in the G2 phase by inhibiting transcriptional gene responses of the E2F complex and regulating the stability of the cyclin/cyclin-dependent kinase inhibitor p27 by cooperating with SCF/SKP2 axis and APC/CDH1 to induce G2 retardance. 26 Thus, the imbalance in the expression of APC/C and SCF and SCF-associated ubiquitin ligases may affect the cell cycle, which thus affects cell proliferation and mediation of tumorigenesis. 27 , 28 For example, USP10 can up-regulate tumor development in esophageal squamous cell carcinoma by modifying cyclin Anillin in concert with the CDH1 of APC/C. 29 p53 is a key protein detected in the G1 phase and even in the whole cell cycle. It is a well-known tumor suppressor gene. 30 , 31 The majority of tumorigenesis is associated with p53 mutations. 32 Mouse double minute 2 (MDM2) functions as a classical ubiquitin ligase that regulates p53 protein degradation. MDM2 can undergo self-ubiquitination, but such ubiquitination is unstable and may cause aberrant p53 activation. 33 MDM4 (also known as MDMX) interacts with the MDM2 protein to ensure that p53 transcriptional activity is normal. 34 Furthermore, ubiquitin ligases such as tripartite motif-containing 28 (TRIM28), RNF2, and Cul4a can also promote p53 degradation by cooperating with MDM2. 35 , 36 , 37 , 38 Other ubiquitin-related enzymes such as TRIM31 can form a competitive relationship with MDM2 and prevent MDM2 from interacting with p53, leading to p53 activation in breast cancer. 39 E3 ligases such as TRIM24, TNF receptor-associated factor 6 (TRAF6), TRAF7, and C terminus of Hsc70-interacting protein (CHIP) maintain a low cellular p53 expression in the absence of signal activation of p53 genes 40 , 41 , 42 , 43 ( Table 1 ). Because of the special properties of the MDM2/p53 axis, most p53-related inhibitors, such as PROTAC, were developed based on this axis. 44 Other ubiquitin ligases are associated with the cell cycle such as TRIM21, USP1, and USP7 in cancer development. 45 , 46 , 47 Figure 2 UPS regulates tumor cells. (A) E3 ubiquitin ligases APC/C and SCF regulate the cell cycle. MDM2, as a classical ubiquitin ligase, regulates p53 to regulate the cell cycle, while TRIM2, another E3 ligase, competes with it. Other ubiquitin ligases such as Cul4a, TRIM28, and RNF2 also regulate MDM2 ubiquitin ligase, which indirectly regulates p53 protein stability. In addition, several ubiquitin ligases including CHIP, TRIM24, TRAF6, and TRAF7 can also regulate p53. (B) TRIM25, FBP1, RNF167, and TRIM22 regulate the mTOR signaling pathway by modulating PTEN, FBXW7, sestrin2, and RNF2, respectively, which ultimately regulates energy metabolism in tumor cells. (C, D) HIF-1α is regulated by different ubiquitin proteases under hypoxia and normoxia conditions respectively. 
APC/C, anaphase-promoting complex/cyclosome; CDC20, cell division cycle 20; SCF, Skp1-Cul1-F-box; CDH1, E-Cadherin; SKP, S-phase kinase-associated protein; TRIM, tripartite motif-containing; MDM, mouse double minute; RBX/RNF, RING finger protein; CRL, cullin-RING E3 ubiquitin ligase; USP, ubiquitin-specific protease; OTU, ovarian tumor proteases; mTOR, mechanistic target of rapamycin complex; FBXW, F-Box, and WD repeat domain containing; FBP1, fructose-bisphosphatase 1; UCH, ubiquitin carboxy-terminal hydrolases; VHL, Von Hippel-Lindau protein; HIF, hypoxia-inducible transcription factor; UBE2K, ubiquitin-conjugating enzyme E2K; MAEA, macrophage-erythroblast attacher; VEGF, vascular endothelial growth factor; DUB, deubiquitinase; TRAF, tumor necrosis factor receptor-associated factor. Fig. 2 Table 1 Summary of ubiquitination enzyme regulation of targeting proteins. Table 1 Enzyme Name Targets Cancer Animal models Reference E3 ligase APC/C G1 period / / 23 SCF DNA damage / / 24 E3 ligase FBXW7 SCF / / 25 β-Trcp / / SKP2 / / DUB USP10 ANLN Esophageal squamous cell carcinoma Human 29 E3 ligase MDM2 P53 Neuroblastoma Mice 33 MDM4 Neuroblastoma Mice 34 TRIM28 Melanoma / 36 TRIM31 Breast cancer Mice 39 RNF2 Ovarian tumor Mice 37 Cul4a Breast cancer/liver cancer Mice 38 TIRM24 Breast cancer Drosophila 40 TRAF6 Lung cancer Mice 41 TRAF7 Breast cancer / 42 CHIP Lung cancer Mice 43 E3 ligase TRIM25 PTEN Non-small cell lung cancer Mice 50 E3 ligase RNF43 p85 Colorectal cancer Human 51 DUB USP18 mTOR Ovarian cancer Human 52 E3 ligase RNF167 Sestrin2 Colorectal cancer Human 53 E3 ligase TRIM22 NRF2 Osteosarcoma Human 54 E3 ligase FBXW7 mTOR Nasopharyngeal carcinoma Human 25 E3 ligase Smurf1 VHL Many types of cancer / 60 DUB VHL (HIF-1α) normoxia Renal cancer Mice 62 USP8 Non-small-cell lung cancer Mice 61 USP9x Pancreatic cancer, gastric cancer Mice 60 UCH-L1 Ovarian cancer Mice 63 E3 ligase MDM2 (HIF-1α) Hypoxia Mesothelioma, ovarian cancer Mice 67 Based on the particularity of the TME, energy metabolism, including glucose metabolism, is mostly enhanced in tumor cells. 48 Mechanistic targets of rapamycin complex (mTOR) which consists of mTOR1 and mTOR2 mutation are involved in most cancer. 49 For example, TRIM25 down-regulates the activity of this protease through polyubiquitination modification of PTEN in non-small cell lung cancer, thus activating the PI3K/mTOR pathway to promote tumor development . Other ubiquitination modifications can also exert a tumor growth-promoting role by enhancing the mTOR signaling pathway. 50 Mutations in RNF43 G659fs are frequently found in colorectal cancer, and RNF43 G659fs mutations can bind to p85 and thus enhance p85 ubiquitination, leading to mTOR signaling activating. How p85 ubiquitination is regulated remains unclear. 51 The deubiquitination enzyme USP18 can also up-regulate ovarian cancer development in ovarian cancer by activating the AKT/mTOR signaling pathway through direct regulation of mTOR and AKT proteins. 52 In a few cancers, mTOR is also inhibited through ubiquitin modification. The E3 ligase RNF167 cooperates with STAM-binding-protein-like 1 to modify an amino acid sensor, sestrin2. This sensor transacts amino acid signals to mTOR1 and in turn, activates the mTOR signaling pathway. When sestrin2 ubiquitination increases, it inhibits mTOR signaling in colon cancer. 53 Other ubiquitin ligases, such as TRIM22, can accelerate nuclear factor erythroid 2-related factor 2 (NRF2) degradation and thus regulate mTOR signaling. 
In osteosarcoma, down-regulated TRIM22 expression led to increased stability of the NRF2 protein and inhibition of the mTOR-associated autophagy signaling pathway, thereby promoting cancer development. 54 Given the important role of the mTOR signaling pathway, developing mTOR-related inhibitors seems extremely important. In nasopharyngeal carcinoma, fructose-1,6-bisphosphatase 1 inhibits the auto-ubiquitination of the E3 ligase FBXW7, thereby stabilizing this ubiquitin ligase; the stabilized FBXW7 then promotes ubiquitination of the mTOR protein, inhibits the mTOR signaling pathway and thus glycolysis, and promotes radiation-induced apoptosis and DNA damage, resulting in tumor growth inhibition 55 ( Table 1 ). Because tumor cells require a large amount of nutrition from the TME, stromal cells in the TME become nutrient-deficient and their functions are inhibited, which favors tumor cell proliferation. 56 Rapidly proliferating tumor cells stimulate angiogenesis, but the uneven distribution of the new tumor vasculature results in an uneven distribution of oxygen, which leaves the TME in a temporary or permanent hypoxic state. 57 As a key regulator of this response, hypoxia-inducible factor (HIF) can guide rapid tumor vascularization, providing oxygen and nutritional conditions for tumor cell growth and metastasis and enabling cancer cells to rapidly adapt to severe hypoxic conditions. 56 Hypoxia-inducible factor 1α (HIF-1α), HIF-2α, and HIF-3α are members of the HIF family, and HIF-1α is the most sensitive to the oxygen content in the TME. 57 The HIF-1α content is low under normal oxygen conditions because this protein is targeted through ubiquitination by E3 ubiquitin ligases such as Von Hippel-Lindau (VHL), which mediates its degradation. This is a complex process involving other post-translational modifications: oxygen-dependent hydroxylation of two proline residues of HIF-1α allows HIF-1α to be modified through ubiquitination so that it enters the proteasomal degradation pathway. 58 Other E3 ligases such as Smad ubiquitylation regulatory factor 1 also participate in regulating VHL stability under normal conditions. 59 Some deubiquitinases such as USP8, USP9X, and UCHL1 can participate in VHL-mediated ubiquitination modification of HIF-1α. 60 , 61 , 62 However, under hypoxia, VHL can no longer ubiquitinate HIF-1α, which leads to HIF-1α stabilization, thereby mediating vascularization and rapid tumor growth. 63 USP25 can regulate HIF-1α-associated transcription factors under severe hypoxia, thereby regulating cancer development. 64 MDM2 participates in the regulation of p53 protein stability, and it can also directly ubiquitinate HIF-1α. According to some reports, MDM2, p53, and HIF-1α form a ternary complex, leading to HIF-1α degradation in a p53-dependent manner 65 , 66 , 67 ( Table 1 ). Several ubiquitin-related enzymes also regulate HIF-2α protein stabilization; for example, in gliomas, USP33 modifies HIF-2α through deubiquitination to promote angiogenesis and cancer progression. 68 HIF may also regulate other activities of cancer cells in tumor development. For example, the ubiquitin-conjugating enzyme E2K (UBE2K) could increase HIF expression in hepatocellular carcinoma, promoting tumor cell proliferation and migration. 69 HIF-1α can also be up-regulated by the E3 ligase macrophage-erythroblast attacher, leading to the proliferation of tumor cells and elevated migration capacity in glioblastoma.
70 Under the stimulation of the corresponding major histocompatibility complex (MHC) molecules, CD4 + T cells can differentiate into T-helper cells (Th1, Th2, Th9, Th17) and regulatory T cells, and CD8 + T cells can differentiate into cytotoxic T-lymphocytes. 71 , 72 T-helper cells, by recognizing MHC-II-presented antigens on dendritic cells, secrete inflammation-related factors such as interleukin-2 (IL-2) and interferon-γ (IFN-γ). 73 Regulatory T cells, which can differentiate from CD4 + T cells, secrete IL-2, which regulates the homeostasis and function of natural killer cells. 74 PD-1, an inhibitory receptor of T cells, acts as a crucial checkpoint for immune escape. PD-1 and its ligands PD-L1 and PD-L2 play an extremely significant role in regulating tumor progression. 75 Aberrant ubiquitination and deubiquitination of this checkpoint affect checkpoint-mediated immune activity. 76 F-box only protein 38 (FBXO38) is a PD-1-specific E3 ligase that mediates polyubiquitination of the K233 site on PD-1, thereby reducing PD-1 expression on the T cell surface and blocking PD-1/PD-L1 axis-mediated immunosuppression. In FBXO38 conditional knockout mice, PD-1 levels were elevated in tumor-infiltrating T cells, which resulted in more rapid tumor development in the mice. 77 Kelch-like family member 22 (KLHL22), another E3 ligase of the BTB-CUL3-RBX1 complex, can specifically recognize its substrate and mediate ubiquitination. This ligase mediates PD-1 degradation before PD-1 translocates to the T cell surface. A marked decrease in the level of this ubiquitin ligase in tumor-infiltrating T cells led to PD-1 overaccumulation and T cell suppression. 78 USP12 also regulates PD-1 stabilization in cancer development. 79 MDM2, an E3 ubiquitin ligase of PD-1, can promote PD-1 degradation through ubiquitination of deglycosylated PD-1 and thereby enhance the anti-tumor effect of T cells. 80 Along with the regulation of PD-1 in T cells, the ubiquitination system targets PD-1 in tumor-associated macrophages, thereby regulating overall tumor growth and development. In macrophages, the E3 ubiquitin ligase c-Cbl induces the ubiquitination and degradation of PD-1 by interacting with the PD-1 tail, ultimately improving the phagocytic ability of macrophages and exerting anti-tumor effects. 81 Tumor cells also regulate the ubiquitination of PD-L1, a PD-1 ligand, through altered expression of leucine-rich repeat kinase 2, ring finger protein 125, TRIM28, circ-0000512, USP22, OTUB1, and others 82 , 83 , 84 , 85 , 86 , 87 ( Table 2 ). In summary, ubiquitination is crucial for regulating the PD-1/PD-L1 axis. Targeting ubiquitination could therefore be combined with immune checkpoint inhibitors, such as PD-1/PD-L1 blockers, thereby improving patient response rates and treatment effects. Of note, although multiple studies have reported the regulatory effect of ubiquitination on PD-1/PD-L1 protein levels, ubiquitination is not an isolated event but is closely related to other post-translational modifications. For example, MDM2 mainly promotes the ubiquitination of deglycosylated PD-1 to down-regulate its protein level. 80 This suggests that studies should treat the cell as a whole during research. In the future, researchers may concentrate more on the synergistic effects of ubiquitination and other post-translational modifications of proteins in order to improve intervention efficiency. Figure 3 UPS regulates immune cells in the TME. DCs contact T cells via MHC to transmit antigen information.
The dashed box proximal to DCs depicts the mechanism by which ubiquitination modulates DCs. Ubiquitin enzymes MARCH 1, UCH-L1, MARCH9, and HRD1 regulate MHC-I and MHC-Ⅱ respectively. The results of MARCH1 regulating MHC-Ⅱ may affect the stabilization of MHC-Ⅰ. Ubiquitin-editing enzyme A20, which exerts a deubiquitinating function, mediates the maturation of DCs by regulating NEMO in the NF-κB signaling pathway of DCs. UBR5 also mediated the antigen presentation function of DCs through the regulation of IFN-γ protein stability. PD-1/PD-L1 could be regulated by DUBs and E3 ubiquitin ligases such as USP22, OTUB1, and USP12. USP can regulate T cells by autophagy and NF-κB signaling which ultimately regulate their anti-tumor immune response of them. DCs, dendritic cells; TLR, Toll-like receptor; NF-κB, nuclear factor kappa B; NEMO, nuclear factor-kappa B essential modulator; MARCH, membrane-associated ring–CH–type finger 1; HRD, 3-hydroxy-3-methylglutaryl reductase degradation1; UBR, the ubiquitin-binding region; UCH, ubiquitin carboxy-terminal hydrolases; MHC-Ⅰ/Ⅱ, major histocompatibility complex Ⅰ/Ⅱ; IFN-γ, interferon-γ; USP, ubiquitin-specific protease; OTU, ovarian tumor proteases; RNF, RING finger protein 1; TRIM, tripartite motif-containing; LRRK, eucine-rich repeat kinase; FBOX, F-box-containing protein 38; KLHL, Kelch-like family member; MDM, mouse double minute; UBA, ubiquitin-like modifier activating enzyme. Fig. 3 Table 2 Summary of ubiquitination regulation of targeting proteins. Table 2 Enzyme Name Targets Cancer Animal models Reference E3 ligase FBXO38 PD-1 B16F10 melanoma Mice 78 KLHL22 PD-1 Cub cutaneous melanoma Mice 79 MDM2 PD-1 Colorectal cancer Mice 81 c-Cbl PD-1 (macrophages) Colorectal cancer Mice 82 DUB USP12 PD-1 Lung cancer Mice 80 E3 ligase RNF125 PD-1 Head and neck squamous cell carcinoma Mice 84 TRIM28 PD-1/TBK1 Gastric cancer Mice 86 DUB USP22 PD-L1 Pancreatic cancer Mice 88 OTUB1 Murine breast cancer Mice 87 DUB USP18 TAK1 / / 89 USP22 T cell Pancreatic cancer Mice 92 E1 activating enzyme UBA6 IκBα (T cells) Lupus Mice 93 E3 ligase UBR5 IFN-γ Triple-negative breast cancer Mice 104 MARCH1 MHC-II / Mice 97 MARCH9 MHC-I / Mice 98 HRD1 BLIMP-1 / Mice 99 DUB A20 NEMO Dendritic cells Mice 101 UCH-L1 MHC-I Listeria Mice 100 OTUD6A NLR3 114 E3 ligase Praja2 MFHAS1 Malignant fibrous histiocytoma Mice 109 Pellino-1 K63 of IRAK1 Melanoma Mice 110 FBXW7 c-Myc Lewis lung carcinoma cells Mice 112 ITCH Macrophages / Mice 111 TRIM24 Macrophages Breast cancer Mice 113 CRL4 CD47 Multiple myeloma Mice 120 UBR SHP-2 Many types of cancer / 121 DUB Mysm1 Macrophages / Mice 115 OTUD5 YAP Triple-negative breast cancer Mice 116 DUB USP12 p65 Colorectal cancer Mice 126 E3 ligase TRAF6 STAT3 of k63 Lung cancer Mice 127 Ubiquitination modification also regulates other T-cell functions. In the presence of androgens, the protein level of USP18, a deubiquitination enzyme, in T cells is up-regulated. This enzyme promotes transforming growth factor-beta (TGF-beta)-activated kinase 1 deubiquitination and inhibits TAK1 phosphorylation, and subsequent activation of the NF-κB signaling pathway, which ultimately induces the inhibition of the anti-tumor effect of T cells. 88 In addition, in a study of oral lichen planus, TRIM2, a ubiquitination ligase, also ubiquitinated NF-κB and activated its signaling pathway, ultimately up-regulating the inflammatory function of T cells. This suggested that targeting TRIM2 helps regulate the anti-tumor effect of T cells. 
89 Moreover, E2–E3 ubiquitin ligases in T cells were disrupted in patients with renal metastatic cancer, which led to autophagy defects in circulating and tissue-resident CD8 + memory T cells and ultimately resulted in dysfunction and apoptosis. 90 In addition to directly affecting T cells, ubiquitination can indirectly affect T cell function through the regulation of ubiquitination in tumor cells. The decreased expression of the deubiquitinase USP22 in pancreatic cancer cells promoted the infiltration of natural killer and T cells, thereby enhancing the anti-tumor immune response of the TME. 91 Ubiquitination modification also regulates T cells to promote the differentiation of other cells. E1 ubiquitin-activating enzyme UBA6 increases p65 activation in the NF-κB signaling pathway of T cells by accelerating IκBα degradation. UBA6 regulates IFN-γ stability by modulating p65 of the NF-κB signaling pathway to promote Th1 and Tc1 cell differentiation 92 ( Table 2 ). Ubiquitination has a crucial regulatory role in the validation function and anti-tumor effect of T cells. It has a crucial impact on the survival of memory CD8 + T cells. However, the underlying mechanism remains unclear. If the regulatory action of ubiquitination-related enzymes on memory CD8 + T cells can be clearly studied, the findings may have a great effect on improving anti-tumor immunity. DCs are crucial for the immune system. They play a vital role in connecting innate and adaptive immunity. These cells can drive adaptive immunity through antigen presentation and regulate the activity of innate immune cells by secreting immunostimulatory cytokines. 93 MHC-I molecules load and present endogenous peptides to CD8 + T cells through different intracellular pathways. This is of great significance for the anti-tumor function of T cells. By contrast, MHC-II molecules load and present most exogenous peptides to CD4 + T cells. Furthermore, endogenous peptides in DCs can also be presented to CD8 + T cells using a cross-presentation approach. 94 Therefore, the antigen presentation function of DCs is of great significance for the anti-tumor immune response of immune cells in the TME. MARCH1 can mediate the ubiquitination of MHC-II molecules on the DC surface . This ubiquitin ligase regulates MHC II stability through ubiquitination at the tail of the MHC-II β chain. Then, the expression of MHC-II molecules and CD86 on the DC surface was regulated, thereby suppressing T-cell activation. 95 , 96 Moreover, MARCH1-mediated regulation of MHC-II affected the maturation of MHC-I stabilization. A specific relationship exists between MHC-I and MHC-II. MARCH1-mediated MHC-II ubiquitination affects the antigen presentation pathway of MHC-I. MHC-I expression was reduced in MARCH1-deficient DCs. MARCH1 does not directly regulate MHC-I. It is indirectly induced through MHC-II ubiquitination. 96 MARCH9, another ubiquitin ligase, regulates MHC-I ubiquitination. This transmembrane protein depends on lysine residues in the cytoplasmic tail for its ubiquitination function. MARCH9 plays a key role in regulating the entry of MHC-I into nucleosomes and MHC-I-mediated antigen presentation. 97 The E3 ligase 3-hydroxy-3-methylglutaryl reductase degradation 1 (HRD1) regulates ubiquitination modification of B lymphocyte-induced maturation protein 1, a transcription factor for MHC-II in DCs, thereby promoting MHC-II transcription and affecting CD4 + T cell activation in the inflammatory response. 
98 UCH-L1 can regulate antigen cross-presentation pathway by promoting the recycling of MHC-I molecules in DCs. MHC-I at the cytoplasmic membrane or endoplasmic reticulum is recruited during antigen cross-presentation for phagosomal-cytoplasmic and vesicular cross-presentation pathways. Subsequently, some peptides derived from external pathogen molecules are loaded onto MHC-I in phagosomes and then shuttled to the plasma membrane for presentation and act as MHC-I/AG complexes. UCH-L1 deficient DCs present with reduced MHC recycling capacity. UCH-L1 deficient mice have a significantly reduced ability of antigen cross-presentation to cytotoxic T-lymphocytes in vivo and in vitro after infection with Listeria monocytogenes. 99 A20, a deubiquitinase targeting NEMO of DCs, up-regulates the maturation and cytokine production of DCs. A20 deficiency can lead to the development of autoimmune defects. 100 , 101 , 102 Ubiquitin ligase UBR5 does not directly target DCs in triple-negative breast cancer. However, IFN-γ expression increased in UBR5 knockout 4T1 tumor-bearing mice could enhance the antigen-presenting ability of DCs, promoting treatment and presentation of DCs to T cells, and triggering a specific immune response to a tumor to inhibit tumor growth 103 ( Table 2 ). In summary, ubiquitination significantly affects MHCI/II protein levels in DCs, which can subsequently affect the activation and anti-tumor function of T cells by impacting the antigen-presenting ability of DCs. Current research in this area is focused on exploring mechanisms, and gaps remain in how to intervene. Follow-up research is warranted to determine how to enhance the antigen-presenting ability of DCs and activate T cells by interfering with DC ubiquitination. Macrophages are among the most crucial cells in the tumor immune microenvironment. They can be roughly categorized into two polarization directions, M1 (anti-tumor macrophages) and M2 (pro-tumor macrophages). 104 , 105 , 106 However, in reality, the functions of macrophages are far from simple, and the macrophage population has strong heterogeneity and plasticity. In tumors, macrophages often tend to be M2-like macrophages. Therefore, targeting the elimination of M2-like macrophages or transforming them into M1-like macrophages is the main research direction in cancer treatment. Moreover, macrophages have a specialized and significant antigen-presenting function. In lung adenocarcinoma, under the action of microRNAs secreted by tumor cells, macrophages exhibit inhibition of the ubiquitination and degradation of misshapen-like kinase 1 through a series of pathway reactions , which ultimately activates the downstream c-Jun N-terminal kinase signaling pathway and polarizes the macrophages toward M2-like macrophages and thus promotes tumor progression. 107 The E3 ubiquitin ligase Praja2 catalyzes ubiquitination of the modified malignant fibrous histiocytoma amplified sequence 1 (MFHAS1). 108 This protein can activate JNK/p38 and NF-κB pathways to promote M1 macrophage polarization and inflammatory responses. 108 Pellino-1, an E3 ubiquitin ligase, regulates M1 macrophage polarization. However, new studies have demonstrated that Pellino-1 can inhibit IL-10-mediated M2 macrophage polarization by regulating k63 ubiquitination of IL-1 receptor-associated kinase 1 to activate signal transducer and activator of transcription 1 (STAT1) in response to IL-10 stimulation. 
109 Some other E3 ligases such as FBXW7, itchy E3 ligase, and TRIM24 can also regulate macrophage polarization 110 , 111 , 112 ( Table 2 ). Figure 4 UPS regulates TAMs and CAFs in the TME. (A) Ubiquitination modification regulating macrophage polarization. Macrophages receiving different signals can be polarized into macrophages of M1 and M2. Ubiquitin enzymes that regulate macrophage polarization are shown in the figure. The polarization of tumor-associated macrophages can be regulated by UPS such as FBXW7, ITCH, and Mysm1. The ubiquitination enzymes CRL4 and UBR can regulate CD47/SIRPα to mediate tumor immune response in TAMs. (B) The transformation of normal fibrocytes to tumor-associated fibroblasts through regulation by snails can be regulated by UPS such as USP27X. CAFs could also transform into normal fibrocytes by regulation of CXCL12/CXCR4/CTGF. FBXW, F-Box and WD repeat domain containing; USP, ubiquitin-specific protease; TRIM, tripartite motif-containing; OTU, ovarian tumor proteases; CRL, cullin-RING E3 ubiquitin ligase; UBR, the ubiquitin-binding region; SIRP, CD47-signal-regulatory protein; EMT, endothelial-mesenchymal transition; TRAF, tumor necrosis factor receptor-associated factor; CAFs, cancer-associated fibroblasts; CXCL, C-X-C motif chemokine ligand; CTGF, connective tissue growth factor. Fig. 4 OTUD6A, a deubiquitination enzyme, in macrophages, can up-regulate NLRP3 protein levels through deubiquitination, which elevates IL-1β levels, ultimately enhancing the inflammatory function of macrophages. 113 Another deubiquitinating enzyme Myb-like, SWIRM, and MPN domains 1 (Mysm1) regulates macrophage survival and polarization. Mysm1-deficient macrophages produce more pro-inflammatory factors including IL-1β, TNFα, and iNOS, and sustained phosphorylation of AKT, a major PI3K target, can be detected. However, the exact mechanism of how Mysm1 regulates macrophage polarization remains unknown. 114 The deubiquitinase enzyme OTUD5 mediates YAP deubiquitination, thereby stabilizing the protein to promote M2 macrophage polarization. M2 macrophages with high YAP expression enhance the cellular invasive capacity of cancer cells, thereby improving the progression of triple-negative breast cancer 115 ( Table 2 ). Phagocytosis and antigen presentation are vital functions for macrophages to exert their anti-tumor effects. However, during interactions with macrophages, tumor cells often transmit the “don't eat me” signal to evade macrophage phagocytosis. For example, tumor cells can express CD47 to interact with SIRPα on the macrophage surface and mediate immune escape. 116 , 117 , 118 CD47 can be ubiquitinated by DDB1-CUL4A, which then blocks the CD47/SIRPα immune checkpoint and improves the anti-tumor immune response. 119 UBR also regulates the CD47/SIRPα axis during immune therapy 120 ( Table 2 ). In summary, ubiquitination is crucial for regulating macrophage polarization and function. This regulatory effect occurs in already existing tumors as well as in some precancerous lesions of tumors. 113 Thus, macrophage ubiquitination can not only serve as a target for anti-tumor therapy but also prevent tumor occurrence. MDSCs are an immature population of immune cells, which differentiate into DCs, macrophages, and neutrophils. 121 MDSCs secrete high NO, Arg1, iNOS, and ROS concentrations, which inhibit immune cells in the TME, especially T cells, promote tumor cell growth, and cause tumor immune escape. 
122 Targeting MDSCs is likely to be a breakthrough therapy against tumors in the future. 123 MDSCs can be broadly divided into two subgroups based on their surface markers: polymorphonuclear/granulocytic (PMN-) and monocytic (M-) MDSCs. The series of chemokines secreted by M-MDSCs can promote regulatory T-cell proliferation and differentiation to suppress the immune microenvironment. 124 USP12 can regulate p65 deubiquitination in the NF-κB signaling pathway in MDSCs, thereby mediating PD-L1 and iNOS expression and the anti-tumor immune response of CD4 + T cells. At the same time, USP12 can affect IFN-γ stability and reduce the anti-tumor immune capacity in the TME. 125 TRAF6, another member of the ubiquitin ligase family, modifies K63 polyubiquitination and STAT3 phosphorylation, thereby affecting MDSC differentiation. Examples of MDSC ubiquitination are few, and more ubiquitination regulatory proteins will likely be identified in future studies. 126 Cancer-associated fibroblasts (CAFs) are the most abundant stromal cells in tumors. They secrete cytokines and chemokines to enhance the proliferation and metastasis of malignant tumors. 127 CAFs have a wide range of cellular sources. During the transition from normal fibroblasts to CAFs, Snail plays a crucial role as a transcription factor regulating cellular protein expression and cytokine secretion. 128 TRAF4, which is highly expressed in normal lung fibroblasts after radiotherapy, interacts with NADPH oxidase-2 (NOX2) and NOX4, thereby delaying their lysosome-dependent degradation. NOX2 and NOX4 localization in endosomes is stabilized and can activate the NF-κB signaling pathway in healthy cells of the lung, increasing ICAM1 secretion and non-small cell lung cancer invasion. 129 In invasive basal-like breast cancer cells, the ubiquitin-editing enzyme A20 promotes tumor migration by monoubiquitinating three lysines of Snail, thereby promoting transforming growth factor-β (TGF-β)-induced epithelial-mesenchymal transition. 130 USP27X expression was positively correlated with Snail. TGF-β-activated USP27X can serve as a deubiquitinating enzyme and stabilize Snail, and decreased USP27X expression leads to the inhibition of TGF-β-induced epithelial-mesenchymal transition and fibroblast activation. 128 In addition, reports have proposed that activated CAFs can be reverted to normal quiescent fibroblasts by targeting downstream signaling molecules, such as C-X-C motif chemokine ligand 12 (CXCL12), CXCR4, and connective tissue growth factor (CTGF). 131 , 132 , 133 Therefore, these results all imply that targeting CAFs can be a future direction for tumor treatment ( Table 3 ). Targeting CAFs is of great significance in regulating the TME, especially with respect to tumor invasion and metastasis. The expression of CAF-related proteins may serve both as a target for subsequent anti-tumor therapy research and as a crucial indicator for judging tumor prognosis. Table 3 Summary of ubiquitination enzyme regulation of targeting proteins. 
Table 3 Enzyme Name Targets Cancer Animal models Reference E3 ligase TRAF4 NOX2/NOX4 Non-small-cell lung cancer Mice 130 A20 TGF-β Basal-like breast cancer Mice 131 DUB USP27x Snail Invasive basal-like breast cancer cells Mice 129 DUB USP18 ATGL Lung cancer cells Mice 136 DUB UCH-L1 COL1A1 Uterine leiomyoma 141 COL3A1 / DUB USP3 COL6A5 COL9A3 Gastric cancer / 140 E3 ligase HRD1 MMP2/9 Colon cancer / 153 MDM2 MMP9 Metastatic breast cancer / 151 TRIM13 MMP9 Clear-cell renal cell carcinoma / 152 FBXW2 MMP2/9 Lung cancer / 146 UCH-L1 MMP1 Brain glioma / 145 DUB OTUD7B TRAF3 Lung cancer Mice 154 USP15 MMP3 Non-small cell lung cancer Mice 149 Previous reports have only reported the link between adipocytes and obese patients, but new research has shown that some biomarkers in the adipose tissue of cancer patients can serve as an indicator of cancer characteristics, thereby suggesting a crucial link between tumor cells and adipocytes. 134 Although a substantial gap exists in the study of the role of ubiquitination in the interaction between adipocytes and tumor cells, the regulatory role of ubiquitination in lipid metabolism is clear. Ubiquitin-specific proteases, such as USP18, can promote the growth of lung cancer cells by inhibiting the degradation of adipose triglyceride lipase and promoting lipolysis and fatty acid oxidation 135 ( Table 3 ). Targeting ubiquitination to regulate adipocytes and lipid metabolism and ultimately exert anti-tumor effects may be the future research direction. Because the extracellular matrix (ECM) is rich in proteins such as collagens, matrix metalloproteins, and fibronectin. It maintains the overall environmental stability of the TME. 136 Some ubiquitin proteins can promote tumor proliferation by regulating the protein stability in the ECM and building a “highway” for the rapid migration of tumor cells. Collagen (COL) is the largest protein family in the ECM. As a major component involved in maintaining the ECM framework, collagen is a key player in maintaining ECM stability. 137 Most current studies on collagen have targeted cell fibrosis, but a few studies have reported ubiquitination-mediated regulation of collagen that promotes tumor cell migration. 138 COL9A3 and COL6A5 are members of the collagen family . The deubiquitination enzyme USP3, an essential mediator regulating oncogenic activity both in vitro and in vivo , can deubiquitinate COL9A3 and COL6A5 in gastric cancer cells. The elevated USP3 expression can affect the abundance of COL9A3 and COL6A5, thereby promoting tumor proliferation and migration of gastric cancer cells. 139 Moreover, according to a new report, UCH-L1 regulates cancer cell migration and contraction by regulating the stability of COL1A1 and COL3A1 proteins. 140 Figure 5 The UPS regulates the extracellular matrix in the TME. COLs and MMPs metalloproteinase are important proteins in ECM, which are regulated by ubiquitin enzymes during cancer progression. The level of COLs and MMPs can be regulated by UPS such as HCHL-L1, USP3, and mdm2. FBXW2 can regulate MMP2/9 protein stabilization through β-Trcp/FBXW2/SKP2 signaling and promote tumor cell proliferation. FBXW2 can also cause drug resistance during clinical treatment by modifying the ubiquitination of P65 protein. 
ECM, extracellular matrix; COL, collagen; MMP, matrix metalloproteinase; UCH, ubiquitin carboxy-terminal hydrolases; USP, ubiquitin-specific protease; OTU, ovarian tumor proteases; FBXW, F-Box and WD repeat domain containing; TRIM, tripartite motif-containing; SKP, S-phase kinase-associated protein; TRAF, tumor necrosis factor receptor-associated factor; SKP, S-phase kinase-associated protein. Fig. 5 The metalloproteinase family also occupies a large proportion of the ECM. This protease family can hydrolyze most proteins in the ECM, and even some cytokines and chemokines, thereby promoting tumor cell growth. 141 , 142 E3 ubiquitinase-regulated matrix metalloproteinases (MMPs) are found in most cancers. For example, RING E3 ubiquitin ligase and HECT ubiquitin ligase are involved in regulating MMP stability and thus affect tumor development. 143 One report for the first time identified MMP-1, UCHL1, and the 20s proteasome in patient plasma as markers for glioma. However, it could not clarify the specific regulatory relationship among MMP-1, UCHL1, and the 20s proteasome 144 . FBXW2, a RING E3 ubiquitin ligase, serves as a vital regulator in lung cancer. FBXW2 promotes MMP2, MMP7, and MMP9 expression by forming the β-Trcp/FBXW2/SKP2 axis with other ubiquitin ligases such as β-Trcp and SKP2. 145 , 146 , 147 The latest report proposes that FBXW2 overexpression in breast cancer leads to p65 ubiquitination, eliminating the effect of p65 resistance on paclitaxel use. 146 In other tumors such as non-small cell lung cancer, USP15 has been reported to be positively associated with MMP3. 148 In addition to regulating p53 protein stability, MDM2 also regulates MMP9 protein stability in ECM. There is an association between MDM2 expression in prostate cancer and the expression of MMP family proteins, especially MMP9, which promotes tumor cell migration by balancing pro-angiogenic mechanisms. 149 Moreover, MDM2 has also been shown to down-regulate the abundance of MMP3, MMP10, and MMP13, with a role in inhibiting the invasion of breast cancer cells. 150 TRIM13 can inhibit clear-cell renal cell carcinoma invasion by down-regulating MMP9 expression. 151 HRD1 promotes the proliferation and migration of colon cancer. The expression of this ubiquitin ligase was found to be higher in cancer cells than in other cells, and the expression of MMP2 and MMP9 was also elevated. However, the specific mechanism of how HRD1 regulates MMP2 and MMP9 is still unclear. 152 In addition, in lung cancer cells, LCL161 drugs could up-regulate the expression of MMP9 protein and thus induce cancer cell migration. OTUD7B inhibits the activation of NF-κ B by deubiquitinating TRAF3, which in turn promotes the transcription of MMP9, thereby exerting an inhibitory effect on the migration of lung cancer cells. 153 Although many inhibitors regulating ubiquitination have been screened out, very few drugs are truly applied for clinical therapeutic usage. MG132, a modified version of the first proteasome inhibitor, was widely investigated in most laboratories for proteasome inhibition. 154 Other proteasome inhibitors, such as bortezomib, carfilzomib, and ixazomib, were successively developed. They received FDA approval for clinical treatment, where the drugs exhibited good results in the treatment of various malignant tumors, especially multiple myelomas. 155 Bortezomib, the first proteasome inhibitor discovered, was developed and exploited in the clinical treatment of multiple solid tumors and hematology tumors. 
This inhibitor blocks the proteolytic function of the 26S proteasome complex by covalently binding to the β5 subunit of the 20S proteasome. 156 Clinically, bortezomib can be used alone or in combination with other chemotherapeutic drugs. For example, in a phase 2 clinical trial in multiple myeloma, the complete response/stringent complete response rate improved after treatment with the bortezomib-cyclophosphamide-dexamethasone combination. 157 The poor solubility of bortezomib, owing to its chemical structure, makes the clinical translation of this inhibitor difficult despite its excellent therapeutic efficacy. Second, because of the strong toxicity of bortezomib, patients experienced vomiting, nausea, poor mental state, and even abnormal sensory symptoms during clinical trials. 158 , 159 , 160 , 161 Finally, the inhibitor may also lead to drug resistance because the binding of bortezomib to the β5 subunit of the 20S proteasome inhibits the binding of the β5 subunit to other subunits. 162 Therefore, bortezomib-based inhibitors may be further improved in future drug development. Subsequently, carfilzomib and ixazomib were developed in 2012 and 2015, respectively, in part to address the drug resistance that arises during treatment. Clinical data on the use of these inhibitors have also been reported. 163 Of note, E3 ubiquitin ligases are among the most crucial components of the UPS, as they guarantee the highly specific degradation of substrate proteins. Developing inhibitors targeting these ligases can maximize a drug's function. For example, MDM2-targeting inhibitors have been developed to block the binding of the MDM2 N-terminal domain to the peptide segment of p53. 164 Nutlin-3a and its derivatives play pivotal roles in inhibiting the growth of hematological malignancies, glioblastoma, and acute myelocytic leukemia cells because their structure resembles the p53 peptide, allowing them to compete with p53 for binding to MDM2. 165 , 166 Other inhibitors targeting MDM2, such as AMG-232 (KRT-232), APG-115, and Brigimadlin, have also been reported recently. 167 , 168 , 169 PROTAC is a targeted UPS-based technology for regulating target protein degradation. Rather than directly inhibiting an E3 ubiquitin ligase, a PROTAC recruits one: one end of the molecule binds the target protein and the other end binds an E3 ubiquitin ligase, forming a ternary complex of target protein, PROTAC, and E3 ligase, thereby achieving degradation of the target protein. 170 This technology has the advantage of reducing drug resistance and toxicity. 171 It has shown good effects in treating various cancers. For example, in the treatment of triple-negative breast cancer, a PROTAC targeting the MDM2-p53 axis can significantly improve the survival of tumor-bearing mice. 172 Although multiple E3 ubiquitin ligases have been discovered, few are exploited by PROTACs. Such molecules only target classical proteins such as VHL and MDM2, 173 which means that there are still limitations in tumor treatment. We look forward to the development of PROTACs that recruit more types of ligases in the future. The development of DUB inhibitors is another important avenue for cancer therapy. Some inhibitors are broad-spectrum and can target multiple types of DUBs. For example, B-AP15, a DUB inhibitor, can address the problem of resistance arising during bortezomib treatment. 
It binds to the 26s proteasome to inhibit the function of the deubiquitinating enzymes USP14 and UCHL5. 174 Another inhibitor, VXL1570, also inhibits the functions of USP14 and UCHL5, which when used alone caused tumor reduction in Waldenstrom's macroglobulinemia tumor-bearing mice. Both the aforementioned inhibitors combined with bortezomib or ibrutinib could kill Waldenstrom's macroglobulinemia cancer cells. Because of the difference in the chemical structures of the two inhibitors, the water solubility of VXL1570 was better than that of BAP15, which resulted in a higher stability of VXL1570 in the patient's body. VXL1570 is approved for use in clinical trials. However, two patients with multiple myeloma developed severe exhalation insufficiency and diffuse pulmonary infiltration due to the severe toxicity and side effects of VXL1570. Thus, the clinical experiment was stopped when the patients died during phase I treatment despite the advantages of a broad spectrum of the inhibitors. 175 The development of high-specificity of inhibitors is the focus in tumors. 176 In the past few decades, the regulatory role of UPS in tumor progression has been extensively studied, especially to determine its impact on the biological behavior of tumor cells themselves and the shaping of the tumor immune microenvironment by tumor cells. Here, we retrospect the regulatory effects of USP on tumor cells, immune cells, stromal cells, and ECMs. This enhances our understanding of ubiquitination and provides a basis for further research on tumor occurrence and development and the development of ubiquitination-targeting anti-tumor drugs. In the study of UPS and tumors, numerous studies have reported the important role played by UPS in tumor cells, T cells, and tumor-related macrophages. However, most current research is limited to the effect of UPS on one cell type. The TME contains multiple cells, which often results in unpredictable other effects in the organism. Only considering its regulatory effects on one or more cells when developing targeted UPS drugs is not appropriate as other unpredictable effects are often observed during clinical treatment. UPS regulates multiple components of tumors and ultimately affects tumor progression. It regulates the cycle, energy metabolism, and protein molecule expression of tumor cells by regulating the ubiquitination and deubiquitination of target proteins. It also regulates the interaction between tumor cells and other cells as well as the function of immune cells and interstitial cells other than immune cells. However, a close synergistic relationship exists between ubiquitination regulation and other post-translational modifications. 80 Other post-translational modifications may play a regulatory role in protein ubiquitination. Moreover, the protein ubiquitination level can affect other post-translational modification processes. This suggests that attention must be paid to this point in future research. UPS-regulated targeting protein stability reported in some studies is only limited to changes in the protein level, but the specific mechanism remains unclear. Based on the characteristics of the UPS system, the development of related inhibitors such as PROTAC has become the recent research focus. 44 This type of inhibitor can hydrolyze proteins with the help of the UPS system, which causes the pathological protein to be tagged with ubiquitination, thereby achieving target protein degradation and tumor treatment. 
However, toxicity- and specificity-related concerns of these inhibitors need to be solved. In addition, this article only describes ubiquitination- and deubiquitination-associated enzymes. Some proteases also possess the function of ubiquitination modification. Such a modification is called ubiquitination-like modification. This type of modification also plays a pivotal role in regulating tumor development and needs to be explored. Yulan Huang and Yuan Gao conceived the idea of the manuscript and wrote it. Zhenghong Lin and Hongming Miao revised the manuscript. All authors read and approved the final manuscript. This work was supported in part by the 10.13039/501100001809 National Natural Science Foundation of China and the Chongqing Fund for Outstanding Youth (China) . All authors declared no conflict of interests. | Study | biomedical | en | 0.999996 |
PMC11697073 | The publication of the ImageNet classification with deep networks in 2012 1 marked a critical turning point for modern AI, particularly deep learning, and its significant impact on healthcare. This advancement enabled the use of medical data (text, images, etc.) with ground truth for tasks like clinical decision-making, diagnosis, prognosis, and patient management. Late 2022 witnessed another major leap forward with the introduction of large language models (LLMs) and large visual models (LVMs) to the public. 2 These models possess broader capabilities unlike specialized AI models designed for specific tasks. Building upon these developments, the recent proposal of Foundational Models 2 holds a great potential to further revolutionize healthcare. By combining LLMs and LVMs, these models (Foundational AI Models) are envisioned as highly trained and adaptable learning tools capable of understanding and working with different medical data, including text reports, images, clinical and pathological data, and even scientific literature. In other words, Foundational AI models are large-scale, general-purpose models trained on vast amounts of data that can be adapted to a wide variety of specific tasks. Unlike traditional AI models, which are designed for a single task or domain, foundational models learn broad patterns and representations from diverse data sources, allowing them to generalize across many different applications. 3 A key characteristic of foundational models is their lack of pre-programmed functionality for a specific task. This allows them to extract and learn broader and complex patterns/features and fundamental relations within the healthcare domain, making them flexible and adaptable. Consequently, they can be fine-tuned and applied to various tasks, ultimately serving as the foundation for building diverse, specialized AI tools within healthcare. Computer-aided diagnosis (CAD) systems are in our life for many decades now, and they primarily focus on detecting abnormalities in medical data to aid diagnosis. They rely on smaller and more specific datasets tailored to a particular disease or imaging modality. Often, they use rule-based, or machine learning algorithms (from pre-deep learning era) trained on labelled data for specific tasks (eg, detecting abnormalities in a specific type of scan). CAD has its roots in clinical practice, closely tied with the “Oslerian” strategy. As an example for disease-specific approach in clinical medicine, Dr William Osler contributed significantly by focusing on specific organs and their diseases. This “Oslerian” method led to categorizing conditions like diabetes into different types, aiding treatments decisions but potentially neglecting individual patient needs. 4 In addition to disease-specific approaches, medicine has come a long way since the days of Hippocrates, who emphasized treatment tailored to individual needs, much like the personalized medicine we strive for today. This focus on the whole person , rather than just the disease, has been a consistent thread throughout history, albeit with significant shifts along the way. For an evolved approach as an example, the common approach in healthcare before precision medicine was “one-size-fits-all” or standardized medicine. This model relied on treating patients based on generalized protocols and population-based averages, rather than individual patient characteristics. 
The shift toward personalized medicine emphasized the importance of tailoring medical treatments to individual patients based on their genetic, environmental, and lifestyle factors. This personalized approach aligns perfectly with the historical emphasis on treating the whole person and represents a significant leap forward from the disease-centric Oslerian approach. 5 Foundational models have its roots in this strategy as they aim to understand underlying disease processes and predict patient outcome. Foundational models employ deep learning architectures (more advanced than conventional machine learning strategies) to learn more complex, non-linear relationships within the data, enabling them to identify nuanced patterns and make predictions across various medical tasks. Furthermore, foundational models function as general-purpose models that can be adapted to various medical tasks via fine-tuning or prompting. To do so, these models need to be trained on massive and diverse datasets encompassing various medical data types (patient records, imaging scans, genetic information). Table 1 illustrates the major differences between classical CAD systems and foundational models. Histopathology analysis : While many CAD systems focus on task-specific model development for pathology image analysis (eg, detecting certain cancer types), foundational AI models provide general-purpose histopathology analysis for multiple tasks, yielding better detection and diagnosis rates. 6 Lung nodule malignancy prediction : CAD systems have moderate-to-high success in predicting lung nodule malignancy in CT scans. Foundational AI models significantly improve these rates by being pre-trained on large amounts of unlabelled data, enabling them to learn more general and robust data representations. They can potentially uncover new biomarkers due to their capacity to integrate larger and multimodal datasets, whereas current CAD systems are limited. 3 Multi-modality data integration : Many CAD systems accept single-modality data and have limited capacity for combining multiple data types. Foundational AI models have the flexibility and power to combine imaging and non-imaging data such as clinical information, lab results, reports, and molecular data, offering a holistic understanding of a patient's condition. For instance, integrating a patient’s CT, MRI, and endoscopic ultrasound imaging with family history and laboratory data can, hypothetically, more accurately assess the risk of developing pancreatic cancer—a feat not achievable with conventional CAD systems. Both foundational AI models and modern medicine share a crucial understanding: single-faceted approaches have limitations. Just as doctors rely on diverse data points like blood tests, imaging, family history, Foundational AI models excel at processing and analysing varied data types such as genetic, environmental, behavioural, and more. 7 This comprehensive approach allows both disciplines to paint a more complete picture of the patient or the disease, leading to improved diagnosis, treatment, and even prevention. The similarities extend beyond data analysis. Both Foundational AI models and modern medicine strive for personalized care. By considering individual characteristics and circumstances, they aim to tailor interventions for maximum effectiveness and minimize side effects. This shift toward personalized medicine represents a significant paradigm shift, moving away from the “one-size-fits-all” mentality of the past. 
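To make the multi-modality integration point above more concrete, the following is a minimal, hypothetical sketch (Python/PyTorch) of a late-fusion model that combines an image-derived feature vector with tabular clinical features to produce a single risk score. The layer sizes, feature dimensions, and the idea that a frozen foundation-model encoder supplies the image features are illustrative assumptions, not a description of any specific published system.

```python
import torch
import torch.nn as nn

class LateFusionRiskModel(nn.Module):
    """Toy late-fusion model: an image-feature branch and a clinical/tabular
    branch are embedded separately, concatenated, and mapped to one risk score."""
    def __init__(self, img_feat_dim=512, clin_feat_dim=16, hidden=128):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Linear(img_feat_dim, hidden), nn.ReLU())
        self.clin_branch = nn.Sequential(nn.Linear(clin_feat_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)  # single logit, e.g., malignancy risk

    def forward(self, img_feats, clin_feats):
        fused = torch.cat([self.img_branch(img_feats),
                           self.clin_branch(clin_feats)], dim=1)
        return torch.sigmoid(self.head(fused))

# Random stand-in features: in practice a frozen foundation-model encoder would
# supply img_feats, and lab values/history would supply clin_feats.
model = LateFusionRiskModel()
risk = model(torch.randn(4, 512), torch.randn(4, 16))
print(risk.shape)  # torch.Size([4, 1])
```

In practice, a foundational model would be fine-tuned or prompted rather than trained from scratch, and the fusion strategy (early, late, or attention-based) would be chosen to suit the available data.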
Most current AI models in medicine rely on limited, single-type data for risk predictions and diagnoses, making them imperfect yet still valuable. In contrast, foundational AI models go beyond these limitations. They can analyse multiple data types, including genetic, clinical, pathology, imaging, lifestyle, and information from other organs. This enables them to predict cancer risk with greater accuracy, tailor screening approaches to individual needs, and suggest personalized treatment options based on a comprehensive understanding of the patient. Furthermore, unlike the static nature of current models, foundational AI models can track changes over time using multi-modal data. This allows for both adaptive care plans and intervention, as well as facilitating collaborative decision-making through interaction with physicians. Taking an even more holistic approach, these models can incorporate socio-economic and environmental factors into their analysis, providing a more comprehensive picture of individual health. Foundational AI models hold a great potential, but limitations must be acknowledged. The future of medicine lies in embracing the best of both worlds: the insights of modern medicine and the power of Foundational AI models. This synergy can create a healthcare system that is truly personalized , data-driven , and human-centred . By addressing the challenges of bias, interpretability, and data scarcity and diversity, we can ensure that AI serves as a tool to augment, not replace, the human touch in healthcare. To mitigate data bias and ensure generalizability, it is imperative to curate diverse and representative training datasets. Collaboration across institutions and regions can facilitate the pooling of data, helping to overcome scarcity and promote inclusivity. Addressing interpretability challenges involves developing methods to elucidate the decision-making processes of complex models, fostering transparency and trust among clinicians and patients. Investing in computational resources and infrastructure is crucial to democratize access to advanced AI tools, preventing the widening of existing disparities. Additionally, establishing rigorous standards for model validation and encouraging a culture of thoroughness over speed in research publications will enhance the reliability of AI applications in medicine. The future of medicine lies in the synergy between human expertise and advanced AI technologies. By fostering collaboration, promoting ethical practices, and prioritizing patient-centred outcomes, we can harness the full potential of foundational AI models. This integrated approach promises not only to improve individual patient outcomes but also to contribute to a more equitable and effective healthcare system for all. So, let us strive for a future where technology and human expertise work in tandem to create a more holistic and effective approach to medicine. | Review | biomedical | en | 0.999996 |
PMC11697094 | The skin accounts for approximately 16% of the body weight and is the largest organ of the body. 1 Once the skin barrier is damaged, the body initiates precise regulation of wound contraction, hemostasis, inflammation, angiogenesis, granulation tissue proliferation, and epithelial remodeling to promote wound healing. 2 , 3 Macrophages play a crucial role in the inflammatory response of wound tissue, and their plasticity allows them to regulate both tissue damage and repair, while macrophage-mediated inflammatory responses are closely associated with wound healing. 4 Macrophages polarized by environmental signals can be broadly classified into two main groups: classically activated macrophages with pro-inflammatory properties (M1), whose prototypical activating stimuli are interferon-gamma and lipopolysaccharide, and alternatively activated macrophages with anti-inflammatory and wound-healing functions (M2), further subdivided into M2a (after exposure to interleukin (IL)-4 or IL-13), M2b (immune complexes in combination with IL-1β or lipopolysaccharide), and M2c (IL-10, transforming growth factor-beta (TGF-β), or glucocorticoids). 5 , 6 The phenotype of macrophages is influenced by the microenvironment of the wound and evolves during the healing process from a pro-inflammatory (M1) profile in the early stages to a less inflammatory, pro-healing (M2) phenotype in the later stages. 7 M1 macrophages dominate in the early stages of wound healing, displaying phagocytic activity and secreting pro-inflammatory cytokines such as IL-1β, IL-6, IL-12, and tumor necrosis factor-alpha (TNF-α), as well as oxidative metabolites, to remove pathogens, tissue debris, and senescent cells from the wound surface. 8 In the middle to late stages, the M0 macrophage phenotype is reprogrammed to an anti-inflammatory M2 phenotype, secreting anti-inflammatory cytokines such as IL-4 and IL-10 to suppress the local inflammatory response and producing vascular endothelial growth factor (VEGF) to promote angiogenesis and vessel stabilization. 9 Impaired polarization of M0 macrophages to the M2 phenotype, reduced M2 macrophage numbers, and diminished anti-inflammatory and angiogenic capacity are reasons why wounds can stall in the inflammatory phase and fail to heal over the long term. 10 Therefore, effective regulation of the polarization of M0 macrophages to M2-type macrophages, which exert anti-inflammatory effects and promote angiogenesis, will significantly improve wound healing. Mesenchymal stem cells (MSCs), an important endogenous cellular reservoir for tissue repair and regeneration, can effectively respond to inflammation and regulate macrophage reprogramming. 11 Currently, the ability of tissue engineering to promote wound healing has been investigated mainly through the secretion of paracrine growth factors, immune factors, chemokines, and extracellular vesicles by MSCs. 12 Recent studies have shown that extracellular vesicles produced by bone marrow-derived MSCs can contribute to tissue repair by promoting angiogenesis under a variety of pathological conditions, including skin wound healing, acute kidney injury, and myocardial infarction. In addition, they are widely used as drug delivery systems for cardiovascular diseases, neurodegenerative diseases, liver diseases, lung diseases, and kidney diseases. 
13 , 14 , 15 Extracellular vesicles can be divided into three subgroups (exosomes, microvesicles, and apoptotic vesicles) and play a role in intercellular communication by transmitting complex signals. 16 Apoptotic bodies (ABs) are the largest extracellular vesicles, with a diameter of approximately 50–5000 nm, and are rich in DNA, microRNA, mRNA, proteins, and organelles. 17 After bone marrow-derived MSCs undergo apoptosis, macrophages rapidly respond to apoptotic signals, recognize and take up apoptotic vesicles within a short period, and trigger the polarization of M0 macrophages to the M2 phenotype, while M2 macrophages further enhance the function of fibroblasts and synergistically promote skin wound healing. 18 Therefore, the ABs of MSCs may serve as promising candidates for the development of cell-free therapies and provide new strategies for the treatment of cutaneous wounds. To further achieve the controlled release of key bioactive molecules to M0 macrophages within the wound surface and drive the polarization of M0 macrophages to M2 macrophages, the selection of suitable wound dressings to load therapeutic factors is a promising strategy. 19 Scaffolds serve as a means of restoring the morphology and function of diseased, damaged, and lost tissues by acting as an extracellular matrix that supports cells and their fate and function. 20 Various natural and synthetic biopolymers can be used to fabricate such scaffolds, including natural biomacromolecules such as silk fibroin, collagen, gelatin, chitosan, and hyaluronic acid, and synthetic biopolymers such as polyethylene glycol, polycaprolactone (PCL), poly(lactic-co-glycolic acid), and poly(l-lactide). 21 Previous studies have shown that loading MSC-derived extracellular vesicles on heparin-modified PCL scaffolds inhibits thrombosis and calcification in the treatment of cardiovascular disease, thereby improving graft patency and enhancing endothelial and vascular smooth muscle regeneration while inducing M1 macrophage polarization to M2c macrophages. 22 MSC exosomes loaded on PCL scaffolds modified with S-nitrosoglutathione reduce the expression of proinflammatory genes in treated macrophages and accelerate osteogenic differentiation in bone defects. 23 In our study, we prepared PCL scaffolds using an electrospinning technique, which is thought to better mimic the physical structure of the extracellular matrix and to provide suitable mechanical properties for the delivery of apoptotic vesicles as a wound dressing. 24 To clarify the specific regulatory role of MSC-derived apoptotic vesicles, we investigated the mechanism of action of MSC-AB-loaded PCL scaffolds in regulating macrophage polarization for wound healing in a mouse wound model, providing an experimental basis and theoretical rationale for the development of new drugs. Primary MSCs were derived from bone marrow-derived stem cells (BMSCs) harvested from C57BL/6 mice. Bone marrow was obtained from the femurs and tibias of C57BL/6 mice and was washed and filtered to form single-cell suspensions. Primary BMSCs were cultured in Dulbecco's modified Eagle medium (DMEM) (Gibco, Grand Island, NY, USA) supplemented with 10% fetal bovine serum (Gibco) and 1% penicillin/streptomycin (Invitrogen, Carlsbad, CA, USA) at 37 °C in a 5% CO 2 cell culture incubator. The medium was changed every 2–3 days. 
The adherent cells were digested with 0.25% trypsin (MP Biomedicals, Irvine, CA, USA) and passaged in vitro , and third- and fourth-generation BMSCs were used for subsequent experiments. A mouse macrophage line (RAW264.7) and 293T cells were obtained from the CAS Cell Bank. The NIH-3T3 mouse fibroblasts used for the fibroblast scratch migration experiments were obtained from Procell Life Science & Technology Co., Ltd. (Wuhan, China). After BMSCs were cultured in serum-free DMEM for 24 h, staurosporine (0.5 μM) (MedChemExpress, NJ, USA) was added for 12 h to induce apoptosis of MSCs. The medium was then collected and centrifuged at 300 g for 10 min to remove cells and debris. After two repeated centrifugations, the supernatant was collected and further centrifuged at 3000 g for 30 min to concentrate the ABs into pellets, which were then resuspended in 1× phosphate buffer saline solution (PBS) and stored at −80 °C for subsequent experiments. Protein concentrations were measured using the BCA Protein Assay Kit (Beyotime Biotechnology, Shanghai, China). The purified ABs were characterized by western blotting using primary antibodies against caspase-3, CD9, CD63, GAPDH, and cleaved caspase-3 (rabbit mAb, Cell Signaling Technology, Boston, MA, USA). Dynamic light scattering analysis was performed using a Zetasizer Nano ZSE (Malvern Panalytical, Malvern, England, UK). The morphology of the ABs was observed by scanning electron microscopy (Hitachi, TKY, Japan). One gram of PCL solid particles was dissolved in 10 mL of dichloromethane to form a 10% (w/v) PCL/dichloromethane solution, which was stirred continuously at room temperature for 2 days until the solution became clear and transparent. The electrospun emulsion was drawn into a 10 mL syringe (with a 21G stainless steel flat-tipped dispensing needle) and placed on a microinjection pump (LongerPump, Baoding, China) for electrospinning, which was performed in a fume hood and collected from the holder via a homemade drum collector. The electrostatic spinning parameters were set as follows: injection rate of 1 mL/h, voltage of 15 kV, receiver speed of 150 rpm, and reception distance of 12–15 cm. The resulting fiber material was dried well in a fume hood and left at room temperature. PCL scaffolds were dried at room temperature for 1 day. All scaffolds were cut into 10 × 10 mm squares, fixed to the sample stage with double-sided carbon conductive adhesive, and examined by field emission scanning electron microscopy (Hitachi) after 40 s of gold spraying under vacuum with the acceleration voltage set to 10 kV. The scaffold diameters were statistically analyzed by ImageJ software (NIH, Bethesda, MD, USA) to characterize the PCL fiber scaffold morphology. The PCL scaffold material was also cut into 15 mm diameter circles to fit 24-well culture plates. The materials were sterilized using Co irradiation at a radiation dose of 10 kGy. After sterilization, the materials were fixed at the bottom of the 24-well plates, washed three times with PBS, and then incubated in PBS for 24 h. Subsequently, mouse bone marrow MSC ABs were inoculated at 50 μg/mL on the surface of the materials and incubated in a cell culture incubator at 37 °C for 12 h. The scaffold materials were incubated by scanning electron microscopy (Hitachi) to observe the morphology of the loaded BMSC-AB-PCL fibrous scaffold as a means of characterization. 
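As a small illustration of the fiber-diameter characterization step described above, the snippet below computes the mean and sample standard deviation of diameters measured in ImageJ; the numerical values are placeholders for illustration, not measurements from this study.

```python
import numpy as np

# Hypothetical fiber diameters (micrometres) measured in ImageJ and exported
# as a plain list; these numbers are placeholders, not measured data.
diameters_um = np.array([1.8, 2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.3])

mean_d = diameters_um.mean()
sd_d = diameters_um.std(ddof=1)  # sample standard deviation
print(f"Fiber diameter: {mean_d:.2f} ± {sd_d:.2f} µm (n = {diameters_um.size})")
```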
ABs of mouse bone marrow MSCs were inoculated at 50 μg/mL on the surface of PCL fibrous scaffolds and incubated in an incubator at 37 °C for 12 h. An equal amount of suspension was added to PBS and collected every 12 h. After the protein was measured with a BCA protein assay kit, an equal amount of the protein concentration was measured with a BCA protein assay kit (Beyotime Biotechnology), and then the protein was added to the scaffolds until the protein concentration was less than 5 μg/mL. According to the manufacturer's protocol, the cytoskeleton green fluorescent dye phalloidin (Thermo Fisher Scientific, Inc., Alexa Fluor 488, Invitrogen, USA) and exosome membrane red labeling dye (1′-dioctadecyl-3,3,3′,3′-tetramethylindole dicarbapenem, DiD) (Thermo Fisher Scientific) were used to label the purified ABs. ABs were incubated in 5 μg/mL DiD staining solution at 37 °C for 30 min, washed with PBS, and centrifuged at 3000 g for 30 min twice. The unattached dye was removed using an ultrafiltration tube (300 kDa, Sigma–Aldrich, Saint Louis, MO, USA). RAW264.7 cells were inoculated in 35 mm confocal dishes at a density of 1 × 10 6 and then cocultured with different concentrations of DiD-labeled ABs in a 37 °C incubator for 4 h and 6 h. After removal at different time points and fixation with 4% paraformaldehyde, cytoskeletal staining and nuclear staining were performed, and the cells were placed under a laser confocal microscope (Olympus SpinSR10, Shinjuku, TKY, Japan) for photographic observation. The cytotoxicity of PCL-ABs and PCL to RAW264.7 cells was evaluated according to the instructions of the cell counting kit-8 (CCK-8; Beyotime Biotechnology). Cells (1 × 10 4 cells/well) were first inoculated in 96-well plates and cultured overnight in DMEM. Next, BMSC-ABs (0, 5, 10, 15, 25, 50, 100, and 200 μg/mL) were added to the cells and incubated at 37 °C with 5% CO 2 for 24 h. Then, the cells were washed with PBS and incubated with 10% CCK-8 solution at 37 °C for 4 h. Finally, the cells were incubated for 4 h using an enzyme marker (Bio-Rad, Hercules, CA, USA) to measure the absorbance at 450 nm. Murine-derived RAW264.7 macrophages were treated with purified ABs, and after 0 h, 24 h, 48 h, and 72 h of culture, the cells were lysed, the proteins were extracted, the protein concentrations of the cells and ABs were determined by BCA protein assay, and the expression of the respective inflammatory factor proteins was detected by western blotting. Equal amounts of total proteins were separated by 4%–10% SDS‒PAGE and transferred to PVDF membranes, which were blocked with 5% skim milk at room temperature for 60 min and then incubated with primary antibodies (GAPDH, CD206, arginase-1, and VEGF) (rabbit mAb, Cell Signaling Technology) at 4 °C overnight. After washing with Tris-buffered saline with Tween® 20, the membranes were incubated with secondary horseradish peroxidase-coupled goat anti-rabbit IgG (Cell Signaling Technology) at room temperature for 2 h. The protein bands were visualized with enhanced chemiluminescence (Thermo Fisher Scientific). After treatment of the purified ABs with murine-derived RAW264.7 macrophages, the macrophages stained after 0 h, 24 h, 48 h, and 72 h of culture were analyzed by flow cytometry (Thermo Fisher Scientific). 
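For the CCK-8 viability assay described above, absorbance at 450 nm is typically converted to percent viability with a blank-corrected ratio against untreated control wells. The sketch below illustrates that calculation; the formula and the absorbance values are assumptions for illustration and are not stated explicitly in the Methods.

```python
import numpy as np

def cck8_viability(a_sample, a_control, a_blank):
    """Percent viability from CCK-8 absorbance at 450 nm.
    Blank = medium plus CCK-8 without cells; control = untreated cells.
    Blank-corrected ratio is the kit's usual calculation (assumed here)."""
    return (np.asarray(a_sample, dtype=float) - a_blank) / (a_control - a_blank) * 100.0

# Placeholder absorbance values for one AB concentration (triplicate wells).
print(cck8_viability([0.82, 0.85, 0.80], a_control=0.90, a_blank=0.10))
```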
The cells were stained and analyzed using FITC-conjugated anti-mouse/human CD11b mAb (Blue Laser 488 nm), PE-conjugated anti-mouse CD86 mAb (Blue Laser 488 nm, Green Laser 532 nm/Yellow‒Green Laser 561 nm), Brilliant Violet 421-conjugated anti-mouse F4/80 mAb (Violet Laser 405 nm), and BrilliantViolet 650-conjugated anti-mouse CD206 mAb (Violet Laser 405 nm) (BioLegend, Diego, CA, USA) according to the manufacturer's instructions. All the data were analyzed using FlowJo software (Treestar Inc., Leonard Herzenberg, Palo Alto, CA, USA). Total RNA was extracted from BMSC-derived ABs using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). RNA extraction was followed by DNA digestion with DNaseI. RNA quality was determined using a NanodropTM OneC spectrophotometer (Thermo Fisher Scientific, Inc.) to determine the A260/A280 ratio. RNA integrity was confirmed by 1.5% agarose gel electrophoresis. The quality of the RNA was quantified with a Qubit 3.0 (Thermo Fisher Scientific, Inc.) using a QubitTM RNA wide range detection kit (Life Technologies). Strand RNA sequencing libraries were prepared using the Ribo-Off rRNA Depletion Kit (mouse) and the KC DigitalTM Strand mRNA Library Preparation Kit (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. Library products corresponding to 200–500 bp were enriched, quantified, and finally sequenced on a NovaSeq 6000 sequencer (Illumina) using the PE150 model. The sequences of CCL-1 (C-C motif chemokine ligand 1) containing the wild-type (WT) or mutant (Mut) binding site of miR-21a-5p were designed and synthesized by GenePharma (Shanghai, China). 293 T and RAW264.7 cells were cotransfected with the corresponding plasmids and miR-21a-5p mimics/miR-NC or miR-21a-5p inhibitors/inh-NC with Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA). To construct a luciferase reporter gene vector containing the CCL-1 promoter, full-length CCL-1 promoters containing wild-type or mutant CCL-1 were cloned and inserted into pGL3-basic vectors (Genecreate, Wuhan, China) and subsequently cotransfected with or without the miRNA overexpression vector. After 48 h of incubation, the activities of firefly and renilla luciferase were measured using the dual luciferase reporter assay kit (Promega, Madison, WI, USA). The miR-21a-5p inhibitor and inhibitor-negative control (NC) (Shanghai Gene Pharma Co., Ltd., Shanghai, China) were used at a final concentration of 100 nM Lipofectamine 3000 (Invitrogen, Thermo Fisher Scientific, Inc.). The sequences of the inhibitor and negative control are shown in Table 1 . The same conditions were applied for each transfection experiment. After 12 h, the transfection was assessed under a fluorescence microscope, and further experiments were continued at 24 h. Table 1 Primer sequences for miR-21a-5p and the inhibitor. Table 1 DUplexName SenseSeq5'→3′ SenseSeq5'→3′ MW mmu-miR-21a-5p UAGCUUAUCAGACUGAUGUUGA (FAM)AACAUCAGUCUGAUAAGCUAUU 14512.99 mmu-miR-21a-5p inhibitors (Cy3) (mU) (mC) (mA) (mA) (mC) (mA) (mU) (mC) (mA) (mG) (mU) (mC) (mU) (mG) (mA) (mU) (mA) (mA) (mG) (mC) (mU) (mA) 7909.72 Inhibitors-NC (mC) (mA) (mG) (mU) (mA) (mC) (mU) (mU) (mU) (mU) (mG) (mU) (mG) (mU) (mA) (mG) (mU) (mA) (mC) (mA) (mA) 6953.66 After the transfection of the miR-21a-5p inhibitor and CCL-1 receptor antagonist into RAW264.7 macrophages, the expression of CD206, arginase-1, and CCL-1 mRNA was assessed by quantitative PCR at 24 h. Total RNA was isolated using TRIzol reagent (Life Technologies). 
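For the dual-luciferase reporter assay described above, the firefly signal is normalized to the Renilla internal control and then expressed relative to the negative-control condition. The following is a minimal sketch of that normalization with placeholder readings; the specific numbers are illustrative only.

```python
def relative_luciferase(firefly, renilla):
    """Firefly signal normalized to the Renilla internal control for one well."""
    return firefly / renilla

# Placeholder readings: wild-type CCL-1 reporter with miR-21a-5p mimic vs. NC mimic.
wt_mimic = relative_luciferase(firefly=12000, renilla=48000)
wt_nc = relative_luciferase(firefly=30000, renilla=50000)
print(f"Fold change (mimic / NC): {wt_mimic / wt_nc:.2f}")  # < 1 is consistent with direct targeting
```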
According to the manufacturer's instructions (Promega, Madison, WI, USA), single-stranded cDNA was prepared from 1 μg of mRNA using reverse transcriptase with oligomeric dT primers and V-normalized to GAPDH mRNA levels, and the respective inflammatory factor gene expression was determined using the 2 −ΔΔCt method. The primer sequences for the RAW264.7 macrophages are shown in Table 2 . The effect of bone marrow MSC-derived apoptotic vesicles on the reprogramming of M0 macrophages was assessed by western blot analysis using anti-CD206, anti-arginase-1, and anti-CCL-1 primary antibodies and GAPDH as an internal reference at 48 h. Table 2 Primer sequences for quantitative reverse transcription PCR. Table 2 Primers Sequence (5'−3′) CD206-F CTCTGTTCAGCTATTGGACGC CD206-R TGGCACTCCCAAACATAATTTGA Arginase-1-F CGGCAGTGGCTTTAACCTTG Arginase-1-R TTCATGTGGCGCATTCACAG CCL-1-F GATGAGCCACCTTCCCATCC CCL-1-R TGACTGAGGTCTGTGAGCCT GAPDH-F ACTCTTCCACCTTCGATGCC GAPDH-R TGGGATAGGGCCTCTCTTGC The cells were cultured in serum-free DMEM for 24 h, after which 500 μL of medium was collected. The samples were processed using the Bio-Plex Mouse Cytokine 23-Plex Panel Array (Bio-Rad Laboratories, Hercules, CA, USA) and assayed using the Bio-Plex Protein Array System (Bio-Rad Laboratory) according to the manufacturer's instructions. Concentrations were calculated using the following equation: relative concentration = cytokine concentration ÷ total protein concentration. Additionally, the levels of the cytokines TNF-α, TGF-β, von Willebrand factor (vWF), and VEGF were measured using a mouse ELISA kit (BioVision, San Francisco Bay, CA, USA), standard curves were generated according to the manufacturer's instructions, and the concentrations of the factors were determined from the optical density data. NIH-3T3 mouse fibroblasts were inoculated in the lower layer of a Transwell plate (6-well plate, 0.4 μm, Jet Bio-Filtration, Guangzhou, China) and cultured to 100% confluence. RAW264.7 cells treated in the PBS group (M0), ABs group (M0 + ABs), PCL group (M0 + PCL), and PCL + ABs (M0 + PCL + ABs) groups were transferred to the upper layer of the Transwell plate at 30%–40% inoculation density, and when the cell density reached 100%, the old medium was removed, the cells were washed with PBS three times, and the medium was replaced with fresh complete culture medium. NIH-3T3 cells were scratched vertically along the diameter of the 6-well plate using a 200 μL pipette tip, and RAW264.7 cells treated as described above were placed on the upper layer of the Transwell plate for coculturing. After 12 h, the scratch area of each group was recorded, and the change in scratch area was calculated by ImageJ software (NIH, Bethesda, MD, USA). Twelve male (8-week-old) C57BL/6 mice (18–22 g) were purchased from the Animal Experiment Center of Chongqing Medical University (Chongqing, China). The mice were randomly divided into three groups, the PBS group, PCL group, and BMSC-AB-PCL group, with four mice in each group. After anesthesia by intraperitoneal injection of sodium pentobarbital (40 mg/kg), the mice were shaved and a full-thickness skin wound (0.8 cm in diameter) was produced on the back of each mouse. PCLs and PCLs loaded with BMSC-ABs were then placed over the wound surface of the mice. Images of the wounds were taken on days 0, 2, 4, 6, 8, and 10. Changes in wound size were analyzed using Image-Pro Plus software (Media Cybernetics, Rockville, MD, USA). 
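The relative expression calculation referred to above (the 2^−ΔΔCt method with GAPDH as the reference gene) can be sketched as follows; the Ct values are placeholders for illustration, not data from this study.

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method (GAPDH as the reference gene)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Placeholder Ct values for CD206 in AB-treated vs. untreated RAW264.7 cells.
print(ddct_fold_change(24.1, 17.5, 26.8, 17.6))  # > 1 indicates up-regulation
```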
ABs were fluorescently labeled according to the manufacturer's reagent instructions (Cyanine7 NHS Ester, Cy7 NHS; MKBio, Shanghai, China). In vivo fluorescence analysis was performed in wounded C57BL/6 mice. ABs (10 μg/10 μL) or PBS was injected into the subcutaneous tissue of the mice (n = 3), and the fluorescence intensity was measured with an in vivo imaging system (AniView100, BLT, Guangzhou, China). The fluorescence intensity of the region of interest (ROI) was quantified using AniView software (BLT). Mice were sacrificed by intraperitoneal injection of 150 mg/kg sodium pentobarbital on day 10 after establishment of the trauma model, and tissues were collected to observe pathological changes. Tissues from the wound site were collected, fixed in 10% paraformaldehyde, and embedded in paraffin. Pathological changes in the tissues were examined using a hematoxylin-eosin staining kit (Solarbio, Beijing, China). Semiquantitative analysis of hematoxylin-eosin staining was based on the number of follicles and amount of granulation tissue (scores: 0–4; higher scores indicate greater numbers), inflammatory infiltration (scores: 0–2; higher scores indicate less infiltration), and neovascularization (scores: 0–4; higher scores indicate more vascularity). An overall score was calculated, with higher scores indicating better wound recovery. In addition, collagen deposition in the tissue was measured using a Masson trichrome staining kit (Solarbio) and quantified using ImageJ software. Neovascularization was further examined by immunohistochemical staining with an anti-CD31 antibody (Abcam, Cambridge, UK). All quantitative analyses were performed independently by three pathologists who were blinded to group allocation. All data are shown as the mean ± standard deviation. Groups were compared using t-tests or one-way ANOVA, and P values less than 0.05 were considered statistically significant. Graphical analysis was performed using GraphPad Prism 9.0 (GraphPad Software, San Diego, CA, USA). We obtained primary bone marrow MSCs from the femurs and tibias of 6-to-8-week-old male C57BL/6 mice; after the cells were induced with staurosporine in culture for 24 h, apoptosis was detected by flow cytometry, which revealed a 99.94% apoptosis rate compared with the control group. MSC-derived ABs were collected by differential gradient centrifugation. The collected ABs showed a typical vesicular structure under scanning electron microscopy, and dynamic light scattering analysis revealed a particle size of approximately 2.314 μm. The zeta potential of the ABs was −17.8 mV, indicating that they were relatively stable. In addition, western blotting confirmed that the ABs in the collected pellets expressed the same membrane markers (CD9 and CD63) as the source cells and had high levels of cleaved caspase-3. These results indicate that apoptotic vesicles were successfully collected during the induction of apoptosis. Figure 1 Acquisition and identification of bone mesenchymal stem cell-derived apoptotic bodies (BMSC-ABs) and preparation and characterization of polycaprolactone (PCL) fiber scaffolds. (A) Acquisition of BMSC-ABs. (B) Identification of apoptotic cells by flow cytometry. (C) Morphological images of ABs. Left: scanning electron microscopy (SEM) images of ABs (scale bar, 2 μm); right: particle size of ABs measured by dynamic light scattering (DLS). (D) Zeta potential of the ABs. (E) Flow chart of PCL scaffold preparation by biomimetic electrospinning.
(F) SEM image of an electrospun PCL scaffold. Scale bar, 20 μm. (G) SEM images of electrospun PCL scaffolds loaded with ABs. Red arrows indicate ABs. (H) Identification of ABs by western blot. (I) Release rate of AB-loaded PCL scaffolds. Next, we prepared PCL scaffolds by biomimetic electrospinning and observed them by scanning electron microscopy. The extracted MSC-ABs were loaded onto the PCL scaffold, and subsequent scanning electron microscopy showed that the ABs were successfully adsorbed onto the scaffold surface. To determine the release rate of the AB-loaded PCL scaffolds, the scaffolds were placed in PBS and DMEM, and the suspensions were collected every 12 h. The protein concentration was measured by BCA assay; the release rate tended to stabilize after 24 h, and the amount released in DMEM was slightly greater than that in PBS. These results indicate that PCL scaffolds prepared by biomimetic electrospinning can effectively deliver and release apoptotic vesicles. Moreover, we evaluated the efficiency of macrophage uptake of apoptotic vesicles from bone marrow-derived MSCs by incubating macrophages with PCL scaffolds loaded with fluorescent dye-labeled apoptotic vesicles at concentrations ranging from 0 to 100 μg/mL for 4 or 6 h. Confocal microscopy revealed that the attachment and internalization of MSC-derived apoptotic vesicles increased in a dose- and time-dependent manner, with internalization by macrophages stabilizing at an apoptotic vesicle concentration of 50 μg/mL after 6 h. The survival rate of RAW264.7 cells incubated with PCL and PCL + ABs remained at approximately 90%, as determined by CCK-8 assays. Therefore, 50 μg/mL was chosen as the optimal apoptotic vesicle concentration for determining whether the macrophage phenotype could be reprogrammed, and uptake was maximal at 6 h. Figure 2 Polycaprolactone-loaded bone mesenchymal stem cell-derived apoptotic bodies (PCL-BMSC-ABs) guided in vitro polarization of M0 to M2 macrophages (Mϕs). (A) Immunostaining of 0–100 μg/mL PCL-BMSC-ABs incubated for 4 h. Green: cytoplasm; blue: nucleus; red: ABs. (B) Immunostaining of 0–100 μg/mL PCL-ABs incubated for 6 h. Green: cytoplasm; blue: nucleus; red: ABs. (C) Relative fluorescence intensity of 0–100 μg/mL PCL-BMSC-ABs incubated for 4 h and 6 h. n = 3; ∗∗P < 0.01. (D) RAW264.7 cell viability after incubation with PCL-ABs. (E) Western blot analysis of Mϕs reprogrammed toward the M2 phenotype after treatment with 50 μg/mL PCL-ABs. (F) Relative grayscale values from the western blot analysis of Mϕs reprogrammed toward the M2 phenotype with 50 μg/mL PCL-BMSC-ABs. n = 3; ∗∗P < 0.01, ∗P < 0.05. (G) Flow cytometry comparison of CD206 positivity between M1 Mϕs and Mϕs incubated with 50 μg/mL PCL-BMSC-ABs for different periods. To test whether BMSC-ABs could induce M0 macrophages to polarize toward an anti-inflammatory M2 phenotype, we first incubated RAW264.7 cells with 50 μg/mL PCL-BMSC-ABs for 0, 24, 48, and 72 h. Western blot analysis revealed increased expression of arginase-1 and CD206, indicating that BMSC-ABs enhanced the transformation of M0 macrophages toward an anti-inflammatory M2 phenotype; VEGF expression also increased, indicating that BMSC-ABs promoted angiogenesis. Notably, the expression of arginase-1, VEGF, and CD206 was significantly increased in RAW264.7 cells incubated with BMSC-ABs and remained stable at 48 h.
To accurately quantify the extent of M0-to-M2 polarization, we used flow cytometry to compare the CD206 positivity of RAW264.7 cells incubated with 50 μg/mL BMSC-ABs for different durations. Flow cytometry analysis revealed that the percentage of M0 macrophages reprogrammed into M2 macrophages reached 58.3% after incubation with BMSC-ABs for 24 h, increased significantly to 67.6% after 48 h, and stabilized at 69.2% after 72 h. Extracellular vesicles can regulate gene expression at the posttranscriptional level by delivering miRNAs, thereby affecting the function of recipient cells. We therefore analyzed and quantified the expression of miRNAs in BMSC-ABs. A total of 353 known miRNAs were identified by miRNA sequencing of RNA purified from BMSC-ABs, and the top 50 known miRNAs detected in BMSC-ABs were ranked by total read counts. After target gene prediction, the binding of miR-21a-5p to its predicted target gene CCL-1 was validated by dual-luciferase experiments. After transfection, colocalization of the fluorescently labeled miR-21a-5p inhibitor with macrophages was confirmed by fluorescence microscopy. BMSC-ABs or CCL-1 receptor blockers were then added to M0 macrophages that had been transfected with miR-21a-5p inhibitors, and the transcript levels of CD206, arginase-1, and CCL-1 were assessed at 48 h. The effect of stem cell-derived ABs on M0 programming was assessed by western blot analysis after 48 h. The western blot results revealed a significant increase in M2-specific proteins in the BMSC-AB and miR-21a-5p inhibitor-NC groups compared with the control, miR-21a-5p inhibitor, and CCL-1 receptor blocker groups, indicating that miR-21a-5p in BMSC-ABs significantly affected macrophage function via CCL-1. The western blot results were consistent with the quantitative PCR results. Figure 3 miRNA sequencing analysis of bone mesenchymal stem cell-derived apoptotic body (BMSC-AB)-loaded polycaprolactone (PCL) scaffolds driving the molecular reprogramming of macrophages (Mϕs) to M2-Mϕs. (A) The top 50 known miRNAs detected in BMSC-ABs. (B) miRNA-mRNA regulatory network. (C) Binding of miR-21a-5p to the target gene CCL-1 in 293T cells, validated by dual-luciferase assay. (D) Colocalization of miR-21a-5p with Mϕs. (E) Fluorescence intensity analysis. n = 3; ∗P < 0.05. (F) Western blot analysis of the effect of the miR-21a-5p inhibitor on BMSC-AB-driven reprogramming of Mϕs. (G) Relative grayscale values from the western blot analysis of the miR-21a-5p inhibitor's effect on BMSC-AB-driven reprogramming of Mϕs to M2-Mϕs. n = 3; ∗∗∗P < 0.001. (H) Quantitative PCR analysis of the miR-21a-5p inhibitor's effect on BMSC-AB-driven reprogramming of Mϕs to M2-Mϕs. n = 3; ∗∗∗P < 0.001. Although the above findings suggest that BMSC-ABs can drive the transition of M0 macrophages to the M2 phenotype via miRNA delivery, the ability of the programmed M2 macrophages to produce anti-inflammatory cytokines and promote fibroblast migration was unknown. We evaluated changes in the secretion of anti-inflammatory and pro-inflammatory cytokines by programmed M2 macrophages and activated M0 macrophages in serum-free medium and then further analyzed the effect of programmed M2 macrophages on fibroblast migration. A Bio-Plex protein array was used to analyze the levels of cytokines and chemokines in the supernatants of the various cell groups.
In the M0 and M0 + PCL groups, the levels of anti-inflammatory cytokines (IL-4, IL-10, CCL-1, and TGF-β) and vascular indicators (VEGF and vWF) were significantly lower than those in the M0 + ABs and M0 + PCL + ABs groups, whereas the expression levels of the pro-inflammatory factors IL-1β, IL-6, and TNF-α did not differ significantly among the groups. Interestingly, the anti-inflammatory cytokine IL-10 and the angiogenic indicator vWF differed significantly between the M0 + ABs and M0 + PCL + ABs groups, suggesting that the PCL material may play a synergistic role in the transition of M0 macrophages to the M2 phenotype. Different macrophage populations were cocultured with fibroblasts in Transwell chambers for 24 h, and differences in fibroblast migration were observed by inverted microscopy (4× objective) every 12 h. The fibroblasts reached a satisfactory migration capacity within 24 h, and the M0 + PCL + ABs and M0 + ABs groups showed significantly enhanced fibroblast migration compared with the M0 and M0 + PCL groups. These results further support the idea that BMSC-ABs can program M0 macrophages to the M2 phenotype and may promote wound healing. Figure 4 In vitro anti-inflammatory and pro-fibroblast-migration effects of reprogrammed polycaprolactone (PCL)-loaded M2 macrophages (RM2). (A) Schematic diagram of the scratch assay. (B–F) Levels of cytokines and chemokines in the supernatants of the various cell groups. n = 3; ∗∗∗P < 0.001, ∗∗P < 0.01. (G) Differences in fibroblast migration at different time points (4× microscopy). (H) Migration distance of fibroblasts at different time points. n = 3; ∗∗P < 0.01. Before exploring the effect of PCL-loaded BMSC-ABs on wound progression, we performed real-time fluorescence imaging to analyze the in vivo distribution of PCL-loaded BMSC-ABs. The fluorescence signal of Cy7-N-hydroxysuccinimide (NHS)-labeled ABs was clearly maintained for 2 days after wounding and gradually decreased over time; on day 2 after injection, the signal had decreased to less than 10% of the initial value. Substantial programming was observed at 48 h during in vitro coincubation. These results suggest that locally injected ABs have sufficient time to reprogram M0 macrophages to the M2 phenotype and that local macrophage programming can be achieved with local treatment every two days. Figure 5 In vivo biodistribution of bone mesenchymal stem cell-derived apoptotic bodies (BMSC-ABs). (A) Real-time imaging of Cy7-N-hydroxysuccinimide (NHS)-labeled ABs. Fifty micrograms of apoptotic vesicles suspended in 20 μL of phosphate-buffered saline were injected subcutaneously into the tissue near the wound site for real-time observation. (B) Observation of trauma-related changes in mice at different time points. (C) Wound size changes in mice at different time points. (D) Kidney function indicators. (E) Liver function indicators. (F) Histopathological sections of various tissues and organs. To investigate the effect of BMSC-ABs on trauma-related inflammation and angiogenesis, we created a full-thickness wound (0.8 cm in diameter) on the backs of anesthetized mice with a punch and covered the wound with PCL loaded with BMSC-ABs or with PCL alone. Wound healing assays revealed that PCL-BMSC-ABs promoted wound healing, whereas there was no significant difference in wound healing between the PCL and PBS groups. Moreover, liver and kidney function indices and histopathological sections of the mice did not differ significantly among the groups (P > 0.05).
We also examined the histological changes in the wounds. Hematoxylin-eosin and Masson staining showed that wounds in the PCL-ABs treatment group exhibited a significant trend toward healing, regression of inflammatory cells, and collagen fiber formation (P < 0.05), whereas wounds in the control group (PBS injection) and in the group covered with PCL alone showed delayed healing and increased inflammatory cell numbers compared with the PCL-ABs group, with no significant difference between these two groups (P > 0.05). Based on these histological findings, we further evaluated the distribution of macrophage types in the wound tissue by immunohistochemical staining. The percentage of arginase-positive macrophages was significantly increased (P < 0.01) in the PCL-ABs treatment group, whereas the percentage of INOS (inducible nitric oxide synthase)-positive macrophages was decreased (P < 0.01), indicating that ABs can promote the conversion of macrophages from the M0 phenotype to the M2 phenotype. Interestingly, the percentage of CD31-positive cells was increased in the PCL-AB treatment group (P < 0.05), suggesting that ABs may promote neovascularization. These results further demonstrate that BMSC-derived apoptotic vesicles can reduce inflammatory infiltration by programming M0 macrophages into M2 macrophages, thereby preventing or reducing delayed wound healing and exerting anti-inflammatory and angiogenic effects in mice. Figure 6 Effect of bone mesenchymal stem cell-derived apoptotic body (BMSC-AB)-loaded polycaprolactone (PCL) scaffolds on wound healing. (A) Hematoxylin-eosin staining of each group. (B) Degree of healing shown by hematoxylin-eosin staining. ∗∗P < 0.01. (C) Masson staining of each group. (D) Relative average optical density (AOD) of Masson staining in each group. Differences between the PCL group and the PCL + ABs group. ∗∗P < 0.01, ∗P < 0.05. (E) Immunohistochemical analysis of the expression of the wound tissue marker inducible nitric oxide synthase (INOS). (F) Immunohistochemical analysis of the expression of the wound tissue marker CD31. (G) Immunohistochemical analysis of the wound tissue marker arginase-1. (H) Quantitative analysis of wound tissue markers. Differences between the PCL + ABs group and each of the other groups. ∗∗P < 0.01. The skin is the body's first line of defense, with essential functions in repelling pathogens and preventing mechanical, chemical, and physical damage; when the skin is damaged, infection and necrosis, as well as other serious local and systemic consequences, can occur. 25 Persistent skin inflammation can lead to the onset and progression of chronic inflammatory diseases, resulting in delayed wound healing. 26 Therefore, there is an urgent need for novel, effective strategies for treating skin injuries to improve the healing process and repair the skin barrier. 27 An imbalance in macrophage number and function is one of the important causes of the long-term persistence of wounds in the inflammatory phase. 28 A decrease in the number of M2 macrophages leads to a significant increase in the levels of the local inflammatory cytokines TNF-α and IL-6 in the wound and a decrease in the level of the anti-inflammatory cytokine IL-10.
29,30 Given the high plasticity of macrophage phenotypes, restoring normal macrophage numbers and function by directly reprogramming M0 macrophages into M2 macrophages may be an effective strategy for treating traumatic inflammation and accelerating wound healing. The field of exosome research has attracted renewed interest owing to the discovery of functional RNAs, including mRNAs and miRNAs, in exosomes. 31 Previous studies have shown that exosomes secreted by human MSCs accelerate wound healing by reducing the number of neutrophils, inhibiting macrophage recruitment to the site of injury, promoting M2 macrophage polarization, angiogenesis, and collagen deposition, and modulating the inflammatory response. 32 However, the limited quantity and variable function of exosomes obtained under different experimental conditions, the highly variable ratio of cell-released exosomes to other small extracellular vesicles, the difficulty of controlling exosome production and release, and nonspecific recognition of target cells still hamper their application in clinical wound repair. 33,34,35 In recent years, apoptotic vesicles, as products of programmed cell death, have been found to be effective in overcoming the limitations of exosome applications. 36 Phosphatidylserine (PtdSer, PS) and annexin V can be transferred to the surface of the vesicle envelope during apoptosis, where they act as signals that trigger phagocyte recognition and uptake and allow accurate delivery of apoptotic vesicles to their target cells for apoptosis-mediated cell reprogramming. 37 During the acute inflammatory phase of wound healing, most of the removal of dying and damaged cells is carried out by macrophages and neutrophils through phagocytosis, without eliciting an inflammatory response. Apoptotic vesicles promote the efficient removal of apoptotic material by peripheral phagocytes and mediate the transfer of biomolecules, including miRNAs and proteins, between cells to support intercellular communication. 38,39,40 There is evidence that mononuclear phagocytes respond to apoptotic cells by releasing anti-inflammatory factors, including IL-10 and TGF-β1, and that apoptotic vesicles can break apoptotic cells down into smaller fragments to facilitate the removal of apoptotic debris and intercellular communication. 41,42,43 Extensive apoptosis of exogenous MSCs within a short period, down-regulation of the pro-inflammatory cytokines IL-6 and TNF-α, and up-regulation of the anti-inflammatory cytokine IL-10 in the wound area accelerate wound healing. 44 Liu et al. reported that ABs derived from bone marrow MSCs triggered the polarization of macrophages toward the M2 phenotype, which could enhance the migration and proliferation of fibroblasts. 18 However, the molecular mechanisms by which stem cell-derived ABs act have not been elucidated. In this context, we extracted MSC-derived ABs and used them to program macrophages, verifying the effects of ABs on wound healing trends, inflammatory cell regression, and collagen fiber formation in a mouse skin wound healing model. In this study, PCL fiber scaffolds prepared by electrospinning were used to deliver apoptotic vesicles to the wound site. PCL, a biodegradable, biocompatible, and FDA-approved polymer, is widely used in tissue regeneration and drug delivery because it closely mimics the physical structure of the extracellular matrix and has suitable mechanical properties.
45,46 Electrospinning can produce nanoscale extracellular matrix-mimetic structures with a high specific surface area and high porosity, overcoming the low cell infiltration associated with conventional nanofibrous scaffolds, which have smaller fiber and pore sizes, a dense fiber structure, and lower porosity. More importantly, the porosity of PCL allows nutrient exchange between the inner and outer sides of the wound surface, which is better suited to the needs of wound healing. 47 Therefore, we loaded MSC-ABs onto PCL fiber scaffolds in this study. The PCL scaffolds showed good biocompatibility, and BMSC-derived apoptotic vesicle-loaded PCL scaffolds prevented or reduced delayed wound healing by reprogramming M0 macrophages into M2 macrophages and reducing inflammatory infiltration, which in turn exerted anti-inflammatory and angiogenic effects in mice. However, the exact underlying mechanism needs to be further investigated. miRNAs are a class of small noncoding RNAs approximately 22 nt in length that are widely found in plants and animals and function mainly by inhibiting and regulating the translation of target genes. 48 The genome of an organism can encode thousands of miRNAs, which target approximately 60% of protein-coding genes and regulate gene expression by binding to target mRNAs, leading to their inactivation or activation; miRNAs thereby participate in important biological processes such as cell proliferation, differentiation, apoptosis, growth and development, and regulation of the immune response to pathogen infection. 49 Up-regulation of miR-21-3p, miR-126-5p, and miR-31-5p and down-regulation of miR-99b and miR-146a have been associated with wound healing. 50 The ratio of M2 to M1 macrophages is positively correlated with miR-21. In the early stages of inflammation, pri-miR-21 dominates and has pro-inflammatory effects; conversely, during the repair phase of inflammation, mature miR-21 exerts anti-inflammatory effects and converts macrophages to M2 macrophages, which exhibit low inflammatory activity and an immunosuppressed state, whereas failure of this transition leads to persistent inflammation. 51,52 Previous studies have shown that M2-derived ABs can drive phenotypic reprogramming from the M1 to the M2 phenotype through targeted delivery of miR-21a-5p to M1 macrophages. 53 Similarly, transcriptome sequencing showed that miR-21a-5p was highly expressed in our BMSC-ABs, leading us to hypothesize that miR-21a-5p could also drive Mϕ reprogramming to M2-Mϕs. We found that miR-21a-5p is a key molecule in wound healing and that its reduced expression may be one of the pathogenic molecular mechanisms underlying impaired M0-to-M2 conversion; inhibition of miR-21a-5p expression in MSC apoptotic vesicles silences the target gene CCL-1 and inhibits M0-to-M2 conversion. In this study, we validated that MSC-ABs deliver miR-21a-5p to promote the conversion of M0 to M2 macrophages, which in turn exert anti-inflammatory and pro-angiogenic effects, providing theoretical support for in-depth study of the pathological mechanisms of wound healing. We isolated and characterized BMSC-derived ABs, explored their regulatory mechanisms in macrophage programming, and selected mouse skin trauma as the disease model.
We found that BMSC-derived ABs delivered miR-21a-5p, which targets the CCL-1 gene, to program macrophages into M2 macrophages; these cells altered the local inflammatory environment and promoted angiogenesis by secreting the anti-inflammatory cytokines IL-4, IL-10, CCL-1, and TGF-β and the angiogenesis-related factors VEGF and vWF. BMSC-AB treatment effectively promoted wound healing and attenuated the development of early wound inflammation. After the dorsal wounds of mice were covered with PCL fiber scaffolds loaded with apoptotic vesicles from BMSCs, the BMSC-ABs significantly shortened the time to wound healing and promoted blood vessel formation, indicating that PCL fiber scaffolds can act synergistically with the ABs. Although this study suggests that BMSC-derived ABs are effective at preventing delayed wound healing in mice, the mechanisms underlying these improvements in wound healing have not been fully determined. Therefore, further study of the additional molecular components of BMSC-derived ABs and the mechanisms by which they act is necessary. This study demonstrates for the first time that apoptotic vesicles derived from bone marrow MSCs can induce macrophage M2 polarization and promote skin wound healing by targeting the CCL-1 gene through mmu-miR-21a-5p. Using electrospinning technology, we prepared PCL composite fiber materials as carrier scaffolds for MSC-AB release and achieved targeted delivery of miR-21a-5p through the local slow release of MSC-ABs, driving macrophage M0-to-M2 programming to exert dual effects on inflammation regulation and angiogenesis and thereby synergistically promote wound healing. In this study, stem cell-derived apoptotic vesicles, the cell signaling required for macrophage programming, and PCL scaffolds were used to investigate the immunopathogenic mechanisms of wound healing and identify new therapeutic targets, providing a promising therapeutic strategy as well as an experimental basis and theoretical rationale for treating diseases associated with an imbalance of pro- and anti-inflammatory immune responses. The study and all animal experiments were approved by the Animal Ethics Committee of Chongqing Medical University. All authors consent to the publication of this study. This work was supported by the National Natural Science Foundation of China, the Chongqing Outstanding Project of the Overseas Chinese Entrepreneurship and Innovation Support Program (China), the Articular Cartilage Tissue Engineering and Regenerative Medicine Team of Chongqing Medical University, the General Project of the Natural Science Foundation of Chongqing, China, the "Tomorrow Cup" Teacher-Student Cocreation Teaching and Research Innovation Project of the International Medical College of Chongqing Medical University, the Chengdu Medical Research Project (Sichuan, China), the China Postdoctoral Science Foundation, the Natural Science Foundation of Chongqing, China, and the Young Excellent Science and Technology Talent Project of the First Affiliated Hospital of Chongqing Medical University. Ning Hu, Leilei Qin, and Yonghua Yuan conceived the manuscript. Xudong Su wrote the first draft. Jianye Yang and Xudong Su revised the first draft. Leilei Qin, Xudong Su, and Zhenghao Xu performed the experiments. Wenge He, Li Chen, Shuhao Yang, Li Wei, and Chen Zhao provided grouping suggestions.
Ning Hu provided language and grammar modifications. All authors read and approved the final manuscript. The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request. The authors declare that there are no conflicts of interest.
PMC11697111
Stem cells (SCs) are capable of self-renewal and can exhibit multipotency. Under specific conditions, SCs can differentiate into various functional cell types and have the potential to regenerate various tissues and organs. The history of stem cell research dates back to the late 19th century, when scientists began to focus on cells with differentiation and developmental potential. In 1868, the famous German biologist Ernst Haeckel first developed the concept of undifferentiated cells, the earliest concept related to SCs. The 1963 publications by Ernest A. McCulloch and James E. Till marked the beginning of modern stem cell research. The use of SCs or their derivatives to repair diseased or damaged tissues overcomes the limitations of conventional clinical treatments and introduces new possibilities for regenerative medicine and the treatment of other human diseases. Organoids are 3D cultures derived from SCs that are capable of mimicking the spatial structure and physiological characteristics of organs in vitro. Compared with traditional cell cultures, organoids comprise diverse cell types that go beyond simple physical connections, fostering more complex intercellular communication processes, including interaction, induction, feedback, and collaboration, which allows the organoid to more accurately simulate tissue structure and function. Novel advanced biomanufacturing technologies offer the opportunity to design complex cell niches with specific geometries and architectures that influence the spatiotemporal behavior of stem/progenitor cells. With continued advances in organoid technology, researchers have successfully cultivated various organoids, including organoids derived from the brain, kidney, stomach, liver, lung, mammary gland, and pancreas. These organoids serve as invaluable tools for in vitro studies of organ development, basic research, drug discovery, and regenerative medicine. The skin, the largest organ in the human body, contains various tissue structures, including the epidermis, dermis, subcutaneous tissue, and appendages. This composition enables the skin to play a variety of important roles, such as physical protection, temperature regulation, immune defense, secretion, and excretion, as the first barrier through which the body resists infection and injury. Efficient wound repair is crucial for maintaining homeostasis, and research in this field has received increasing attention. The idea of using a skin culture system as an in vitro substitute for skin was first proposed by Rheinwald et al. in 1975. Those authors pioneered a self-organizing strategy for generating squamous epithelium, which involved serial cocultivation of primary human keratinocytes and irradiated 3T3 mouse fibroblasts. This breakthrough paved the way for the in vitro culture of self-assembled skin tissue. The emergence and rapid development of skin organoids have brought new opportunities in skin wound healing research, mainly for the following reasons. First, compared with the traditional full-thickness skin model, skin organoids more accurately replicate the in vivo developmental process. These organoids can self-organize and differentiate directionally into different cell types, aligning more closely with the structure and function of native tissues.
Furthermore, skin organoids can produce skin appendages, such as hair follicles and sebaceous glands, which are absent in traditional skin models , providing a biological environment that closely resembles real skin. Therefore, skin organoids are ideal in vitro models for studying the complex process of wound healing. Second, skin organoids have important applications in regenerative medicine. Owing to their ability to simulate the structure and function of real skin, skin organoids are innovative tools for treating skin injuries, burns, and other skin conditions . Transplanting skin organoids into the wound site can promote the regeneration and repair of damaged skin . In addition, skin organoids can be used as platforms for drug screening and toxicology studies . By testing drugs or compounds on organoids, their effects on the skin and potential side effects can be predicted. This approach can improve the efficiency of drug development and reduce risks in clinical trials . In recent years, skin organoids constructed in vitro have been widely used in skin development research , skin pathology research , and drug screening . Hong et al. comprehensively summarized the milestones in skin organoid generation and discussed the diverse applications of skin organoids, including their relevance in developmental biology, disease modelling, regenerative medicine, and personalized medicine. This review focuses on the construction and application of skin organoids in wound healing, elaborates on the construction process, and discusses the evolving role of skin organoids in wound healing research. Cells are the fundamental building blocks of an organism, and their proper function is the cornerstone of effective tissue repair and regeneration . Skin organoids are predominantly composed of SCs , including adult SCs (ASCs) and pluripotent SCs (PSCs) ( Table 1 ). Many organoids associated with the epidermis, sweat glands, and hair follicles are derived from ASCs . PSCs replicate in skin tissue systems in vitro through induced differentiation. This approach enables the simulation of the skin and its associated organoids, enhancing the understanding of the complex interactions between different cell types and molecular signaling pathways during development and homeostasis . In human skin, various types of ASCs play pivotal roles, including epidermal SCs (EpSCs), dermal SCs (DSCs), and hair follicle SCs (HFSCs). These SCs collaboratively contribute to the development and composition of diverse skin cell lineages, which form the skin. EpSCs are precursors to a wide array of epidermal cells that originate from the embryonic ectoderm and are capable of bidirectional differentiation. Boonekamp et al. established an organoid culture system that enables mouse EpSCs to continually expand and differentiate for an extended period of up to 6 months. DSCs, also known as dermal mesenchymal SCs, undergo differentiation into fibroblasts under specific conditions, and they then stimulate the synthesis and secretion of vital components such as collagen and elastin . Su et al. successfully aggregated DSCs with embryonic stem cells (ESCs) to form hair follicle-like organoids. This innovative approach promoted hair follicle formation both in vitro and in vivo via WNT pathway activation. HFSCs function as crucial tissue signal centers within the skin, generating rich signal outputs during all stages of adult skin homeostasis. HFSCs play a vital role in regulating the organization and function of skin niches . Chen et al. 
pioneered the construction of a nanoscale biomimetic extracellular matrix tailored for individual HFSCs. This development facilitated the stable expansion of HFSCs while preserving their essential SC properties, which markedly influence the outcomes of skin tissue regeneration. ESCs are pluripotent, self-renewing cells derived from undifferentiated cells of preimplantation embryos. Signaling molecules can promote the self-renewal of ESCs and induce their directed differentiation. Koehler et al. established 3D mouse ESC cultures to generate a new in vitro model of sensory epithelial differentiation in the inner ear and thereby obtain a deeper understanding of inner ear development and disorders. Lee et al. progressively modulated the transforming growth factor β (TGF-β) and fibroblast growth factor (FGF) signaling pathways to co-induce the aggregation of cranial epithelial cells and neural crest cells into spheres, thereby constructing an organoid culture system capable of generating complex skin directly from human ESCs, which was successfully used for skin reconstruction in vivo. Furthermore, a 3D mouse ESC culture was developed that spontaneously produces new hair follicles mimicking their normal counterparts. Induced PSCs (iPSCs) are a category of PSCs with the capacity for unlimited self-renewal and proliferation that can differentiate into mature cells of the ectoderm, mesoderm, and endoderm. iPSCs can be generated from somatic cells, including fibroblasts, keratinocytes, and blood cells, through a process known as reprogramming. This approach overcomes the immunological concerns associated with ESCs. Furthermore, iPSCs can differentiate into diverse cell types, including keratinocytes and fibroblasts, providing a rich source of cellular components for constructing skin organoids. Yang et al. enriched keratinocytes in culture dishes and transfected them with lentiviruses encoding transcription factors to obtain epidermal cells and generate iPSCs. Kim et al. reported that iPSCs derived from human cord blood mononuclear cells exhibited high pluripotency, normal karyotypes, and the ability to differentiate into all three germ layers. Keratinocytes and fibroblasts derived from these iPSCs presented characteristics similar to those of primary cell lines. Sahet et al. employed a coculture approach in which iPSC-derived fibroblasts and keratinocytes were used to produce 3D skin equivalents. Itoh et al. generated iPSCs from fibroblasts and directed their differentiation into keratinocytes, resulting in the production of functional 3D skin equivalents. Abbas et al. generated skin organoids from human iPSCs derived from human skin fibroblasts or placental CD34+ cells, producing complex skin organoids with skin layers and pigmented hair follicles that successfully developed sebaceous glands, tactile-receptive Merkel cells, and secretory sweat glands. Organoid construction often requires the regulation of physical signals. Hydrogels mimic the in vivo environment through their unique physical, chemical, and biological properties, providing essential signaling support for skin cell growth and differentiation. In terms of physical properties, the 3D cross-linked polymer network of a hydrogel provides cells with a 3D scaffold similar in structure to the extracellular matrix in vivo. This 3D environment facilitates the appropriate arrangement and interaction of cells in space to mimic complex tissue structures in vivo.
Second, the mechanical properties of hydrogels (such as stiffness and elasticity) can be adjusted by changing the degree of cross-linking, the polymer concentration, and other parameters. This flexibility allows researchers to precisely control the mechanical environment according to the needs of different organoids, mimic the mechanical properties of different tissues in the body, and provide cells with a growth space similar to that in vivo. In terms of biochemical characteristics, hydrogels can promote cell-cell interactions and biochemical signaling; influence cell morphology, growth rate, and differentiation direction; prevent excessive proliferation and migration; maintain structural stability and organoid function; and facilitate organoid propagation in vitro, thereby increasing tissue complexity and functionality. When choosing hydrogels, factors such as chemical composition, degree of cross-linking, and elastic modulus should be considered. During preparation, the properties of a hydrogel can be tuned by changing its type, concentration, and cross-linking conditions. Hydrogels with different biochemical and physical properties can simulate different in vivo environments, thereby regulating cell growth and differentiation and promoting the formation and maturation of skin organoids. Hydrogels can also serve as carriers for drugs or growth factors to promote cell growth and differentiation and accelerate skin organoid formation. One team designed a microfluidic device that produces an asymmetric gradient of differentiation factors in a spindle-shaped hydrogel to improve the spatial organization of dermal and epidermal cells, promoting keratinocyte differentiation and hair follicle formation in skin organoids. The epidermal layer of the skin gives rise to hair and glands. This layer is composed primarily of keratinocytes, which aid in thermoregulation and barrier formation. The dermis, beneath the epidermis, houses an array of structures, including blood vessels and nerves. Dermal fibroblasts within this layer are prolific producers of extracellular matrix components such as collagen and elastic fibers. These elements give skin its well-known elasticity and facilitate the initiation and cycling of hair follicles. The subcutaneous fat lying beneath the dermis serves as an energy reservoir. Moreover, the skin contains sweat glands, sebaceous glands, and hair follicles derived from the epidermal and dermal layers. A variety of epidermal organoids, as well as organoids containing skin appendages, such as hair follicle, sebaceous gland, and sweat gland organoids, have been constructed for scientific research and clinical treatment. Boonekamp et al. cultured epidermal keratinocytes extracted from the dorsal skin of mice to obtain mouse epidermal organoids capable of long-term expansion and differentiation, contributing to the study of epidermal homeostasis in vitro. Xie et al. constructed mouse primary epidermal organoids that presented stratified histological and morphological features resembling those of the epidermis. These organoids closely simulate their native tissues at the transcriptomic and proteomic levels, making them valuable for skin infection modelling and drug screening. Wiener et al. seeded cells obtained from microdissected interfollicular epidermis within a basement membrane extract to generate epidermal organoids.
This method shows promise as an in vitro model for exploring epidermal structure, function, and dysfunction. Wang et al. used freshly isolated primary human epidermal cells to establish 3D cultures of human epidermal organoids, which served as an effective model for studying dermatophyte infections. In their most recent study, Kwak et al. developed multipotent stem cell-derived epidermal organoids that produce extracellular vesicles that promote skin regeneration and contribute to target cell proliferation, migration, and angiogenesis, showing promise as therapeutic tools for wound healing in vivo. Hair follicles are sac-like structures located within the dermis and subcutaneous tissue that are responsible for hair growth. The hair follicle has two main parts: the upper part, which includes the infundibulum and isthmus, and the lower part, which includes the bulb and suprabulbar region. Gupta et al. constructed in vitro 3D organoid models encapsulated in sericin hydrogels containing human hair dermal papilla cells, hair follicle keratinocytes, and SCs. These models exhibited structural features akin to those of natural hair follicles, mimicking cell-cell interactions and a hypoxic environment. Ramovs et al. produced hair-bearing skin organoids from two human iPSC lines and thoroughly characterized their epidermal junctions via immunofluorescence and transmission electron microscopy. Weber et al. combined neonatal foreskin keratinocytes with scalp dermal cells and successfully established hair peg-like structures in vitro that expressed appropriate epidermal and dermal markers. This method serves as a platform for optimizing the engineering of human hair follicles for transplantation. Veraitch et al. reported that ectodermal precursor cells derived from human iPSCs were able to communicate with hair-inductive dermal cells, ultimately promoting hair follicle formation. Marinho et al. developed a technique to construct hair follicle organoids in vitro by combining multiple cell types; when these organoids were transplanted into the skin, structural fusion and hair bud generation occurred. Kim et al. developed solar UV-exposed skin organoids derived from human iPSCs, which effectively recapitulated several features of photodamage, including skin barrier disruption, extracellular matrix degradation, and the inflammatory response. The sebaceous gland is a notable skin appendage that is widely distributed across the body, except for the palms and soles. Secreted sebum combines with sweat to create a lipid film that is crucial for skin protection. Feldman et al. used Blimp1+ cells isolated from adult mice to replicate sebaceous gland lineage expression and homeostasis dynamics in vitro. The authors successfully established sebaceous gland organoids, which serve as valuable tools for drug screening and for investigations into sebaceous gland homeostasis, function, and pathology. Wang et al. demonstrated that functional hair follicles and sebaceous glands could be reconstituted by transplanting a combination of culture-expanded ESCs and skin-derived progenitors from mice and adult humans. Sweat glands secrete sweat, excrete waste, and help maintain body temperature, and they originate from epidermal progenitor cells, similar to other skin components. The regenerative ability of sweat glands after full-thickness injury is limited, and the repair and regeneration of sweat gland structure and function after severe burns remain major challenges in clinical treatment. Diao et al.
embedded sweat gland epithelial cells in Matrigel within the dermis of the paw pads of adult mice; the cells maintained SC characteristics that enabled differentiation into sweat gland or epidermal cells and effectively integrated into the tissue, thereby establishing mouse sweat gland organoids. Sun et al. reprogrammed human epidermal keratinocytes to differentiate into a sweat gland cell lineage capable of sweat gland regeneration. Yuan et al. generated replicable spheres of sweat gland cells from adipose mesenchymal SCs, formed blood vessels with dermal microvascular endothelial cells, successfully simulated the morphogenesis of vascularized glands in vitro, and revealed the precise anatomical relationships and interactions between sweat gland cells and the surrounding vascular niche. Skin damage from surgery, trauma, or burns has considerable physical and psychological effects on patients. Skin organoids offer a promising avenue for overcoming the challenges posed by hard-to-heal wounds and the permanent loss of skin appendages. Currently, a variety of biological constructs and clinical strategies are being employed to harness the potential of skin organoids to address clinical issues related to wound healing. At present, the organoids used in skin wound healing are derived predominantly from iPSCs. Takagi et al. generated 3D human skin organoids from iPSCs, incorporating accessory organs such as hair follicles and sebaceous glands; these organoids were fully functional after transplantation into nude mice and integrated effectively with surrounding host tissues, including the epidermis, arrector pili muscles, and nerve fibers. Ma et al. established epithelial and mesenchymal organoid models derived from human induced pluripotent SCs, which enhanced epidermal stem cell activity, promoted sweat gland and blood vessel regeneration, and provided new therapeutic options for skin lesions and functional defects. Lee et al. developed complex skin organoids from human PSCs, which comprised a layered epidermis, a fat-enriched dermis, and pigmented hair follicles complete with sebaceous glands. When these skin organoids were transplanted into nude mice, the epidermal layer became oriented to create hair follicles, resulting in the formation of planar hair-bearing skin. This innovation holds promise for skin reconstruction in patients with burns or trauma. Ebner-Peking et al. differentiated human tissue-derived iPSCs into endothelial cells (ECs), fibroblasts, and keratinocytes to generate a cell suspension, which promoted full-thickness wound healing in mice in vivo. Diao et al. embedded epithelial cells derived from sweat glands in the dermis of the paw pads of adult mice using Matrigel, forming sweat gland organoids that retained SC characteristics. These organoids had the capacity to differentiate into sweat gland cells or epidermal cells, and in vivo experiments confirmed that such organoids could enhance skin wound healing and sweat gland regeneration. Thai et al. co-cultured ECs and mesenchymal SCs (MSCs) to form EC-MSC spheres encapsulated within hydrogels, which promoted wound healing in an in vitro full-thickness skin burn model. Organoids constructed from reprogrammed skin cells can also be used in wound healing research. Sun et al. employed reprogrammed human epidermal keratinocytes to construct regenerative sweat gland organoids.
These organoids were subsequently transplanted into a mouse model of skin injury, resulting in the successful development of fully functional sweat glands. For refractory diabetic wounds, Choudhury et al. transdifferentiated allogeneic mesenchymal SCs overexpressing the chemokine receptor Cxcr2 into keratinocyte-like cells in 2D and 3D cell culture. After these organoids were transplanted into a diabetic mouse wound healing model, epithelialization of the epidermal layer and endothelialization of the dermal layer increased significantly, markedly increasing the wound closure rate. To date, numerous studies have employed skin cells as seed cells for 3D printing to form skin organoids, which have been applied in wound healing research. Cubo et al. utilized keratinocytes and fibroblasts as seed cells for 3D printing of skin tissue. Using histological and immunohistochemical analyses both in vitro and in vivo, the authors demonstrated that the printed skin closely resembled normal human skin in structure and function. This printed skin could mimic various physiological properties of human skin and holds promise for future applications in scientific and clinical research on skin wound healing. In another study, six primary human skin cell types were used to bioprint a three-layer skin construct comprising the epidermis, dermis, and hypodermis. The bioprinted skin organoids were transplanted into full-thickness skin injury models in mice and pigs, where they fully integrated and regenerated skin, promoted neovascularization and extracellular matrix remodeling, and accelerated wound healing. Moreover, 3D printing can control the spatial arrangement of cells in skin organoids to facilitate skin reconstruction. Pappalardo et al. employed 3D-printed skin tissue for skin reconstruction, successfully replicating the biophysical interactions and cellular/extracellular tissue dynamics of human skin. Compared with traditional hydrogel skin organoid transplantation, this approach provides superior mechanical resistance and angiogenic potential; it can effectively repair full-thickness wounds with minimal suturing and reduce surgery duration. Abaci et al. harnessed 3D printing technology to control the spatial arrangement of cells within a bioengineered human skin construct. The authors initiated dermal condensation by controlling the self-aggregation of dermal cell spheroids within a physiologically relevant extracellular matrix, which facilitated epidermal-mesenchymal interactions. This innovative approach led to hair follicle formation in vitro, offering new possibilities for research into hair follicle regeneration following scalp trauma. In addition, 3D printing of organoids via laser-assisted bioprinting (LaBP) and digital light processing (DLP) technologies has been applied in skin wound healing research. For example, LaBP was employed to 3D print fibroblasts and keratinocytes onto a stable matrix, forming a fully cellularized skin substitute. In vivo experiments confirmed that this skin substitute, when grafted onto full-thickness skin wounds, accelerated the formation of new blood vessels and promoted the healing of dorsal skin wounds in mice. DLP-based 3D printing enables the precise positioning of clusters of human skin fibroblasts and human umbilical vein ECs with high cell viability.
This technology facilitates the generation of functional living skin (FLS) that can be readily implanted into wound sites to promote neovascularization and skin regeneration. FLS mimics the physiological structure of natural skin and displays robust mechanical and bioadhesive properties. The combination of skin organoids with novel biomaterials also provides new approaches for skin wound healing. Huang et al. and Yao et al. used alginate/gelatine hydrogels as bioinks for 3D printing of the extracellular matrix to simulate the regenerative microenvironment, spatially integrating a variety of biophysical and biochemical cues for cell regulation, promoting the transformation of epithelial progenitor cells and mesenchymal SCs into functional sweat glands, and promoting the recovery of sweat gland tissue in mice. Kang et al. constructed a multilayer composite scaffold with epidermal and dermal structures using a gelatine/alginate gel. After the scaffold was transplanted into full-thickness wounds in nude mice, it exhibited good cytocompatibility, increased the proliferation of dermal papilla cells (DPCs), promoted the formation of self-aggregating DPC spheres, and initiated epidermal-mesenchymal interactions, promoting the formation of hair follicles. Two types of polymer mesh were physically strengthened and integrated into a type I collagen hydrogel to generate a novel dermo-epidermal skin substitute; when this platform was transplanted into rats, it uniformly developed a well-stratified epidermis and formed a well-vascularized dermal component. Zhao et al. prepared an alginate-gelatine composite hydrogel bioink incorporating platelet-rich plasma (PRP). The inclusion of PRP not only improved extracellular matrix synthesis but also regulated the vascularization behavior of vascular ECs and macrophage polarization in a paracrine manner. This approach accelerated high-quality wound healing in a rat dorsal full-thickness wound model, demonstrating the feasibility of 3D bioprinting combined with a PRP-functionalized bioink for expediting wound healing. Bacakova et al. developed a bilayer skin construct composed of a collagen hydrogel reinforced with a nanofibrous poly(L-lactic acid) membrane preseeded with fibroblasts, which promoted fibroblast adhesion, proliferation, and migration into the collagen hydrogel. In addition, this construct induced keratinocytes to form basal and suprabasal cell layers with high mitotic activity and could be used for cases of full-thickness skin damage. Guo et al. cross-linked recombinant human collagen (rHC) with transglutaminase to prepare rHC hydrogels and embedded fibroblasts within them to develop a new tissue-engineered skin equivalent with good biocompatibility that promotes fibroblast migration and the secretion of a variety of growth factors. This construct significantly promoted skin wound repair in a full-thickness skin defect mouse model. The TGF-β and FGF signaling pathways are the main regulators of skin cell induction, fate determination, migration, and differentiation. The addition of FGF-2-loaded scaffolds can promote neovascularization in the dermis, thus further enhancing the repair of full-thickness skin defects. Lee et al. reported that complex skin organoids derived from human pluripotent SCs are generated by progressively modulating TGF-β and FGF signaling and that transplantation of these skin organoids into nude mice results in the formation of smooth, hair-bearing skin.
Currently, WNER (Wnt-3a, Noggin, EGF, and R-spondins) is the classic cytokine protocol used in organoid culture because fluctuations in the levels of these four factors are relevant to almost all organoid culture experiments. Other studies have shown that adding small molecules such as CHIR99021 and valproic acid to ENR (epidermal growth factor (EGF), Noggin, and R-spondins) can induce specific differentiation of SCs. The study by Kageyama et al. confirmed that adding oxytocin to hair follicle organoids upregulated expression of the growth factor VEGF-A and promoted the growth of hair- and nail-like buds. EGF, FGF, TGF-β, and other growth factors have been applied in wound treatment and in the culture of skin organoids. The use of appropriate concentrations of these growth factors is expected to promote cell proliferation and differentiation, improve the speed and quality of wound healing, and reduce scar formation and infection risk. The skin organoids established to date follow a natural developmental pathway involving the directed differentiation of SCs to replicate the structure and function of in vivo tissue. These organoids play an increasingly vital role in research on skin development, skin disease pathology, and drug screening. Wound healing approaches using skin organoids have the following advantages. First, skin organoids can be generated in vitro to simulate the wound healing process, accelerating drug screening and therapy development. Second, skin organoids derived from patients themselves are highly personalized, enabling them to better simulate the patient's wound environment and improve the precision and effectiveness of treatment. Third, performing drug screening and treatment development in vitro is safe and reduces the risk and uncertainty of clinical trials, thereby improving treatment safety. As in vitro models, skin organoids can simulate skin structure and function, providing an important platform for in-depth studies of skin development, disease mechanisms, and drug screening. However, their generation is relatively complex and time-consuming, and concerns related to standardization and diversification remain major challenges in current research and applications. Nonetheless, with continuous advances in technology, we can expect to overcome these limitations and better leverage the potential of skin organoids in wound healing and other fields. One notable issue is that organoid cultures lack consistency, making standardized production difficult to achieve. This inconsistency can be attributed to the difficulty of strictly controlling the source, state, and culture conditions of the cells, which hinders clinical applicability. Therefore, in the preliminary research phase, extensive clinical, genetic, and morphological data must be integrated to construct more stable and clinically suitable organoid models. Researchers and enterprises should strengthen collaboration between medicine and industry and integrate and innovate existing technologies by developing new bioactive materials and establishing standardized processes, standards, and quality control methods. This would enhance the stability, biocompatibility, and degradability of skin organoids and reduce the risks associated with clinical use, making these organoids more widely applicable.
In addition, angiogenesis plays a crucial role in wound healing, and vascularized organoids can recreate the interaction between the parenchyma and blood vessels, restoring a realistic skin environment. The incorporation of angiogenesis-related factors into skin organoids via advanced biotechnology can regulate biological signal transmission and accelerate blood vessel formation. Furthermore, microfluidic systems simulating blood vessels have been employed to increase blood vessel formation and perfusion in skin organoids. In a recent study, one team developed a microfluidic platform that connects the vascular network to organoids and improves the growth and maturation of 3D vascular organoids produced with human-induced pluripotent SCs. Wang et al. constructed a 3D vascular fluidized organ chip based on open microfluidic control, providing a method for realizing the in vitro construction of vascularized organoid models. This method can be further applied to combine organoids with vascularized organ chips to culture vascularized organoids and resolve the challenge of vascularization in organoid culture. Another important challenge related to skin organoids is the lack of an immune system. The skin is an important part of the body’s immune defense system and has the ability to recognize and resist invasion by foreign pathogens. However, the currently established skin organoids lack the immune components required to adequately recapitulate human skin biology and disease complexity, limiting their ability to fully simulate real skin function. Currently, Bouffi et al. have deciphered human gut–immune crosstalk during development and developed organoids containing immune cells by transplanting intestinal organoids under the kidney capsule of mice with a humanized immune system. Another research team has jointly developed functional macrophages in human colonic organoids derived from multipotent SCs. These macrophages regulate cytokine secretion in response to proinflammatory and anti-inflammatory signals, perform phagocytosis, and respond to pathogenic bacteria. These developments provide completely new ideas for the construction of skin organoids that simulate the immune response in the skin. Finally, the clinical translation of organoids raises considerable ethical concerns. Compared with some cell types that have been widely used in clinical practice (such as red blood cells and platelets), there are more ethical concerns about the safety, efficacy and long-term impact of skin organoids in clinical applications because of their unique regenerative potential and undifferentiated state. Moreover, the utilization of organoids derived from patient-specific ASCs or iPSCs for drug testing can be a valuable way to tailor treatments to individual patients. However, such patient-specific trials are costly and offer limited benefits, preventing them from passing cost–benefit evaluations during ethical review. From a technical safety standpoint, organoid transplantation involves invasive surgery, and the uncontrolled development of SCs may pose substantial risks, making predictions based on animal models challenging. The International Society for Stem Cell Research has issued guidelines for human SC research and clinical translation .
It is crucial to carefully study and assess potential ethical issues in research on and applications using organoids; improve the corresponding laws, regulations, and scientific research ethics guidelines; and standardize the research and application of these organoids. This proactive approach will contribute to the responsible and ethical development of the SC field. The future focus of research on skin organoids mainly includes CRISPR-mediated gene-editing technology, microfluidic organoid chip technology, 3D printing technology, high-throughput automation technology based on artificial intelligence (AI), and organoid sample bank establishment . Targeting key endogenous genes via CRISPR technology and increasing their expression may contribute to skin wound repair. In 2020, Artegiani et al. achieved fast and efficient knock-in of human organoids via the nonhomology-dependent CRISPR-Cas9 technology CRISPR-HOT (CRISPR-Cas9-mediated homology-independent organoid transgenesis), providing a vital platform for endogenous knock-in in human organoids. Dekkers et al modelled breast cancer via CRISPR-Cas9-mediated engineering of human breast organoids. Michels et al developed a platform for pooled CRISPR–Cas9 screening in human colon organoids, which was helpful for screening for tumor suppressors both in vitro and in vivo . Mircetic et al pioneered the use of negative selection-based CRISPR screening for patient-derived organoids and identified a cohort of patients who may benefit from gene-targeting therapy. In the field of skin organoid models, Dabelsteen et al used CRISPR-Cas9 gene targeting to generate a library of 3D organotypic skin tissues that selectively differ in their capacity to produce glycan structures on the main types of N- and O-linked glycoproteins and glycolipids. Engineering solutions based on microfluidic and 3D printing technology can resolve issues related to the difficulty of molding organoids, the short modelling and molding time, and small sample sizes, thereby enabling the transition of skin organoids from research and development to commercial application as standardized clinical tools. Organ chips based on microfluidic technology can replicate and regulate multiple microenvironments within microfluidic devices. They offer advantages in terms of the controllability and standardization of modelling, enabling the construction of more complex skin models . 3D printing technology can not only support the long-term growth of cells under laboratory conditions but also simulate the mechanical properties of real organs, providing strong technical support for the in vitro culture of skin organoids. AI high-throughput automation can be applied to sample quality control and standardization of the culture and use process, improving the success rate, optimizing and reducing the time associated with manual procedures, and facilitating clinical application. First, image analysis technology combined with deep learning can more accurately capture the microstructure and changes of organoids, improve the ability to identify changes in their morphology and growth, provide accurate data support for experiments, and reduce time and costs . Second, omics data from organoids provide new tools for the resolution of cell development and disease mechanisms . In drug screening, a key application of organoids, AI enables real-time monitoring of drug activity, which enhances screening accuracy and efficiency . 
In the future, AI is expected to play a greater role in the study of skin organoids, accelerating their clinical translation and the development of precision treatment. The establishment of biobanks is conducive to the cultivation and maintenance of organoid models, collaborative scientific research among researchers, and the transformation of scientific research results into market applications. By establishing a large-scale library of organoid samples, many experimental materials can be generated to provide accurate and reliable data support for experiments . In addition, diverse samples can simulate the physiological state of the skin in different populations and under different healing conditions, providing a more comprehensive reference for drug development and wound treatment . Skin organoids are emerging as promising models and treatment strategies for skin wound healing, offering novel avenues for scientific research and clinical interventions. With ongoing advances in technologies, such as 3D printing, culture systems for skin organoids are continually maturing, evolving from simple in vitro cultures to complex systems encompassing the epidermis, dermis, and appendages. These systems can facilitate skin cell regeneration and help establish a microenvironment conducive to skin wound healing. However, skin organoid technology currently has several limitations, and related research has yet to comprehensively meet clinical use requirements. With the continuous refinement of skin organoid culture systems, translation from basic research to clinical applications can be expected soon. This approach will enable functional repair and regeneration of wounded skin, ultimately benefiting a substantial number of patients with skin burns and trauma. | Review | biomedical | en | 0.999997 |
PMC11697123 | Liver transplantation is an effective treatment for end-stage liver disease. 1 However, ischemia-reperfusion injury (IRI) is a common and unavoidable surgical complication of liver transplantation, which can result in impaired liver function and even post-transplant liver failure. 2 Previous studies have shown that the inflammatory cascade response and cytokines play vital roles in the mechanism of hepatic IRI, and metabolic stress following metabolic homeostasis disruption in the liver influences the pathogenesis and pathological process of hepatic IRI by stimulating reactive oxygen species overproduction and sterile inflammation. 3 , 4 However, the intrahepatic inflammatory microenvironment and metabolite changes caused by IRI are still undefined; thus, the characteristic variations in the transcription and metabolite levels in the early, intermediate, and late phases of hepatic IRI require further research. Transcriptomics technology is used as a medical tool to explore the underlying mechanisms in IRI research. For example, through transcriptomics, hepatic metabolic remodeling, including lipid/fatty acid and 5-aminolevulinate (5-ALA) metabolisms, has shown its significance in IRI and could be a targeted therapeutic intervention. 5 Tripartite motif-containing 27 (TRIM27), a critical mediator of inflammation, has been revealed by transcriptomics to negatively regulate inflammation via suppressing the NF-κB and MAPK signaling pathways during hepatic IRI and is expected to be a promising treatment to attenuate hepatic IRI. 6 Moreover, metabolomic technology has also been employed to study the mechanism of hepatic IRI. Metabolomics was used to investigate the impact of glucose metabolism-related genes on hepatic IRI and showed that insulin-induced gene 2 (INSIG2) could reduce hepatic IRI by triggering the downstream pentose phosphate pathway to reprogram glucose metabolism. 7 A recent study applied metabolomics to demonstrate that oxidized lipid metabolites markedly increased during hepatic IRI and lipid peroxidation, partially caused by nicotinamide adenine dinucleotide deprivation, and could aggravate hepatic IRI. 8 Here, we established mouse models of liver IRI and investigated the characteristic alterations of transcriptome and metabolome levels of mouse liver in the early, intermediate, and late phases of IRI with transcriptomics and metabolomics. Additionally, we explored the effects of these changes on hepatic IRI during different periods. Finally, our study offers a novel perspective for exploring the occurrence and development of hepatic IRI by combining transcriptomics and metabolomics analyses. All animal procedures and experiments were approved by the Institutional Animal Care and Use Committee of Chongqing Medical University. Food and water were provided ad libitum , and a normative environment with standard temperature and humidity was maintained. Following anesthesia, the mice underwent laparotomy, and the blood vessels of the left and middle liver lobes were clipped with a vascular clamp to form 70% warm ischemia of the liver. After ischemia for 1 h, the clamp was removed and kept for reperfusion for 12, 24, and 48 h 9 , 10 . Mice in the Sham group were subjected to the same procedure, but the blood vessels were not clipped. All 32 mouse liver samples were obtained and divided into the Sham, I1R12, I1R24, and I1R48 groups ( n = 8 per group). After the mice were sacrificed, liver samples were sectioned and fixed with paraformaldehyde. 
Subsequently, the liver sections were dehydrated using a gradient series of alcohol and embedded in paraffin wax. Liver sections of mice from the four groups were stained with hematoxylin and eosin and then observed under a light microscope at 200× or 400× magnification. Blood samples were collected from the four groups of mice, and an ELISA kit (Nanjing Jiancheng, Nanjing, China) was used to measure the serum concentrations of aspartate aminotransferase and alanine aminotransferase. IRI in mouse liver samples was evaluated by calculating Suzuki’s score in a blinded manner. 11 Mouse liver tissue samples were preserved at −80 °C until mRNA was extracted. Furthermore, quantitative real-time PCR with cDNA as a template and β-actin as an internal reference was conducted to analyze the relative mRNA expression of the selected biomarkers. The primer sequences are listed in Table 1 .
Table 1 Primer sequences.
Primer   Sequence (5′-3′)   Sequence (3′-5′)
PKG1   CCACAGAAGGCTGGTGGATT   GTCTGCAACTTTAGCGCCTC
GcK   CCCAGTCGTTGACTCTGGTAG   CTTCTGAGCCTTCTGGGGTG
LDHA   AACTTGGCGCTCTACTTGCT   GGACTTTGAATCTTTTGAGACCTTG
PI3K   CCACCTCTTTGCCCTGAT   TCGGTTCTTTCCCGTTAG
AKT1   CCGCCTGATCAAGTTCTCCT   GATGATCCATGCGGGGCTT
4E-BP1   ACTCACCTGTGGCCAAAACA   TTGTGACTCTTCACCGCCTG
ALDOA   AACCCAGCTGAATAGGCTGC   CATGGGTCACCTTGCCTGG
TIMP-1   AGCCTGGAGGCAGTGATTTC   GGCCATCATGGTATCTGCTCT
STAT3   TACACCAAGCAGCAGCTGAA   TACGGGGCAGCACTACCT
PLA2   AACACCTCCGCTAAGAACCC   GCAGCCGTAGAAGCCATAGT
PLAAT3   GGAGAAAAGGAGCCAGGGG   GCTTGGGTTCTGGTATGGGT
AML12 cells were purchased from Procell (Wuhan, China) and cultured in DMEM/F12 (Gibco, USA) with 10% fetal bovine serum (Procell), 40 ng/mL dexamethasone, and 0.5% insulin-transferrin-selenium. The AML12 cells were subjected to hypoxia (94% N2, 5% CO2, and 1% O2) in a tri-gas incubator for 12 h and then transferred to a standard 5% CO2 incubator for reoxygenation for 12, 24, or 48 h in medium without fetal bovine serum. Proteins from the tissues and cells were extracted with lysis buffer, and the protein concentration was measured using a BCA protein assay kit. The proteins were separated by SDS-PAGE and transferred to PVDF membranes, which were then blocked with NcmBlot blocking buffer for 30 min. The membranes were incubated overnight at 4 °C with primary antibodies and for 1 h at room temperature with horseradish peroxidase-conjugated goat-anti-rabbit or goat-anti-mouse antibodies. The blots were visualized with the FUSION Solo S system. The primary antibodies used in this study targeted PGK1 , GCK , LDHA , PI3K , AKT1 , 4EBP1 , ALDOA , TIMP1 , STAT3 , P-AKT , P-PI3K , mTOR , and β-actin . ELISA assays (RUIXIN, China) were used to determine the concentration of prostaglandin F1 alpha (PGF1α) in the cell culture supernatant and the free fatty acid content of mouse liver tissue samples according to the manufacturer’s instructions. The hydroxyproline content of mouse liver tissue samples was detected with a hydroxyproline content assay kit according to the manufacturer’s instructions. Differentially expressed genes (DEGs) between two samples were identified by calculating the expression level based on transcripts per million. Furthermore, we quantified gene abundances using RNA-seq by expectation maximization. 12 Differential expression analysis was conducted via DESeq2. 13 Significant DEGs were identified according to the criteria |log2 fold change| ≥ 1 and false discovery rate <0.05.
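As a minimal illustration of the DEG selection step described above (|log2 fold change| ≥ 1 and false discovery rate < 0.05), the following Python sketch filters a result table of the kind DESeq2 produces. The column names (log2FoldChange, padj) follow DESeq2's usual output but are assumptions here, and the example values are made up; this is not the study's own analysis code.

```python
import pandas as pd

def select_degs(deseq2_results: pd.DataFrame,
                lfc_cutoff: float = 1.0,
                fdr_cutoff: float = 0.05) -> pd.DataFrame:
    """Keep genes with |log2 fold change| >= lfc_cutoff and FDR (padj) < fdr_cutoff."""
    mask = (deseq2_results["log2FoldChange"].abs() >= lfc_cutoff) & \
           (deseq2_results["padj"] < fdr_cutoff)
    degs = deseq2_results[mask].copy()
    # Label the direction of change for downstream heatmaps and volcano plots
    degs["direction"] = degs["log2FoldChange"].apply(
        lambda x: "up" if x > 0 else "down")
    return degs

# Hypothetical values for an I1R12-versus-Sham comparison
example = pd.DataFrame(
    {"gene": ["Pgk1", "Ldha", "Gck"],
     "log2FoldChange": [1.8, 2.1, -1.3],
     "padj": [0.001, 0.0004, 0.02]}).set_index("gene")
print(select_degs(example))
```

The same thresholding logic, with a variable-importance-in-projection column in place of log2FoldChange, would apply to the metabolite (DEM) selection described in the next section.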
The Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO) enrichment analyses were conducted by Goatools and Python SciPy, respectively, for function and pathway enrichment analysis of DEGs. The above data were analyzed on the Majorbio Cloud Platform. The analysis of mouse liver samples with liquid chromatography coupled with mass spectrometry was conducted on a Thermo UHPLC-Q Exactive HF-X system. After the addition of 6 mm diameter grinding beads, 50 mg of solid samples were ground and then centrifuged at 13,000 g and 4 °C for 15 min. The supernatant was transferred for analysis with liquid chromatography coupled with mass spectrometry. Progenesis QI software pretreated the raw data. The metabolites were identified by the Human Metabolome Database (HMDB), Metlin, and Majorbio databases. The data matrix from the search database was uploaded to the Majorbio Cloud platform for analysis. Significant DEMs were identified based on variable importance in projection >1 and P < 0.05. DEMs were sorted into the corresponding biochemical pathways via the KEGG pathway enrichment analysis. Metabolic compound identification was performed using the HMDB and KEGG compound databases. Simultaneously, integrated pathway analysis was conducted using the iPath database version 3. All data were analyzed using SPSS 22.0 software and presented as mean ± standard error of the mean. The study employed the student’s t -test to analyze significant differences between two groups, while one-way analysis of variance (ANOVA) was used for the analysis among three or more groups. Statistical significance was determined at P values < 0.05. In each group, liver tissue samples were stained with hematoxylin and eosin . Hepatic IRI extent was assessed by Suzuki’s score . The degree of liver damage increased significantly with the extension of post-reperfusion time, and the serum concentrations of aspartate aminotransferase and alanine aminotransferase corroborated this result, peaking at 24 h of reperfusion . To investigate the characteristic changes in liver IRI over time, we collected mouse liver tissue samples from the I1R12, I1R24, I1R48, and Sham groups, metabolomics and transcriptomics analyses were performed, and the findings were validated by quantitative real-time PCR . The transcriptome and metabolome quality control results indicated that the data were fit to be used for further analysis . Figure 1 Establishment and validation of the hepatic ischemia-reperfusion injury model. (A) Hematoxylin-eosin staining of liver tissue samples of the Sham, I1R12, I1R24, and I1R48 groups (magnification, × 200/ × 400; scale bar, 100 mm). (B) Serum concentrations of aspartate aminotransferase (AST) and alanine aminotransferase (ALT). (C) Suzuki’s scores of the Sham, I1R12, I1R24, and I1R48 groups. (D) Schematic of the research process. n = 8; ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗ P < 0.001; ns, no significance. I1R12, ischemia for 1 h and reperfusion for 12 h; I1R24, ischemia for 1 h and reperfusion for 24 h; I1R48, ischemia for 1 h and reperfusion for 48 h. Figure 1 The co-expressed and specifically expressed genes among the four groups were indicated in a Venn diagram . Principal component analysis and correlation heatmaps revealed significant between-group differences , proving the rationality of liver tissue samples. Through transcriptome data analysis, based on the criteria |log 2 fold change| ≥ 1 and P < 0.05, 2203 DEGs ( Table S1A ) were identified in the I1R12 group versus the Sham group. 
Furthermore, 2353 DEGs ( Table S1B ) were identified in the I1R24 group versus the Sham group, and 4146 DEGs ( Table S1C ) were identified in the I1R48 group versus the Sham group . The corresponding heatmaps and volcano plots are shown in detail in Figure 3 C and D. Based on the above DEG data, the Venn diagram showing the co-expressed genes of three comparison groups yielded 1115 DEGs . Figure 2 Correlation analysis of liver tissue samples. (A) Venn diagram analysis of the genes among the Sham, I1R12, I1R24, and I1R48 groups. (B) Principal component analysis of the Sham, I1R12, I1R24, and I1R48 groups. (C) Correlation heatmap of the Sham and IR groups. IR, ischemia and reperfusion; I1R12, ischemia for 1 h and reperfusion for 12 h; I1R24, ischemia for 1 h and reperfusion for 24 h; I1R48, ischemia for 1 h and reperfusion for 48 h. Figure 2 Figure 3 Hepatic ischemia-reperfusion injury involves transcriptional reprogramming. (A) Venn analysis of DEGs in the Sham/I1R12 groups, Sham/I1R24 groups, and Sham/I1R48 groups. (B) Histogram of the DEG number of the Sham/I1R12 groups, Sham/I1R24 groups, and Sham/I1R48 groups. (C) Hierarchical clustering heatmap of DEGs in the Sham/I1R12 groups, Sham/I1R24 groups, and Sham/I1R48 groups. (D) Volcano plots of DEGs in the Sham/I1R12 groups, Sham/I1R24 groups, and Sham/I1R48 groups. Blue denotes down-regulated genes, and red represents up-regulated genes. I1R12, ischemia for 1 h and reperfusion for 12 h; I1R24, ischemia for 1 h and reperfusion for 24 h; I1R48, ischemia for 1 h and reperfusion for 48 h; DEG, differentially expressed gene. Figure 3 GO function ( Tables S2A–C ) and KEGG pathway enrichment ( Tables S3A–C ) analyses were performed to analyze the characteristic changes in biological functions and involved pathways in the early, intermediate, and late phases of IRI. In the early phase of hepatic IRI, KEGG analysis revealed four significant enrichment pathways: glycolysis/gluconeogenesis, galactose metabolism, biosynthesis of unsaturated fatty acids, and pentose and glucuronate interconversions, whereas lipid biosynthetic process and response to oxidative stress processes were enriched in GO analysis . Thus, glucose and carbohydrate metabolism were characteristic changes in the early phase of IRI. Regarding the intermediate phase of hepatic IRI, GO enrichment analysis was enriched in the cellular lipid metabolic process and acute inflammatory response. In contrast, KEGG analysis was enriched in the glycolysis/gluconeogenesis and insulin resistance pathways, and PI3K-AKT, HIF-1, and adipocytokine signaling pathways , involving glucose and lipid metabolism and activation of the inflammatory pathway. In addition, KEGG analysis was enriched in the fatty acid degradation, fatty acid elongation, and linoleic acid metabolism pathways in the late phase of IRI, and GO enrichment analysis was enriched in lipid catabolic, neutral lipid metabolic, fatty acid metabolic, lipid biosynthetic, and triglyceride metabolic processes . This result demonstrated that lipid metabolism was the main characteristic change during the late phase of IRI. Figure 4 GO function and KEGG pathway enrichment analyses of the DEGs. (A) GO and KEGG enrichment analyses of the Sham and I1R12 groups. (B) GO and KEGG enrichment analyses of the Sham and I1R24 groups. (C) GO and KEGG enrichment analyses of the Sham and I1R48 groups. All GO function and KEGG pathway enrichment analyses of the DEGs revealed the top 20 functional terms and pathways. 
DEG, differentially expressed gene; KEGG, Kyoto encyclopedia of genes and genomes; GO, Gene Ontology; I1R12, ischemia for 1 h and reperfusion for 12 h; I1R24, ischemia for 1 h and reperfusion for 24 h; I1R48, ischemia for 1 h and reperfusion for 48 h. Figure 4 To validate the results of the GO and KEGG analyses, western blotting and quantitative real-time PCR were applied to measure the relative mRNA and protein expression levels of several biomarkers of the pathways selected from the KEGG analysis. Compared with the Sham group, the relative mRNA expression of PGK1 and LDHA increased, while the mRNA expression of GCK decreased, indicating up-regulation of the glycolysis pathway. The relative mRNA expression levels of PI3K, AKT1, 4EBP1, ALDOA, TIMP-1, and STAT3 were elevated, suggesting up-regulation of the HIF-1 and PI3K-AKT pathways. In addition, the increased expression of PLA2 and PLAAT3 indicated up-regulation of the linoleic acid metabolism pathway . At the same time, we confirmed that the protein expression of PGK1, LDHA, PI3K, AKT, 4EBP1, ALDOA, TIMP-1, and STAT3 in the liver tissue was increased, consistent with the results of quantitative real-time PCR . Through an in vitro hypoxia/reoxygenation model, we used LY294002 (a PI3K inhibitor) to confirm the activation of the PI3K/AKT/mTOR pathway and found markedly increased protein expression of phosphorylated PI3K (p-PI3K) and phosphorylated AKT (p-AKT) in the hypoxia/reoxygenation-treated group. Meanwhile, the group treated with hypoxia/reoxygenation and LY294002 showed reduced expression of p-PI3K and p-AKT, indicating that LY294002 successfully suppressed the PI3K/AKT/mTOR pathway . These results were consistent with the GO and KEGG analyses of the transcriptomics data. Figure 5 The relative mRNA and protein expression levels of 11 biomarkers in the Sham and IRI groups. (A) The relative mRNA levels of 11 biomarkers in the Sham and IRI groups measured by quantitative real-time PCR. (B, C) The protein levels of 11 biomarkers detected by western blotting. n = 4; ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗ P < 0.001. IRI, ischemia-reperfusion injury. Figure 5 Figure 6 The activation of the PI3K/AKT/mTOR pathway and the metabolic reprogramming involved in hepatic ischemia-reperfusion injury. (A, B) The protein levels of the PI3K/AKT/mTOR pathway detected by western blotting. (C) Principal component analysis of the Sham, I1R12, I1R24, and I1R48 groups in the positive and negative ionization modes. (D) Correlation heatmaps of the Sham, I1R12, I1R24, and I1R48 groups in the positive and negative ionization modes. ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗ P < 0.001. I1R12, ischemia for 1 h and reperfusion for 12 h; I1R24, ischemia for 1 h and reperfusion for 24 h; I1R48, ischemia for 1 h and reperfusion for 48 h; DEG, differentially expressed gene. Figure 6 Principal component analysis and correlation heatmaps of the positive and negative ionization modes showed apparent separation between the samples of each group . The principal component analysis plots showed that the separation between the I1R48 and Sham groups was the most obvious. Furthermore, based on the criteria (variable importance in projection >1 and P < 0.05), 358 DEMs (137 up-regulated, 221 down-regulated) ( Table S4A ) were identified in the I1R12 and Sham groups.
A total of 339 DEMs (142 up-regulated, 197 down-regulated) ( Table S4B ) were identified in the I1R24 and Sham groups, and 367 DEMs (99 up-regulated, 268 down-regulated) ( Table S4C ) in the I1R48 and Sham groups. The corresponding volcano plots are shown in Figure 7 B. The Venn diagram for analyzing the co-expressed DEMs among the three groups yielded 151 metabolites , and we selected two metabolites for validation. We determined the concentration of cell culture supernatant PGF1α was decreased and the hydroxyproline content of mouse liver tissue samples was increased following the extent of reperfusion, consistent with metabolomics results . Figure 7 Hepatic ischemia-reperfusion injury involves the metabolic reprogramming. (A) Venn diagram (left) and histogram (right) of DEMs in the Sham/I1R12 groups, Sham/I1R24 groups, and Sham/I1R48 groups. (B) Positive and negative ionization modes volcano plots of DEMs in the Sham/I1R12 groups, Sham/I1R24 groups, and Sham/I1R48 groups. The blue dots denote uptake of metabolites, and the red dots indicate release of metabolites. (C) The level of PGF1α in cell culture media measured by ELISA. (D) The level of hydroxyproline in mouse liver samples measured with hydroxyproline content assay kit. n = 3; ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗ P < 0.001. I1R12, ischemia for 1 h and reperfusion for 12 h; I1R24, ischemia for 1 h and reperfusion for 24 h; I1R48, ischemia for 1 h and reperfusion for 48 h; DEG, differentially expressed gene; DEM, differentially expressed metabolite. Figure 7 To analyze the differences in metabolites between groups, we performed clustering heatmaps of the I1R12, I1R24, and I1R48 groups versus the Sham group . Clear separations were demonstrated in the heatmaps, showing that the mouse liver reperfusion models underwent significant metabolic recombination following ischemia and reperfusion, consistent with the principal component analysis and correlated heatmaps. Subsequently, KEGG analysis of the DEMs was conducted ( Tables S5A–C ). Analysis of metabolites showed that the following metabolic pathways were enriched in the I1R12 and Sham groups: arachidonic acid, glycerophospholipid, and ether lipid metabolism . In the I1R24 and Sham groups, the significantly differential metabolic pathways were linoleic acid metabolism, glycerophospholipid metabolism, regulation of lipolysis in adipocytes, sphingolipid signaling pathway, and glucagon signaling pathway . In the I1R48 and Sham groups, KEGG analysis was enriched in arachidonic acid metabolism, PPAR signaling pathway, alpha-linolenic acid metabolism, and biosynthesis of unsaturated fatty acids . Meanwhile, we detected the free fatty acid content of mouse liver tissue samples and showed that the free fatty acid levels in the I1R12 and I1R24 groups were significantly decreased, indicating the existence of lipid metabolism disorder. Our findings demonstrated that the primary metabolic characteristics of lipid metabolism were altered in the early, intermediate, and late phases of IRI. Figure 8 Hierarchical clustering heatmap and KEGG pathway enrichment analysis of the DEMs. (A) Hierarchical clustering heatmap of DEMs in the Sham and IR groups. (B) KEGG analysis of the DEMs in the Sham and I1R12 groups. (C) KEGG analysis of the DEMs in the Sham and I1R24 groups. (D) KEGG analysis of the DEMs in the Sham and I1R48 groups. (E) The level of free fatty acid (FFA) measured by ELISA. All KEGG pathway enrichment analyses revealed the top 20 pathways. n = 3; ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗ P < 0.001. 
DEM, differentially expressed metabolite; KEGG, Kyoto encyclopedia of genes and genomes; IR, ischemia and reperfusion; I1R12, ischemia for 1 h and reperfusion for 12 h; I1R24, ischemia for 1 h and reperfusion for 24 h; I1R48, ischemia for 1 h and reperfusion for 48 h. Figure 8 To analyze the specific components of DEMs, 358 DEMs in the I1R12 and Sham groups were assigned to the HMDB database; 330 metabolites were classified into 10 HMDB superclasses and 21 HMDB subclasses; 156 metabolites were included in the “lipids and lipid-like molecules” superclass, and 77 metabolites were included in the “others” subclass, which were the first class in superclass and subclass, respectively . Similarly, 339 DEMs in the I1R24 and Sham groups were assigned to the HMDB database; 313 metabolites were classified into 11 superclasses and 21 subclasses; “lipids and lipid-like molecules” superclass contained 153 metabolites, and the “others” subclass contained 82 metabolites, both of which are the first class . In the I1R48 and Sham groups, 367 DEMs were assigned to the HMDB database; 338 metabolites were classified into 12 superclasses and 21 subclasses; 185 metabolites were included in the first superclass, “lipids and lipid-like molecules”, and 70 metabolites were included in the first subclass, “others” . It was evident from the results of the HMDB database that the proportion of “lipids and lipid-like molecules” increased with prolonged reperfusion time, indicating that the lipid metabolism was significantly altered. In addition, the KEGG compound classification results indicated that the number of fatty acids increased markedly with an increase in reperfusion time , consistent with the HMDB database results and KEGG analysis. Notably, our findings showed that most DEMs in the three groups were linked to lipid metabolism, and the integrated pathway analysis also proved that . Figure 9 The identified metabolites were classified based on the HMDB and KEGG compound databases. (A) Pie chart of the identified metabolites based on the HMDB database. (B) Histogram of the identified metabolites based on the KEGG compound database. (C) Integrated pathway analysis of DEMs in the Sham/I1R24 groups and Sham/I1R48 groups. The rectangle circled by red line indicated that most DEMs were linked to lipid metabolism. DEM, differentially expressed metabolite; I1R24, ischemia for 1 h and reperfusion for 24 h; I1R48, ischemia for 1 h and reperfusion for 48 h. Figure 9 Liver IRI is an unavoidable consequence of liver transplantation and partial hepatectomy that involves multiple pathological mechanisms. 14 , 15 Most liver IRI research focuses on the inflammatory response and cell death; however, metabolism and detoxification are important functions of the liver. Thus, a study suggested ischemia-reperfusion primarily disrupts metabolic homeostasis, followed by an inflammatory response and hepatic damage. 5 Previous studies have indicated that glucolipid metabolism regulated by INSIG2, and lipid metabolic reprogramming, including arachidonate 12-lipoxygenase (ALOX12) and its downstream metabolites, influence hepatic IRI through the release of damage-associated molecular patterns (DAMPs) and oxidative stress. 7 , 16 However, alterations in signaling pathways and metabolic profiles in the early, intermediate, and late phases of hepatic IRI remain undefined. Transcriptomics and metabolomics have been employed in medical research to investigate the mechanisms underlying various disorders. 
17 In a recent study, transcriptomics was used to provide a deeper explanation of the molecular mechanism of IRI. 18 Additionally, metabolomics reveals pathophysiological processes by detecting changes in metabolite levels in organisms and has been used as an effective method in research on the molecular mechanisms of metabolic remodeling in IRI. 8 This study investigated the pathogenesis of hepatic IRI from a new perspective by combining transcriptomics and metabolomics in the early, intermediate, and late phases of hepatic IRI. In the initial phase of hepatic IRI, ischemia leads to an insufficient oxygen supply to hepatic cells and impairs them through glucose consumption, pH changes, and ATP depletion, resulting in disturbances in cellular metabolism and inflammation. 3 , 19 Meanwhile, glycolysis is a major energy source, and accelerated glycolysis and ATP depletion increase the accumulation of acidic metabolites, impairing signaling interactions, cellular homeostasis, and hepatocytes, and triggering mitochondrial dysfunction and inflammatory responses. 7 , 20 In this study, KEGG analysis showed that the glycolysis/gluconeogenesis pathway was altered exclusively during the early phase of IRI, when glycolytic flux increased to satisfy the energy requirement in the state of hypoxia. The outcomes of our study illustrated that glucose metabolism reprogramming is critical in the early phase of IRI and could serve as a target for metabolic intervention to reduce the subsequent inflammatory response. Several studies have reported that glycolysis interference treatments significantly inhibited glycolysis and the release of inflammatory cytokines, improving the acidic microenvironment and acidosis and attenuating hepatic cellular damage. 21 , 22 , 23 Liver IRI involves two interconnected stages: local ischemia injury and reperfusion injury caused by sterile inflammation. 19 The findings of this study confirmed that in the intermediate phase of IRI, inflammatory responses were triggered and became more intense. The intermediate phase of IRI is characterized by inflammatory disorder, triggered by the overproduction of reactive oxygen species and the release of DAMPs and pro-inflammatory cytokines, aggravating apoptosis and hepatocyte damage. 24 , 25 Based on the KEGG analysis of the intermediate phase, this study found that inflammation-related signaling pathways such as PI3K-AKT and HIF-1 were markedly regulated, taking part in anti-inflammatory and adaptive hypoxia responses during IRI and providing a potential therapeutic intervention for regulating anaerobic glycolysis and the inflammatory response to improve IRI. 26 , 27 , 28 Our in vitro experiments with a hypoxia/reoxygenation cell model also indicated that the activation of the PI3K/AKT pathway during IRI was markedly suppressed by the PI3K inhibitor. Meanwhile, some studies have demonstrated that the PI3K-AKT pathway has the potential to serve as a therapeutic intervention target to mitigate IRI by reducing reactive oxygen species production and pro-apoptotic signals. 29 , 30 Moreover, KEGG analysis, compound classification, and integrated pathway analysis found that lipid metabolism remodeling was the characteristic alteration in the late phase of IRI.
The liver is an important organ for lipid metabolism, and essential fatty acids play a vital role in hepatic IRI; for example, lipids are one of the main targets of reactive oxygen species in oxidative stress, contributing to IRI through the concentration of fatty acids and lipid peroxidation, forming cytotoxic lipid aldehydes and lipid hydroperoxides. 31 Previous studies have reported that lipid metabolic disorders during IRI induce oxidative stress, inflammation, apoptosis, and ferroptosis by modulating interrelated transduction signaling pathways and suppressing antioxidant capacity, which could aggravate lipid metabolic reprogramming. 32 , 33 Meanwhile, previous clinical research demonstrated that lipid biosynthesis was the major change during IRI, severe steatosis was associated with a higher incidence of graft failure after liver transplantation, and some metabolites had the potential to be biomarkers of lipid-related damage of IRI. 24 , 34 These findings and our results indicate that lipid metabolic reprogramming plays a key role in hepatic IRI and aggravates IRI. Consequently, we present a new perspective on IRI therapeutic intervention: intervening in the major metabolic reprogramming at each stage through clinical means could effectively control the subsequent inflammatory response, and even predict the prognosis of liver transplantation through the concentrations of mainly different metabolites at each stage. It has been reported that regulating lipid metabolism response and mediators could ameliorate the pathological damage from ischemia-reperfusion by reducing mitochondrial damage and liver macrophage pyroptosis. 5 , 35 , 36 This study illustrated the importance of metabolic reprogramming in hepatic IRI and its potential as a therapeutic intervention target. The above findings were derived from animal experiments but not verified in clinical samples. We simply validated some of the pathways and metabolites through in vivo and in vitro hypoxia/reoxygenation models, but we did not deeply explore the specific mechanisms of different metabolites. In summary, by combining transcriptomics and metabolomics, our study first revealed characteristic changes in signaling pathways and metabolism in the early, intermediate, and late phases of hepatic IRI. Lipid metabolism, precisely regulated by the liver through biochemical, signaling, and cellular pathways, plays a non-negligible role in the occurrence and development of hepatic IRI. This represents a potential therapeutic intervention to treat hepatic IRI and strengthens the understanding of the pathogenesis and pathological process of IRI and its molecular mechanism. This study was funded by the 10.13039/100014717 National Natural Science Foundation of China , 10.13039/501100010008 China Postdoctoral Science Foundation , 10.13039/501100010008 Chongqing Postdoctoral Science Foundation of China , Postdoctoral Cultivation Project of the First Affiliated Hospital of Chongqing Medical University , and Chongqing Postdoctoral Innovation Talents Support Program (Chongqing, China) . Qi Li: Writing – original draft, Data curation, Investigation, Methodology, Project administration. Xiaoyan Qin: Data curation, Investigation, Methodology, Project administration, Resources, Writing – original draft. Liangxu Wang: Data curation, Investigation. Dingheng Hu: Investigation, Project administration. Rui Liao: Project administration, Resources. 
Zhongjun Wu: Data curation, Investigation, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing. Huarong Yu: Project administration, Methodology, Resources, Writing – original draft, Writing – review & editing. Yanyao Liu: Funding acquisition, Supervision, Writing – original draft, Writing – review & editing, Data curation, Investigation, Project administration, Resources. All data generated or analyzed during this study are included in this published article and supplementary material. The authors declared no conflict of interests. | Study | biomedical | en | 0.999996 |
PMC11697147 | The postnatal period, lasting up to six weeks after birth, is crucial for women, newborns, and families. Despite this, maternal and neonatal mortality rates remain high, and opportunities to enhance maternal health and newborn care are often missed ( 1 ). Women should wait at least two years between pregnancies to ensure proper care for the most recent child and reduce the risk of maternal and child mortality ( 2 , 3 ). Family planning can save lives, but there is a lack of understanding about postpartum fertility and birth control among educators, healthcare providers, and users ( 4 , 5 ). To foster a more promising future for everyone, Sustainable Development Goal 3 aims to lower the maternal mortality ratio to below 70 per 100,000 live births by the year 2030. This goal includes the commitment to provide universal access to sexual and reproductive health care services, encompassing family planning ( 5 ). More than 90% of women worldwide want to delay or avoid pregnancy within a year of giving birth. In sub-Saharan Africa, this number increases to 95%, but about 70% of women in these areas do not use contraception ( 6 ). Unplanned pregnancies within a year after childbirth can arise from various factors. Notably, ovulation can resume as early as 25 days post-delivery, and about 23% of women resume sexual activity before the six-week mark without using contraception ( 7 ). Whether pregnancies are intended or unintended, every woman should be able to use contraception during the postpartum period if she chooses, in order to promote her own and her family's health ( 2 ). If a woman becomes pregnant soon after giving birth, this can lead to low birth weight, a doubling of the chances of premature birth, and a 60% increase in the risk of infant mortality for babies born less than 24 months after a previous birth ( 8 ). A review of the literature showed that marital status, secondary or higher education, maternal age, a longer birth interval after delivery, previous use of contraceptive methods, resumption of menses, resumption of sexual activity, antenatal care follow-up, postnatal care, knowledge about family planning, and discussion with the husband were determinants of postpartum contraceptive utilization ( 4 , 9 – 13 ). Research in Ethiopia shows that 47% of pregnancies occur within 24 months of the previous birth ( 14 ), and the country has the highest maternal mortality rate in Sub-Saharan Africa at 412 per 100,000 live births ( 15 ). Furthermore, there is a high demand for postpartum family planning (PPFP), with rates as high as 86% within the first 5 months after childbirth, decreasing to 76% within the first year. Additionally, 81% of women do not use PPFP because they are unaware that they can conceive within a year after giving birth ( 16 ). To improve postpartum contraceptive use, provision of health education, counseling about the importance of FP, and access to various family planning methods are paramount ( 17 ). There are significant differences in Ethiopia regarding awareness and use of modern contraceptives. Nearly all married women in Addis Ababa know at least one contraceptive method, compared to only 67% in the Somali region. Urban women have a higher adoption rate (48%) than rural women (38%). The public sector provides 87% of modern contraceptives, while the private sector offers 12%. Usage rates are highest in Addis Ababa (48%) and Amhara (50%), and lowest in Somali (3%) and Afar (13%) ( 18 , 19 ).
These data suggest that there are no standard approaches to helping Ethiopian women who wish to utilize family planning according to their needs. The 2016 EDHS data show that 25% of women use modern family planning six months postpartum, with significant differences based on delivery location: 18% for home births and 43% for facility births ( 20 ). These disparities may result from variations in access to healthcare and demographic factors influencing childbirth location and contraceptive use. The report's inconsistencies hinder clinical decision-making. Despite previous studies and reviews ( 21 – 25 ), policymakers and providers still lack a comprehensive overview of the evidence on factors influencing postpartum family planning demand. This study aims to present a comprehensive overview of systematic reviews (SRs) to consolidate the current evidence concerning the uptake of modern postpartum family planning (PPFP) and contributing factors among women in the postpartum period. A preliminary search was carried out in PROSPERO to examine the current research landscape and mitigate the duplication risk. At that point, no analogous studies were discovered. Subsequently, a research protocol was formulated and registered in PROSPERO . The search terms were integrated using Boolean operators “OR” and “AND”. The following MeSH terms or keywords were applied in the online databases: postpartum OR post-delivery OR parturition OR puerperium OR immediate postpartum OR extended postpartum AND prevalence OR magnitude OR proportion AND use OR utilization OR intention OR unmet need OR barrier AND predictors OR contraception OR contraceptive OR family planning OR modern contraceptives OR modern postpartum family planning OR modern family planning AND Ethiopia AND systematic review ( Supplementary Table S1 ). The Meta-analysis of Observational Studies in Epidemiology (MOOSE) guideline was followed ( 26 , 27 ). This guideline includes detailed checklists with 35 elements to guide the execution and documentation of observational studies at high risk of bias and confounding, particularly when evaluating retrospective data. The PRISMA guidelines ( 28 ) were used for reporting the systematic reviews (SRs) and meta-analyses concerning the adoption of modern PPFP. An extensive literature search was performed in four major electronic databases (MEDLINE/PubMed, Cochrane, Web of Science, and Science Direct) covering the period from June 15, 2024 to July 15, 2024. A set of inclusion and exclusion criteria was established to identify all pertinent systematic reviews: (i) population: studies focusing on postpartum mothers, (ii) outcome: adoption of modern postpartum family planning (PPFP) and its determining factors, (iii) language: all published studies in English, (iv) study design: systematic reviews and meta-analyses, (v) geographical area: research conducted exclusively in Ethiopia. The exclusion criteria for reviews were determined based on the following factors: the presence of similar duplicate reviews published across multiple journals, articles that did not report the intended outcome (i.e., incomplete quantitative data), and reviews that failed to present a clear research question, search strategy, or a defined methodology for article selection. The PICOS framework highlights the critical components of population, intervention, comparison, outcome, and study design. Its primary aim is to identify and evaluate the clinical aspects of evidence throughout the systematic review process.
The PICOS components in this study were delineated as follows: Population: postpartum women in Ethiopia; Intervention: postpartum family planning; Comparison: no adoption of modern PPFP; Outcomes: overall prevalence of modern contraceptive uptake among postpartum women and the factors associated with it in Ethiopia; Study design: systematic reviews or meta-analyses. Each study in the analysis underwent a thorough evaluation using the AMSTAR tool, which consists of 11 questions to assess methodological and evidential integrity. Quality was rated on a scale of 0–11, with scores of 8–10 indicating high, 4–7 medium, and <3 low quality ( 29 ). Data from the included studies were extracted using a standardized extraction tool developed in an Excel spreadsheet and labeled as follows. For each SRM, the following information was extracted: (1) identification data (first author's last name and publication year), (2) review aim, (3) prevalence or proportion of uptake of postpartum family planning, (4) risk factors, (5) odds ratio or relative risk with 95% confidence intervals for the risk factors, (6) number of primary studies included within each SRM study and their respective design type, (7) total sample size included, (8) publication bias assessment methods and scores, (9) quality assessment methods and scores, (10) data synthesis methods (random or fixed-effects model), and (11) the authors' main conclusion of the SRM study. TEG and SA independently extracted information on study characteristics and key findings from each review as stated above. In cases of disagreement, additional input was sought from the third and fourth authors, LLF and ATA, respectively. The search did not employ any filters, and there were no limitations regarding publication year. Records were organized using Endnote version 8. Data synthesis in the included SRM studies involved both qualitative and quantitative methods. Multiple estimates for modern PPFP prevalence and associated factors were presented as a range, with an aggregated estimate calculated using STATA version 17. Study heterogeneity was assessed using I 2 and Cochran Q statistics ( 30 ). A random-effects model with a 95% CI was used to determine the pooled prevalence of modern PPFP utilization. Due to the inadequate number of studies incorporated in this umbrella review, we did not appraise publication bias. To successfully assess publication bias, at least 10 studies are required ( 31 ). Stata version 17.0 software was used for analyses. In this research, it was unnecessary to obtain consent or ethical approval from the participants, as the study utilized data derived from SRM studies. The included studies that fulfilled the specified criteria are presented in Figure 1 . A total of 145 articles were initially gathered from four distinct databases. PubMed/MEDLINE generated 58 research articles, Cochrane generated 41 articles, Science Direct generated 12, and Web of Science identified 34 records through reference list review ( 34 ). Subsequently, after eliminating 92 duplicate entries, we were left with 53 records. A review of the titles and abstracts led to the exclusion of 31 articles. The remaining 22 articles were then assessed for eligibility. Ultimately, 17 articles were excluded for several reasons; five were inconsistent with the outcome of interest, six were conducted outside of the study area, five had methodological differences, and one was not related to postpartum women.
In the end, 5 studies ( 21 – 25 ) were found to meet the eligibility requirements for inclusion. This umbrella review encompasses five systematic reviews and meta-analyses ( 21 – 25 ), which were derived from observational primary studies. These primary studies consisted of 3 cohort studies, 2 case-control studies, and 77 cross-sectional studies, amounting to a total of 82 studies. The collective sample size across these studies was 44,276 postpartum women, although one study also included women of reproductive age, pregnant women, and postpartum women ( 23 ). The number of primary studies varied per SRM, ranging from 12 ( 23 ) to 19 ( 24 ). Additionally, the sample size per meta-analysis exhibited variability, spanning from 4,367 ( 23 ) to 11,932 ( 24 ). Two SRM studies were published in 2020 ( 24 , 25 ), and one each was published in 2021 ( 23 ), 2022 ( 22 ), and 2023 ( 21 ). These studies comprehensively examined both the prevalence and determinants of postpartum family planning uptake. Based on the included SRMAs, the prevalence of postpartum family planning ranged from 21.04% (95% CI: 13.08, 29.00; I 2 = 98.43%) ( 21 ) to 48.11% (95% CI: 36.96, 59.27; I 2 = 99.4%) ( 25 ). These statistics highlight the diversity in prevalence rates across the studies. General characteristics of the systematic review and meta-analysis studies are presented in Table 1 . The methodological quality of the included SRM studies was evaluated using the AMSTAR tool. Quality scores, out of 11 points, ranged from 9 to 10 ( Table 2 ). In the umbrella review of the five SRM studies, the pooled prevalence of PPFP utilization was 36.41% (95% CI: 24.78, 48.03), with the heterogeneity index ( I 2 = 99.9%, P < 0.001) showing substantial heterogeneity. Therefore, we used the random-effects model to account for the heterogeneity among the included reviews ( Figure 2 ). The subgroup analysis focusing on sample size revealed that studies with samples greater than 10,000 had the highest prevalence of postpartum family planning use at 46.44% (95% CI: 44.81, 48.08), whereas those with samples of less than 10,000 had the lowest prevalence at 21.28% (95% CI: 20.50, 22.05) ( Figure 3 ). We conducted a thorough investigation into the origins of heterogeneity by employing a leave-one-out sensitivity analysis. This analysis demonstrated that the removal of each study from the overall assessment did not significantly impact the estimated average prevalence. The average prevalence consistently fell within the 95% confidence interval of the overall average prevalence calculated when all studies were included. Therefore, no single study exerted a notable influence on the average prevalence. Furthermore, the sensitivity analysis indicated that the exclusion of each study individually yielded an average prevalence of 36.41%, accompanied by a 95% confidence interval ranging from 24.78 to 40.03, as illustrated in Figure 4 . Considerable variability was noted among the studies included in the meta-analysis. To investigate the origins of this variability, we performed a meta-regression analysis utilizing sample size, which indicated a significant influence on the observed differences in PPFP uptake, as illustrated in Table 3 . This umbrella review of systematic reviews and meta-analyses identified the most frequently occurring associated variables, which were family planning counseling, couple discussion, and maternal education ( Supplementary file 2 ).
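As a rough, self-contained illustration of the random-effects pooling and heterogeneity statistics (Cochran's Q, I 2) described above, the Python sketch below implements a DerSimonian-Laird estimator for proportion-type estimates. The five input prevalences and standard errors are placeholders rather than the actual review-level values, and the real analysis was carried out in STATA 17; the same inverse-variance machinery also underlies the pooled odds ratios reported in the next section.

```python
import math

def dersimonian_laird(estimates, standard_errors):
    """Random-effects pooling of study estimates (DerSimonian-Laird)."""
    w_fixed = [1 / se**2 for se in standard_errors]
    pooled_fixed = sum(w * y for w, y in zip(w_fixed, estimates)) / sum(w_fixed)
    # Cochran's Q and I^2 quantify between-study heterogeneity
    q = sum(w * (y - pooled_fixed)**2 for w, y in zip(w_fixed, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Between-study variance tau^2, then random-effects weights
    c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)
    w_rand = [1 / (se**2 + tau2) for se in standard_errors]
    pooled = sum(w * y for w, y in zip(w_rand, estimates)) / sum(w_rand)
    se_pooled = math.sqrt(1 / sum(w_rand))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, q, i2, tau2

# Hypothetical prevalences (%) and standard errors for five reviews
prev = [21.0, 30.5, 37.0, 42.3, 48.1]
se = [4.1, 3.5, 4.8, 5.2, 5.7]
pooled, ci, q, i2, tau2 = dersimonian_laird(prev, se)
print(f"Pooled prevalence {pooled:.2f}% (95% CI {ci[0]:.2f}, {ci[1]:.2f}); I2 = {i2:.1f}%")
```

A leave-one-out sensitivity analysis of the kind reported above simply reruns this pooling five times, each time omitting one review, and checks whether the resulting estimates stay within the overall confidence interval.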
Four SRs included in this umbrella review analyzed family planning counseling ( 21 – 24 ), and the findings revealed that postpartum women who were counseled about family planning were 4.12 times more likely to utilize family planning methods than their counterparts (AOR: 4.12, 95% CI: 2.89, 4.71). Moreover, the studies did not reveal any heterogeneity in the results ( I 2 = 0.0%, P = 0.874) ( Figure 5 ). Three out of the 5 SRs focused on couple discussion about family planning during the postpartum period ( 21 – 23 ). Postpartum women who had discussed contraception with their partners during the postpartum period were 3.06 times more likely to utilize modern contraceptive methods than their counterparts (AOR: 3.06, 95% CI: 1.42, 5.60). Furthermore, the studies did not show any heterogeneity in the results ( I 2 = 4.7%, P = 0.369) ( Figure 6 ). In addition, the statistical significance of postnatal follow-up among postpartum mothers regarding PPFP utilization was analyzed using three studies ( 23 – 25 ). Women who had PNC follow-up were almost four times more likely to use PPFP methods than those who had no postnatal follow-up (AOR: 3.48, 95% CI: 2.60, 4.83). Also, the studies did not show any heterogeneity in the results ( I 2 = 0.0%, P = 0.822) ( Figure 7 ). Postpartum family planning (PPFP) plays a crucial role in decreasing high fertility rates among both those who wish to space their children and those who aim to limit family size. It enhances maternal and child health by mitigating the risks associated with unintended short inter-pregnancy intervals and unsafe abortions ( 2 ). Currently, approximately 222 million women worldwide experience an unmet need for family planning services ( 32 ). Addressing unmet needs in family planning and mitigating the risks associated with closely spaced pregnancies can be achieved through the utilization of postpartum contraceptives ( 33 , 34 ). Conversely, postpartum women experience amenorrhea for different durations, influenced by their breastfeeding habits. For instance, women who do not engage in breastfeeding may conceive within 45 days following childbirth, while those who do not exclusively breastfeed may also become pregnant before the return of menstruation, leading to a range of pregnancy-related complications ( 35 ). To date, five SRM reports have been published concerning the utilization of PPFP in Ethiopia. These SRM studies are generally regarded as providing robust evidence for decision-making in health programs. Nevertheless, as the volume of individual reviews increases, it may become challenging for individuals seeking information ( 36 ). Consequently, this umbrella review was conducted to synthesize the findings from the five SRM studies on PPFP utilization into a comprehensive document. Additionally, several factors, including family planning counseling, couple discussions, and postnatal follow-up, were recognized as statistically significant. The comprehensive review of the five selected systematic review and meta-analysis studies regarding the use of postpartum family planning (PPFP) in Ethiopia produced a summary estimate of 36.41% (95% CI: 24.78, 48.03). This result stands in contrast to findings from studies in Bangladesh at 62.4% ( 37 ), Kenya at 86.3% ( 38 ), Rwanda at 51.1% ( 39 ), Zambia at 45.9% ( 40 ), and a systematic review and meta-analysis conducted in low- and middle-income countries, which reported a rate of 41.2% ( 41 ).
The observed discrepancy may stem from cultural factors, alongside the significant unmet demand for family planning services in Ethiopia ( 42 ). Therefore, postpartum family planning (PPFP) services must be provided immediately after childbirth. It is also essential to improve access to basic health facilities in the country and to ensure the availability of various family planning options, especially PPFP services. Another potential explanation could be the lack of male participation in family planning initiatives within a society where male dominance prevails, as is the case in Ethiopia, where men often wish to have more children than their female partners ( 43 ). Additionally, there is a lack of policies that promote male involvement in family planning practices. This includes the absence of support for initiatives aimed at male engagement, social and behavioral change strategies, guidance on collaborative decision-making with partners, and the execution of a holistic approach to male participation in family planning services. Moreover, the discrepancies noted may stem from differences in sample sizes, geographical study locations, and the execution of governmental policies. Our research offers a thorough evaluation of PPFP uptake across the nation, in contrast to the aforementioned studies, which were confined to specific regions within the country. The findings, however, surpassed the postpartum contraceptive prevalence indicated in the Ethiopian Demographic and Health Survey (EDHS) by 23% ( 44 ). This discrepancy may be linked to the EDHS survey's methodology, which covers extensive geographical regions, including hard-to-reach areas. This could result in an underreporting of postpartum family planning (PPFP) usage, particularly among postpartum women living in rural locations with a history of home deliveries, who may lack access to maternal health services. In addition, the results observed exceeded the rate documented in an earlier review carried out in Ethiopia, which reported a figure of 21.04% ( 21 ). This variation may be attributed to the previously mentioned finding being derived from a single systematic review and meta-analysis (SRMA) that specifically concentrated on immediate postpartum family planning utilization among postpartum women in Ethiopia. In contrast, the current study encompassed five SRMA studies and addressed both immediate and extended postpartum family planning uptake among postpartum mothers. The umbrella review revealed that postpartum women who engaged in discussions about contraception with their partners during the postpartum period were 3.06 times more likely to adopt modern contraceptive methods compared to those who did not. This observation is corroborated by findings from other studies conducted in Nigeria ( 45 ) and Congo ( 46 ). Ethiopia has been actively pursuing various long-term strategies aimed at transforming its health sector. A significant objective within these strategies is to decrease the unmet need for family planning (FP) from 22% to 10%, which has been recognized as a critical impact indicator in the national health policy. Engaging couples in discussions regarding the utilization of family planning services may play a pivotal role in achieving this goal. Therefore, the government needs to integrate male participation into reproductive health policies.
This approach would facilitate communication and interaction between partners, enabling them to acquire essential information about family planning and make informed decisions regarding the use of maternal health services. Since family planning is a shared responsibility, enhancing this dialogue could lead to an increased intention to utilize contraception following childbirth. This umbrella review revealed that postpartum women who received counseling on family planning were 4.12 times more likely to adopt family planning methods compared to those who did not receive such counseling. This conclusion is corroborated by research conducted in India, Nepal, Sri Lanka, and Tanzania ( 47 ). Women who participate in family planning counseling may gain a more comprehensive understanding of the various family planning methods, including their advantages and disadvantages. This increased awareness of birth spacing through contraceptive use following childbirth can improve their decision-making abilities regarding postpartum family planning and encourage them to utilize contraceptives. The analysis indicated that women who participated in postnatal care (PNC) follow-up were nearly four times more inclined to adopt postpartum family planning (PPFP) methods compared to those who did not engage in postnatal follow-up. This observation is corroborated by research conducted in Kenya and Zambia ( 40 ). It is suggested that women attending PNC appointments during the postpartum phase likely receive comprehensive guidance on the significance of postpartum family planning. Consequently, they may exhibit a greater motivation to implement the methods they choose. This umbrella review of systematic reviews and meta-analyses exhibits significant strengths, such as the application of varied search strategies, a thorough evaluation of methodological quality, compliance with the PRISMA 2020 extension guidelines, and the execution of a funnel test. To our knowledge, no extensive assessment in the form of an umbrella review has been performed regarding postpartum family planning utilization in Ethiopia, despite the existence of numerous empirical studies and specific systematic reviews and meta-analyses. However, this review is not without its limitations: it exclusively includes articles published in English, which could introduce bias by excluding studies published in other languages, and it is constrained by a limited number of studies. Additionally, despite significant attempts to tackle the issue, heterogeneity remained evident among the studies included, suggesting that there were discrepancies in methodologies or populations that were not entirely resolved. One additional limitation is that one of the systematic reviews included women of reproductive age, pregnant women, and postpartum women ( 24 ), which may have introduced biases. Further limitations include the substantial variation in sample sizes across studies (ranging from 4,367 to 11,932), the considerable variation in prevalence, the lack of analysis of regional variations within Ethiopia, and the absence of analysis of the specific contraceptive methods preferred and of cost-effectiveness considerations, all of which limit the practical utility of the findings. Moreover, the absence of comparable umbrella reviews from other countries further hampers our ability to draw broader conclusions, because the evidence available elsewhere consists largely of primary studies. To overcome these limitations and enhance the depth of future research, it is recommended to adopt a more inclusive methodology.
In particular, the integration of interventional studies into the research framework is suggested, as this could significantly strengthen the overall validity of the findings and lead to a more thorough and nuanced comprehension of the topic at hand. The overall prevalence of postpartum family planning (PPFP) was determined to be 36.41%, highlighting a significant gap that requires attention. Contributing factors to this situation include the availability of family planning counseling, discussions between couples, and postnatal follow-up care. These results emphasize the necessity for focused interventions aimed at increasing the utilization of PPFP services, as well as improving postnatal follow-up, family planning counseling, and couple discussions. Consequently, our findings strongly recommend that special consideration be given to mothers. Furthermore, policymakers in the health sector, along with promoters, non-governmental organizations, community organizations, and other relevant stakeholders, should initiate educational programs on family planning that emphasize the health advantages of postpartum contraceptive use, particularly in preventing unintended pregnancies and prolonging inter-pregnancy intervals. Healthcare providers should advocate for breastfeeding and introduce the lactational amenorrhea method (LAM) alongside other immediate postpartum contraceptive options. It is also essential for contraceptive programs to involve men in the promotion and uptake of family planning services. This research provides current and succinct evidence regarding the uptake of postpartum family planning (PPFP) in Ethiopia. It serves as a valuable resource for program developers and implementers across various sectors, including government, non-governmental organizations, bilateral and multilateral agencies, the private sector, as well as charitable and civic institutions that aim to deliver standardized family planning services in Ethiopia. The findings underscore the importance of preventing closely spaced pregnancies, which allows families to better support their children, invest in their education, enhance child health, and enable women at risk of pregnancy-related complications to space and delay pregnancies. Furthermore, the study highlights the essential role of male involvement, both from a programmatic perspective and as a means to achieve gender equity in reproductive rights and responsibilities. It stresses the necessity of providing health education and family planning counseling services that guarantee full, free, and informed choices while maintaining privacy and confidentiality, which are critical for ensuring the quality of family planning services. Additionally, this study identifies key determinants influencing the uptake of PPFP and proposes strategies to address these issues in Ethiopia, such as enhancing family planning counseling, encouraging couple discussions, and ensuring postnatal follow-up, particularly by promoting male participation in family planning to improve communication between couples regarding fertility and family planning, thereby ensuring that decisions reflect the needs and preferences of both partners. | Other | biomedical | en | 0.999997 |
PMC11697150 | The rumen is a digestive organ unique to ruminants. It has a distinctive microbial fermentation system that can efficiently convert macromolecular substances from the diet, such as lignin, cellulose, and non-protein nitrogen, into nutrients that are easier for the host to utilize . Early research focused on the structure of the complex microbial community in the rumen and its impact on nutrient usage from feeds of different compositions. These microorganisms metabolize dietary compounds and produce volatile fatty acids (VFAs), amino acids and other essential nutrients, thereby providing energy for the host body and maintaining the optimal functioning of the rumen . The hindgut of ruminants was previously considered the endpoint of digestion; however, in-depth research on the fungal microbiota of ruminants has shown that the fungi in the rectum produce a large number of lignin-degrading enzymes that can ferment unused lignin. These enzymes effectively decomposed the structural polymers of plant cell walls, thereby improving the absorption and utilization rate of their nutrients . Compared with the traditional measurement of feed conversion ratio (FCR), residual feed intake (RFI) is considered to be a more accurate and flexible assessment method. RFI refers to the difference between the actual feed intake of an individual animal and the expected feed intake based on its body size and production performance. Low RFI indicates that individual animals consume less feed than predicted and therefore have a lower environmental pollution capacity, without affecting individual body weight, daily weight gain, or body shape. In studies on the microbiota of ruminants with different RFIs, Herd and Arthur and Paz et al. reported that rumen fermentation patterns and the microbial composition of the ruminant gut accounted for 19%-20% of the variation in RFI. Ellison et al. found that six types of rumen microorganisms were highly correlated with actual feed efficiency in ewes. Liu et al. showed that the abundance of Firmicutes and Bacteroidetes in the intestines of Angus cattle with low RFI was significantly higher than that of cattle with high RFI. Elolimy et al. found that the abundance of Bacteroidota in rectal feces of Holstein cattle was higher in low-RFI cattle than in high-RFI cattle. Both Firmicutes and Bacteroidetes are the most abundant bacterial groups in the digestive tract of ruminants and play a major role in fiber fermentation in the diet. Previous studies have shown that the gastrointestinal microbiota of ruminants affects RFI expression, which is one of the important indicators of feed efficiency type. Digestion and absorption in ruminants are processes that involve a number of interconnected steps in the gastrointestinal tract (GIT). For our experiments, we chose Dexin fine-wool meat sheep of the same age and under the same feeding conditions. To systematically examine the influence of RFI on the digestive tract as a whole, the type and abundance of microorganisms in the rumen, ileum and rectum were determined to represent the overall GIT of the sheep under different feed efficiencies. All animal procedures were approved by the Agreement Management and Review Committee of the Feed Research Institute of the Xinjiang Academy of Animal Sciences . The test was conducted at the Baicheng County Breeding Sheep Farm in Xinjiang, China . Fifty 70-day-old Dexin fine-wool meat sheep were purchased from the Baicheng County Breeding Sheep Farm. 
Deworming was performed twice, and the sheep were weighed before the experiment started (30.77 ± 3.17 kg). The trial period lasted 100 days, including a 14-day pre-feeding period. The animals were weighed every 20 days during the trial period and slaughtered on the 100th day of the trial period. The feed formula selected was the 678S-2 experimental pelleted feed produced by Tiankang Feed Technology Co., Ltd. The nutrient levels in the feed are shown in Table 1 . Before the test, the pen was thoroughly disinfected and the sheep entering the pen were marked with numbered ear tags. Before the start of the pre-feeding period, the experimental animals were dewormed using a combination of intramuscular injection and feeding of anthelminthics. Sheep in individual cages were fed twice a day at 10:00 and 18:00, with free access to food and water, ensuring that the amount of leftover feed for each sheep was >15% per day. During the experiment, three test sheep were eliminated due to disease, and five test rams were selected for breeding. Using SPSS 26.0 for analysis, the R 2 values for dry matter feed intake, daily weight gain and average mid-term metabolic weight of the test sheep met the conditions of the regression equation. A linear model was constructed to calculate the RFI of the test sheep as follows: DMIi = β0 + β1 × ADGi + β2 × MBWi + ei, where DMIi is the dry matter intake of individual i , β0 is the regression intercept, β1 and β2 are the partial regression coefficients, and ei is the residual, which is taken as the RFI of sheep i . The average daily weight gain was calculated as ADGi = (FBWi – IBWi)/N, where FBWi (final body weight) is the weight of individual i at the end of the trial, IBWi (initial body weight) is the initial weight of individual i , and N is the number of trial days. The formula for average metabolic body weight is MBWi = [1/2 × (FBWi + IBWi)]^0.75. Based on the mean and standard deviation of the RFI, the test sheep were divided into a high RFI (H-RFI) group (RFI > mean + 0.5SD) with 11 sheep, a medium RFI (M-RFI) group (mean – 0.5SD < RFI < mean + 0.5SD) with 18 sheep, and a low RFI (L-RFI) group (RFI < mean – 0.5SD) with 13 sheep. On the 90th day of the experimental period, six lambs from each group were selected for fecal collection, twice a day for 3 days. The samples were placed in plastic bags and stored at −20°C until used. On the 100th day of the experimental period, the animals were humanely slaughtered before feeding. Six Dexin male lambs were randomly selected from each group, and 10 ml of solid digesta were taken from the rumen and the ileum, transferred to cryopreservation tubes, and stored in liquid nitrogen. During the collection process, there was no chyme or feces in the rectum of some slaughtered sheep, so only the rectal feces of three lambs from each group were collected and stored in liquid nitrogen. The rumen chyme, ileum chyme and rectal feces were sent to Xinjiang Morgan Biotechnology Co., Ltd. for sequencing analysis of the bacterial and fungal microbiota. Apparent digestibility was determined according to a published method, using the hydrochloric acid-insoluble ash content of feces and feed as an internal marker. The nutrient digestibility of dry matter, crude protein and neutral detergent fiber was calculated. Ammoniacal nitrogen was determined using indophenol blue colorimetry according to a published method.
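The residual feed intake calculation described above can be summarized in a short sketch. This is not the study's SPSS analysis: the body-weight and intake records below are simulated, the trial length of 86 feeding days is assumed, and the regression is fitted by ordinary least squares purely to show how the RFI residuals and the H-/M-/L-RFI groups are derived.

```python
# Illustrative sketch (simulated data, assumed trial length): residual feed intake (RFI)
# as the residual of regressing dry matter intake (DMI) on average daily gain (ADG)
# and mid-term metabolic body weight (MBW), then grouping lambs by mean +/- 0.5 SD.
import numpy as np

rng = np.random.default_rng(0)
n_days = 86                                    # assumed feeding days after the 14-day pre-feeding period

# Hypothetical records for 47 lambs
ibw = rng.normal(30.8, 3.2, 47)                # initial body weight (kg)
fbw = ibw + rng.normal(25, 4, 47)              # final body weight (kg)
adg = (fbw - ibw) / n_days                     # average daily gain (kg/day)
mbw = (0.5 * (fbw + ibw)) ** 0.75              # mid-term metabolic body weight (kg^0.75)
dmi = 0.4 + 2.0 * adg + 0.04 * mbw + rng.normal(0, 0.1, 47)   # observed daily DMI (kg)

# Fit DMI = b0 + b1*ADG + b2*MBW by least squares; the residual e_i is the RFI
X = np.column_stack([np.ones_like(adg), adg, mbw])
beta, *_ = np.linalg.lstsq(X, dmi, rcond=None)
rfi = dmi - X @ beta

# Assign lambs to H-, M- and L-RFI groups around the mean +/- 0.5 SD
mu, sd = rfi.mean(), rfi.std(ddof=1)
group = np.where(rfi > mu + 0.5 * sd, "H-RFI",
         np.where(rfi < mu - 0.5 * sd, "L-RFI", "M-RFI"))
print({g: int((group == g).sum()) for g in ("L-RFI", "M-RFI", "H-RFI")})
```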
Concentrations of volatile fatty acids were determined by gas chromatography. Briefly, VFAs were separated on a 2 m glass column (3 mm i.d.) using a Fisons HRGC MEGA 2 Series model 8560 chromatograph (Fison Instruments, Glasgow, UK) equipped with a flame ionization detector (Fison Instruments, Glasgow, UK). The stationary phase was 10% SP-1000 + 1% H3PO4 on 100/120 Chromosorb WAW support (Tehnokroma Analitica SA, Sant Cugat del Valles, Spain), and the carrier gas was nitrogen. The injector and detector temperatures were 200°C, and the column temperature was 155°C. The internal standard used was 2-ethylbutyric acid (Sigma Aldrich, Taufkirchen, Germany). GC was used to determine the concentrations of acetic acid, propionic acid, butyric acid, and valeric acid in the rumen. For determining the genus- and species-level composition of the rumen microbiota, genomic DNA was extracted from rumen fluid samples by the cetyltrimethylammonium bromide (CTAB) method. DNA concentration and purity were determined by 0.8% agarose gel electrophoresis, and the DNA was diluted to 1 ng/μL for use. PCR amplification was performed using V3 region-specific primers for the 16S rDNA, the extracted DNA, a fungal ITS1 primer pair (F: 5′-CTTGGTCATTTAGAGGAGTAA-3′ and R: 5′-GCTGGTTTCTTTCATATCGATTGCB-3′), and the appropriate combination of PCR reagents and polymerase for amplification. The PCR products were run on an agarose gel and isolated with an Omega DNA purification kit (Omega Corporation, USA). The purified PCR products were collected and sequenced on an Illumina NovaSeq 6000 platform with paired-end reads. PCR amplification, sequencing, and analysis of the results were performed by Beijing NuoHe Bioinformation Technology Co., Ltd. using the QIIME2 amplicon analysis cloud platform. As shown in Table 2 , the daily feed intake and RFI in the H-RFI group were significantly higher than those in the L-RFI and M-RFI groups ( P < 0.01). The feed-to-gain ratio (F/G) of the H-RFI group was also significantly higher than that of the L-RFI and M-RFI groups ( P < 0.05), whereas there were no significant differences in daily weight gain or mid-term metabolic weight among the groups ( P > 0.05). As shown in Figure 2A , the apparent digestibility of dry matter in the L-RFI group was extremely significantly higher than in the H-RFI group ( P < 0.01) and significantly higher than in the M-RFI group ( P < 0.05), and that in the M-RFI group was extremely significantly higher than in the H-RFI group ( P < 0.01). The apparent digestibility of crude protein in the L-RFI group was significantly higher than in the M-RFI group ( P < 0.05). The apparent digestibility of neutral detergent fiber in the L-RFI group was significantly higher than in the M-RFI and H-RFI groups ( P < 0.05), and that in the M-RFI group was extremely significantly higher than in the H-RFI group ( P < 0.01). There were no significant differences in ammonia nitrogen or volatile fatty acids between lambs with different RFIs ( P > 0.05). RFI and DMI were significantly negatively correlated with dry matter digestibility (r = −0.765 and −0.546, respectively) and significantly positively correlated with propionic acid (r = 0.518 and 0.500). ADG was significantly positively correlated with isobutyric acid (r = 0.578). As shown in Supplementary Figure S1A , in the analysis of rumen chyme samples of Dexin lambs with different RFIs, 4,160 bacterial OTUs were found in the L-RFI group, 1,359 OTUs in the M-RFI group, and 3,872 OTUs in the H-RFI group. Among them, there were 845 OTUs in common in the three groups, 1,078 OTUs in common in L-RFI and M-RFI, 1,698 OTUs in common in L-RFI and H-RFI, and 1,081 OTUs in common in H-RFI and M-RFI.
With respect to the fungi, 775 OTUs were found in L-RFI, 934 OTUs were found in M-RFI, and 747 OTUs were found in H-RFI. Among them, L-RFI and H-RFI had 97 OTUs in common, L-RFI and M-RFI had 115, M-RFI and H-RFI had 102, and among all three groups there were 63 OTUs in common. As shown in Supplementary Tables S1 , S2 , there were no significant differences in the Chao1, Shannon, and Simpson indices of bacteria and fungi in rumen digesta among male Dexin lambs with different RFIs ( P > 0.05). As shown in Supplementary Figures S2A , B , the PCoA plots of rumen digesta of male Dexin lambs with different RFIs partially overlapped without obvious separation, indicating that the differences in microbial communities between and within the rumen fluid sample groups were small. As shown in Figure 3A and Table 3 , at the phylum level, Firmicutes, Bacteroidia , and Proteobacteria were the dominant bacterial groups in rumen fluid. The abundance of Bacteroidota in the L-RFI group was significantly lower than that in the M-RFI group ( P < 0.05), while there were no significant differences for the other phyla ( P > 0.05). At the genus level, Escherichia-Shigella, Prevotella _7 and Methanobrevibacter were the dominant bacterial genera in rumen fluid, and there were no significant differences among the top ten bacterial genera ( P > 0.05). At the fungal phylum level, Ascomycota, Basidiomycota , and Mortierellomycota were the dominant phyla, and there were no significant differences among the top ten fungal phyla ( P > 0.05). At the fungal genus level, Cladosporium, Fusarium , and Debaryomyces were the dominant genera, and there were no significant differences among the top ten fungal genera ( P > 0.05). As shown in Figure 4 , there were seven taxa with an LDA score >4.0. The results show that the taxa contributing most to the differences in community structure associated with RFI were p__Proteobacteria and g__Roseburia in the H-RFI group, and o__Bacteroidales, c__Bacteroidia, o__Oscillospirales, p__Bacteroidota , and f__Eubacterium__coprostanoligenes_group in the M-RFI group. No differentially abundant taxa were detected among the fungi or in the L-RFI group. It can be seen from Figures 5A , C that the bacteria identified in the different RFI groups are mainly involved in membrane transport, gene translation, carbohydrate metabolism, energy production, and amino acid metabolism. The main differences among the fungi in rumen digesta occurred in the L-RFI group and included wood-digesting saprophytes, soil saprophytes, plant pathogens and endophytic plant pathogens. The H-RFI group contained unclassified saprophytes, while the M-RFI samples mainly included animal pathogens and unclassified or undefined saprophytes. As shown in Supplementary Figure S3A , among the bacteria identified in ileum digesta samples of male Dexin lambs with different RFIs, 1,781 OTUs were found in L-RFI, 1,195 in M-RFI, and 1,454 in H-RFI. Among them, there were 401 OTUs in common among all three groups, 578 OTUs in common between L-RFI and M-RFI, 652 OTUs in common between L-RFI and H-RFI, and 533 OTUs in common between H-RFI and M-RFI. With respect to the fungi, 496 OTUs were found in L-RFI, 354 OTUs were found in M-RFI, and 448 OTUs were found in H-RFI. Among them, L-RFI and H-RFI had 107 OTUs in common, L-RFI and M-RFI had 102, M-RFI and H-RFI had 103, and among all three groups there were 80 OTUs in common.
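As an illustration of the alpha-diversity indices reported for the rumen and ileum samples (Chao1, Shannon and Simpson), the following minimal sketch computes them from a single hypothetical OTU count vector. In practice these values were produced by the QIIME2 pipeline, and conventions differ slightly between tools (for example, Shannon is often reported in log base 2, and Simpson may be reported as either 1 − Σp² or its inverse).

```python
# Illustrative sketch (hypothetical OTU counts): Chao1, Shannon and Simpson indices
# computed by hand from a vector of OTU counts for one sample.
import numpy as np

def chao1(counts):
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)                 # observed OTUs
    f1 = np.sum(counts == 1)                   # singletons
    f2 = np.sum(counts == 2)                   # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def shannon(counts):
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))              # Shannon index, natural log

def simpson(counts):
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    return 1.0 - np.sum(p ** 2)                # Gini-Simpson form (1 - sum of p squared)

# One hypothetical rumen sample: counts per OTU
otu_counts = np.array([120, 85, 60, 33, 20, 9, 4, 2, 1, 1, 1])
print(f"Chao1 = {chao1(otu_counts):.1f}, "
      f"Shannon = {shannon(otu_counts):.2f}, "
      f"Simpson = {simpson(otu_counts):.2f}")
```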
As shown in Supplementary Table S3 , there were no significant differences in the bacterial Chao1, Shannon, and Simpson indices of the ileal digesta samples of male Dexin lambs with different RFIs between any of the groups ( P > 0.05). There were likewise no significant differences in the Chao1, Shannon, and Simpson indices among the fungal groups ( P > 0.05) ( Supplementary Table S4 ). As shown in Supplementary Figures S4A , B , the PCA plots of ileal chyme partially overlapped without obvious separation, indicating that the differences in microbial communities between and within the ileal chyme sample groups were small. At the phylum level, Firmicutes, Bacteroidota , and Proteobacteria were the dominant phyla in ileal chyme. The abundance of Campylobacterota in the L-RFI group was significantly higher than that in the H-RFI group ( P < 0.05), while there were no significant differences for the other phyla ( P > 0.05). At the genus level, Escherichia-Shigella, Bacteroides , and Erysipelatoclostridium were the main dominant genera in the ileal chyme. For the genus Methanobrevibacter , the abundance in L-RFI was significantly higher than in M-RFI and H-RFI ( P < 0.05), and there were no significant differences for the remaining genera ( P > 0.05). At the fungal phylum level, the dominant phyla in ileal chyme samples were Ascomycota, Basidiomycota and Mortierellomycota , and there were no significant differences among the top ten dominant phyla ( P > 0.05). At the fungal genus level, Geotrichum, Penicillium , and Cladosporium were the dominant genera, and there were no significant differences among the top ten dominant genera ( P > 0.05). As shown in Figures 7A , B , there were three taxa with LDA scores >4, indicating that the bacterial taxa in ileal chyme most affected by residual feed intake were f_Family_XI in the M-RFI group, and f_Anaerovoracaceae and g_Christensenellaceae_R_7_group in the L-RFI group. No differentially abundant taxa were detected among the fungi or in the H-RFI group. The differentially expressed functions of ileal digesta bacteria from the different RFI groups were mainly membrane transport, carbohydrate metabolism, and amino acid metabolism. Among the ileal chyme fungi, the functional annotations of the L-RFI and H-RFI fungi differed considerably, mainly in plant pathogens, undefined animal pathogens, endophytic plant pathogens and undefined saprophytes. As shown in Supplementary Figure S5A , 979 OTUs were found in the samples of rectal feces from L-RFI lambs, 870 from M-RFI, and 1,454 from H-RFI. There were 293 OTUs in common among the three groups, 404 OTUs in common between L-RFI and M-RFI, 442 OTUs in common between L-RFI and H-RFI, and 425 OTUs in common between H-RFI and M-RFI. A total of 500 OTUs were found in the fungi of the rectal fecal samples from L-RFI, 1,170 OTUs from M-RFI, and 1,246 OTUs from H-RFI, of which 31 OTUs were found in common in all three groups, 101 OTUs in common in L-RFI and M-RFI, 99 OTUs in common in L-RFI and H-RFI, and 72 OTUs in common in H-RFI and M-RFI. As shown in Supplementary Tables S5 , S6 , there were no significant differences in the Chao1, Shannon, and Simpson indices observed in any of the groups of rectal fecal samples ( P > 0.05). As shown in Supplementary Figure S6 , there is a clear separation between L-RFI and H-RFI in the rectal feces samples of male Dexin lambs with different RFIs, indicating that the microbiota in the two groups are quite different.
As shown in Figure 9A and Table 11 , at the phylum level, Euryarchaeota, Bacteroidia , and Firmicutes were the main bacterial groups in rectal feces. The abundance of Euryarchaeota in L-RFI was extremely significantly higher than that in M-RFI and H-RFI ( P < 0.01), the Actinobacteriota phylum in L-RFI and H-RFI was significantly higher than that in M-RFI ( P < 0.05), and the Patescibacteria phylum in H-RFI was significantly higher than that in M-RFI and L-RFI ( P < 0.05), but there were no significant differences in the other phyla ( P > 0.05). At the genus level, Achromobacter, Chryseobacterium , and Methanobrevibacter were the dominant bacterial genera in rectal feces. The abundance of Methanobrevibacter in L-RFI was significantly higher than in M-RFI and H-RFI ( P < 0.05), and there were no significant differences in the other genera ( P > 0.05). The dominant rectal fecal fungi at the phylum level were Ascomycota, Basidiomycota and Mortierellomycota , and there were no significant differences among the three groups in the top ten dominant phyla. At the genus level for rectal fecal fungi, Penicillium, Acaulium , and Scopulariopsis were the dominant genera. The levels of the Aspergillus and Ustilago genera in the L-RFI group were significantly higher than those in the other two groups ( P < 0.05), but those in the “other” category were significantly lower than those in the two other RFI groups ( P < 0.05). As shown in Figures 10A , B , the differences in the microbial functions of rectal fecal bacteria are mainly concentrated in membrane transport, energy production, and carbohydrate and amino acid metabolism. The rumen is the main site for ruminants to receive feed, water and saliva. It is a humid environment with a favorable temperature of 36–40°C, which makes it an excellent place for microbial growth and reproduction. A large number of studies have shown that the various microbial communities in the rumen work synergistically to convert substances such as cellulose and hemicellulose into volatile fatty acids, and to convert the nitrogen produced by dietary degradation into microbial proteins that are absorbed and utilized by the host. The apparent digestibility of dry matter, crude protein, neutral detergent fiber and other indicators can accurately mirror a sheep's ability to digest and absorb nutrients. In a study on the effect of RFI on apparent digestibility, Bonilha et al. found that the digestibility of neutral detergent fiber and total digestible nutrients in Nellore cattle in the H-RFI group was significantly lower than in the L-RFI group. The research results of Arce-Recinos et al. on growing beef cattle showed that the digestibility of dry matter, crude protein, neutral detergent fiber and acid detergent fiber of L-RFI cattle was 4%−5% higher than that of H-RFI cattle, similar to the results in this study. The apparent digestibility of dry matter, crude protein, neutral detergent fiber and acid detergent fiber of L-RFI male Dexin lambs was higher than in the H-RFI group, indicating that apparent digestibility may be one of the reasons why some sheep show higher feed efficiency. In this experimental study, no difference in rumen volatile fatty acids was found among the three groups of sheep, which is similar to the findings of two studies by Arce-Recinos et al. and Zhang et al. This may be due to the fact that the feed substrates were exactly the same, making it difficult to establish a difference in the volatile fatty acid content.
In the OTU results of this experiment, the fungal and bacterial microbiota detected in the L-RFI group were greater than those in the M-RFI and H-RFI groups, suggesting that an enrichment of beneficial microbes could enable individuals to have higher environmental adaptability and stress resistance. There were no significant differences in the Chao1, Shannon, and Simpson indices of bacteria and fungi in the rumen digesta, ileum digesta, and rectal feces samples among the three groups. The PCoA plots did not show significant differences in the microorganisms from the three groups of sheep, which is consistent with the results of Pinnell et al. on the rumen of Holstein cows. This may be because the RFI only affects individual microbial types and not the overall microbial community. Ruminants rely on the abundant microbial communities in their digestive tract to digest feed and convert it into nutrients that are easily absorbed. The most abundant gut bacteria are from the Firmicutes and Bacteroidota phyla . Recent progress in microbial research indicates that Bacteroidota can produce large amount of glycoside hydrolases, which can effectively degrade nutrients such as cellulose, pectin and starch, and plant polysaccharides in the rumen . Firmicutes are mainly polysaccharide-degrading microorganisms in the rumen, but recent studies indicate that Firmicutes can produce biotin, playing a major role in cellulose degradation, VFA production, and metabolism . In this study, at the bacterial phylum level, the dominant phyla of bacteria in rumen digesta, ileum digesta and rectal feces were Firmicutes, Bacteroidota , and Proteobacteria . The rumen bacteria results were consistent with the results of Liu et al. on the rumen microbiota of Hu sheep with different RFIs. The rumen of healthy ruminants is characterized by the dominance of anaerobic bacteria of Firmicutes and Bacteroidota . The results of ileal digesta were consistent with those of Elolimy et al. , with Firmicutes and Bacteroidota as the main bacterial communities, and the abundance of Firmicutes and Bacteroidota in the L-RFI group was higher than that in the H-RFI group. The rectal feces results were similar to those of Elolimy , with the abundance of Bacteroidota in cattle with L-RFI being higher than that in cattle with H-RFI. At the bacterial genus level, the genera with greatest abundance in rumen chyme were Bacteroides, Rikenellaceae_RC9_gut_group , and Prevotella _7. Among them, the abundance of Bacteroides in the L-RFI group was higher than that in the H-RFI group, and the abundance of Rikenellaceae_RC9_gut_group was lower than that in the H-RFI group. The Bacteroides genus has been shown to effectively degrade plant cell wall polysaccharides and improve fiber utilization . The specific role of the Rikenellaceae _RC9_gut_group genus is still unclear, and it has so far only been shown to be related to butyrate and propionate metabolism . In the ileal digesta, Escherichia-Shigella, Bacteroides , and Turicibacter were the main genera. The abundance of Bacteroides in L-RFI lambs was higher than that in the H-RFI group, while the abundance of Escherichia-Shigella was lower than that in the H-RFI group. Escherichia-Shigella is a harmful bacterium that may cause bacterial dysentery . Methanobrevibacter is a methanogen. Although it was not the main genus in terms of abundance in the ileal digesta, the data showed that its numbers in the L-RFI group were significantly higher than in the H-RFI group. 
This was contrary to the results of Mia , which showed that the ileal digesta of L-RFI male Dexin lambs contained more methanogens. In our results, the abundance of methanogens in the three parts of the L-RFI group was higher than that in the H-RFI group. This may be a result of the sheep production model and feed type. However, recent microbiological research indicates that methanogens are of great significance to the early intestinal microbial colonization of ruminants. They can effectively reduce the H 2 produced by the fermentation and decomposition of plant fibers in the digestive tract, reduce the hydrogen partial pressure, and improve the body's hydrogen nutrition pathway . However, the mechanistic details and regulatory pathways need to be verified by subsequent studies. In rectal feces, Achromobacter, Chryseobacterium , and Methanobrevibacter were the main genera. The abundance of Methanobrevibacter in the L-RFI group was significantly higher than that in the H-RFI group, while the abundance of Achromobacter and Chryseobacterium was lower than in the H-RFI group. Achromobacter is a conditionally pathogenic bacterium that can cause urinary tract infections under certain conditions . Chryseobacterium has a strong ability to digest collagen and can cause disease in the body . Through the abundance analysis of bacterial microbiota under the three RFI conditions, we can conclude that L-RFI sheep achieve higher digestion efficiency of feed nutrients by having a greater abundance of Firmicutes and Bacteroidetes in the digestive tract. This echoes the results of apparent digestibility in this study, indicating that L-RFI sheep have a higher efficiency of decomposition and absorption of feed nutrients through an enriched population of microorganisms. The increased abundance of pathogenic bacteria detected in the H-RFI group may be a result of differences in their digestive tract microbiota, which makes them less adaptable and resistant than the L-RFI sheep, which is consistent with our OTU results. There are a large number of anaerobic fungi in the GIT of ruminants. Previous studies have generally concluded that these fungi exist in the animal body in the form of zoospores, which produce highly active, fiber-degrading enzymes, and play a major role in the digestion of fibrous plant material . It is increasingly recognized that fungi can optimize rumen fermentation, enhance nutrient availability, and promote intestinal health. Anaerobic fungi degrade plant cell walls through both enzymatic reactions and physical means targeting fibers that are difficult for bacteria to degrade . In addition to fiber degradation in the rumen, 5% to 10% of carbohydrate degradation occurs in the hindgut, indicating that fiber degradation by anaerobic fungi can occur over the entire digestive tract . It has been proven that Basidiomycota and Ascomycota can produce mycelial hyphae, which can penetrate the silica cuticle produced on the surface of forage through stomata and damaged parts of the dermis, thereby promoting more efficient digestion of plant fiber . Basidiomycota and Ascomycota are aerobic fungi with the highest relative abundance, based on sampling of rumen chyme. Within 2 h of eating, the oxygen concentration in the rumen is sufficient for the survival of Ascomycota. As aerobic fungi multiply, the oxygen content gradually decreases. This process causes the rumen to turn into an anaerobic environment, and anaerobic fungi and bacteria begin to multiply in large numbers . 
Mortierellomycota is a type of saprophytic fungus that is widely distributed in the digestive tract of ruminants and has the ability to efficiently decompose lignin . In this experiment, at the phylum level, the main fungal phyla in rumen and ileum digesta were Basidiomycota, Ascomycota and Mortierellomycota, among which Basidiomycota and Mortierellomycota were more abundant in L-RFI sheep, and Ascomycota was more abundant in the H-RFI group. At the genus level, Cladosporium, Fusarium , and Debaryomyces were the dominant genera in rumen digesta. Current microbiological studies have shown that Cladosporium fungi can produce lipases, proteases, urease, and chitinase, which can help the host digest corn-rich diets . Fusarium can produce biomass that is broken down and transformed by the animal rumen; however, in vivo research on the effects of Fusarium on nutrient digestibility and rumen function is lacking and the details of the specific mechanism of action are still unclear . Debaryomyces has emerged as a potentially valuable probiotic. Its cell wall and the polyamines it produces have been shown to stimulate immunity, regulate the microbiome, and improve digestive function . However, the abundance of Debaryomyces in the H-RFI group was higher than that in the L-RFI group. At the genus level in the ileal digesta, the dominant fungal genera were Geotrichum, Penicillium , and Cladosporium . Geotrichum are believed to have the potential to be useful probiotics, which can increase the production of acetic acid and propionic acid and the ratio of propionic acid to acetic acid in ruminants; their effectiveness is higher than that of traditional brewer's yeast . The cellulase produced by Penicillium can effectively increase the content of oleic acid, linoleic acid and linolenic acid in milk, and can also affect the fat yield and unsaturated fatty acid content of milk . In the results of this study, although there was no significant difference in the abundance of Geotrichum in the ileal digesta among the three groups, the abundance in the L-RFI group was 21% higher than that in the H-RFI group, which may be related to the improvement in feed efficiency. Aspergillus can secrete cellulase and protease to improve the digestibility of the feed . Ustilago plays a role in lignin degradation and participates in the fermentation of feed . In our experiments, Aspergillus and Ustilago were significantly more abundant in the L-RFI group than in the other two groups, suggesting that Aspergillus may play a major role in improving feed efficiency. LefSE is a high-dimensional biomarker mining tool based on the LDA algorithm, which is used to identify significantly characterized microorganisms . In the results of this experiment, when LDS > 4.0, the characteristic bacteria were only found in the rumen and ileum digesta. Among them, the g__ Roseburia genus found in the H-RFI group in rumen chyme is believed to affect the core populations and produce ketones that affect host development . P_Proteobacteria may parasitize the phylum Proteobacteria in rumen fluid and rumen epithelium, and Proteobacteria may oxidize ammonia and methane on the surface of the rumen epithelium . In the ileal chyme, the f_Anaerovoracaceae bacteria found in the L-RFI group can utilize a variety of types of organic matter as carbon sources and participate in the fermentation of plant polysaccharides in the GIT . 
The g_Christensenellaceae_R_7_group is currently thought to be a new type of probiotic that can effectively improve the growth performance and meat quality of ruminants. It plays an important role in the degradation of carbohydrates and amino acids into acetate and ammonia, respectively . The LEfSE results suggest that there are differences in some probiotics in the digestive tracts of sheep in the L- and H-RFI groups, which may be the main reason for the differences in feed utilization efficiency. The Proteobacteria found in the rumen chyme will compete for H ions with methanogens in the digestive tract, which may be one of the reasons for the differences in the abundance of methanogens among the three groups. The KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway analysis is a software module that builds a manually curated pathway graph representing the current knowledge on biological networks under defined conditions in a specific organism. The pathway diagrams are graphical representations of the networks of interacting molecules responsible for specific cellular functions . In our study, the rumen chyme KEGG diagram showed that the differential metabolic pathways between L-RFI and H-RFI were mainly concentrated in Cellular Process (L-RFI), Metabolism (L-RFI), and Environmental Information Processing (H-RFI). Among these, the Cellular Process pathway is mainly concentrated in the Transport and Catabolism pathway, the Metabolism pathway is mainly related to carbohydrate metabolism, amino acid metabolism and energy production pathways, and the Environmental Information Processing metabolic pathway was mainly concentrated in the Membrane Transport pathway. This is similar to the research results of Zhang et al. and Zhou et al. . The ileal chyme KEGG map shows that the pathway differences are mainly Cellular Process (L-RFI), Metabolism (L-RFI), Genetic Information Processing (L-RFI), and Environmental Information Processing (H-RFI). Among these, the Cellular-Process pathway was mainly concentrated in cell motility, the Genetic Information Processing pathway was mainly concentrated in replication and repair, the Metabolism pathway was mainly concentrated in carbohydrate metabolism, amino acid metabolism and energy production, and the Environmental Information Processing metabolic pathway was mainly concentrated in membrane transport. In the rectal fecal KEGG map, only the Environmental Information Processing pathway and the Human Diseases pathway were overexpressed in the H-RFI group. The Environmental Information Processing pathway was mainly concentrated in the membrane transport pathway, and the Human Diseases pathway was mainly concentrated in the endocrine system pathway. This is quite different from the research results of Elolimy et al. on Holstein cattle, which may be related to the breed, production mode and gender. Overall, the KEGG analysis indicated that the overexpression in the L-RFI group was mainly concentrated in carbohydrate metabolism, amino acid metabolism and energy production, which may be related to the high abundance of Bacteroidetes and Firmicutes in the microbiota of L-RFI sheep, which could effectively improve the conversion and absorption of nutrients such as cellulose and amino acids. In the LEfSE results, some differentially expressed microbial metabolites had the function of promoting carbohydrate metabolism and amino acid decomposition and conversion, which may be related to the differences in KEGG metabolic pathways. 
In the KEGG diagram of fungi from the rumen and ileum digesta, the main differences observed in the fungi in rumen fluid were wood saprophytes (L-RFI), soil saprophytes (L-RFI), plant pathogens and endophytic plant pathogens (L-RFI), and unclassified saprophytes (H-RFI). The functions of ileal chyme fungi are mainly concentrated in undefined (H-RFI), endophytic plant pathogens (H-RFI) and animal pathogens (H-RFI). Given the current limitations in the KEGG functional annotation of fungi, the specific metabolic pathways should be further explored. According to the results of this study, different RFI did not significantly affect the overall digestive tract microbial community of Dexin male lambs. Its main mechanism of action may be to improve feed efficiency by changing the abundance of certain beneficial bacteria. | Study | biomedical | en | 0.999994 |
PMC11697165 | Over 800 000 older people are currently living with dementia in the UK and millions more worldwide. Within 25 years, this number is projected to double, with an average 45% increase in those aged 65 and over. 1 , 2 Dementia-related costs are substantial, estimated at £26.3 billion annually in the UK, 3 , 4 and around $355 billion across the USA for long-term health and care services. 5 Recent trials of novel monoclonal antibody immunotherapy against amyloid deposition have demonstrated the potential to slow disease progression. 6 , 7 However, these trials recruited participants experiencing very early cognitive impairment with eligibility confirmed by PET imaging or cerebrospinal fluid sampling, neither of which are routinely offered in clinical practice. Even in those with positive amyloid imaging, only a minority would be likely to meet trial eligibility criteria. 8 If these drugs gain regulatory approval, there will be unprecedented pressures on already constrained diagnostic resources. As such, there is an urgent need to target imaging in pre-symptomatic individuals with the highest probability of future dementia. Routinely collected electronic health record (EHR) data contain relevant information on dementia risk factors, such as those identified by the Lancet Commission. 9 Studies suggest that signs of cognitive impairment and progressive neurodegeneration can occur up to 9 years prior to diagnosis. 10 Data-driven studies have the potential to refine our understanding of future dementia risk, analogous to the approach adopted for proactive cardiovascular risk screening to target interventions such as lipid-lowering therapy. 11-13 Aside from novel therapies, personalized brain health estimates could influence better risk factor control through supported individual lifestyle modifications. Several studies have used machine learning approaches to predict future dementia, but these frequently use selective populations such as those undergoing specialized imaging, 14 , 15 or where symptom concerns have already prompted memory clinic referral, 16 , 17 or where detailed genetic profiling has been completed. 18 , 19 While such approaches may have value in supporting efficient dementia diagnosis, they are not appropriate for pre-symptomatic risk identification across the whole population. Some EHR data models have recently been published. 20 , 21 While these present an advancement in this field, limitations are still apparent, either in the observation periods or the lack of relevant linked data for socioeconomic status, lifestyle risk factors, and relevant frailty markers. Our aim was to evaluate the utility of machine learning models developed using comprehensive linked routine primary and secondary care data to predict future dementia diagnosis. We report risk estimates of incident dementia at 5, 10 and 13 years across a large, unrestricted adult population in Scotland. This study was performed with the delegated approval of the local Research Ethics Committee and Caldicott Guardian. All data were collected from EHRs and national registries previously de-identified by the DataLoch service (Edinburgh, UK) and analysed in a Secure Data Environment. Individual consent was not required for this study. In this longitudinal retrospective cohort study, we included all older adults (50–102 years old) registered with a research-linked general practice in a regional health board in South East Scotland. 
This includes 90% of all general practices and covers a population of approximately 900 000 people of all ages. Only individuals who were alive without a recorded diagnosis of dementia in either the primary or secondary care electronic patient record systems on 1st April 2009 were eligible. Patient follow-up continued until 1st April 2023. Individuals with prior outpatient old age psychiatry clinic attendance for dementia assessment and diagnosis, or any of 111 primary care codes related to ‘memory and cognitive problems’ were excluded. We defined an observation window from 1st April 2009 to 1st April 2010, excluding individuals who were diagnosed with dementia or died in this period. A summary of the data flows in this study is presented in Fig. 1 . Common comorbidities were defined using HDR UK CALIBER phenotype codelists 22 for the presence of relevant codes acquired prior to or during the observation window in either GP records (Read version 2), or hospital (ICD-10 codes) using the Scottish Morbidity Records (SMR). We included information from outpatient clinic attendances (SMR00), acute inpatient episodes (SMR01) and acute mental health admissions (SMR04). We used the Scottish Index of Multiple Deprivation (SIMD) to stratify individuals across quintiles of relative socioeconomic deprivation. 23 The cause of any deaths were identified from ICD-10 coded certification using linkage with the National Records of Scotland. Medication history was collected using a 6-month lookback window for the 50 most prescribed medications within the Scottish Prescribing Information System (Public Health Scotland), which contains records of all non-hospital dispensed prescriptions. Laboratory data for common haematology and biochemistry tests requested from either community or hospital settings were extracted from the local EHR system (TrakCare, InterSystems, MA, USA). Where completed in primary or secondary care, we extracted relevant coded records of lifestyle risk factors [alcohol, body mass index (BMI), smoking status] and measures of blood pressure and lung function (spirometry measures). The models also included two routinely recorded risk scores from primary care, representing cumulative deficit scores of frailty [electronic Frailty Index (eFI)] 24 and 10-year cardiovascular disease risk (ASSIGN score). 25 The eFI was modified to exclude prior reported memory or cognitive problems and was therefore calculated as a modified eFI using 35 deficits. The last valid record within the observation window was used for model development in all cases. The primary outcome was defined via a new dementia code in either primary, secondary or death records within 5, 10 or 13-year prediction windows starting from the index prediction date of 1st April 2010. We used the HDR UK CALIBER dementia phenotype, 22 combining all dementia subtype codes. We additionally performed a sensitivity analysis for the more specific outcome of an Alzheimer's Disease-related Dementia (ADRD) diagnosis. All-cause mortality was a secondary outcome. We undertook both a data-driven and a clinically supervised approach, creating two model variations per outcome. In the data-driven approach, we used the complete set of available routine data, totalling 219 continuous and 92 categorical variables for training . Further details on these variables, the use of thresholds and temporal criteria are described in Supplementary Table 1 . 
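To make the feature-construction step more concrete, the sketch below shows how a fixed lookback window (here the 6-month medication window described above) can be turned into binary model features. The column names and dispensing records are hypothetical; the actual DataLoch extracts, codelists and feature definitions are those described in Supplementary Table 1 and are not reproduced here.

```python
# Illustrative sketch (hypothetical column names and records): converting dispensing
# records within a 6-month lookback window before the index date into 0/1 features.
import pandas as pd

index_date = pd.Timestamp("2010-04-01")
lookback_start = index_date - pd.DateOffset(months=6)

# Hypothetical dispensing records: one row per person per dispensed drug
rx = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "drug_name": ["simvastatin", "amlodipine", "simvastatin", "donepezil", "aspirin"],
    "dispensed_date": pd.to_datetime(
        ["2009-11-15", "2010-02-02", "2008-06-30", "2010-01-10", "2009-12-05"]),
})

# Keep only dispensings inside the lookback window, then pivot to binary features
in_window = rx[(rx["dispensed_date"] >= lookback_start) &
               (rx["dispensed_date"] < index_date)]
features = (in_window.assign(flag=1)
            .pivot_table(index="patient_id", columns="drug_name",
                         values="flag", aggfunc="max", fill_value=0))
print(features)
```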
Details regarding patient follow-up, including observation and prediction windows across the three dementia outcomes are described in Fig. 2 . In the clinically supervised approach, candidate features were selected using clinical input (by author A.A.). This included modifiable and non-modifiable risk factors known to impact dementia risk in the literature, with clinical relevance for early dementia screening. 9 The complete list of 22 curated features included age, sex, SIMD (quintiles of multiple deprivation estimated across seven resource or income-based domains), 26 alcohol and smoking history, BMI, modified eFI (EHR-based cumulative markers of frailty status) 24 and ASSIGN (cardiovascular disease risk score incorporating SIMD) 25 risk scores, selected blood tests (HDL/LDL/total cholesterol, triglycerides, glycosylated haemoglobin [HbA1c]) and long-term conditions from primary and secondary health records (atrial fibrillation, hearing loss, heart failure, ischaemic heart disease, hypertension, stroke, peripheral vascular disease, diabetes, obesity, alcohol and substance misuse). The model hyperparameters were fine-tuned using a cross-validated grid search strategy, targeting the 13-year outcomes ( Supplementary Table 2 ). Models were developed using Python version 3.10.12, using the ‘xgboost’ package (version 2.0.3) for training and ‘scikit-learn’ (version 1.3.2) for evaluation and calibration procedures. We employed additional data cleaning to remove sparse features with <1% completeness within the observation window. We removed samples containing any outlier measurements (<0.5 or >99.5 percentile of the dataset). Correlated features were removed when Pearson’s correlation coefficient was over 0.9, prioritizing the retention of continuous variables over defined categorical or temporal variables of the same nature. We used an established ensemble model with gradient-boosted trees (XGBoost) to develop the dementia incidence and all-cause mortality models. 27 We additionally tested performance on any future dementia diagnosis using other linear and non-linear estimators (logistic regression, naïve Bayes, decision trees and random forests). We performed a random stratified split (70% for training and 30% for validation), balancing for age, dementia incidence and all-cause mortality rates between the two sets. At the evaluation stage, we measured the receiver operating characteristic area-under-the-curve ( ROC-AUC ) and the precision-recall area-under-the-curve ( PR-AUC ). We measured the positive predictive value ( PPV ), negative predictive value ( NPV ), Sensitivity and Specificity through thresholding based on the maximum F1-Score achieved for the positive class per outcome. In the presence of high class imbalance between dementia and dementia-free cases, ROC-AUC can produce falsely elevated estimates, undermining the impact of the PPV score. Meanwhile, the PR-AUC provides a relative measure of trade-off between PPV and Sensitivity compared to baseline dementia risk. A higher score than the baseline disease prevalence is treated as ‘better than random choice’. We employed a post hoc calibration technique using cubic splines to normalize the probability distribution and re-evaluate the classifier. 28 At the calibration stage, a further 30% of the training set was held out to fit the spline model and re-calibrate the probability scores within the internal validation set. The evaluation measures were reported post-calibration. 
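A minimal sketch of the modelling workflow just described is shown below, assuming simulated data and arbitrary hyperparameters. It follows the same sequence: a stratified 70/30 split, a held-out calibration subset, an XGBoost classifier, post hoc recalibration of the probability scores (isotonic regression is used here purely as a stand-in for the cubic-spline method used in the study), and evaluation by ROC-AUC, PR-AUC and an F1-optimal operating threshold. It is not the authors' pipeline, and the simulated class balance only loosely mimics the 8% 13-year dementia incidence.

```python
# Illustrative sketch (simulated data, assumed hyperparameters) of the modelling steps
# described above: stratified split, XGBoost, recalibration, and threshold selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             precision_recall_curve)
from sklearn.isotonic import IsotonicRegression
from xgboost import XGBClassifier

X, y = make_classification(n_samples=20000, n_features=30, weights=[0.92],
                           random_state=0)          # roughly 8% positive cases
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            stratify=y, random_state=0)
# Hold out part of the training set for recalibration, as described in the text
X_fit, X_cal, y_fit, y_cal = train_test_split(X_tr, y_tr, test_size=0.3,
                                              stratify=y_tr, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_fit, y_fit)

# Recalibrate scores (isotonic regression stands in for the cubic-spline method)
cal = IsotonicRegression(out_of_bounds="clip")
cal.fit(model.predict_proba(X_cal)[:, 1], y_cal)
p_val = cal.predict(model.predict_proba(X_val)[:, 1])

roc_auc = roc_auc_score(y_val, p_val)
pr_auc = average_precision_score(y_val, p_val)

# Choose the operating threshold that maximises the F1-score for the positive class
prec, rec, thr = precision_recall_curve(y_val, p_val)
f1 = 2 * prec[:-1] * rec[:-1] / np.clip(prec[:-1] + rec[:-1], 1e-12, None)
best = thr[np.argmax(f1)]
print(f"ROC-AUC {roc_auc:.3f}, PR-AUC {pr_auc:.3f}, F1-optimal threshold {best:.3f}")
```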
A stratified 10-fold cross-validation strategy was used as an additional procedure to validate both the PR-AUC and ROC-AUC on random data partitions. The 95% confidence intervals (95% CI) for these values were generated using the DeLong method, optimized for large sample sizes using linearithmic weights. 29 Stratified analyses were conducted, evaluating potential imbalances in classification performance across age groups and SIMD quintiles. Baseline differences in characteristics from the observation window between patients with and without a future dementia diagnosis are reported using the clinically supervised feature set. Continuous variables were measured using the Kruskal–Wallis test (for non-normal distributions), while categorical variables were reported using Pearson’s chi-squared test, where significance was assumed at P < 0.001. An unadjusted multivariate Cox Proportional Hazards model was used to test the significance of the clinically supervised features in relation to the timing of diagnosis. We ranked the top predictors in both the clinically supervised and data-driven models using the SHAP framework (Shapley Additive eXplanations) 30 and the calibrated probability scores. The estimated patient-level Shapley values on the internal validation set were summarized in density plots, highlighting the observations contributing to increased (red) and decreased risk (blue). To perform risk stratification, we used quantile-based discretization on the validation set's sorted and calibrated probability scores to generate 10 equally sized risk groups. The response rate (% observed dementia diagnoses) and the age distribution across each model subset were then reported. The cohort included 144 113 individuals, of whom 11 143 (8%) developed dementia during the 13-year prediction window. Baseline differences between the people who did and did not develop dementia are shown in Table 1 . Those individuals who developed dementia were older at baseline [75 (69–80) versus 60 (55–69) years, P < 0.001] and more likely to be female (62% versus 51%, P < 0.001). The distribution of socioeconomic deprivation was similar between groups, but records were more complete in people with a diagnosis. In those who went on to develop dementia, rates of recorded smoking, high alcohol consumption and BMI measures were lower within the observation window when compared to those who remained free from dementia. However, the ASSIGN and modified eFI risk scores were higher, although the ASSIGN score was only recorded in 3% of all participants. In most clinically curated variables, completeness rates were higher in the dementia group in the early study years, but lower closer to the end of follow-up . All clinically supervised comorbidities were more frequently observed in the group who developed dementia, except for prior alcohol or substance misuse. By the end of the study period, all-cause mortality rates were significantly higher in those who developed dementia (70% versus 24% in non-dementia cases, P < 0.001). Median observation time until death or end of study period was 125 (84–156) months in those who developed dementia, compared to 156 (156–156) months in those who did not ( P < 0.001). However, in patients who died ( n = 40 074), the observation period was longer in those with future dementia diagnosis [102 (68–128) versus 81 (40, 120) months without dementia, P < 0.001]. 
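Continuing the illustrative model above, the following sketch shows the two interpretation steps described earlier in the Methods: ranking predictors by mean absolute SHAP value and discretizing the calibrated probability scores into ten equally sized risk groups with their observed outcome ('response') rates. The objects model, X_val, p_val and y_val are assumed to come from the previous sketch; it is not the authors' analysis code.

```python
# Illustrative sketch (assumes 'model', 'X_val', 'p_val', 'y_val' from the previous
# example): SHAP-based feature ranking and decile-based risk stratification.
import numpy as np
import pandas as pd
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_val)          # one row per person, one column per feature

# Rank features by mean absolute SHAP value (global importance)
importance = np.abs(shap_values).mean(axis=0)
top = np.argsort(importance)[::-1][:10]
print("Top features by mean |SHAP|:", top.tolist())

# Ten equally sized risk groups from the calibrated probabilities, with the observed
# outcome rate ('response rate') per decile
risk = pd.DataFrame({"p": p_val, "y": y_val})
risk["decile"] = pd.qcut(risk["p"].rank(method="first"), 10, labels=False) + 1
summary = risk.groupby("decile").agg(n=("y", "size"), observed_rate=("y", "mean"))
print(summary)
```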
The rate of newly coded diagnoses in the routine data fluctuated over the years, with a notable drop following the COVID-19 pandemic in 2020 and a slower recovery in subsequent years . Most participants (76% mean) had their index diagnosis coded within primary care , although this could follow correspondence from a specialist outpatient clinic making the diagnosis. The mean age at diagnosis was 82 ± 7 years, and most dementia diagnoses were made in the 80–89 years old group after a median period of 78 (40–114) months from the index prediction date. There was non-specific subtype coding in 48% of dementia diagnoses, while 36% were ADRD-coded . The linear Cox regression model (excluding age) suggested that modified eFI >0.05 (the equivalent of two or more non-cognitive deficits) had the highest positive association with the timing of diagnosis, and most curated risk factors significantly contributed to a future diagnosis . After performing a stratified random split to generate the training ( n = 101 286) and validation ( n = 43 409) sets, the samples were fully balanced by age group, mortality and dementia incidence rates ( Supplementary Table 3 ). The incidence at 5-, 10- and 13-year prediction windows was 3, 6 and 8% for any dementia diagnosis, 1, 2 and 3% for ADRD, and 9, 20 and 28% for all-cause mortality, respectively. Table 2 shows the performance of both the data-driven and clinically supervised models, estimated after calibration. Overall, the data-driven model performed marginally better than the model that used a clinically supervised subset of features. For the data-driven model, the ROC-AUC scores had similar discrimination for dementia [0.89 (0.88–0.89) at 5 years, 0.87 (0.86–0.87) at 10 years and 0.85 (0.84–0.85) at 13 years] and mortality [0.89 (0.89–0.90) at 5 years, 0.89 (0.88–0.89) at 10 years and 0.88 (0.88–0.89) at 13 years]. Using the PR-AUC and F1-score thresholded scores ( PPV , NPV , Sensitivity and Specificity ) to assess discrimination in detected cases, we observed that model performance improved as the prediction windows widened for both dementia [ PR-AUC of 0.18 (0.13–0.23) at 5 years, 0.28 (0.24–0.32) at 10 years and 0.30 (0.26–0.34) at 13 years] and all-cause mortality [ PR-AUC of 0.55 (0.51–0.59) at 5 years, 0.73 (0.70, 0.75) at 10 years and 0.79 (0.77, 0.81) at 13 years]. In this case, precision was limited among the dementia incidence models (0.14, 0.26 and 0.30 versus 0.54, 0.65 and 0.70 in all-cause mortality models at 5, 10 and 13 years, respectively). The NPV was more robust, consistent with the relatively low dementia incidence (0.99, 0.97 and 0.96 at 5, 10 and 13 years, respectively). Sensitivity for all-cause death improved with the longer prediction window (0.54, 0.67 and 0.72 at 5, 10 and 13 years, respectively) but worsened for dementia (0.76, 0.58 and 0.53 at 5, 10 and 13 years, respectively), while the opposite trend was apparent for specificity (0.93, 0.89 and 0.86 for all-cause death versus 0.85, 0.89 and 0.89 for dementia incidence at 5, 10 and 13 years, respectively). The clinically supervised models for dementia incidence had marginally lower PR-AUC (0.17, 0.27 and 0.29 at 5, 10 and 13 years, respectively) compared to the data-driven models. Whilst the PPV was worse (0.13, 0.25 and 0.28 at 5, 10 and 13 years, respectively), the NPV (0.99, 0.97 and 0.96 at 5, 10 and 13 years, respectively) and sensitivity were improved (0.76, 0.62 and 0.55 at 5, 10 and 13 years, respectively). 
There were 4162 ADRD-specific diagnoses within the full prediction window, representing 37% of the total diagnoses of dementia. In this cohort, baseline characteristics were similar, with slight variation in some cardiovascular risk factors ( Supplementary Table 4 ). In the ADRD-specific sensitivity analysis, performance was poorer than for any dementia diagnosis at all prediction windows ( Supplementary Table 5 ). Using the clinically supervised model, the PR-AUCs were 0.05, 0.09 and 0.10 for ADRD compared to 0.17, 0.27 and 0.29 for any dementia at 5, 10 and 13 years, respectively. PR curves comparing performance for any incident dementia, ADRD and all-cause mortality are shown in Fig. 3 . ROC curves visualized across the same model sub-types showed similar discrimination . The 10-fold cross-validated ROC and PR curves indicated comparable performance to the internal validation sets for outcomes of future dementia . Model calibration improved precision by correcting for over-estimation of risk in the baseline models, with the effects of the spline calibration in adjusting the output probability scores becoming more pronounced as the prediction windows increased . Model performance over other common supervised ML classifiers is shown in Supplementary Fig. 7 , demonstrating optimal ROC-AUC and PR-AUC using the XGBoost model compared to other approaches tested. More diagnoses of dementia were made in individuals from the least deprived SIMD groups, and this relationship was stable over time and by sex . To investigate underlying model bias, the performance of the clinically supervised model for any incident dementia at 13 years was stratified by age and SIMD groups ( Table 3 ). Precision for any dementia diagnosis was notably lower in the smaller number of younger-onset cases ( PR-AUC of 0.025 [0.015–0.035], PPV of 0.047 [0.029–0.075] in those below 60 versus PR-AUC of 0.366 [0.332–0.400], PPV of 0.320 [0.302–0.332] in those between 80 and 89). The group >90 years old achieved a lower PR-AUC score (0.296 [0.196–0.397]) than the 80- to 89-year-old group, but had a better balance between sensitivity (0.731 [0.623–0.817]) and specificity (0.603 [0.553–0.650]). The PR curves were notably more stable across the individual deprivation quintiles . We used the SHAP framework to examine the top 20 ranked predictors of future dementia diagnosis in both the clinically supervised and data-driven models . In all instances, age (older), deprivation status (least deprived) and eFI (higher frailty) were among the top features associated with increased risk of dementia. Within the data-driven models, a range of variables were linked to a higher likelihood of a future diagnosis, including a higher number of long-term conditions, number of prescriptions, depression, hearing loss, epilepsy, stroke, smoking history, elevated calcium, glucose and cholesterol, and clinic reviews within neurology and geriatric medicine services. On the other hand, a documented increased cardiovascular risk (ASSIGN score ≥20), poorer lung function, higher blood pressure, high BMI, use of anti-hypertensive drugs and abnormal urea blood results appeared protective. Within the clinically supervised subsets, the variation in SHAP scores was more pronounced. Prior hearing loss, alcohol or substance misuse, stroke, obesity, peripheral vascular disease as well as smoking were associated with increased risk of future diagnosis. 
A history of hypertension appeared protective in 5-year outcomes but was associated with increased risk of dementia incidence over longer periods. On the other hand, heart failure was protective over 10 and 13 years, but this may be biased by survival rates in these patients. After risk stratification, both clinically supervised and data-driven models had predictions that reached an incidence rate above 30% at 13 years in the highest risk decile, compared to a whole-population incidence of 8% . Over 40% of this group were in their 70s at the time of index risk prediction. Conversely, 13-year dementia risk was <1% in the lowest three prediction deciles, although these predominantly consisted of individuals in their 50s at the time of risk prediction. We have extensively evaluated the diagnostic quality of a machine learning prediction model for long-term dementia risk developed from entirely routinely collected data. We demonstrate moderately capable prediction for diagnoses up to 13 years later, which could inform further testing or risk factor surveillance in those at the extreme of predicted risk. There was marginally improved PR with a high variable count data-driven approach using XGBoost, but at the expense of rule-out performance compared to a clinically supervised model restricted to 22 variables. Precision was consistent across quintiles of socioeconomic deprivation, but detection of younger-onset dementia cases earlier than 70 years old was notably limited, and performance fell markedly when restricted to more specific ADRD-coded diagnoses. Early detection of dementia is a major societal challenge, but population-level screening using routinely collected data to identify high-risk subgroups may improve the targeting of resource-intensive dementia investigations. Our study has important strengths. We used a large, population-level dataset with integrated primary and secondary care health data to maximize ascertainment of risk factors and dementia diagnoses over 13 years of follow-up. We were conservative in identifying a model development cohort at low risk of established cognitive issues by exclusion criteria using hundreds of codes and clinic attendances suitable for identifying pre-diagnostic dementia. In contrast to many reports of machine learning models for risk prediction, we have presented performance beyond basic discrimination and calibration measures flattered by relatively low outcome incidence, using PR to demonstrate the clear challenge of confident prediction for this complex condition. We have also shown a direct comparison between a data-driven approach and clinically supervised selection, suggesting in this case that the latter provides similar performance with parsimony and, therefore, greater potential for transferability to other EHR systems. The challenge of managing dementia-related disorders across an ageing population cannot be overstated. The anticipated increase in the number of individuals living with dementia by 2050 is likely to be in the region of 166%. 31 EHR data have led to a surge of studies covering large integrated population-level data to understand disparity, adverse effects and outcomes in those affected by dementia and related conditions. 32-34 These data contain important long-term markers of health to understand disease progression. Highlighting the cumulative effects of these markers and ranking their contribution to a potential diagnosis can be used to improve the prioritization of public health measures in middle-aged populations. 
However, this knowledge often lacks understanding of individualized risk, which is essential in an era of precision medicine to drive shared treatment decisions between an individual and their clinician. The great potential of data-driven prediction is coming closer to realization with larger EHR datasets that are crucially more granular in detail to understand heterogeneity of risk, using ever-advancing machine learning methods. The predictive quality and precision of our models for dementia improved as the prediction windows lengthened, highlighting the long-term cumulative effect of many risk factors associated with dementia pathology. Inevitably in an observational study, coding of disease or lifestyle factors reflects engagement with health services and is at risk of ascertainment bias. Despite this, our final model achieved sufficient precision-recall in the highest decile of risk to identify individuals with a 1 in 3 risk of a dementia diagnosis within 13 years, compared to baseline population risk of 1 in 13. While it might be expected that many of these individuals would be of advanced age, over 40% were under 80 years old. So, although the PPV was generally limited, in the context of a general population, high-risk stratification could still provide substantial public benefits. These may include better health and care resource utilization and improved targeting of pharmacological treatments for primary prevention. 35 , 36 The length of the prediction window is relevant for modern Alzheimer’s dementia immunotherapy, where confirmation of amyloid deposition and treatment are needed many years prior to established cognitive symptoms. 6 , 7 Population screening using predictive models of future risk might offer a more equitable strategy for determining eligibility for PET-imaging or novel therapies, in contrast to the potential for access favouring those with financial means or sufficient healthcare literacy. Here, we must acknowledge some critical challenges with the use of routinely collected data in model development. We have shown higher rates of dementia diagnoses in those with the least socioeconomic deprivation, which is in sharp contrast to selected cohort studies such as those in UK Biobank, the Whitehall II study of UK civil servants and Finnish cohorts. 37 , 38 Our data are likely to reflect stronger health-seeking behaviours for early cognitive decline in more advantaged populations, but the competing risk of earlier death in those from more disadvantaged backgrounds must also be considered. Further, the stratified analysis of our prediction model showed stable predictive performance across deprivation groups, suggesting potential utility even if under-representative in higher deprivation groups. Interestingly, the added value of SIMD measures is well recognized in the prediction of long-term cardiovascular events using the ASSIGN score. 25 Ultimately, our data reflect the true known population burden of dementia within a National Health Service in the UK where access to testing or diagnosis is not limited by ability to pay or requirement for insurance. Cohort studies have their own issues with representative inclusion, so caution must be taken against over-interpretation in this area. The lower likelihood of obtaining a dementia diagnosis in people from poorer backgrounds is a challenge for all healthcare systems but contributes to the argument for population screening and proactive targeting if and when more effective treatments for earlier dementia are available. 
Even without novel immunotherapies there are likely to be other societal advantages to system-wide recognition of early cognitive decline, to maximize access to appropriate health and social care service support and benefits where needed. While various clinical models have been developed in the past, the evidence suggests that there is no single best prognostic model for dementia prediction. Only a small proportion of such models have been externally validated. One example is the Cardiovascular Risk Factors, Ageing and Dementia model, which showed low discriminative power for prediction of incident dementia, with a ROC-AUC of 0.71 (95% CI 0.66–0.76). 39 , 40 Additionally, models that include cognitive testing as a predictor tend to have higher ROC-AUC scores (>0.75) compared to those that do not. 41 However, this renders them unsuitable for pre-symptomatic screening. This limitation is also present in most ML studies. Some EHR-based studies have demonstrated exceptional performance with ROC scores of 0.89 and above for long-term predictions, but target only patients with memory clinic referrals. 17 Other studies using existing routine data achieved ROC-AUCs over 0.80 at 6 years prior to diagnosis using propensity-matched cohorts. 42 However, this approach may limit generalizability, as it typically discards many control cases that could contain important risk indicators. In our approach, we opted to evaluate an unbiased sample of community-dwelling adults using the PR-AUC, which effectively measures discrimination when sample sizes are imbalanced. Thus, the variability in study design settings and reported outcomes makes it difficult to establish a clinical or data-driven performance baseline for comparison against our models. The estimated SHAP values indicated a wide range of routine data points associated with dementia risk. While many of the associations with modifiable risk factors were consistent with reports from the literature 9 (e.g. smoking, alcohol consumption, hearing loss, stroke and epilepsy), there were also some contradictions (e.g. heart failure being protective of long-term diagnoses and hypertension being protective of short-term incidence). Although these may be partially explained by underreporting in EHR data or competing risks of death from acute conditions, there may also be a causal effect stemming from better control measures provided for high-risk individuals. 43 , 44 Nonetheless, frailty and lifestyle risk factors (age, eFI, SIMD, blood pressure, smoking and BMI) unsurprisingly headlined the summary plots. In these cases, frailty in dementia was associated with a high number of ageing-related health deficits and signs of lower BMI and blood pressure. We also acknowledge several limitations of our study. Firstly, our resources did not allow patient and public engagement to be incorporated into the study design. Although we have been robust in cross-validation, our models lack external dataset validation. However, our clinically supervised model of 22 variables has high transferability potential to support this. Due to the underreporting of PR-AUC in dementia studies, and lack of established prediction baselines, we could not compare performance against similar studies in the literature. We utilized phenotype code lists to derive confirmed clinical diagnoses of dementia from GP and hospital coding.

Although the accuracy of EHR-based phenotype definitions is typically high, 45 there is also a possibility that these are underreported across the general population. Furthermore, our analysis highlighted difficulties in developing models for specific ADRD diagnoses that were clearly under-reported in our data. ADRD represents the most common dementia sub-type, but only around a third of dementia cases in our cohort reflected this, with a high proportion of unspecified dementia codes used particularly in primary care. Our relatively low numbers of younger-onset dementia diagnoses limited the validity of prediction in this group. This was partially due to the data collection procedure, as we defined a fixed cutoff of individuals aged 50 and above on 1st April 2009, with no additional entries after this date. This further prevented training and validation on shorter follow-ups beyond the first study year, as it would limit applicability for younger individuals. Future models could incorporate competing risk components to better account for the potential overestimation of dementia risk in our models for people at simultaneously higher risk of earlier non-dementia-related death. This approach has been integrated into newer cardiovascular risk models such as SCORE2. 46 Strategies to improve primary prevention for dementia are essential to mitigate the challenges of an ageing population. We have demonstrated that gradient-boosting (XGBoost) machine learning prediction models, based entirely on routinely collected health data, can provide moderately capable prediction for high-risk individuals many years prior to dementia diagnosis, including when using a parsimonious clinically supervised model with high transferability. Personalized estimates of future dementia risk could influence risk factor modification, access to clinical trials and help target brain imaging required for novel immunotherapy treatments in selected individuals with pre-symptomatic disease.
We recruited participants from a prospective cohort study investigating maternal immunizations in low- and high-risk pregnancies at the University of Washington (UW). Inclusion criteria were ability to obtain informed consent, singleton pregnancy, and availability of paired maternal-cord blood samples. Exclusion criteria were multiple pregnancy (eg, twin), known fetal or neonatal genetic anomaly, or small-for-gestational-age birth weight infants (<10th percentile for gestational age per Olsen growth curves). This study was reviewed and received ethics approval through the UW Human Subjects Division. All participants provided written informed consent. Clinical health, immunization, race and ethnicity, and insurance data were abstracted from electronic medical records and linked Washington State Immunization Registry data as previously described. We considered insurance status categories as public, private, Tricare (military), federal, or other. We calculated body mass index using maternal weight at the time of delivery. We categorized participants with type 1 or type 2 diabetes mellitus as having pregestational diabetes and defined chronic hypertension as hypertension diagnosed before 20 weeks' gestational age. We defined preeclampsia with or without severe features, chronic hypertension with superimposed preeclampsia, or eclampsia based on American College of Obstetricians and Gynecologists' definitions. We categorized participants with conditions such as systemic lupus erythematosus or Crohn disease as having autoimmune or inflammatory conditions, respectively. We considered participants as being on immunosuppressing medications if they received long-term corticosteroids, biologics, or other immunosuppressants during pregnancy ( Table 1 ). We categorized birth quarter by dividing a year into 4 sections: January to March, April to June, July to September, and October to December. We defined low birth weight deliveries as infants born weighing <2500 g. Maternal blood samples were collected within 72 hours of delivery and cord blood samples at delivery. Blood samples were centrifuged at 1800 rpm for 20 minutes and sera stored at −80 °C. Maternal total IgG testing was performed by the UW Immunology Clinical Laboratory on maternal and cord serum samples using an Optilite analyzer with standard reagents. Maternal and cord sera were tested in parallel on the same day for IgG against RSV preF, IAV A/Hong Kong/4801/2014 HA (H3), and A/Michigan/45/2015 HA (H1) with an electrochemiluminescence immunoassay (Meso Scale Discovery [MSD]). Information regarding seasonal influenza vaccine strains and match to MSD antigens is presented in the supplement ( Supplementary Table 1 ). Serum samples were diluted 1:5000 and 1:25 000 and processed according to the manufacturer's protocol. Quantification of specimen antibody (arbitrary units per milliliter) was determined by plotting assay outputs onto the log-transformed standard curve generated from serially diluted calibrators. We defined efficient maternal antibody transfer as a cord to maternal (cord:maternal) antibody ratio >1. Baseline demographic and pregnancy characteristics were described, and these variables were compared by t tests, chi-square tests, and Fisher exact tests for comparisons with small numbers. We categorized pregnancies into those with preterm deliveries (gestational age <37 weeks) and those with full-term deliveries (≥37 weeks).
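For illustration, the short sketch below shows one way the back-calculation from assay signal to antibody concentration and the transfer-ratio definition could be implemented. The calibrator concentrations and specimen signals are made-up values, not data from the study, and the log-log linear interpolation is a simplification of the curve fitting performed by the assay software.

```python
import numpy as np

# Hypothetical calibrator series: known concentrations (AU/mL) and assay signals
calib_conc = np.array([10, 100, 1_000, 10_000, 100_000], dtype=float)
calib_signal = np.array([120, 950, 7_800, 61_000, 410_000], dtype=float)

def signal_to_conc(signal, dilution_factor):
    """Interpolate a specimen signal on the log-transformed standard curve
    and back-correct for the dilution applied before testing."""
    log_conc = np.interp(np.log10(signal),
                         np.log10(calib_signal), np.log10(calib_conc))
    return (10 ** log_conc) * dilution_factor

# Example maternal-cord pair tested at a 1:5000 dilution (made-up signals)
maternal_igg = signal_to_conc(18_500, dilution_factor=5_000)
cord_igg = signal_to_conc(24_300, dilution_factor=5_000)

transfer_ratio = cord_igg / maternal_igg
efficient_transfer = transfer_ratio > 1   # study definition of efficient transfer
print(f"cord:maternal ratio = {transfer_ratio:.2f}, efficient = {efficient_transfer}")
```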
We evaluated the relationship between preterm birth (PTB) and maternal and cord RSV and IAV IgG levels using Wilcoxon rank sum tests. We assessed this relationship using log-transformed maternal and cord RSV and IAV IgG levels with linear regression. Similar analyses were performed for untransformed ratios of infant to maternal RSV and IAV IgG, which we tested with t tests and linear regression. We assessed the correlation between infant and maternal RSV and IAV IgG for the entire sample and across PTB status. We evaluated the relationship between participants with and without influenza vaccination during pregnancy and maternal and cord IAV IgG levels using Wilcoxon rank sum tests stratified by birth status. We also compared cord:maternal IAV untransformed IgG ratios between pairs with and without maternal influenza vaccination using t tests. We assessed statistical differences of maternal IgG concentrations, cord IgG concentrations, and cord:maternal IgG transfer ratio between maternal vaccinations that occurred >6 and <6 months before, >3 and <3 months before, and >1 and <1 month before delivery using t tests. Covariates were selected a priori and based on significant associations between the exposure of PTB and the outcomes of RSV and IAV cord IgG. The first minimally adjusted linear regression model included the annual quarter of the infant's birth date. A second minimally adjusted linear regression model included annual birth quarter and insurance status. We performed statistical analyses using SAS software version 9.4 (SAS Institute Inc) and considered a 2-sided P < .05 to be statistically significant . We followed STROBE guidelines (Strengthening the Reporting of Observational Studies in Epidemiology) . Between June 2018 and July 2021, 115 maternal-infant pairs met inclusion criteria. Most births (69.6%) occurred in 2020 to 2021. Of 115 infants, 29 (25.2%) were born preterm. Demographic and baseline medical information was similar among preterm and full-term pregnancies ( Table 1 ), with the exception of insurance status, preeclampsia, and receipt of Tdap vaccine (tetanus, diphtheria, acellular pertussis) during pregnancy. A large proportion of participants in this cohort were vaccinated with the annual influenza vaccine (72.2%) and received the Tdap vaccine (94.8%) during pregnancy. Birth dates in this cohort were relatively evenly distributed across birth quarters, with the greatest number of infants born in quarter 3 (33.9%) and the fewest born in quarter 2 (19.1%; Table 2 ). As expected, birth weight had a significantly lower median in preterm infants ( P < .001). A small subset of infants was born low birth weight (14.8%) and admitted to the neonatal intensive care unit (21.1%). Higher maternal and cord IgG antibody levels were seen for RSV, IAV-H3, and IAV-H1 in full-term as compared with preterm infants. The median transfer ratio of IgG was highest for RSV in the total cohort and the FTB group, while the median transfer ratio was highest in the PTB group for IAV-H3. When participants were stratified by maternal influenza vaccination and birth status, the highest median maternal and cord IgG concentrations were in the vaccinated FTB group and the lowest in the unvaccinated PTB group for IAV-H3 and IAV-H1 ( Table 3 ). Maternal and cord antibody concentrations were highest in the influenza-vaccinated group for IAV-H3 and IAV-H1 regardless of PTB status . 
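The comparisons described above were run in SAS; as a rough Python analogue, the fragment below sketches the same steps for a single antigen, assuming a hypothetical `pairs` DataFrame with one row per maternal-infant pair and columns for preterm status, maternal and cord RSV IgG, and birth quarter. The Mann-Whitney U test is used as the equivalent of the Wilcoxon rank sum test.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

is_preterm = pairs["preterm"] == 1

# Wilcoxon rank-sum (Mann-Whitney U) comparison of cord RSV IgG by preterm status
u, p_val = stats.mannwhitneyu(pairs.loc[is_preterm, "cord_rsv_igg"],
                              pairs.loc[~is_preterm, "cord_rsv_igg"])

# t-test on untransformed cord:maternal transfer ratios
pairs["ratio_rsv"] = pairs["cord_rsv_igg"] / pairs["maternal_rsv_igg"]
t, p_ratio = stats.ttest_ind(pairs.loc[is_preterm, "ratio_rsv"],
                             pairs.loc[~is_preterm, "ratio_rsv"])

# Minimally adjusted linear model: log cord IgG ~ preterm + annual birth quarter
fit = smf.ols("np.log(cord_rsv_igg) ~ preterm + C(birth_quarter)", data=pairs).fit()
print(fit.summary())
```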
The median IAV-H3 IgG transfer ratio was highest in the unvaccinated group for PTB and FTB, but the difference was not significant. Maternal and cord IgG concentrations did not differ significantly between vaccinated and unvaccinated mothers by birth year or when vaccination was stratified by <6, <3, or <1 month prior to delivery (not shown). When PTB and FTB were compared without stratification by maternal vaccination, median maternal IgG level, cord IgG level, and transfer ratio were significantly lower in the PTB group for all specific antibodies, including RSV anti-preF IgG cord concentration and IgG transfer ratio and IAV-H3 IgG cord concentration and IgG transfer ratio. Maternal and cord concentrations were moderately correlated with each other for RSV, IAV-H3, and IAV-H1. The correlations for full-term infants and their mothers were similar for RSV and higher for IAV-H3 and IAV-H1 as compared with the correlations in the total sample. The correlation remained significant and increased for RSV and IAV-H1 in the PTB group but decreased and was not significant for IAV-H3. As expected, PTB was not significantly associated with log-transformed maternal IgG concentrations for any of the virus-specific antibodies in either the null model or when adjusted for birth quarter ( Table 5 ). PTB was associated with significant decreases in log-transformed cord IgG RSV and IAV antibody concentrations before and after adjustment; PTB was also significantly associated with a substantial decrease in cord:maternal IgG transfer ratios for RSV and IAV-H3 (47% and 34% decrease, respectively). A further adjusted model including insurance status produced similar results to the adjusted models presented in Table 5 , with slight decreases in precision; these results were therefore not reported. We studied the transfer of maternal RSV and influenza IgG antibodies to infants, using standard and novel immunoassays to document not only the efficient transfer of these antibodies in preterm as well as full-term infants but also that preterm infants can benefit from influenza immunization during pregnancy. We found that maternal and cord antibody concentrations were well correlated for RSV, IAV-H3, and IAV-H1. Importantly, we observed efficient transplacental transfer of RSV, IAV-H3, and IAV-H1 IgG antibodies in preterm as well as full-term infants. When we further stratified our analyses by maternal influenza vaccination, maternal and cord antibody concentrations were highest for IAV-H3 and IAV-H1 in the vaccinated groups regardless of gestational age category at delivery. However, cord antibody concentrations and cord:maternal IgG transfer ratios were significantly lower in the PTB group for RSV and IAV-H3. Associations between cord concentration and PTB as well as less efficient maternal IgG transfer ratios and PTB were significant ( P ≤ .05) for RSV and IAV-H3. This demonstrates that influenza vaccination during pregnancy has the potential to enhance transplacental IgG transfer even for preterm infants, which is important given high rates of preterm delivery globally and increased risks of morbidity and mortality in preterm infants. In our study, the median cord:maternal transfer ratio was very high: 1.64 for RSV, 1.50 for IAV-H3, and 1.49 for IAV-H1. These findings are similar to other studies investigating transplacental antibody transfer for common respiratory viruses, including RSV [ 19–21 ].
For example, Albrecht et al calculated an average maternal-infant antibody transfer ratio of 1.5 for IAV but did not investigate differences between H3 and H1. A study of 57 full-term mother-infant pairs from Seattle calculated cord:maternal antibody transfer ratios of 1.15 for RSV, 1.22 for IAV-H3, and 1.38 for IAV-H1 . While differences in effect size might be due to sampling site, vaccine exposure, and IgG detection methods, our findings confirm that RSV and influenza specific antibodies are transferred very efficiently across the placenta and in similar ratios. We found that cord:maternal antibody transfer ratios were lower in pregnancies with preterm infants, a finding that is well established in the literature for multiple virus-specific antibodies . More recently, in studies including Alaska Native and Seattle-based mother-infant pairs by Chu et al and an Australian indigenous population by Homaira et al , the authors found significantly lower cord:maternal antibody transfer ratios in preterm as compared with full-term infants for RSV but did not see significant differences between groups for IAV-H3 or H1. The differences observed for IAV may be affected by the timing of predominant subtype circulation, sampling site, respiratory virus season of study, and infection history. For IAV-H3 and IAV-H1, we found higher maternal and cord antibody concentrations in vaccinated individuals and their infants regardless of PTB status, with the largest discrepancies produced in the PTB group for IAV-H1. Zhong et al investigated the difference in H1N1 and H3N2 IgG concentrations in pregnant individuals and infants at birth and found significantly higher concentrations in those persons vaccinated during pregnancy and their infants for H1N1 but saw no significant differences in antibody concentration for H3N2. However, they limited their H3N2 analysis to the 2013–2014 and 2014–2015 influenza seasons, which in turn limited the sample size and power to detect significant differences. It is important to consider that due to antigenic drift, antigens included in the annual influenza vaccine are often changed, and we used the same target strains for all specimens regardless of year for the MSD assay in this study. For example, the strains used in the MSD assay, A/Hong Kong/4801/2014 HA (H3) and A/Michigan/45/2015 HA (H1), matched the 2017–2018 season vaccine strains and the H1 2018–2019 season vaccine strain but did not match vaccine strains for subsequent seasons ( Supplementary Table 1 ) . This is an important distinction because the assay targets may not fully represent the true protection elicited by vaccination in each season. In turn, the MSD output gives a representation of partial or nonspecific protection elicited from vaccination after the 2017–2018 season. Although these results may not provide a true representation of the protection conferred from vaccination, it is reasonable to assume that vaccination may have boosted antibodies toward the MSD H3 and H1 targets, which could represent a cross-reactive correlate of protection and be considered a correlate of season-specific protection. Strengths of our study include a moderately sized multiyear cohort with an adequate representation of pregnancies with preterm infants. This allowed us to investigate PTB as a risk factor for less efficient cord:maternal antibody transfer across multiple respiratory virus seasons. 
We also collected vaccination data utilizing the electronic medical record linked to the Washington State Immunization Registry, providing access to reliable vaccination records. This study supports previous data showing that maternal influenza immunizations increase maternal and cord H3N2 and H1N1 IgG concentrations. This evidence has been primarily presented in clinical trials, with little information known about this association in observational settings. This study helps fill this knowledge gap by presenting increased maternal and cord IAV IgG concentrations in an observational setting. Furthermore, laboratory methods used in this study utilized MSD, a novel, previously validated, high-throughput assay that requires a minimal amount of sera, limiting the time and resources needed to obtain meaningful results. Our study was limited by the fact that we were unable to document prior infection with influenza or RSV. Since prior infection may influence antibody concentrations more strongly in unvaccinated people, this may be especially relevant to our RSV analysis, where all participants were unvaccinated. Also, we did not follow participants and their infants for infections after birth. This gap points toward future work that should be done to understand how vaccination timing and IgG concentrations in the mother and infant affect subsequent infection risk. These data can be used to make recommendations about the optimization of maternal influenza vaccination to potentially include protection of preterm infants. There is a possible detection issue from mismatched seasonal influenza vaccine strains and the influenza MSD antigens for the 2018–2019, 2019–2020, and 2020–2021 influenza seasons. Although the vaccine strains and MSD antigens differ in these seasons, the comparisons are internally controlled by assessing the cord:maternal transfer ratio. This study period coincided with COVID-19 mitigation measures, which could have affected the generalizability of the last season of the study. However, upon further investigation, there were not large differences in the distributions of maternal factors, birth outcomes, maternal IgG concentrations, cord IgG concentrations, or cord:maternal IgG transfer ratios between the periods before and after COVID-19 mitigation measures. Last, we were unable to investigate biological variation or antibody function in this study. Therefore, we cannot draw any conclusions as to whether biologic variability influences antibody concentrations or how functional the measured antibodies were. Our results demonstrate that cord antibody concentrations are higher in full-term and preterm infants of vaccinated individuals, illustrating that maternal influenza vaccination is an important mitigation technique to boost infant antibody concentration and potentially decrease the risk of infection in infants, particularly infants born preterm. Additionally, we saw cord:maternal transfer ratios >1 for RSV and influenza in unvaccinated pregnancies, indicating that efficient antibody transfer can occur following natural infection. Infants born preterm are more likely to have lower cord antibody concentrations and less efficient maternal antibody transfer for RSV and IAV-H3, putting them at greater risk for infection after birth. Our study also provides a baseline for comparison as maternal RSV vaccine uptake increases and may assist in validating correlates of protection against RSV infection in the future.
Human childbirth has been strongly shaped by evolution. During parturition, the human fetus undergoes a uniquely complex rotational descent through the pelvis. In response to this evolutionary selective pressure, humans have developed a reliance on cooperative and social birthing practices, a phenomenon termed "obligate midwifery" by Trevathan. While some evidence of social birth exists in other primates (e.g. [ 3–5 ]), the near universality of this practice and the active role taken by birth attendants is believed to be a distinctive trait of our species. Trevathan postulated that social birth assistance is a product of natural selection, with individuals who sought aid having higher fitness. Consequently, birth support offered by kin and community members, particularly those with prior childbirth experiences, has become a consistent feature across diverse cultures. Birth assistance carries profound implications during medical emergencies, such as cases where the umbilical cord is entangled around the newborn's head, but this assistance extends beyond such critical situations. Other key types of support include informational support, advocacy, and emotional support. Emotional support during labor, specifically support that fosters comfort and encouragement and makes the recipient feel loved and respected, has been shown to trigger the release of oxytocin, a hormone that orchestrates uterine contractions during labor and initiates successful breastfeeding. Physical touch from birth attendants also stimulates oxytocin release. Mirroring its effects in non-parturient contexts, oxytocin fosters positive mood, reduces stress, and, crucially during labor, mitigates perceived pain. Emotional support therefore exerts notable biological impacts on the labor and birthing process. For instance, a Cochrane review demonstrated that emotional support during labor corresponds to a reduced likelihood of negative childbirth sentiments, decreased use of intrapartum analgesia, shorter labor duration, and a lower chance of cesarean or instrumental vaginal births. The biomedicalization of childbirth, which emphasizes a technocratic and medicalized approach to birth, has disrupted the availability of emotional support in labor. This biomedical model, which often deprioritizes emotional support and traditional birth practices, had particularly devastating effects during the COVID-19 pandemic. In an effort to control infection, many hospitals in the USA implemented strict policies that prohibited or severely limited the presence of preferred labor support persons, such as partners, mothers, or doulas. In some cases, individuals were forced to give birth without any emotional support persons due to hospital-imposed restrictions, support person infection, travel restrictions, or childcare needs. Since these barriers to emotional support were somewhat randomly applied across birth locations and timeframes, the pandemic offers insight into how an evolutionary mismatch in the distinctly human reliance on emotional support during labor may impact perceived childbirth stress. Here, we evaluate how giving birth alone, the number of emotional support persons, the absence of specific preferred persons, and the perceived availability of one's medical provider are associated with perceived childbirth stress among individuals who gave birth during the COVID-19 pandemic in the USA. Data come from the COVID-19 and Reproductive Effects (CARE) study, which has been extensively described elsewhere.
In brief, this was an online convenience sample survey of pregnant people aged 18 years and older living in the USA. This study was approved by the Dartmouth Committee for the Protection of Human Subjects and all participants provided informed consent. Participants were recruited through study announcements posted on social media platforms (Facebook, Twitter) and distributed via email to contacts working in maternity care and public health. The first survey, administered prenatally, was launched on 17 April 2020 using the Research Electronic Data Capture (REDCap) platform . During the prenatal survey, participants provided their anticipated due date. Individuals who consented to be re-contacted were sent a follow-up survey to ask about their birth experience. The invitation for this postnatal survey was sent four weeks after their listed due date. Data for this analysis come from the prenatal and postnatal data collection waves. One thousand seven hundred and ninety-two participants from the pregnancy survey had complete data for the study variables and agreed to be contacted again. Of those, 1120 completed the postnatal survey (62.9%). Participants who completed the follow-up survey were more likely to have higher education but did not significantly differ from those who did not complete the follow-up survey in relation to age, self-identified race, prenatal depression, or previous birth. Complete data for all the study variables were available for 1082 participants. During the postpartum survey, participants were asked: “How stressful did you find the birth experience?” We used a visual analog scale (VAS) to allow participants to score from “Not at all stressful” to “Very stressful” on a scale of 0–100 . Previous research has found similar VAS scales are an easy-to-understand and implement measure, including when assessing childbirth-related stress and trauma . During the postpartum survey, participants were asked, “Other than doctors, nurses, or midwives, who was in the delivery room with you when you delivered the baby? Check all that apply.” If individuals selected “no one” from the available options for this variable, then they were classified as having given birth alone. We summed the number of people that participants reported were in the room during delivery from the following options: partner, mother, father, sibling, friend, mother or father-in-law, a doula, or other. Given the small number of participants with four or five support persons ( N = 6), we collapsed the available responses to 0, 1, 2, or 3 + for analysis. To index the perceived emotional availability of providers, we asked whether during labor: “My care providers seemed busy/preoccupied/stressed, or had to limit their time in the room with me.” (Yes/No). During pregnancy, participants were asked: “If there were no restrictions, who would you ideally have in the room with you during delivery? Select all that apply.” They were able to select from the following options: Partner, parent, other family member, friend, doula, no one, or other. The three most common categories were partner , parent ( N = 337), and doula ( N = 204). We then used data on who individuals said was present during delivery to see if there was a “mismatch” in the desired presence for each of these three categories (mismatch partner, mismatch parent, mismatch doula). Age (years) was analyzed as a continuous variable. 
Race was self-identified using US Office of Management and Budget categories (white, Black/African American, Asian, Hispanic, American Indian/Alaska Native [AI/AN], Other). Self-identified white ethnicity was used as the reference category for the race variable since this group was previously reported to have higher birth satisfaction than others during the pandemic . Education was analyzed as less than a college education, college-educated, or advanced degree. The gold standard Edinburgh Depression Scale was used to assess maternal depression during the prenatal data collection wave. Individuals with an EPS score >=13 were categorized as having prenatal depression (yes/no) . Individuals were analyzed according to whether this was their first birth (yes/no). Participants indicated whether they had undergone a cesarean section delivery (yes/no). Participants indicated whether they experienced any complications during their labor and delivery (yes/no). All analyses were performed using R version 4.2.3. We first evaluated the characteristics of emotional support during delivery for our participants using descriptive statistics. We then ran six separate linear multivariate regression models to evaluate our hypothesis of whether emotional support predicted perceived childbirth stress. The first three variables were giving birth alone (dichotomous), number of emotional support persons (ordinal), and perceived emotional availability of providers (dichotomous). The next three models evaluated whether a mismatch in a desired partner, parent, or doula presence predicted perceived childbirth stress, with each of those variables analyzed dichotomously. For all models, we evaluated multicollinearity (all variance inflation factors < 1.09), linearity, normality, and homoscedasticity to ensure all assumptions for linear regression were met. We set alpha at P < .05. Beta coefficients, 95% confidence intervals, P -values, and adjusted R 2 values are reported for all models. We compared our longitudinally collected data on missing support persons with similar data collected cross-sectionally. During the postpartum survey we asked, “Was there anyone you wanted in the delivery room who was not there?” (Yes/No). If participants answered yes, we asked who was missing (options: partner, mother, father, sibling, a friend, mother or father-in-law, a doula, other). Finally, we asked “Was anyone able to attend the labor and delivery virtually (over a video chat or phone)?.” We used this to assess whether virtual labor support was associated with childbirth stress, or whether virtual support attenuated any of the associations with missing support persons and childbirth stress. This allowed us to evaluate whether in-person emotional support is particularly important for alleviating childbirth stress. Sample characteristics are described in Table 1 . The mean maternal age was 31.8 years (SD = 4.0; range = 18–47). Most participants had one support person at delivery (89.6%, N = 969); 1.9% ( N = 21) had no support persons, 7.3% ( N = 79) had two support persons, and 1.2% ( N = 13) had three or more. Thirty-four percent of participants ( N = 373) reported in the postpartum interview that there was someone they wanted at delivery who could not attend. 
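The adjusted models described above were fitted in R; as a rough Python analogue, the sketch below fits one of the six linear models (childbirth stress regressed on giving birth alone plus the covariates) and checks variance inflation factors. It assumes a hypothetical `care` DataFrame whose column names are placeholders for the study variables, not the authors' actual dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# One of the six adjusted models: stress VAS ~ birth alone + covariates (illustrative names)
model = smf.ols(
    "stress_vas ~ birth_alone + age + C(race) + C(education)"
    " + prenatal_depression + first_birth + cesarean + complications",
    data=care).fit()
print(model.summary())   # beta coefficients and 95% CIs per predictor

# Check multicollinearity: variance inflation factor for each design column
X = model.model.exog
vifs = pd.Series(
    [variance_inflation_factor(X, i) for i in range(1, X.shape[1])],
    index=model.model.exog_names[1:])   # skip the intercept
print(vifs)
```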
Six percent ( N = 72) of participants had at least one person attend the labor and delivery virtually, with those who were missing support persons being more likely to receive this form of support (12.6% [ N = 47] of participants missing support persons vs 3.5% [ N = 25] of those not missing a support person). 92.7% of participants who provided a reason for a missing support person ( N = 346) said that at least one of the reasons that their missing support person(s) could not attend was due to hospital restrictions (See Table 2 for all listed reasons). Fourteen percent of participants reported that they perceived their provider as busy, worried, stressed, or limiting their time with them during labor. During the pregnancy interview, 0 participants said that they would want “no one” to support them in labor and delivery in the absence of restrictions. 98.9% of participants stated that they would want a partner with them at delivery, followed by 31.1% ( N = 337) who stated that they would want a parent, and 18.9% ( N = 204) who stated they would want a doula. After delivery, 87.3% of participants reported that their partner attended their birth, while mothers were present at 2.4% ( N = 29) of births, fathers at 0.2% ( N = 2) of births, and doulas at 4.5% ( N = 54). Of the 34% of participants who said that they wished someone else had been in the delivery room with them, mothers were the most frequently missed (45.5% of the subset of 373 participants). Full model results are provided in Supplementary Tables 1 – 2 . Nulliparity was associated with significantly higher childbirth stress ( B = 8.4–9.1 across models, all P < .001). Education was associated with significantly higher childbirth stress, with a more advanced degree associated with significantly higher reported stress than those without a college degree ( B = 7.6–8.0 across models, all P =< .002). Self-identified race was not associated with perceived birth stress in adjusted models. Cesarean section delivery ( B = 12–14 across models, all P < .001) and other labor and delivery complications ( B = 17–18 across models, all P < 0.001) were also associated with significantly higher childbirth stress. We found that five out of six emotional support variables were significantly associated with childbirth stress in the expected direction in both unadjusted and adjusted models ( Table 3 ). Specifically, there was a significant linear relationship between the number of support persons and perceived birth stress . The quadratic ( B = 2.8, P = .6) and cubic ( B = −3.6, P = .2) contrasts were not significant, suggesting that a linear trend sufficiently captures the relationship. Individuals who gave birth alone had significantly higher childbirth stress ( B = 15.7, P < .001). Individuals who experienced a mismatch in partner (12.5, P = .008) or doula support ( B = 5.2, P = .021) reported significantly higher childbirth stress. Parent mismatch was unrelated to childbirth stress ( B = 0.85, P = .6). Finally, individuals who said that their provider seemed busy, worried, or stressed during delivery had significantly higher childbirth stress ( B = 16.0, P < .001). Cross-sectional data generally supported the longitudinal trends for the mismatch variables described above. Individuals who said at the postpartum visit that they were missing a desired support person because of the pandemic reported significantly more childbirth stress ( Supplementary Table 3 , B = 7.0, P < .001). 
Individuals who said that they wished their partner ( B = 14, P = 0.004) or doula ( B = 8.5, P = 0.004) had been able to attend the birth and also reported significantly more childbirth stress ( Supplementary Table 4 ). In contrast to the longitudinal analysis, participants who reported that they wished their mother had been able to attend the birth reported significantly higher childbirth stress ( B = 5.2, P = .021). Having someone attend birth virtually was not significantly associated with reduced childbirth stress ( P > .37 in both adjusted and unadjusted analyses) and did not significantly attenuate any of the relationships for the other emotional support variables (data not shown). Humans are characterized by their sociality in all aspects of reproduction, including childbirth. Pelvic changes across human evolutionary history that caused rotational birth could have placed a selective advantage on seeking assistance at delivery . It has been hypothesized that the occiput anterior position of human birth (i.e. most babies are born facing away from the parent, toward the back) makes it difficult for birthing individuals to catch their babies or to remove the umbilical cord from around the baby’s neck . Given the importance of birth attendants for maternal and newborn health, it has been argued that the powerful emotions around labor and birth, such as fear or excitement, could encourage support-seeking behaviors, ultimately enhancing reproductive success . We were therefore interested in understanding whether the evolutionary mismatch resulting from the COVID-19 pandemic—which represented a rapid and somewhat randomly distributed disruption to emotional support persons in labor—would be associated with variation in perceived childbirth stress. We found that five of the six emotional support variables that we tested, including wanting but not having a partner and doula present at delivery, were associated with childbirth stress, even when adjusting for maternal sociodemographic factors and labor and delivery complications. The effect sizes were also substantial, being similar or even greater than those observed for cesarean section and clinical complications. These findings are consistent with the hypothesis that natural selection has shaped a preference for birth attendants to provide in-person emotional support in labor to reduce stress and anxiety . These findings align with and extend previous work. For instance, Preis et al . found the absence of an emotional support person during labor was associated with decreased birth satisfaction among US-based individuals during the pandemic. Our study advances this knowledge in several key ways. First, we demonstrate that virtual support failed to mitigate the increased stress associated with missing in-person support, highlighting the irreplaceable nature of physical presence during birth. Second, we identified a previously unreported linear relationship between the number of support people and perceived stress, where each additional support person was associated with lower stress levels. These findings have important implications beyond pandemic contexts, as many hospitals routinely limit support persons to one or two visitors . 
Given that institutional policies often restrict the number of support persons without clear evidence for these limitations, future research should aim to replicate these findings, with particular attention to potential differences in characteristics among participants who do and do not prefer multiple support persons. One possible interpretation of our results is that individuals who are more anxious generally are more likely to perceive their births as more stressful and to say that they needed more support in labor, irrespective of the amount of support that was received. While this is possible, the temporal separation between four of our six measures—capturing desired support persons during pregnancy and actual support presence during delivery—helps mitigate concerns about reverse causality. Notably, zero participants expressed a preference during pregnancy for giving birth alone. Therefore, all cases of participants giving birth alone represent an undesired mismatch between preferred and actual support. Similarly, mismatches between desired and actual presence for partners, parents, and doulas were identified by comparing pre-birth preferences to delivery room presence. This prospective study design, combined with the finding that no participants initially desired to give birth alone, strengthens our interpretation that the support person's absence contributed to increased childbirth stress, rather than stress levels influencing retrospective wishes about support. An additional novel aspect of our study was the evaluation of the perceived attentiveness of the care provider in relation to reported childbirth stress. Fourteen percent of participants perceived their provider as busy or distracted or said that they perceived their provider as limiting their time in the room with them, which could indicate less availability for emotional support. This measure was associated with significantly higher perceived childbirth stress, with a slightly greater magnitude of effect on childbirth stress than cesarean section delivery. While never directly assessed previously, these findings are consistent with the finding that continuous care from a known provider is associated with a more positive birth experience . Such findings have been used to advocate for continuity of care maternity models that are found in cultural contexts such as New Zealand (i.e. in which the same provider meets with the pregnant individual at all stages of prenatal and postpartum care, fostering the development of a trusting relationship), but which are absent from most other cultural contexts, including the USA . A surprising finding in our analysis was that higher education was associated with significantly more childbirth stress in our sample. Additional research is needed to understand whether actual aspects of the birth experience differed according to maternal education or rather whether the perception of those events is what differed, with more educated women feeling relatively less positive about an objectively similar experience. While our findings support the hypothesis that emotional support during labor has been shaped by natural selection in response to the challenges of human childbirth, an alternative explanation is that the sociality of humans more broadly underlies the desire for emotional support during labor. 
This alternative view finds some support in studies of three captive and one wild bonobo birth, where researchers observed that females remained in close proximity to the parturient female and demonstrated emotional engagement and supportive behaviors . If similar patterns are observed in more individuals, it could suggest that the evolutionary origins of “midwifery” may predate the specific “obligation” for support that arose from the more difficult, rotational birth process that emerged during hominin evolution. Childbirth is orchestrated by a complex, changing array of interacting hormones—a process shaped by evolution and closely tied to the mental state and emotions of the birthing individual. These physiological effects therefore offer insights into the mechanisms by which emotional support during labor shapes both parental and infant biology and survival, with implications for human evolution. Key hormones involved in this process include oxytocin, epinephrine (adrenaline), and endorphins. Oxytocin causes uterine contractions while also generating calming and analgesic effects during labor and promoting immediate bonding between parent and infant upon delivery . Epinephrine, in contrast, may slow or even reverse labor progress in some cases. This hormone plays a central role in the evolved “fight-or-flight” response and may have enhanced survival during human evolution by slowing or stopping labor in response to perceived danger—especially during early stages . Parallel responses have been observed in experimental studies of other mammals disturbed or stressed during labor, such as mares or cows . Thus, in busy and unfamiliar birth settings laboring individuals may experience elevated epinephrine levels that stall labor progress, even in the absence of direct danger . Conversely, the presence of trusted support people and care providers may promote feelings of calm and safety, thereby facilitating labor progression and reducing the risk of interventions often implemented in cases of “prolonged” labor . Finally, endorphins are endogenous opioids that provide some relief throughout labor, including by potentially altering the birthing person’s state of consciousness to help manage labor-related pain and stress . Endorphins have been linked with feelings of euphoria and reward following delivery and subsequent enhanced parent–infant bonding . The positive effects of endorphins are greatest when individuals feel secure, supported, and are not frightened . Overall, emotional support by attendants who are part of the mother’s community or with whom the mother is familiar can promote labor progression and have a calming physiologic effect . These physiological pathways may help explain the association between social support and perceived birth stress documented in the present study. Despite the strengths of this manuscript—including uniquely considering the effects of specific support person absence and the influence of the provider on childbirth stress—there are several limitations. First, the survey only asked about the perceived availability of a “provider” generally, and we were, therefore, unable to account for varying levels of support experienced by participants with more than one provider during delivery. Second, we adjusted for both cesarean section and labor and delivery complications, the latter of which is broad and therefore includes complications that vary greatly in terms of clinical significance. We did this because it was the most conservative approach in our analysis. 
However, future work may choose to evaluate more specific clinical complications in their models. It is also unclear whether or how the order of questioning about perceived stress in the questionnaire could have influenced the results. In addition, due to the use of convenience sampling, these data are not nationally representative. Online surveys may result in biased samples for various reasons, including that they are limited to individuals with internet access, who learn about the survey, who are interested in the topic, and who have the ability to complete it . Thus, white, highly educated individuals are overrepresented in the CARE study sample compared to the US birthing population as a whole . This is consistent with other online surveys conducted during the pandemic, with the shift to online data collection during lockdown resulting in biased samples and non-random attrition in many longitudinal studies [ 35–37 ]. The inability to collect data from a nationally representative sample has implications for interpreting study results. While universally beneficial, emotional support in labor is potentially even more important for individuals with elevated risks of adverse birth outcomes, including racialized minorities, individuals with public health insurance, and the uninsured . Specifically, research suggests that doulas in these contexts can enhance health literacy, social support, and quality of care received, in large part by acting as experienced advocates for birthing individuals who are more likely to experience medical mistreatment and encounter inattentive providers . The non-representative nature of the CARE dataset precludes analyses rigorously testing the hypothesized benefits of emotional support during labor in these population sub-groups. More work is therefore needed to assess the impact of emotional support across more representative and diverse samples. We found that the absence of any emotional support person during labor—and particularly missing support from a partner, doula, or healthcare provider—was associated with significantly higher perceived childbirth stress. Receiving virtual support did not attenuate these effects. These results align with the hypothesis that human evolution has specifically shaped the need for physical, in-person emotional support during labor. This interpretation is bolstered by previous research suggesting that the evolutionary mismatch of inadequate support during labor increases the risk of cesarean delivery . Given the high rates of cesarean delivery and poor maternal-infant health outcomes in many parts of the world, strategies to improve birth experiences and outcomes are urgently needed. Addressing this evolutionary mismatch by prioritizing adequate emotional support during labor could be a low-risk, low-cost intervention to enhance delivery experiences and outcomes, even outside of public health emergencies like the COVID-19 pandemic. Prioritizing this essential element of the birth process has the potential to yield substantial benefits for mothers, infants, and families. | Study | biomedical | en | 0.999997 |
PMC11697191 | Parkinson's disease (PD) is the second most prevalent neurodegenerative disease worldwide, affecting millions of individuals. It is characterized by the selective and progressive loss of dopaminergic neurons in the midbrain, leading to motor dysfunction symptoms, including bradykinesia, tremor, rigidity, and postural instability. The neuropathological hallmark of PD is the formation of Lewy bodies and Lewy neurites, primarily composed of α-synuclein (α-Syn). Despite the precise etiology of the disease remaining elusive, mounting evidence suggests that targeting α-Syn and mitochondria as a therapeutic approach to inhibit or slow down the progression of PD holds promise. 1 , 2 The protein α-Syn was initially associated with PD in 1997 upon the identification of point mutations in the SNCA (synuclein alpha) gene in familial PD cases. 3 Genome-wide association studies have further implicated SNCA as a major gene linked to sporadic PD. 4 An increasing body of evidence suggests that the accumulation and aggregation of α-Syn play a crucial role in the pathogenesis of PD by disrupting various subcellular functions, including autophagic and mitochondrial dysfunction. 5 , 6 Hence, facilitating the clearance and degradation of α-Syn may represent a promising therapeutic approach for treating PD. Research has shown that α-Syn can be degraded through the ubiquitin-proteasome system and the autophagy/lysosomal pathway. 7 Further studies have indicated that under normal conditions, α-Syn is predominantly degraded via the ubiquitin-proteasome system. However, elevated levels of α-Syn activate the autophagy/lysosomal pathway, emphasizing the critical role of autophagy in α-Syn degradation under pathological conditions. 8 Therefore, it is crucial to activate autophagy and facilitate the degradation of α-Syn in PD. Mitochondrial dysfunction is another major pathological mechanism of PD. 9 PD-associated mitochondrial dysfunction can arise from various causes, such as impaired mitophagy, compromised mitochondrial biogenesis, abnormalities in fission and fusion processes, and deficiencies in electron transport chain complexes. 10 Numerous mutations in genes associated with PD have been confirmed to be linked to mitochondrial dysfunction, including PRKN (Parkin RBR E3 ubiquitin protein ligase), PINK1 (PTEN induced kinase 1), LRRK2 (leucine-rich repeat kinase 2), and DJ-1 (encoded by PARK7). 11 The Pink1-Parkin axis is widely acknowledged as the most extensively studied mitophagy pathway. 12 Mutations in PRKN lead to inhibition of Parkin activity, thereby causing impairment in mitophagy. 13 Recent studies have also found that the crucial transcription factors PGC1-α (peroxisome proliferator-activated receptor-gamma coactivator-1 alpha) and TFAM (transcription factor A, mitochondrial), which regulate mitochondrial biogenesis, are down-regulated in PD patients, 14 , 15 providing evidence of impaired mitochondrial biogenesis in PD. Furthermore, mitochondrial dysfunction exacerbates the generation of reactive oxygen species and the release of cytochrome c, while decreasing ATP levels, ultimately leading to neuronal death. 16 Thus, promoting the clearance of damaged mitochondria and facilitating the generation of new mitochondria are crucial in PD. Transcription factor binding to IGHM enhancer 3 (TFE3), a well-established regulator of autophagy, positively modulates the autophagy/lysosomal pathway by up-regulating genes associated with autophagy. 
17 Recent reports have validated that TFE3 activation enhances autophagy, exerting neuroprotective effects in models of spinal cord injury and Alzheimer's disease. 18 , 19 , 20 Moreover, our recent findings demonstrated that TFE3 activation enhances autophagy, providing protective effects in the MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine)-induced PD model. 21 However, whether TFE3 activation can promote α-Syn degradation in dopaminergic neurons remains unclear. Additionally, an expanding body of research has revealed the significant role of TFE3 in metabolic regulation, particularly in mitochondrial metabolism. 22 A recent study indicates that the PRCC-TFE3 fusion protein enhances cell survival and proliferation through the induction of mitophagy and mitochondrial biogenesis in translocation renal cell carcinoma. 23 Nevertheless, it remains to be determined whether TFE3 can regulate mitochondrial autophagy and biogenesis in dopaminergic neurons. Therefore, in this study, we further investigated whether TFE3 exerted neuroprotective effects in PD by regulating α-Syn and mitochondria. Ten-to twelve-week-old C57BL/6 mice were purchased from the Beijing Vital River Laboratory Animal Technological Company (Beijing, China). All mice were housed in specific pathogen-free facilities, maintained on a 12-h/12-h light/dark cycle with controlled temperature and air humidity, and allowed free access to food and water. AAV-hSyn-3xFlag (AAV-Flag), AAV-TH-Prkn (AAV-Parkin), and AAV-hSyn-SNCA-3xFlag (AAV-α-Syn) were generated and packaged by BrainVTA (Wuhan, China). AAV-TH-Tfe3 (AAV-TFE3) and AAV-TH-EGFP (AAV-EGFP) were generated and packaged by OBiO Technology (Shanghai, China). For AAV viral injection, mice were anesthetized with 3% isoflurane and subsequently secured in a stereotaxic instrument (RWD Life Science Co., Shenzhen, China). Anesthesia was consistently upheld at 1.5% isoflurane administered through a nose tip integrated into the stereotactic frame. Injections were performed using a 10 μL syringe (Hamilton, Switzerland) coupled with a 33-Ga needle (Hamilton) and facilitated by a microsyringe pump (KD Scientific, Massachusetts, USA). A unilateral injection into the substantia nigra (SN) was performed at a rate of 0.1 μL/min, delivering 1 μL of either AAV-Flag (1.0 × 10 12 vg/mL), AAV-α-Syn (1.0 × 10 12 vg/mL), a mixture of AAV-α-Syn (1.0 × 10 12 vg/mL)/AAV-TFE3 (2.0 × 10 12 vg/mL), AAV-EGFP (2.0 × 10 12 vg/mL), AAV-TFE3 (2.0 × 10 12 vg/mL), and a mixture of AAV-α-Syn (1.0 × 10 12 vg/mL)/AAV-Parkin (1.0 × 10 12 vg/mL). The coordinates representing distance (mm) from the bregma were as follows: anteroposterior −2.9, mediolateral +1.3, and dorsoventral −4.35. After the injection, the needle was left in position for a minimum of 5 min to mitigate retrograde flow along the needle track. After surgery, the mice were gently warmed using a heating pad until they regained consciousness. For real-time PCR and Western blotting, mice were sacrificed by cervical dislocation. Subsequently, the brains were rapidly extracted and rinsed with ice-cold phosphate buffer saline solution (PBS). The SN and striatum (STR) tissue was promptly dissected on ice and preserved at −80 °C until further experiments. For immunofluorescence and immunohistochemistry analyses, mice were anesthetized with urethane (1.5 g/kg, intraperitoneal injection) and subjected to intracardial perfusion with 20 mL of ice-cold PBS, followed by 50 mL of cold 4% paraformaldehyde. 
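Before the tissue-processing steps that follow, it may help to collect the stereotaxic injection parameters given above in one place. The sketch below records the nigral coordinates, infusion rate, and vector titers from the text and works out the total viral genomes (vg) delivered per 1 μL injection (titer in vg/mL multiplied by volume in mL); the helper names are illustrative only and are not part of any published analysis code.

```python
# Recap of the stereotaxic injection parameters described above, plus the simple
# dose arithmetic: total vg delivered = titer (vg/mL) x injected volume (mL).
# Values are taken from the text; function and variable names are illustrative.

SN_COORDINATES_MM = {"AP": -2.9, "ML": +1.3, "DV": -4.35}  # relative to bregma
INJECTION_VOLUME_UL = 1.0
INJECTION_RATE_UL_PER_MIN = 0.1

TITERS_VG_PER_ML = {
    "AAV-Flag": 1.0e12,
    "AAV-alpha-Syn": 1.0e12,
    "AAV-TFE3": 2.0e12,
    "AAV-EGFP": 2.0e12,
    "AAV-Parkin": 1.0e12,
}

def total_vg(titer_vg_per_ml: float, volume_ul: float) -> float:
    """Total viral genomes delivered in one injection."""
    return titer_vg_per_ml * (volume_ul / 1000.0)  # convert uL to mL

def infusion_time_min(volume_ul: float, rate_ul_per_min: float) -> float:
    """Time needed to infuse the full volume at the stated rate."""
    return volume_ul / rate_ul_per_min

if __name__ == "__main__":
    for vector, titer in TITERS_VG_PER_ML.items():
        print(f"{vector}: {total_vg(titer, INJECTION_VOLUME_UL):.1e} vg per 1 uL injection")
    print(f"Infusion time: {infusion_time_min(INJECTION_VOLUME_UL, INJECTION_RATE_UL_PER_MIN):.0f} min")
```

For example, 1 μL of a 1.0 × 10^12 vg/mL stock corresponds to 1 × 10^9 vg, delivered over roughly 10 min at 0.1 μL/min.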
After perfusion, mouse brains were removed, post-fixed overnight in 4% paraformaldehyde at 4 °C, and subsequently immersed in 20% and 30% sucrose solutions. The tissues were then embedded in optimal cutting temperature and sectioned into 20 μm-thickness cryosections for immunofluorescence and 40 μm-thickness cryosections for immunohistochemistry. Cryo-coronal sections (40 μm) spanning the entire midbrain and STR were systematically collected. Initially, selected sections were permeabilized in 0.3% Triton-X in PBS at room temperature for 30 min and treated with 3% hydrogen peroxide in PBS at room temperature for an additional 30 min to quench endogenous peroxidase activity. Subsequently, the sections were blocked with a blocking buffer in PBS at room temperature for 1 h to minimize non-specific staining. Following this, sections were incubated overnight at 4 °C with anti-TH diluted in 3% bovine serum albumin in PBS. Visualization was achieved using the VECTASTAIN® Elite® ABC-HRP Kit and the ImmPACT® DAB Substrate Kit , following the manufacturer's protocol. Stained sections were mounted onto slides, coverslipped, and subsequently imaged using an optical microscope (Slide Scan System SQS-40 P, Shenzhen Shengqiang Technology, China). Free-floating 20 μm-thick sections were rinsed in PBS and then incubated in a blocking solution at room temperature for 1 h. Primary antibodies, including TH , TFE3 , α-Syn , p-α-Syn Ser129 , Lamp1 (lysosomal associated membrane protein 1; 1:500, #1D4B–C, DSHB), p62 , LC3 , Parkin , Tom20 , VDAC1 , PGC1-α , and TFAM were diluted in 1% bovine serum albumin in 1× TBST (0.3% Triton X-100) and applied to the sections overnight at 4 °C. Following three washes in PBS, the sections were incubated with secondary antibodies (Thermo Fisher, Massachusetts, USA) conjugated with Alexa 488, Alexa 555, or Alexa 647 at room temperature for 1 h. Finally, the sections were visualized using a confocal laser scanning microscope (A1, Nikon, Tokyo, Japan), and immunofluorescence results were analyzed using ImageJ software. The rotarod test, a well-established method for evaluating motor deficits of rodents in neurodegenerative disease models, was performed as previously described. 24 In brief, all mice were trained on the rotarod for two consecutive days at a consistent speed of 10 rpm for 60 s. Subsequently, on the following day, the mice were tested on a rod with a gradual acceleration from 4 to 40 rpm over a 5-min duration. The latency time to fall from the rod was recorded, with a maximum observation time of 5 min. RNA was extracted from the SN tissue using a TRIZOL kit , and its concentration was determined spectrophotometrically (NANODROP, Thermo). PrimeScript™RT Reagent Kit with gDNA Eraser (RR047A, TakaRa, Japan) was employed to synthesize cDNA, which was then amplified using KAPA SYBR® FAST qPCR Master Mix (2 X ) Kit with specific primers for real-time PCR analysis. All reactions were conducted using the Light Cycler 480 System (CFX96, Bio-Rad, California, USA). The primer sequences utilized were as follows: Tfe3 : forward, 5′-ATCTCTGTGATTGGCGTGTCT-3′, reverse, 5′-GAACCTTGAGTACCTCCCTGG-3′; Prkn : forward, 5′-TGGAAAGCTCCGAGTTCAGT-3′, reverse, 5′-CCTTGTCTGAGGTTGGGTGT-3′; Ppargc1 : forward, 5′-AAGGTCCCCAGGCAGTAGAT-3′, reverse, 5′-GGCTGTAGGGTGACCTTGAA-3′; Actb : forward, 5′-GGCTGTATTCCCCTCCATCG-3′, reverse, 5′-CCAGTTGGTAACAATGCCATGT-3′. 
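The qPCR section lists Actb primers alongside the target genes, which suggests that Actb served as the reference gene, but the quantification model itself is not stated. Purely as an assumed illustration, the sketch below implements the widely used 2^-ΔΔCt relative-quantification method; the Ct values are invented and the function is not taken from the study.

```python
# Illustrative relative quantification for the qPCR data described above, assuming
# the 2^-ddCt method with Actb as the reference gene (an assumption, not stated in
# the article). Ct values below are made up for demonstration.
import numpy as np

def delta_delta_ct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene (e.g. Tfe3 or Prkn) relative to the control
    group (e.g. AAV-EGFP-injected mice), normalized to the reference gene."""
    d_ct_sample = np.asarray(ct_target) - np.asarray(ct_ref)            # per-sample dCt
    d_ct_control = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example with hypothetical Ct values (n = 4 mice per group, as in the text):
ct_tfe3_treated = [22.1, 21.8, 22.4, 22.0]
ct_actb_treated = [17.0, 16.9, 17.2, 17.1]
ct_tfe3_control = [25.0, 24.8, 25.3, 25.1]
ct_actb_control = [17.1, 17.0, 17.2, 17.0]

fold_change = delta_delta_ct(ct_tfe3_treated, ct_actb_treated,
                             ct_tfe3_control, ct_actb_control)
print(fold_change)  # values > 1 indicate up-regulation relative to the control group
```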
Dissected ventral midbrain and STR tissues from mice were homogenized and lysed in Laemmli buffer (50 μL/mg tissue) composed of Tris·Cl (62.5 mM, pH 6.8), SDS (2%, w/v), bromophenol blue (0.005%, w/v), glycerol (10%, v/v), and DTT (8 mg/mL). The lysates were then boiled at 95–100 °C for 5 min. For each sample, 25 μg of protein was loaded onto SDS-PAGE gels and subsequently transferred to PVDF membranes (Millipore, Darmstadt, Germany). Following the transfer, membranes were blocked with 5% skim milk at room temperature for 1 h and then incubated at 4 °C overnight with the following primary antibodies: TFE3, α-Syn, p-α-Syn Ser129, Lamp1 (1:500, #1D4B–C, DSHB), LC3, p62, Tom20, PGC1-α, Parkin, TFAM, and β-actin. After washing, membranes were incubated with the appropriate horseradish peroxidase-conjugated secondary antibodies. All blots were visualized using ECL chemiluminescence, and the results were analyzed using ImageJ software. The quantification of TH-positive cells in brain sections exhibiting typical SN morphology was performed as previously described. 25 Briefly, TH-positive neurons were manually counted in every fourth section across the entire extent of the SN using bright-field microscopy and ImageJ software. To assess changes in TH-positive neuron numbers, the counts from AAV-Flag-injected mice (control) were set to 100%, and the counts from other groups were expressed as a percentage relative to this control. The optical density of striatal TH-positive fibers in the mouse dorsolateral STR was quantified using ImageJ software. The optical density of the corpus callosum served as background and was subtracted from each measurement in the STR. The optical density in the experimental group was then normalized to the value obtained from the control group. All analyses were performed blinded to the treatments. Statistical analyses were performed using GraphPad Prism version 8.0 (GraphPad Software). The data were presented as mean ± standard error of the mean. Comparisons between two groups were conducted using a two-tailed Student's t-test. Multiple group comparisons were assessed through a one-way ANOVA followed by Tukey's post-hoc tests. Statistical significance was established at a probability value of P < 0.05 for all analyses. To investigate the neuroprotective effects of TFE3 and elucidate specific mechanisms in α-Syn pathology, we utilized AAV vectors to overexpress human wild-type α-Syn in the SN of mice, establishing an AAV-α-Syn model. Initially, we evaluated whether α-Syn and TFE3 were successfully overexpressed in nigral dopaminergic neurons following stereotaxic nigral injection of either AAV-α-Syn or AAV-TFE3. Immunofluorescent staining confirmed robust expression of α-Syn and TFE3 in dopaminergic neurons on the injected side one month after AAV delivery. Figure 1 TFE3 overexpression attenuates α-Syn toxicity in a mouse model of Parkinson's disease. (A) Schematic diagram of stereotaxic AAV injection targeting the SN in mice. (B) Representative immunofluorescent staining of α-Syn and TFE3 in dopaminergic neurons of mice injected with AAV-α-Syn and AAV-TFE3. Scale bars, 500 μm. (C) Representative image of TH immunostaining in the SN and STR of mice injected with AAV-Flag, AAV-α-Syn, and AAV-α-Syn/TFE3. Scale bars, 200 μm for SN and 500 μm for STR. (D, E) Quantitative analysis of TH-positive cells in the SN (D) and TH-positive terminals in the STR (E). n = 7 mice per group.
(F) Assessment of motor function using the accelerating rotarod test, depicting latency to fall time (s) for mice in different experimental groups. n = 16 or 17 mice per group. The data were presented as mean ± standard error of the mean. Statistical significance was determined using one-way analysis of ANOVA followed by Tukey's multiple comparisons test. ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗∗ P < 0.0001. TFE3, transcription factor binding to IGHM enhancer 3; α-Syn, α-synuclein; AAV, adeno-associated virus; SN, substantia nigra; TH, tyrosine hydroxylase; STR, striatum. Figure 1 We then investigated whether TFE3 overexpression conferred neuroprotection in the AAV-α-Syn model. Naive mice were stereotaxically injected into the SN with different AAV vectors and subsequently categorized into three groups: AAV-Flag-injected group (F), AAV-α-Syn-injected group (α), and AAV-α-Syn and AAV-TFE3 co-injected group (α+T). For immunohistochemical analysis and behavioral tests, mice were sacrificed three months after virus injection. We systematically analyzed dopaminergic neuron numbers on the injected side after unilateral injection of AAV. Consistent with previous reports, 26 administration of AAV-α-Syn caused a 46.1% loss of dopaminergic neurons compared with mice injected with AAV-Flag . Co-administration of AAV-α-Syn with AAV-TFE3 prevented α-Syn-induced degeneration of dopaminergic neurons, resulting in only a 1.2% reduction in the number of dopaminergic neurons compared with AAV-Flag-injected mice . To assess whether the total preservation of dopaminergic cell bodies corresponded with the maintenance of dopaminergic terminals in the STR, we quantified the optical density of TH staining in the STR. Consistent with the results in the SN, the administration of AAV-α-Syn resulted in a 55.8% reduction in the optical density of the STR compared with mice injected with AAV-Flag . However, co-administration of AAV-α-Syn with AAV-TFE3 only led to a 3.9% decrease . To determine whether TFE3 expression not only preserved the integrity of nigral dopaminergic neurons but also maintained their function after α-Syn intoxication, rotarod tests were performed. The results revealed that the administration of AAV-α-Syn had a significantly shorter latency to fall from the accelerated rod compared with AAV-Flag-injected mice, and co-administration of AAV-α-Syn with AAV-TFE3 significantly increased retention time on the rotarod . These findings demonstrate that TFE3 overexpression reduces neurodegeneration and associated motor function deficits in the AAV-α-Syn model of PD. Next, we explored the specific mechanisms underlying TFE3's neuroprotective effects in the AAV-α-Syn model. Autophagic defects can enhance the accumulation of α-Syn, which in turn further inhibits autophagy. 27 Therefore, restoring α-Syn-mediated autophagic dysfunction is especially crucial. Consistent with our previous findings, 21 our new results indicate a notable up-regulation of the TFE3 mRNA and protein levels one month after AAV-TFE3 injection , concomitant with the induction of lysosomal marker Lamp1, autophagy receptor p62, and autophagosome marker LC3 in the SN , confirming an enhancement of autophagic flux by overexpression of TFE3. Figure 2 TFE3 overexpression rescues autophagy defects of dopaminergic neurons in the AAV-α-Syn model. (A) Quantitative reverse-transcription PCR analysis of Tfe3 mRNA in ventral midbrain homogenates from mice injected with AAV-EGFP and AAV-TFE3. n = 4 mice per group. 
(B, D) Representative western blots for TFE3, Lamp1, p62, and LC3 in ventral midbrain homogenates from mice injected with AAV-EGFP and AAV-TFE3. (C, E–G) Quantification of Western blot bands corresponding to TFE3 (C), Lamp1 (E), p62 (F), and LC3 (G) normalized to β-actin. n = 6 mice per group. The data were presented as mean ± standard error of the mean. Statistical significance was determined using a two-tailed student's t -test. ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗∗ P < 0.0001. (H, J, L) Representative immunofluorescent staining of Lamp1 (H), p62 (J), and LC3 (L) in dopaminergic neurons of mice injected with AAV-Flag, AAV-α-Syn, and AAV-α-Syn/TFE3. n = 4 or 5 mice per group. Scale bars, 10 μm. (I, K, M) Quantitative analysis of the fluorescence results shown in (H), (J), and (L). The data were presented as mean ± standard error of the mean. Statistical significance was determined using one-way analysis of ANOVA followed by Tukey's multiple comparisons test. ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗ P < 0.001, ∗∗∗∗ P < 0.0001. TFE3, transcription factor binding to IGHM enhancer 3; α-Syn, α-synuclein; AAV, adeno-associated virus; Lamp1, lysosomal associated membrane protein 1; LC3, microtubule-associated protein light chain 3. Figure 2 Subsequently, we investigated whether TFE3 overexpression could ameliorate autophagic dysfunction in dopaminergic neurons within the AAV-α-Syn model. Our results indicated that the overexpression of α-Syn for three months significantly down-regulated Lamp1 in dopaminergic neurons compared with AAV-Flag injected mice , suggesting a reduction in lysosomal abundance. Co-injection of AAV-α-Syn with AAV-TFE3 completely restored Lamp1 levels, indicating that TFE3 overexpression reverse lysosomal depletion in the AAV-α-Syn model . Concurrently, α-Syn overexpression significantly increased p62 levels and resulted in the formation of numerous p62-positive puncta in dopaminergic neurons compared with the AAV-Flag injected group , indicating the accumulation of autophagic substrates. Remarkably, co-administration of AAV-α-Syn with AAV-TFE3 also induced p62 up-regulation but significantly reduced the number of p62-positive puncta . Furthermore, α-Syn overexpression resulted in a down-regulation of LC3, indicative of a decrease in the number of autophagosomes . Co-administration of AAV-α-Syn with AAV-TFE3 reversed the LC3 down-regulation in the AAV-α-Syn model, restoring the formation of autophagosomes . Taken together, these findings collectively demonstrate that TFE3 overexpression reverses α-Syn-induced autophagic dysfunction in dopaminergic neurons. Activating autophagy has been demonstrated to promote the degradation of α-Syn. For instance, AAV-mediated overexpression of TFEB, BECN1 (beclin 1), ATG7 (autophagy-related 7), and other factors has been shown to facilitate α-Syn degradation, suggesting therapeutic implications for modulating autophagy in α-Syn-related pathologies. 28 , 29 , 30 Our results have already demonstrated that TFE3 overexpression restores autophagy in the AAV-α-Syn model. Therefore, we sought to explore whether activating TFE3 could promote the degradation of α-Syn. Immunofluorescence and Western blot analyses were performed three months after viral injection. The AAV-Flag group showed no α-Syn staining, while AAV-α-Syn-injected mice displayed pronounced α-Syn staining in the SN . The results under high magnification reveal a strong expression of α-Syn in dopaminergic neurons following AAV-α-Syn injection . 
However, co-administration of AAV-α-Syn with AAV-TFE3 reduced α-Syn protein levels in dopaminergic neurons . This result was further confirmed by Western blot analysis . These results confirm that TFE3 overexpression promotes α-Syn degradation in the AAV-α-Syn model. Figure 3 TFE3 overexpression promotes α-Syn degradation and inhibits α-Syn propagation in the AAV-α-Syn model. (A, C) Immunofluorescence (A) and Western blot (C) analysis for α-Syn expression in dopaminergic neurons of the SN or ventral midbrain homogenates from mice injected with AAV-Flag, AAV-α-Syn, and AAV-α-Syn/TFE3. n = 4 mice per group. Scale bars, 100 μm for low magnification and 10 μm for high magnification. (B) Quantitative analysis of the fluorescence results shown in (A). (D) Quantification of Western blot bands corresponding to α-Syn normalized to β-actin. n = 4 mice per group. (E, G) Immunofluorescence (E) and Western blot (G) analysis for p-α-Syn expression in dopaminergic neurons of the SN or ventral midbrain homogenates from mice injected with AAV-Flag, AAV-α-Syn, and AAV-α-Syn/TFE3. n = 4 mice per group. Scale bars, 100 μm for low magnification and 10 μm for high magnification. (F) Quantitative analysis of the fluorescence results shown in (E). (H) Quantification of Western blot bands corresponding to p-α-Syn normalized to β-actin. n = 4 mice per group. (I, K) Immunofluorescence (I) and Western blot (K) analysis for α-Syn expression in the STR from mice injected with AAV-Flag, AAV-α-Syn, and AAV-α-Syn/TFE3. n = 4 mice per group. Scale bars, 500 μm. ( J ) Quantitative analysis of the fluorescence results shown in (I). (L) Quantification of Western blot bands corresponding to α-Syn normalized to β-actin. n = 4 mice per group. The data were presented as mean ± standard error of the mean. Statistical significance was determined using one-way analysis of ANOVA followed by Tukey's multiple comparisons test. ∗∗∗ P < 0.001, ∗∗∗∗ P < 0.0001. TFE3, transcription factor binding to IGHM enhancer 3; α-Syn, α-synuclein; AAV, adeno-associated virus; SN, substantia nigra; STR, striatum. Figure 3 Additionally, the study shows around 90% of the α-Syn found in Lewy bodies undergoes phosphorylation at serine 129. In contrast, the normal brain exhibits phosphorylation at this residue in only 4% or less of the total α-Syn. 31 Thus, phosphorylation of α-Syn at the serine 129 residue (p-α-Syn) correlates with pathological developments in PD and promotes fibril formation and insoluble aggregation. 32 Reducing such aggregation has long been proposed as a therapeutic strategy for PD. In our results, the expression pattern of p-α-Syn was not entirely consistent with that of α-Syn, revealing a predominant co-staining with dopaminergic neurons in AAV-α-Syn-injected mice . This suggests that α-Syn is more prone to aggregate in dopaminergic neurons and exert neurotoxicity. However, co-injection of AAV-α-Syn with AAV-TFE3 nearly eliminated p-α-Syn staining in dopaminergic neurons . Similarly, this result was further validated by Western blot analysis . These results indicate that TFE3 overexpression reduces α-Syn aggregation in the AAV-α-Syn model. α-Syn is known to act as a prion-like protein and exhibits well-established spreading characteristics. 33 Therefore, we investigated whether TFE3 overexpression could inhibit α-Syn propagation. Our results showed that α-Syn expression was detected in the STR and cortex on the side ipsilateral to the viral injection , confirming its propagation. 
Additionally, co-injection of AAV-α-Syn with AAV-TFE3 significantly reduced α-Syn levels in both the STR and cortex . Western blot analysis of the STR further confirmed these findings . These results confirm that TFE3 overexpression also inhibits α-Syn propagation. Autophagic dysfunction can impede the clearance of damaged mitochondria, ultimately leading to cell death. Research has shown that overexpression of A53T human α-Syn in transgenic mice induces extensive abnormalities in mitochondrial macroautophagy. Subsequently, genetic deletion of either Parkin or PINK1 in mice overexpressing A53T α-Syn further significantly exacerbates mitochondrial inclusions and reduces mitochondrial mass, 34 providing evidence that PINK1/Parkin-mediated mitophagy is essential for the effective autophagic elimination of impaired mitochondria in dopaminergic neurons. A recent study has also reported that the PRCC-TFE3 fusion mediates Parkin-dependent mitophagy in translocation renal cell carcinoma. 23 The overexpression of Parkin has been demonstrated to promote mitophagy in dopaminergic neurons. 35 Therefore, to address whether TFE3 regulated mitochondrial autophagy in dopaminergic neurons, we first examined whether TFE3 regulated Parkin. Immunofluorescence results showed that AAV-mediated TFE3 overexpression significantly increased Parkin protein levels in dopaminergic neurons , and this finding was further confirmed by Western blot analysis . Furthermore, reverse-transcription PCR results revealed that TFE3 overexpression up-regulated Prkn mRNA levels , suggesting that TFE3 can trans-regulate Parkin in dopaminergic neurons. Further investigation in the AAV-α-Syn model demonstrates that the overexpression of α-Syn results in a reduction of Parkin protein levels . Notably, co-administration of AAV-α-Syn with AAV-TFE3 significantly restores the Parkin protein levels , implying that TFE3 overexpression can enhance mitophagy. Figure 4 TFE3 overexpression transcriptionally up-regulates Parkin, promoting the removal of accumulated mitochondria in the AAV-α-Syn model. (A, C) Immunofluorescence (A) and Western blot (C) analysis for Parkin in dopaminergic neurons of the SN or ventral midbrain homogenates from mice injected with AAV-EGFP and AAV-TFE3. Immunofluorescence: n = 6 mice per group. Scale bars, 50 μm. (B) Quantitative analysis of the fluorescence results shown in (A). (D) Quantification of Western blot bands corresponding to Parkin normalized to β-actin. n = 6 mice per group. (E) Quantitative reverse-transcription PCR analysis of Prkn mRNA in ventral midbrain from mice injected with AAV-EGFP and AAV-TFE3. n = 4 mice per group. The data were presented as mean ± standard error of the mean. Statistical significance was determined using a two-tailed student's t -test. ∗ P < 0.05, ∗∗∗∗ P < 0.0001. (F) Western blot analysis for Parkin expression in ventral midbrain homogenates from mice injected with AAV-Flag, AAV-α-Syn, and AAV-α-Syn/TFE3. (G) Quantification of Western blot bands corresponding to Parkin normalized to β-actin. n = 4 mice per group. The data were presented as mean ± standard error of the mean. Statistical significance was determined using one-way analysis of ANOVA followed by Tukey's multiple comparisons test. ∗∗∗ P < 0.001, ∗∗∗∗ P < 0.0001. (H, I) Immunofluorescence analysis for Tom20 (H) and VDAC1 (I) in dopaminergic neurons of the SN from mice injected with AAV-Flag, AAV-α-Syn, and AAV-α-Syn/TFE3. n = 3–5 mice per group. Scale bars, 10 μm. 
(J, K) Immunofluorescence analysis for Tom20 (J) and VDAC1 (K) in dopaminergic neurons of the SN from mice injected with AAV-Flag, AAV-α-Syn, and AAV-α-Syn/Parkin. n = 4 mice per group. Scale bars, 10 μm. TFE3, transcription factor binding to IGHM enhancer 3; α-Syn, α-synuclein; AAV, adeno-associated virus; SN, substantia nigra; Parkin, Parkin RBR E3 ubiquitin protein ligase; Tom20, outer mitochondrial membrane protein; VDAC1, voltage-dependent anion channel 1. Figure 4 Subsequently, we observed the specific impact of TFE3 on mitophagy in the AAV-α-Syn model. Tom20 is often used as a marker for mitochondria. Immunofluorescence analysis revealed significantly increased Tom20 inclusions in dopaminergic neurons of mice overexpressing α-Syn, indicating the accumulation of damaged mitochondria. However, co-administration of AAV-α-Syn with AAV-TFE3 resulted in the complete elimination of Tom20 inclusions, demonstrating that activating TFE3 could promote the clearance of accumulated mitochondria. Similar results were further validated by VDAC1, an outer membrane protein of mitochondria. To further confirm that Parkin could mediate the clearance of mitochondrial inclusions in the α-Syn overexpression model, we co-injected AAV-α-Syn with AAV-Parkin (α+P) into the SN. The results showed that Parkin overexpression also significantly reduced Tom20 and VDAC1 inclusions. Taken together, these findings suggest that TFE3 overexpression promotes the clearance of accumulated mitochondria by transcriptionally up-regulating Parkin. Recent research suggests that α-Syn not only directly damages mitochondria and impedes their degradation but also suppresses mitochondrial biogenesis in certain cellular models. 36, 37 Simultaneously, a recent report demonstrates that PRCC-TFE3 fusion can regulate mitochondrial biogenesis in translocation renal cell carcinoma. 23 Moreover, previous research has shown that in muscle, TFE3 directly regulates PGC1-α, 38 a transcriptional coactivator and master regulator of mitochondrial biogenesis. 39 The activation of PGC1-α has also been demonstrated to promote mitochondrial biogenesis in PD models, thereby exerting neuroprotective effects. 40 Therefore, we first examined whether TFE3 could regulate PGC1-α in dopaminergic neurons. The immunofluorescence results confirm a significant up-regulation of PGC1-α protein levels in dopaminergic neurons of mice overexpressing TFE3 compared with those injected with AAV-EGFP. This result was further validated by Western blot analysis. Additionally, reverse-transcription PCR results demonstrated that TFE3 overexpression up-regulated Ppargc1a mRNA levels, suggesting that TFE3 transcriptionally up-regulates PGC1-α in dopaminergic neurons. We then examined TFAM, a transcription factor for mitochondrial DNA that is crucial for the maintenance of mitochondrial DNA. 41 Both immunofluorescence and Western blot results confirmed that overexpression of TFE3 significantly promoted the up-regulation of TFAM in dopaminergic neurons. Concurrently, we also observed a significant increase in Tom20 expression in dopaminergic neurons overexpressing TFE3. These results demonstrate that activation of TFE3 could enhance mitochondrial biogenesis in dopaminergic neurons. Recent research has indicated impaired mitochondrial biogenesis in both PD patients and PD models. 42 Next, we observed the impact of TFE3 on mitochondrial biogenesis in the AAV-α-Syn model.
Our results revealed that overexpression of α-Syn led to down-regulation of PGC1-α and TFAM, indicating impaired mitochondrial biogenesis . However, co-administration of AAV-α-Syn with AAV-TFE3 significantly increased the expression of PGC1-α, TFAM, and Tom20 , demonstrating that activation of TFE3 could promote mitochondrial biogenesis in the AAV-α-Syn model. Figure 5 TFE3 overexpression reversed the impairment of mitochondrial biogenesis in the AAV-α-Syn model. (A, C) Immunofluorescence (A) and Western blot (C) analysis for PGC1-α in dopaminergic neurons of the SN or ventral midbrain homogenates from mice injected with AAV-EGFP and AAV-TFE3. Immunofluorescence: n = 6 mice per group. Scale bars, 50 μm. (B) Quantitative analysis of the fluorescence results shown in (A). (D) Quantification of Western blot bands corresponding to PGC1-α normalized to β-actin. n = 6 mice per group. (E) Quantitative reverse-transcription PCR analysis of Ppargc1a mRNA in ventral midbrain from mice injected with AAV-EGFP and AAV-TFE3. n = 5 or 6 mice per group. (F, H) Immunofluorescence (F) and Western blot (H) analysis for TFAM in dopaminergic neurons of the SN or ventral midbrain homogenates from mice injected with AAV-EGFP and AAV-TFE3. Immunofluorescence: n = 6 mice per group. Scale bars, 50 μm. (G) Quantitative analysis of the fluorescence results shown in (F). (I) Quantification of Western blot bands corresponding to TFAM normalized to β-actin. n = 6 mice per group. (J, L) Immunofluorescence (J) and Western blot (L) analysis for Tom20 in dopaminergic neurons of the SN or ventral midbrain homogenates from mice injected with AAV-EGFP and AAV-TFE3. Immunofluorescence: n = 6 mice per group. Scale bars, 50 μm. (K) Quantitative analysis of the fluorescence results shown in (J). (M) Quantification of Western blot bands corresponding to Tom20 normalized to β-actin. n = 6 mice per group. The data were presented as mean ± standard error of the mean. Statistical significance was determined using a two-tailed student's t -test. ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗ P < 0.001, ∗∗∗∗ P < 0.0001. (N) Western blot analysis for PGC1-α, Tom20, and TFAM expression in ventral midbrain homogenates from mice injected with AAV-Flag, AAV-α-Syn, and AAV-α-Syn/TFE3. (O – Q) Quantification of Western blot bands corresponding to PGC1-α (O), Tom20 (P), and TFAM (Q) normalized to β-actin. n = 4 mice per group. The data were presented as mean ± standard error of the mean. Statistical significance was determined using one-way analysis of ANOVA followed by Tukey's multiple comparisons test. ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗∗ P < 0.0001; ns, not significant. TFE3, transcription factor binding to IGHM enhancer 3; α-Syn, α-synuclein; AAV, adeno-associated virus; SN, substantia nigra; Tom20, outer mitochondrial membrane protein; PGC1-α, peroxisome proliferator-activated receptor-gamma coactivator-1 alpha; TFAM, transcription factor A. Figure 5 α-Syn plays a central role in PD pathology. Consequently, employing the AAV virus to express α-Syn in rodents has become a popular tool for modeling PD. This model proves valuable in exploring potential therapeutics targeting α-Syn and its associated pathology. 26 Autophagy is crucial for maintaining the homeostasis and survival of dopaminergic neurons. 43 In recent years, autophagy impairment has been well-established in PD. 44 In this study, our results also demonstrate that overexpression of α-Syn leads to autophagic dysfunction in dopaminergic neurons. 
However, overexpression of TFE3 fully restores the autophagy of dopaminergic neurons. Our recent work has confirmed that knocking down TFE3 in dopaminergic neurons causes autophagy dysfunction, indicating that TFE3 is crucial for maintaining autophagy within these neurons. 21 Additionally, a previous study has shown that α-Syn can interact with TFEB, sequestering it in the cytoplasm and inhibiting its activity. 28 As TFE3 and TFEB belong to the same family with structural similarities, it is plausible that the partial inhibition of autophagy by α-Syn may originate from the suppression of the transcriptional activity of both TFE3 and TFEB. Notably, TFE3 overexpression also increased p62 protein levels, which is often associated with impaired autophagic degradation. 45 TFE3 has been shown to transcriptionally regulate autophagy-lysosome-related genes, including p62. 20 Our previous work and other studies confirm that TFE3 overexpression leads to elevated p62 protein levels. 21 , 46 Additionally, TFE3 overexpression typically up-regulates other autophagy-related proteins, such as LC3, LAMP1, and cathepsin D, indicating an overall increase in autophagy flux. Consequently, in the α-Syn model, TFE3 overexpression raises p62 protein levels while reducing p62 puncta, thereby enhancing the degradation of autophagic substrates. An increasing body of evidence indicates that enhancing autophagy can facilitate the clearance of α-Syn. 47 Our results demonstrate that activating TFE3 significantly reduces α-Syn protein levels in the AAV-α-Syn model. The degradation of autophagic substrates requires prior ubiquitination, and α-Syn has been confirmed as a substrate for the E3 ligase Parkin 48 . Moreover, activation of Parkin has been shown to enhance the autophagic degradation of α-Syn. 49 Notably, we have also observed that TFE3 overexpression promotes the up-regulation of Parkin, suggesting that TFE3 may facilitate the degradation of α-Syn through the Parkin-mediated autophagic pathway. Additionally, our findings indicate that TFE3 significantly reduces the phosphorylation levels of α-Syn, which implies a decrease in α-Syn aggregation in the AAV-α-Syn model. Furthermore, TFE3 overexpression appears to eliminate phosphorylated α-Syn compared with total α-Syn, suggesting that TFE3-mediated autophagy may more effectively promote the degradation of aggregated α-Syn. Moreover, we observed that TFE3 overexpression inhibits the spread of α-Syn to other brain regions, such as the STR and cortex. Research has shown that cellular stressors like serum deprivation, proteasomal or lysosomal inhibition, and hydrogen peroxide stimulate the vesicular translocation and subsequent release of α-Syn. 50 In addition to regulating the autophagy/lysosomal pathway, TFE3 has been confirmed to up-regulate anti-oxidation proteins, including SOD1 (superoxide dismutase 1) and HO-1 (heme oxygenase-1). 51 Therefore, TFE3 overexpression may inhibit α-Syn propagation by influencing lysosomal function and oxidative stress. Mitochondrial dysfunction has been confirmed in PD patients. 52 Compromised mitophagy in PD impedes the effective elimination of impaired mitochondria, thereby exacerbating the neurotoxicity linked to mitochondrial dysfunction. 53 Recent investigations have also validated the neuroprotective effects of Celastrol and Morin in PD models by activating mitophagy. 
54, 55 Our results indicate that TFE3 can transcriptionally up-regulate Parkin, which can ubiquitinate mitochondrial surface substrates for degradation by autophagy. 56 Additionally, our findings show that overexpression of TFE3 reversed the down-regulation of Parkin in the AAV-α-Syn model and eliminated the accumulation of mitochondria. Furthermore, overexpression of Parkin in the AAV-α-Syn model also promotes the clearance of mitochondrial inclusions. These results suggest that TFE3 may enhance mitophagy by up-regulating Parkin, although this mechanism is probably not limited to Parkin-mediated mitophagy alone. TFE3 may, on one hand, up-regulate Parkin for the ubiquitination of damaged mitochondria and, on the other hand, enhance the autophagy/lysosomal pathway, thereby synergistically promoting the clearance of damaged mitochondria and increasing the overall efficiency of mitophagy. Clearing damaged mitochondria necessitates the generation of new mitochondria to sustain energy supply. Our results demonstrate that overexpression of TFE3 also enhances mitochondrial biogenesis. Specifically, our findings reveal that TFE3 overexpression transcriptionally up-regulates PGC1-α, which is recognized as a master regulator of mitochondrial biogenesis. 39 Recent studies have shown that Parkin can promote the degradation of the PGC1-α inhibitor ZNF746 (zinc finger protein 746), also known as PARIS (Parkin-interacting substrate), in cellular models, thereby up-regulating the expression of PGC1-α. 57, 58 Since we also found that TFE3 overexpression up-regulates Parkin, the increase in PGC1-α may partly result from the Parkin/ZNF746 (PARIS)/PGC1-α axis. Additionally, we observed the up-regulation of TFAM and Tom20 upon TFE3 overexpression, further supporting the increase in mitochondrial biogenesis. Enhancing mitochondrial biogenesis has been considered a focal point in the development of novel therapeutic approaches for treating PD. 59 Recent research has also demonstrated that promoting mitochondrial biogenesis exerts neuroprotective effects in PD models. 60, 61 Furthermore, our study reveals that overexpression of α-Syn leads to decreased levels of PGC1-α and TFAM, with no significant change in Tom20 levels, likely due to impaired mitophagy and mitochondrial accumulation. In contrast, overexpression of TFE3 significantly increases PGC1-α, TFAM, and Tom20 in the AAV-α-Syn model, restoring mitochondrial biogenesis and preserving mitochondrial function. These findings deepen our understanding of TFE3's role in regulating mitochondrial homeostasis. Consistent with findings from the MPTP model, 21 we observed that TFE3 overexpression nearly fully protected dopaminergic neurons in the AAV-α-Syn model. This suggests that the neuroprotective effects exerted by TFE3 in PD may be multifaceted. In this study, we report that TFE3 exerts neuroprotective effects by regulating autophagy to facilitate the degradation of aggregated α-Syn and damaged mitochondria, as well as by promoting mitochondrial biogenesis. In a spinal cord injury model, TFE3 has been reported to inhibit oxidative stress by transcriptionally regulating anti-oxidant proteins and to suppress pyroptosis and necroptosis or alleviate endoplasmic reticulum stress through the augmentation of autophagy. 19, 20 Recent investigations have also shown that TFE3 enhances autophagy, promoting the degradation of NLRP3 (NLR family pyrin domain containing 3), thereby inhibiting neuroinflammation in Alzheimer's disease models.
18 Therefore, further research is needed to explore additional neuroprotective mechanisms of TFE3 in PD. These findings contribute to our understanding of the diverse roles of TFE3 in neuroprotection. While TFE3 is more abundant in the central nervous system than TFEB, 62, 63 the literature on TFE3 in the field of neuroscience remains limited. TFEB has been extensively implicated in various neurodegenerative diseases, such as Alzheimer's disease and PD, leading to the development of numerous agonists aimed at activating TFEB to exert neuroprotective effects. 64, 65 Our current study provides additional support for the neuroprotective role of TFE3 in PD. As TFE3 belongs to the same family as TFEB, sharing many structural and functional similarities, the higher abundance of TFE3 suggests that the exploration of TFE3 agonists or dual-target agonists for TFE3 and TFEB may offer a more promising therapeutic avenue. In conclusion, our findings elucidate the potential neuroprotective effects of TFE3 in PD. Our results show that TFE3 overexpression enhances autophagy, promoting the degradation of α-Syn and thereby reducing α-Syn aggregation in the AAV-α-Syn model. Additionally, we present the first evidence that TFE3 regulates the mitochondrial metabolism of dopaminergic neurons in the AAV-α-Syn model by up-regulating Parkin to promote mitochondrial autophagy and increasing levels of PGC1-α and TFAM to enhance mitochondrial biogenesis. These results not only expand the scope of TFE3 applications in α-synucleinopathy-based PD models but also further underscore TFE3 as a promising therapeutic target for PD. Figure 6 A schematic illustration depicting the presumed mechanism of TFE3 in Parkinson's disease. Increased α-Syn in Parkinson's disease leads to dysfunction in autophagy and mitochondrial impairment, exacerbating the accumulation of α-Syn and damaged mitochondria, ultimately resulting in neuronal death. Conversely, activation of TFE3 enhances autophagic flux and up-regulates Parkin and PGC1-α, thereby facilitating the clearance of aggregated α-Syn and accumulated mitochondria, as well as promoting mitochondrial biogenesis, ultimately fostering neuronal survival. TFE3, transcription factor binding to IGHM enhancer 3; α-Syn, α-synuclein; PGC1-α, peroxisome proliferator-activated receptor-gamma coactivator-1 alpha. Figure 6 All experimental procedures were conducted in accordance with the Chongqing Science and Technology Commission guidelines and approved by the Animal Ethics Committee of the Children's Hospital of Chongqing Medical University. Xin He: Writing – original draft, Project administration, Methodology, Investigation, Funding acquisition, Conceptualization. Mulan Chen: Validation, Methodology, Investigation, Formal analysis, Data curation. Yepeng Fan: Validation, Methodology. Bin Wu: Visualization, Data curation. Zhifang Dong: Writing – review & editing, Supervision, Funding acquisition, Conceptualization. Zhifang Dong is an editorial board member of Genes & Diseases and was not involved in the editorial review or the decision to publish this article. All authors declare that there are no competing interests. This work was supported by the National Natural Science Foundation of China, the CQMU Program for Youth Innovation in Future Medicine (Chongqing, China), and the Natural Science Foundation of Chongqing, China.
The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request. | Review | biomedical | en | 0.999997 |
PMC11697192 | Maintenance of genomic integrity is vital for both evolutionary fitness and individual health. Cells have evolved protective DNA mechanisms, while it is prone to mutations from internal and external insults. On the one hand, mutations act in recombination and DNA repair to preserve genome diversity and integrity; on the other hand, mutations are associated with aging, tumors, immune disease, etc . 1 , 2 For most, if not all, of these mechanisms, nucleases are required to cleave DNA phosphodiester bonds in a controlled and accurate manner. A wide variety of nucleases have been discovered and characterized based on their subunit constitution, cofactor demands, and DNA cleavage modes that can be divided into exonucleases and endonucleases, which participate in multiple pathways such as DNA replication, mismatch repair (MMR), and DNA degradation . 3 , 4 Figure 1 Functions of DNA nucleases. (A) DNA endonuclease recognition specific sites. (B) Mismatch repair. (C) DNA replication. Fig. 1 DNA exonucleases contain 3′–5′ or 5′–3′ exonuclease activities and flap endonuclease activities in maintaining genome stability that remove a deoxyribonucleoside monophosphate from the end of one strand of DNA. DNA endonucleases are enzymes that can hydrolyze the phosphate diester bond in the molecular chain to generate oligonucleotides in the nucleic acid hydrolase, corresponding to the exonucleases. 5 A critical difference between exonucleases and endonucleases is that endonucleases can be combined with associated DNA substrates, whereas most exonucleases bind in a non-sequence-specific manner. 6 Their functions are involved in removing mismatched, modified, fragmented, and normal base-paired nucleotides, which is crucial in the subsequent steps of DNA synthesis. Double-strand breaks (DSBs) in DNA are detrimental to genome integrity and cell survival. 7 Commonly, non-homologous end joining (NHEJ) or homologous recombination (HR) are the two main repair methods for DSBs. 8 HR is more accurate than NHEJ because a homologous DNA sequence, usually the identical sister chromatid, is utilized as a repair template in HR. 9 HR and NHEJ depend on the nature of the DNA ends and cell cycle phase. The nucleolytic degradation of DNA ends, defined as DNA end resection, plays a pivotal role in DSB repair, which can serve as the substrate for the HR machinery . Figure 2 Functions of DNA nucleases in DNA repair. Fig. 2 In addition to their role in DNA damage repair, endonucleases and exonucleases also play important roles in immunity. The cyclic GMP-AMP synthase (cGAS)-stimulator of interferon genes (STING) pathway serves as a key pathway for innate immunity and can be activated by sensing exogenous or endogenous DNA. 10 However, when endo/exonuclease activity is dysregulated leading to massive accumulation or excessive cleavage of double-stranded DNA (dsDNA), the pathway can also become dysregulated and cause certain diseases. In some autoimmune diseases, there is often a decrease in exonuclease or endonuclease activity. Examples include rheumatoid arthritis, Aicardi-Goutières syndrome, familial chilblain-like lupus, etc ., which will result in dsDNA accumulation. 11 , 12 , 13 The dsDNA will be recognized by cGAS and cyclic GMP-AMP synthesis. Cyclic GMP-AMP acts as a second messenger to activate the innate immune response via STING. 
14 However, in the treatment of some tumors, the rational use of inhibitors of endo/exonuclease cleavage activity can cause genomic instability in tumor cells, thereby activating the cGAS-STING pathway and enhancing the efficacy of immunotherapy. Besides, exonucleases and endonucleases regulate the growth and development of immune cells. Exonucleases such as MRE11 regulate the lifespan of T cells by maintaining telomere length, while the endonuclease CtIP is essential for the development and proliferation of B cells. 11, 15 This review summarizes the functions of several important DNA exonucleases and endonucleases and discusses their roles in the DNA damage response and immunity. Exonucleases are evolutionarily highly conserved and may be divided into groups based on sequence and function. The best-characterized exonucleases, together with their origins, mechanisms, and disease associations, are summarized in Table 1.
Table 1 Exonuclease functions and associated diseases.
Name | Polarity | DNA | Function | Disease | Clinical feature | Ref.
MRE11 | 3′–5′ | DS | Recombination | Ataxia-telangiectasia-like syndrome; Nijmegen breakage syndrome | Progressive cerebellar degeneration; increased cancer incidence, cell cycle checkpoint defects, and ionizing radiation sensitivity | 18, 113, 114
EXO1 | 5′–3′ | DS | Repair | Tumor; immune deficiency | Tumor suppression | 115
WRN | 3′–5′ | DS | Repair; telomeres | Werner syndrome | Premature aging | 116, 117
TREX1 | 3′–5′ | SS/DS | Removal/proofreading? | AGS syndrome | Upregulated type I interferon | 64, 118
DS: double-stranded DNA; SS: single-stranded DNA; AGS syndrome: Aicardi-Goutières syndrome.
MRE11, which is responsible for the recognition, repair, and signaling of DSBs in eukaryotes, was first identified as a meiotic recombination-related gene in Saccharomyces cerevisiae in 1993, and its structure was first resolved at atomic resolution in P. furiosus in 2001. 16, 17, 18 It is a core component of the MRE11/RAD50/NBS1 (MRN) complex and exhibits dual 3′–5′ exonuclease and endonuclease activity. MRE11 can detect DNA DSBs and activate ataxia telangiectasia-mutated kinase while initiating HR repair. 19, 20 MRE11 and Sae2 cleave DSB ends to generate an intermediate, which is then cleaved further by EXO1 to form a mobile single-stranded DNA (ssDNA) substrate for Rad51. 21 To investigate the mechanism of HR promotion by MRE11, Shibata's team designed MRE11 endonuclease and exonuclease inhibitors. Using the inhibitors, it was discovered that both MRE11 endonuclease and exonuclease activities are required for HR, with endonuclease activity initiating cleavage and promoting HR repair, and exonuclease activity acting downstream. 22 MRE11 activity is thus essential for DNA damage repair and for the maintenance of genomic stability. The MRE11 C-terminus contains two DNA-binding domains and a glycine-arginine-rich structural domain involved in the regulation of its endonuclease and exonuclease activities. PIH1D1, a subunit of the R2TP complex of the heat shock protein 90 co-chaperone, binds to the C-terminus of MRE11 to regulate its stability. Therefore, when the MRE11 C-terminus is mutated, the instability of MRE11 will lead to a decrease in the level of MRN complexes. 23 The mutation of MRE11 is associated with immune diseases such as ataxia-telangiectasia-like disease and cancers.
24 , 25 , 26 The N-terminus of MRE11 contains a nuclease structural domain essential for HR, and N-terminal mutations lead to structural and functional defects in MRE11 and have effects in the MRE11/NBS1/RAD50 complex. Patients with ataxia telangiectasia-like disease and MRE11 mutations show cerebellar ataxia. Their cells are unable to activate ataxia telangiectasia-mutated kinase and can therefore prevent DNA damage repair, leading to chromosomal mutations, increased susceptibility to radiotherapy, and immune checkpoint defects. 27 , 28 , 29 MRE11 can directly or indirectly participate in the activation of immune pathways or regulate DNA damage response. In the cytoplasm, MRE11 acts as a DNA damage receptor that directly recognizes dsDNA, facilitates its translocation into the Golgi by interacting with the stimulator of STING, and directly promotes the activation of the cGAS-STING innate immune pathway. 30 In addition, UFMylation modification of MRE11 could promote DSB repair by enhancing the phosphorylation and activation of ataxia telangiectasia-mutated kinase. This helps to maintain normal cellular mitosis and chromosome stability. 31 Meanwhile, in a study on zebrafish, it was shown that UFMylated deletion of MRE11 could shorten telomere length and accelerate aging in zebrafish. This interesting finding provides us with clues to further explore MRE11. 32 MRE11 is not only present in the nucleus, and cytoplasm, but is also localized in the mitochondria. 33 As a protector of mitochondria, MRE11 ensures mitochondrial energy production and blocks caspase-1 activation to inhibit mitochondrial stress-induced inflammatory vesicle activation. At the same time, MRE11 reduces T cell pyroptosis and regulates T cell lifespan. 11 Rheumatoid arthritis patients have lower levels of MRE11, which shortens T-cell lifespan; this condition can be reversed by overexpressing MRE11. This may be related to the protective effect of MRE11 on telomeres. 34 However, in Fanconi anemia patients, due to mutations in the Fanconi anemia proteins that protect nascent mitochondrial DNA, MRE11, which acts as a mitochondrial protector, will over-cleave nascent mitochondrial DNA and release it into the cytoplasm. This will activate the cGAS-STING pathway via signal transducer and activator of transcription 1. 35 , 36 Therefore, how to utilize MRE11 to protect mitochondrial DNA is particularly important in various immune diseases. In normal cells, excessive cleavage of MRE11 activates immune pathways and causes autoimmune diseases. However, in cancer cells subjected to radiotherapy, MRE11 is recruited to the damage site to cleave damaged dsDNA to produce ssDNA for HR repair. p97, a hexameric ATPase of the AAA family, can bind to and remove MRE11, preventing its over-cleavage. 37 However, when p97 is inactivated, MRE11 will cleave excessively to produce large amounts of ssDNA, transforming HR repair into rad52-mediated single-strand annealing. This will enhance the sensitivity of cancer cells to radiotherapy. 38 Therefore, the over-cleavage of MRE11 is a double-edged sword, and its rational utilization will likely be a potential target for cancer therapy. Overall, MRE11 plays an important role as a nuclease with dual endonuclease/exonuclease activity in both activation of immune pathways and DNA damage repair. In the cytoplasm, MRE11 directly binds to dsDNA, activates the cGAS-STING pathway, and induces interferon (IFN)-1 production. In chromatin and mitochondria, MRE11 cleaves damaged DNA to promote HR repair. 
Of course, this is based on moderation. When its cleavage is out of control, MRE11 can cause a range of autoimmune diseases. However, in radiation-treated cancer cells, excessive MRE11 cleavage instead increases radiation sensitivity. Therefore, utilizing the nuclease activity of MRE11 may provide a good idea for future disease treatment. Exonuclease 1 (EXO1) is a gene encoding a multifunctional 5′–3′ exonuclease found in Saccharomyces cerevisiae , which plays a role in MMR by interacting with MMR genes such as MSH2 and MLH1. 39 Studies reported that EXO1 facilitates the modulation of cell cycle checkpoints, the maintenance of replication forks, and the post-replication DNA repair pathways, which are required for the solution of DNA replication arrest or blockage associated with replication stress and replication forks. 40 In MMR, MSH dimer recruits downstream factors such as EXO1, PCNA, and MLH protein. Among them, EXO1 is mainly responsible for cleaving mismatched bases, with replication protein A and HMGB1 playing a supporting role. Meanwhile, RFC and PCNA promote pol δ to fill the gap created by the cleavage. Finally, the MLH1 protein binds to EXO1 to terminate the cleavage. 41 , 42 MMR-deficient tumors (dMMR) are unable to degrade EXO1 due to the lack of the MLH1 protein. dsDNA is therefore excessively cleaved, leading to the accumulation of large amounts of ssDNA. Meanwhile, replication protein A can bind to ssDNA, preventing the cleavage of ssDNA by EXO1. dMMR tumors are also characterized by a lack of MLH1 protein, which is unable to degrade EXO1. However, due to the unrestricted cleavage by EXO1, replication protein A is soon depleted. 43 The additional unprotected ssDNA produced is further cleaved and leaks into the cytoplasm, leading to activation of the cGAS-STING innate immune pathway, which results in enhanced effects of immune checkpoint therapy. 44 However, there are still some dMMR tumors that do not benefit from immunotherapy, such as metastatic colorectal cancers with the microsatellite instability (MSI) phenotype and melanoma. Their common feature is that they have Janus kinase 1 or Janus kinase 2 mutations, which may increase the resistance of the tumor to immunotherapy. 45 , 46 In addition, because dMMR tumors are highly mutagenic, they may even introduce mutations into the cGAS-STING pathway. In conclusion, EXO1, as the nucleic acid exonuclease mainly responsible for cleavage in MMR, was utilized in some dMMR tumors for dsDNA hyperexcision to serve as an enhanced immune checkpoint therapy through activation of the cGAS-STING immune pathway. It provides ideas for further exploration of immunotherapy in association with DNA hyperexcision. WRN is a RecQ family member with 3′–5′ exonuclease and 3′–5′ helicase activities and plays important roles in stalling forks, counteracting replication stress, maintaining genome stabilization, and slowing cellular senescence. 47 Mutations in WRN lead to Werner syndrome, a type of autosomal recessive disorder being recognized as premature senility. 48 The cause of premature aging in Werner syndrome patients may be due to the ability of WRN to regulate the transcription of NMNAT1, a key enzyme in NAD + biosynthesis. In Werner syndrome patients, WRN deficiency leads to impaired transcription of the enzyme, resulting in NAD + depletion, which leads to accelerated aging. 49 Werner syndrome is currently incurable, and some emerging therapies such as mammalian targets of rapamycin inhibitors are still being explored. 
50 , 51 MSI tumors are WRN-dependent, and WRN is a synthetic lethal target for MSI tumors. Loss of WRN induces DSBs in MSI cancers and selectively promotes apoptosis and cell cycle arrest. Because WRN has both helicase and exonuclease activities, WRN mutants with an inactivated helicase or an inactivated exonuclease were constructed and validated with sgRNAs targeting WRN exon-intron junctions (WRN EIJ sgRNAs) to determine which activity is responsible; ultimately, the helicase domain was shown to be the one in action. 52 , 53 In addition, MSI tumors carry a type of short repeat mutation called a "genomic scar". These "genomic scars" are secondary structures formed by large expansions of TA nucleotide sequence repeats that depend on the unwinding (helicase) activity of WRN for their resolution. Therefore, when WRN is inactivated, the "genomic scars" are cleaved by the MUS81 endonuclease, resulting in cancer cell death. 54 WRN may thus be a promising target for the treatment of MSI tumors. Similarly, in BRCA2-deficient breast cancer cells, WRN helicase protects stalled replication forks from over-degradation by inhibiting the activity of the MRE11 and EXO1 nucleases at these forks. When WRN is inhibited, MRE11 cleaves the unprotected forks, generating MUS81-dependent DSBs while increasing NHEJ and chromosomal instability, leading to cancer cell death. 55 This also has the potential to stimulate the host response and convert immunologically "cold" tumors into "hot" tumors by increasing the cGAS-STING-dependent type I IFN response, thereby increasing the efficiency of the immune response. 56 , 57 In conclusion, WRN can interact with the exonucleases MRE11 and EXO1 and the endonuclease MUS81. By inhibiting the helicase activity of WRN in certain tumors, these exonucleases and endonucleases are activated, triggering an innate immune response while enhancing the efficacy of immunotherapy. TREX1 is a 3′–5′ nucleic acid exonuclease expressed mainly in the cytoplasm of mammalian cells, which is capable of cleaving ssDNA and dsDNA. 58 TREX1 is a relatively small dimeric protein that efficiently cleaves the 3′ end. The TREX1 sequence has an ExoIII motif variant (ExoIIIε), which is closely related to the ε subunit of EXO1. 59 By cleaving cytosolic DNA, TREX1 prevents the accumulation of dsDNA that could otherwise act as an autoantigen and induce autoimmune disease. 60 Many autoimmune diseases arise when TREX1 is mutated, such as Aicardi–Goutières syndrome, familial chilblain-like lupus, systemic lupus erythematosus, and leukodystrophy-related retinopathy. 12 , 13 , 61 A common feature of these diseases is the reduced 3′–5′ exonuclease activity of the mutant TREX1. Intracytoplasmic accumulation of dsDNA and ssDNA, acting as pathogen-associated molecular patterns, causes autoimmune reactions. 62 , 63 , 64 Among these diseases, Aicardi-Goutières syndrome is caused by the accumulation of large amounts of damaged DNA in the cytoplasm due to TREX1 mutation, which strongly triggers the cGAS-STING pathway and results in systemic autoimmunity. 65 Here, cyclic GMP-AMP synthase acts as a DNA receptor in the cytoplasm and binds to DNA to form the cGAS-DNA complex. Through a phase separation mechanism, TREX1 is restricted to the periphery of the phase-separated droplets, and its exonuclease activity is inhibited.
In contrast, in Aicardi-Goutières syndrome patients with TREX1 mutations, the mutant protein penetrates into the interior of the droplets, and the phase separation mechanism is disrupted. 66 cGAS synthesizes cyclic GMP-AMP, which activates the cGAS-STING pathway and produces large amounts of IFNs and inflammatory factors. 63 , 67 TREX1 is a radiation-induced upstream regulator of anti-tumor immunity that can guide patient radiation dose selection. Radiotherapy can enhance the immunogenicity of tumors by activating immune signaling; however, when the radiation dose reaches 12–18 Gy or more, TREX1 is induced and degrades the DNA that accumulates in the cytoplasm after irradiation, weakening immunogenicity. Conversely, when TREX1 is not induced, the cGAS-STING pathway is activated and recruits BATF3-dependent dendritic cells that activate anti-cancer CD8 + T cells to mediate systemic tumor immunity. Consequently, fine-tuning the radiotherapy dose to modulate tumor expression of TREX1 is a potential strategy for improving therapeutic efficacy. 68 In addition, repeated dosing in radiotherapy is also important for enhancing tumor immunogenicity: IFN-β production by cancer cells treated with 8 Gy × 3 was significantly higher than that of cells treated with a single 8 Gy dose. 69 TREX1 localizes to the endoplasmic reticulum in the cytoplasm. The endoplasmic reticulum enters the ruptured micronucleus and enables TREX1 to play a key role in degrading damaged DNA in the micronucleus. In autoimmune diseases, mutation of TREX1 dissociates it from the endoplasmic reticulum, disrupts its localization to micronuclei, reduces the degradation of micronuclear damaged DNA, and enhances cGAS activation. Thus, the anchoring of TREX1 on the endoplasmic reticulum is the basis for preventing autoimmune diseases. 70 When the nuclear envelope is damaged, TREX1 translocates ectopically into the nucleus, causing TREX1-dependent DNA damage. This causes cellular senescence in normal cells. 71 In tumor cells, by contrast, it promotes invasion. 72 This phenomenon may often occur in cancer, where the nuclear membrane is squeezed and ruptured because cancer cells are more crowded. As a result, inhibition of TREX1 may be a potential target to stop cancer invasion and inhibit its further development. Endonucleases, in contrast to exonucleases, hydrolyze phosphodiester bonds within the DNA chain to generate oligonucleotides. During DNA replication, they help maintain genome stability by cutting double strands. In addition, some endonucleases can cooperate with exonucleases to facilitate exonucleolytic cleavage (Table 2).
Table 2 Endonuclease functions and associated diseases.
Name | Polarity | DNA | Function | Disease | Clinical feature | Ref.
CtIP | 5′–3′ | DS | G1/S transition | Tumor | Dual role in tumors | 119
FEN1 | 5′–3′ | DS | DNA metabolism; telomeres | Tumor | Tumor promotion | 120
MUS81/SLX4/EME1 | – | DS | DNA interstrand cross-link repair; medullary development | Anemia | Fanconi anemia; bone marrow failure; cancer predisposition | 121
RAG1/RAG2 | 5′–3′ | DS | NHEJ; lymphocyte development | Omenn syndrome | SCID; erythrodermia; hepatosplenomegaly; lymphadenopathy; alopecia | 104
DS: double-stranded DNA; SS: single-stranded DNA; SCID: severe combined immunodeficiency.
CtIP, an endonuclease capable of excising damaged DNA 5′ overhangs, was first isolated in 1998 by a yeast two-hybrid screening assay and it is a 125-kDa protein, which interacts with the oncogenic transcriptional corepressor CtBP. 73 Yun and Hiom et al suggested that the interaction of BRCA1 with CtIP is required for CtIP-mediated DNA end resection and tumor suppression. They constructed chicken DT40 cells with CtIP S327 mutation resulting in loss of CtIP-BRCA1 interaction and found that HR repair was inhibited. 74 However, in 2010, Nakamura et al clarified that the chicken CtIP S332A protein could effectively promote DSB repair through interaction with BRCA1 in an HR-independent manner. 75 In 2013, Reczek et al constructed CtIP-S326A mutant mice and showed that HR repair was not affected. 76 Furthermore, in 2014, Polato et al used a mouse model expressing S327A mutant CtIP which suggests that loss of CtIP-BRCA1 interaction does not significantly affect the maintenance of genomic stability. 77 The above findings suggest that CtIP-BRCA1 interaction may not be necessary for dsDNA end resection and tumor suppression in mammals. In yeast, MRE11 is involved in DSB cleavage together with Sae2. CtIP is homologous to Sae2 and also acts as a cofactor for DSB cleavage by MRE11. 78 CtIP interacts with the MRN complex, promotes MRN to perform 5′–3′ excision of the broken DNA ends, converts the DSB ends into 3′ ssDNA overhangs, which can inhibit NHEJ, and is a necessary intermediate to promote HR repair. 9 , 79 The FHA and BRCT domains of NBS1 in MRN can sense CtIP phosphorylation and activate MRN endonuclease activity when CtIP is extensively phosphorylated. T847 (the phosphorylation site of cyclin-dependent kinase) in CtIP is an important site for phosphorylation, and the absence of phosphorylation at this site could severely impair the binding of dsDNA to MRN. 80 In addition, the study found that MRN also has cleavage activity when combined with CtIP in the absence of NBS1, but the efficiency is much lower than the cleavage ability of MRN holocomplex when combined with CtIP. 81 These results suggest that the MRN endonuclease activity is restricted and the activity is fully activated in the presence of both NBS1 and phosphorylated CtIP. 82 Terminal excision performed by CtIP generates 3′ ssDNA, which promotes immune checkpoint activation and arrests the cell cycle in the S–G2 phase for DNA damage repair. 83 In addition, the terminal excision and DNA repair effects of CtIP affect B-cell development and proliferation. Phosphorylation of CtIP at T847 is essential for B-cell development and class-switching recombination, and loss of T847 phosphorylation leads to accumulation of replication intermediates and loss of cell viability. 15 In summary, CtIP was initially found to interact with the oncogenes CtBP and BRCA1. It can also act as a cofactor for MRE11, activate ATR-dependent checkpoints by enhancing the endonuclease capacity of MRE11, and promote HR repair, as well as the development and proliferation of B cells. Harrington et al first purified flap endonuclease 1 (FEN1) in 1994. FEN1, as a DNA structure-specific endonuclease, has 5′–3′ endonuclease activity and can specifically recognize the 5′ unannealed single strand of dsDNA (flap), and make an incision at the bottom of the flap. 84 FEN1 can process Okazaki fragments for long-patch base excision repair, so it contributes to DNA replication fidelity and maintains genome stability. 
85 FEN1 is recruited to the telomeres to maintain telomere stability during DNA replication, and loss of FEN1 results in γH2AX accumulation and lagging-strand sister telomere loss. The interaction of FEN1 with WRN and the telomere-binding protein TRF2 is required for the activity of FEN1 at telomeres. 86 FEN1 is a classic lagging endonuclease, however, in addition to maintaining lagging telomere stability, FEN1 can also limit the telomere fragility of the leading strand. The study from Daniel et al showed for the first time that FEN1 can also cleave a flap structure similar to Okazaki fragment substrates in the leading strand. The absence of FEN1 activity results in replication stress and DNA damage. 87 Collectively, FEN1 is a key endonuclease for genome stability. FEN1 is also involved in mitochondrial DNA metabolism. In the mitochondria of non-apoptotic immune cells, FEN1 cleaves oxidized mitochondrial DNA and releases its small fragments (<650 bp) into the cytoplasm, where it binds to NLRP3 and triggers NLRP3 inflammation body assembly and activation of its inflammatory pathways. 88 Another target of cytoplasmic oxidized mitochondrial DNA fragments is cGAS, which activates the cGAS-STING pathway and promotes the production of type I IFN, which further amplifies the inflammatory response. 89 , 90 , 91 We can conclude that inhibiting the cleavage activity of mitochondrial FEN1 endonuclease may serve as a target for the treatment of inflammatory diseases. FEN1 has been widely recognized as a tumor suppressor in previous studies, and FEN1 haplo-deficient mice allow the accumulation of replication intermediates leading to genomic instability, which promotes rapid tumor development. 92 In contrast, Zheng et al speculated that FEN1 expression is required for cancer growth and proliferation and promotes cancer development. 93 Several recent studies have found that FEN1 is highly expressed in a variety of cancers and is positively correlated with tumor proliferation rate, tumor size, lymph node metastasis, and degree of differentiation. 94 , 95 In addition, Wang et al found that in oral squamous cell carcinoma, inhibition of FEN1 could cause up-regulation of IFN-γ and activation of JAK/STAT signaling pathway, resulting in reduced expression of programmed cell death ligand 1 to play an immunomodulatory role. 96 Thus, inhibition of FEN1 in some cancers may be a potential target for their treatment. Methyl methanesulfonate and ultraviolet-sensitive gene 81 (MUS81), a fission yeast protein related to the XPF subunit of ERCC1-XPF endonuclease, together with EME1 and SLX4, forms an endonuclease complex that cleaves Holliday junctions. 97 , 98 Holliday junctions are four-way DNA intermediates formed during DNA replication or DNA damage, and their cleavage facilitates the maintenance of chromosome stability. 99 Therefore, the MUS81-EME1-SLX4 complex plays an important role in DNA repair and cell cycle regulation. Meanwhile, MUS81-EME1 acts as a conformation-specific nucleic acid endonuclease, which is normally recruited by SLX4, and is phosphorylated by cyclin-dependent kinases to form a stable complex in the G2–S phase, resulting in an intact endonuclease activity. 100 The endonuclease action of MUS81-EME1 inhibits long interspersed element-1 reverse transcription. When SLX is inhibited, long interspersed element-1 transcription is increased, leading to an increase in dsDNA and proinflammatory factors in the cytoplasm, which activates the innate immune cGAS-STING pathway. 
101 In addition, MUS81-EME1 also enables G2/M phase blockade, helping HIV-infected cells to evade sensing by the innate immune system. The HIV accessory protein Vpr interacts with the SLX4 protein and prevents the triggering of the cGAS-STING pathway by recruiting VPRBP and PLK1 to activate the endonuclease activity of MUS81-EME1, which cleaves viral DNA. 102 Similarly, the exonuclease TREX1, which cleaves viral DNA via exonuclease activity, prevents IFN-1 production in HIV-infected cells. 103 , 104 MUS81-EME1 acts as an oncogene and enhances the immune response in cancer cells. In prostate cancer cells, MUS81-EME1 acts as an endonuclease, causing fragmentation of genomic DNA, and leading to the accumulation of intracytoplasmic dsDNA. 105 This is recognized by intracytoplasmic DNA receptors and activates the cGAS-STING pathway to produce IFN-1, which enhances the immune response of phagocytes and T cells against prostate cancer cells. 106 In addition, MUS81-EME1 can serve as a potential target to enhance the efficacy of cancer immunotherapy. In gastric cancer, MUS81-EME1 disrupts β-TRCP-induced ubiquitination and increases the expression of WEE1, which acts as a DNA-damage checkpoint kinase and inhibits the activation of the intrinsic immune cGAS-STING pathway. Therefore, in gastric cancer, WEE1 inhibitors are used to enhance the efficacy of immunotherapy. Meanwhile, inhibition of MUS81-EME1 was able to increase WEE1 ubiquitination, which led to a further decrease in WEE1 levels and further enhanced the efficacy of immunotherapy. 107 Overall, utilizing the endonuclease activity of MUS81-EME1 could shed light on the future treatment of the disease. RAG1 and RAG2 are specific endonucleases that form a complex to initiate the V(D)J recombination process. 108 The production of T and B cell-specific receptors is dependent on V(D)J recombination of RAG1 and RAG2. 109 The expression of RAG1 and RAG2 endows early T and B cells with adaptability to repair DSBs. 110 The RAG1 protein functions as a catalytic active member of the RAG complex and cleaves dsDNA through a catalytic core. The C-terminal region of RAG2 binds to DNA bending cofactors (HMGB1 or HMGB2) to assist RAG1 in cleaving dsDNA. 111 Then, the RAG complex remains bound to the DNA ends in the cleavage complex, preventing abnormal recombination. 112 RAG1 and RAG2 are essential for the early development of T and B lymphoid immune cells. RAG1 and RAG2 mutation or deficiency lead to impaired V(D)J recombination and blocked B cell and T cell differentiation, and are associated with many types of immunodeficiency diseases, 113 such as severe combined immunodeficiency (including T and B cell deficiency), Omenn syndrome, leaky severe combined immunodeficiency (production of small amounts of functional T cells, B cells, and immunoglobulins in the body and no clinical features of tumor (osteosarcoma)), and combined immunodeficiency with granuloma or autoimmunity. 114 The rearrangement of the RAG1 and RAG2 genes is labile, resulting in potentially oncogenic DNA. BH3-only protein is a protein in the Bcl-2 family with only one Bcl-2 homologous region, which is the promoter of apoptosis and is capable of inducing cell apoptosis. 115 , 116 In these potentially oncogenic cells, BIM deficiency accelerates the development of lymphomas in p53-deficient mice, a process that relies on RAG1/RAG2-mediated rearrangement of antigen receptor genes. 
117 Accordingly, the rearrangement of RAG1 and RAG2 genes is of great significance for the regulation of the immune system's function and the maintenance of genome stability. In conclusion, the cleavage of dsDNA and ssDNA by endo/exonucleases plays an important role in DNA damage repair, maintenance of genome stability, and regulation of the innate immune cGAS-STING pathway. Furthermore, these endonucleases and exonucleases have interactions with each other. Exonuclease MRE11 can cleave broken DNA through its 3′–5′ nuclease activity, while the endonuclease CtIP interacts with MRE11 to facilitate its cleavage by converting the DSB end into the 3′ ssDNA overhangs. This 3′ ssDNA is recognized by the MMR proteins MSH2-MSH3 and recruits the exonuclease EXO1 to perform 5′–3′ cleavage, thereby facilitating HR repair. 21 , 118 , 119 However, in BRCA2-deficient tumors, the exonuclease activities of MRE11 and EXO1 are inhibited by WRN helicase and exonuclease activities. WRN exonuclease replaces BRCA2 to protect the stalled fork from degradation. 55 When WRN is inhibited, stalled replication forks are cleaved by MRE11 and EXO1 and further degraded by MUS81 nucleic acid endonuclease. This leads to genomic instability in BRCA2-deficient tumor cells, resulting in increased tumor cell death. 120 , 121 At the same time, the nuclease may be a double-edged sword. When these nuclease activities are properly regulated, they enable timely cleavage of damaged DNA and DNA damage repair and inhibit the activation of innate immune pathways. When nuclease activity is uncontrolled, large amounts of dsDNA are cleaved, which are recognized by DNA receptors and activate the cGAS-STING pathway, thereby triggering autoimmune diseases. However, excessive cleavage by nuclease is not always harmful. In cancer cells, the use of nuclease over-cleavage can increase the efficacy of immunotherapy and sensitivity to radiation therapy for cancer. Therefore, rational utilization of these nucleases will be a therapeutic target for cancer and autoimmune diseases. In this review, we discussed in detail the cleavage activities of major nucleic acid endonucleases/exonucleases, their interactions with each other, the roles they play in DNA damage, and their effects in autoimmune diseases and tumors through activation of immune pathways . To date, many nucleases remain to be characterized. Therefore, how to fulfill the role of nucleases in DNA damage repair and immunity and provide effective treatment for clinical patients may become a top priority for future research. Figure 3 DNA nucleases in the immune response. Fig. 3 Concept and design: T.M. and C.M.A.; data analysis and interpretation: M.J.L. and J.H.W.; manuscript writing: all authors; final approval of manuscript: all authors. This study was supported by the Beijing Xisike Clinical Oncology Research Foundation (China) and the National Natural Science Foundation of China . All data are available in the main text or the supplementary materials. The authors declared no conflict of interests. | Review | biomedical | en | 0.999997 |
PMC11697194 | Articular cartilage is a specialized connective tissue located on the surface of the synovial joint and plays an important role in lubrication and weight-bearing. 1 With aging, progressive degeneration of articular cartilage leads to joint pain and dysfunction, namely osteoarthritis (OA). OA is the most common type of chronic musculoskeletal disease which is characterized by degeneration of articular cartilage, fibrosis of articular cartilage, formation of osteophyte, inflammation of synovium, and loss of mobility. OA has affected 7% of the global population, or more than 500 million people worldwide. 2 , 3 Clinically, the knee joint is the most common site of OA, followed by the hand and hip joints. 4 Furthermore, the global prevalence of OA is higher in women and increases with age, with 10% of men and 18% of women over 60 years old being affected. 5 However, there are no effective therapies except for joint replacement in the late stage of OA, because the molecular mechanisms underlying the progression of OA remain largely unknown. Chondrocyte is considered the only cell type in cartilage, which secretes growth factors and enzymes to regulate extracellular matrix synthesis. 6 , 7 Chondrocytes are derived from mesenchymal stromal cells which differentiate into chondroprogenitors and then into chondrocytes. 8 , 9 After chondrogenesis, chondrocytes remain as resting cells to form articular cartilage or exhibit a life cycle of proliferation, maturation, hypertrophy, and apoptosis. 10 , 11 The degeneration of articular cartilage prompts the release of cytokines from damaged cartilage, thus triggering synovial fibrosis. 12 , 13 Fibrosis is thought to be a prominent and consequential hallmark of OA, which includes fibrosis of synovial and generation of fibrocartilage. 12 Although it is well known that cartilage is composed of chondrocytes, the cell heterogeneity of chondrocytes in human articular cartilage is not well defined. Single-cell sequencing, in particular single-cell RNA sequencing (scRNA-seq), is a powerful tool to study cell heterogeneity, which has identified various cell types and provided insights into physiological and pathological processes of diseases. 14 , 15 , 16 , 17 , 18 Recently, several studies used scRNA-seq to explore the cell heterogeneity of chondrocytes in cartilage from OA or other joint disease patients. 19 , 20 , 21 , 22 , 23 Ji et al identified seven chondrocyte subsets in human OA cartilage, including proliferative chondrocytes (ProC), prehypertrophic chondrocytes (PreHTC), and hypertrophic chondrocytes (HTC). Furthermore, they identified chondrocyte subsets and their specific genes and found a potential transition among ProC, PreHTC, and HTC. 19 Sun et al 20 constructed a chondrocyte atlas in the healthy and degenerated meniscus, in which most chondrocyte subsets were consistent with that reported in Ji et al. 19 Whereas Fu et al 22 constructed a chondrocyte atlas and named chondrocyte subsets based on their significant enriched gene ontology (GO). Lv et al 23 identified ferroptotic chondrocytes based on molecular characteristics and their markers in OA patients. This study also found that TRPV1 protected chondrocytes from ferroptosis and could be an anti-ferroptotic target. Swahn et al 24 found a senescent chondrocyte subset with ZEB1 as the main regulator that promoted OA in cartilage and meniscus. 
Although these studies identified chondrocyte subsets in human cartilage, these results are not well consistent, and dynamic processes of chondrocyte subsets in the progression of OA are not clear. In this study, we performed scRNA-seq on chondrocytes from cartilage to better elucidate the cell heterogeneity of chondrocytes in human healthy cartilage and OA cartilage. We identified chondrocyte subsets using pre-defined markers and constructed a single-cell transcriptomics atlas of cartilage chondrocytes. The trajectory analysis was used to infer the potential transition and dynamics among chondrocyte subsets. We further compared the single-cell landscape between healthy cartilage and OA cartilage to reveal the distinct landscape of OA cartilage. These results offer a better understanding of the chondrocyte heterogeneity and provide a deeper insight into the pathogenetic mechanisms of OA. Human joint cartilage tissues were collected from Shenzhen Second People's Hospital. The healthy donor signed informed consent approved by the Institutional Review Board (IRB) of Shenzhen Second People's Hospital . The cartilage was isolated from knee joints of the healthy human donor and OA patients and cultured following previous studies. 9 , 19 In brief, cartilage was immediately put in physiological saline containing heparin anticoagulant at 4 °C after collection, which was further processed within 6 hours. Then the cartilage was cut into pieces (1 mm 3 ) and digested with 0.2% collagenase in high-glucose Dulbecco's modified Eagle's medium (Gibco, Australian) containing 10% fetal bovine serum (Gibco, Australian) and 10 μg/L basic fibroblast growth factor (Gibco, Australian). Following overnight incubation at 37 °C with 5% CO 2 , cells were collected by centrifugation, washed twice, resuspended in high-glucose Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum and 10 μg/L basic fibroblast growth factor, plated in a culture flask, and allowed to attach for three days. Nonadherent cells were removed after a seven-day culture and the medium was replaced. Medium replacement was carried out every 72 hours until the cells reached an 80% confluent layer. Cells were harvested with 0.25% (w/v) trypsin plus 0.02% (w/v) EDTA (Hyclone, USA) and subcultured at a density of 1000 cells/cm 2 . Chondrocytes were isolated from the cultured cells and subjected to fluorescence-activated cell sorting using the BD FACSAria II instrument (BD Biosciences) to eliminate nonviable cells. scRNA-seq was conducted using the 10X genomics platform. Chromium Single Cell 3'Gel Bead and Library Kit were used following protocol. Each channel accommodated approximately 15,000 cells. Sequencing libraries were subsequently loaded on the Illumina NovaSeq 6000 platform using paired-end kits. We further obtained scRNA-seq data of articular cartilage from Swahn et al , 24 namely Sw_data. The raw data were processed following our previous studies. 9 , 18 , 25 In detail, the raw sequencing data was transformed into FASTQ format using the Illumina bcl2fastq software. To align the reads and demultiplex the barcodes, we employed Cell Ranger V2.2.0 from 10X Genomics, aligning the reads to the hg38 reference genome. The resulting digital gene expression matrices underwent preprocessing and filtering using the R packages scran and scater. 
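The study performs this preprocessing with Cell Ranger output and the R packages scran and scater; purely as an illustration, the following Python/scanpy sketch shows the equivalent step of loading a filtered gene–barcode matrix and computing the per-cell QC metrics that the filtering thresholds in the next paragraph are applied to. The file path and the use of scanpy instead of scran/scater are assumptions, not taken from the authors' code.

```python
# Illustrative QC-metric computation with scanpy (a Python stand-in for the
# scran/scater preprocessing described above); the file path is hypothetical.
import scanpy as sc

# Load the Cell Ranger filtered gene-barcode matrix for one sample.
adata = sc.read_10x_mtx(
    "cellranger_out/filtered_feature_bc_matrix",  # hypothetical path
    var_names="gene_symbols",
    cache=True,
)
adata.var_names_make_unique()

# Flag mitochondrial genes and compute per-cell QC metrics: number of
# detected genes, total UMIs, and percentage of mitochondrial counts.
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(
    adata, qc_vars=["mt"], percent_top=None, log1p=False, inplace=True
)

# These columns feed the filtering thresholds described in the next paragraph.
print(adata.obs[["n_genes_by_counts", "total_counts", "pct_counts_mt"]].head())
```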
26 Cells surpassing the expression threshold of 4000 genes (potentially indicating doublets), falling below 200 expressed genes (suggesting low-quality libraries), or exhibiting mitochondrial unique molecular index counts exceeding 10% (possibly indicative of cell fragments and debris) were excluded from subsequent analysis. Additionally, we utilized Scrublet 27 to identify potential doublets, calculating a doublet score for each cell and determining the threshold based on the default parameters of the bimodal distribution. We set the expected doublet rate at 0.08, and cells predicted to be doublets or with a doubletScore parameter exceeding 0.25 were removed from consideration. After implementing rigorous quality control measures, the healthy cartilage retained a total of 13,363 cells, while OA#1 and OA#2 retained 8808 cells and 12,770 cells, respectively. After quality control of Sw_data, six normal cartilage samples retained 8505, 7183, 3601, 6519, 6243, and 7214 cells, while six OA samples retained 4389, 4458, 7060, 5362, 4944, and 5468 cells, respectively. Seurat 28 package was used for performing scRNA-seq data analysis, including data integration, normalization, dimension reduction, and cell clustering, following our previous studies. 9 , 18 , 25 We implemented a gene-wise scaling approach to set the mean and variance of each gene across cells to 0 and 1, respectively, thus preventing highly expressed genes from dominating subsequent analyses. The scaled expression data was then employed to identify highly variable genes, which were subsequently utilized for dimension reduction. The UMAP algorithm was applied for the visualization of the scRNA-seq data. 29 We assigned annotations to each cell cluster based on the highly expressed genes specific to that particular population, as well as the established marker genes unique to each population. By employing the Wilcoxon Rank-Sum test, we compared the gene expressions within each investigated cluster to those of the remaining clusters. Genes exhibiting significantly higher expression levels within the investigated cluster were identified as cluster-specific genes. Furthermore, we performed the Wilcoxon Rank-Sum test to determine the differentially expressed genes between any two clusters. To ascertain statistical significance, a minimum log2(fold change) threshold of 0.25 and an adjusted P -value of 0.01 were applied. Metascape was applied for the investigation of biological process enrichment. 30 To investigate the intricate network of cellular communication, we employed the CellChat package (version 1.6.1) for ligand–receptor interaction analysis. 31 Leveraging the extensive ligand–receptor pair data available in CellChatDB, we evaluated the potential interactions among the different cell populations. Specifically, we focused on the datasets pertaining to “secreted signaling”, “ECM–receptor”, and “cell–cell contact” interactions. These selected datasets provided valuable insights into the intricate cell communication occurring between each cluster. We also used CellPhoneDB 32 and iTalk 33 to infer cell–cell interaction between chondrocyte subsets. BAM files aligned using the Cell Ranger pipelines were initially sorted using SAMtools. 34 Next, the Velocyto pipeline was used to count spliced and un-spliced reads and generate loom files. 35 To compute gene-specific velocities, we utilized the scVelo Python package. 
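The filtering, doublet-removal, and clustering steps just described can be sketched as follows. This is a minimal Python illustration using scanpy and Scrublet in place of the original Seurat workflow; the thresholds come from the text (200–4000 detected genes, <10% mitochondrial counts, expected doublet rate 0.08, doublet score cutoff 0.25), while Leiden clustering and the remaining parameter values are assumptions standing in for the Seurat defaults.

```python
# A minimal sketch of the filtering, doublet removal, and clustering described
# above, continuing from the QC-annotated AnnData object of the previous sketch.
import scanpy as sc
import scrublet as scr

# Apply the stated QC thresholds (200-4000 genes, <10% mitochondrial counts).
adata = adata[(adata.obs["n_genes_by_counts"] >= 200) &
              (adata.obs["n_genes_by_counts"] <= 4000) &
              (adata.obs["pct_counts_mt"] < 10)].copy()

# Doublet detection with Scrublet (expected doublet rate 0.08); cells predicted
# to be doublets or with a doublet score above 0.25 are removed.
scrub = scr.Scrublet(adata.X, expected_doublet_rate=0.08)
doublet_scores, predicted_doublets = scrub.scrub_doublets()
keep = (~predicted_doublets) & (doublet_scores <= 0.25)
adata = adata[keep].copy()

# Normalization, highly variable gene selection, scaling (mean 0, variance 1),
# PCA, neighborhood graph, UMAP, and graph-based clustering (Leiden here as a
# stand-in for the Seurat clustering used in the study).
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
adata.raw = adata                               # keep log-normalized values
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_pcs=30)
sc.tl.umap(adata)
sc.tl.leiden(adata, resolution=1.0)

# Cluster-specific genes via the Wilcoxon rank-sum test, as in the text.
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
```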
36 Additionally, the projection clustered with metabolic genes was embedded with the velocity streams predicted by scVelo with the loom files. Finally, plots for the ratio of spliced and un-spliced, for the velocity and the expression of various individual genes were generated based on the velocity calculated by scVelo. To verify the robustness of our findings, we employed additional developmental trajectory inference algorithms, specifically partition-based graph abstraction (PAGA). 37 For PAGA analysis, pseudotime was calculated using scanpy v1.4.3. Briefly, we followed the pipeline integrated into scVelo, employing the same projection generated by scVelo. We performed the prediction using the scv.pl.paga function in scVelo, setting the basis parameter as UMAP, the size as 50, the alpha as 0.3, the min_edge_width as 2, and the node_size_scale as 1.5. We also used monocle2 38 to infer the trajectory of chondrocytes of Sw_data. The programming languages R and Python were employed for all statistical analyses and data visualizations. Wilcoxon Rank Sum test was used to identify the differentially expressed genes between two cell clusters. Bonferroni correction was applied for multiple testing. To reveal the cell heterogeneity of human chondrocytes, we conducted scRNA-seq on chondrocytes from healthy human knee cartilage. We obtained single-cell transcriptomes from 13,363 cells, with a median number of 10,606 detected unique molecular indexes and an average of 2753 detected genes per cell after quality control . Unsupervised clustering of the chondrocytes resulted in a total of nine cell clusters . We annotated each cluster according to cluster-specific genes: (i) homeostatic chondrocytes (HomC) ( DDIT3 , ATF3 , and GDF15 ), (ii) proliferative chondrocytes (ProC) ( BHLHE41 , CCL20 , and DUSP6 ), (iii) prehypertrophic chondrocytes (PreHTC) ( IL11 , MMP3 , and CXCL3 ), (iv) hypertrophic chondrocytes-1 (HTC-1) ( FMOD , EBF1 , ADAMTS5 , ELL2 , and NEAT1 ), (v) hypertrophic chondrocytes-2 (HTC-2) ( FMOD , EBF1 , OLFM2 , PDGFRB , and SCG2 ), (vi) prefibrochondrocytes (PreFC) ( PTX3 , TAGLN , and SPARC ), (vii) proliferate fibrochondrocytes (ProFC) , (viii) fibrochondrocytes (FC) ( MYLK , ACTA2 , and CTGF ), and (ix) regulatory chondrocytes (RegC) ( CFH , LUM , and DCN ) . Among all chondrocyte subsets, PreHTC and PreFC were abundant and accounted for 22% and 19% of the total cells, respectively; while HomC and RegC were relatively rare and accounted for 3% and 4% of the total cells, respectively . We found ProFC expressed cell cycle genes including STMN1 , KIAA0101 , and MCM3 . Further analysis showed that ProFC were mainly in the S phase of the cell cycle . Meanwhile, ProFC-specific genes enriched in the cell cycle, DNA replication, cell activation, and collagen formation , strongly supporting that ProFC is in an active phase of the cell cycle. The GO terms enriched in the specifically expressed genes of each chondrocyte subset were consistent with its identity inferred by marker genes . Figure 1 A single-cell transcriptomic atlas of chondrocytes in healthy human cartilage. (A) UMAP visualization of the 13,363 chondrocytes from healthy human cartilage. Color represents the chondrocyte subset. (B) UMAP visualization of the expression of representative marker genes for each chondrocyte subset. (C) The heatmap of chondrocyte subset-specific genes. (D) Cell–cell communication between chondrocyte subsets was analyzed by CellChat. 
The width and color of the line represent the strength of cell–cell interaction and signaling source, respectively. (E) Gene ontology (GO) enrichment of RegC-specific genes. HomC, homeostatic chondrocytes; PreHTC, prehypertrophic chondrocytes; ProC, proliferate chondrocytes; HTC, hypertrophic chondrocytes; ProFC, proliferate fibrochondrocytes; preFC, prefibrochondrocytes; FC, fibrochondrocytes; RegC, regulatory chondrocytes. Figure 1 We analyzed the crosstalk of ligand–receptor pairs to understand the cell-cell communication between chondrocyte subsets. We found HomC had the lowest self-interactions among all chondrocyte subsets based on three cell–cell interaction analysis methods . In particular, HomC sent out a few cell–cell interaction signals . These results potentially indicate that HomC is relatively resting and isolated. RegC has one of the strongest inter-subset interactions and self-interactions among all chondrocyte subsets . The GO enrichment analysis showed that RegC-specific genes were enriched in extracellular matrix organization, regulation of cellular component movement, regulation of cell motility, collagen formation, cellular responses to stimuli, connective tissue development, etc . . These results indicated that RegC played an important role in shaping cartilage microenvironment and regulation of chondrocyte movement and activation. We used CellChat to identify cell–cell interaction signaling among chondrocyte subsets and the most significant cell–cell interaction signaling pathways included the COLLAGEN signaling pathway, FN1 signaling pathway, THBS signaling pathway, LAMININ signaling pathway, TENASCIN signaling pathway, and HSPG signaling pathway . These cell–cell interaction signaling pathways showed distinct patterns, indicating each pathway has its own feature and story. Taking the COLLAGEN signaling pathway as an example, FC displayed the strongest interaction with the other cell clusters, which indicates that the collagen metabolism in FC was more active than the other cell types . We found two HTC subpopulations in human cartilage , and it is interesting to investigate the similarities and differences between the two HTC subpopulations. Although both HTC subpopulations highly expressed chondrocyte hypertrophic specific genes , we identified total 241 HTC-1-specific genes and 616 HTC-2-specific genes ; ADAMTS5 39 , 40 and FGF2 , 41 , 42 which are associated with chondrocyte hypertrophy and cartilage degeneration, were expressed higher in HTC-1, while COL1A1 19 , 20 and BGN , 43 , 44 which are associated with fibrocartilage formation and collagen fibril organization, were expressed higher in HTC-2 . Moreover, GO enrichment analysis suggested that HTC-1-specific genes were enriched in the regulation of apoptosis, cellular responses to stress, and programmed cell death; while HTC-2-specific genes were enriched in skeletal system development, collagen fibril organization, and ossification , indicating the two HTC subpopulations have quite different functions. Figure 2 The different features of the two HTC populations. (A) Highlighting of the two HTC subpopulations on the UMAP plot of chondrocytes. (B) The heatmap of the expression level of differentially expressed genes (DEGs) between HTC-1 and HTC-2. (C) The violin plots showing the expression levels of representative DEGs between HTC-1 and HTC-2. (D) Gene ontology (GO) enrichment analysis of HTC-1-specific genes and HTC-2-specific genes. 
(E) Gene set enrichment analysis (GSEA) showed apoptosis and programmed cell death were associated with HTC-1-specific genes. (F) GSEA showed collagen fibril organization and ossification were associated with HTC-2-specific genes. HTC, hypertrophic chondrocytes. Figure 2 RNA velocity exploited the relative abundance of nascent (unspliced) and mature (spliced) mRNA to infer trajectory direction during dynamic processes. 35 , 36 We calculated RNA velocity in each cell to infer the trajectories of chondrocytes using PAGA. We identified two main trajectories (trajectory #1: ProC → preHTC → HTC-2 → PreFC → FC, and trajectory #2: ProC → preHTC → HTC-1), which shared the starting point . The trajectories inferred by scVelo and monocle3 were similar to those inferred by PAGA . Interestingly, the expression of MMP3 decreased along the pseudotime , consistent with recent reports that MMP3 expressed in early chondrocyte development. 45 , 46 The expression of COL1A1 increased along the pseudotime , consistent with recent reports that COL1A1 expressed in late chondrocyte development. 19 , 20 Figure 3 The pseudotime trajectories of chondrocytes and trajectory-associated genes. (A) The pseudotime trajectories of chondrocytes inferred by PAGA. (B) The expression of MMP3 along pseudotime. (C) The expression of COL1A1 along pseudotime. (D) Pseudotime score of ProC, PreHTC, HTC-2, PreFC, and FC in trajectory #1. (E) Pseudotime score of ProC, PreHTC, and HTC-1 in trajectory #2. (F) Trajectory #1 showed the progression of ProC, PreHTC, HTC-2, PreFC, and FC. (G) The dynamic gene expression along trajectory #1. (H) Trajectory #2 showed the progression of ProC, PreHTC, and HTC-1. (I) The dynamic gene expression along trajectory #2. PreHTC, prehypertrophic chondrocytes; ProC, proliferate chondrocytes; HTC, hypertrophic chondrocytes; preFC, prefibrochondrocytes; FC, fibrochondrocytes. Figure 3 We found that the pseudotime scores increased along either trajectory #1 or trajectory #2 . Trajectory #1, starting from ProC and ending up with FC, showed a process of chondrocyte proliferation, hypertrophy, and fibrosis , which was consistent with previous reports 19 , 47 and Sw_data inferred by monocle2 . We identified hundreds of trajectory-coordinated genes with expression gradually changing along trajectory #1 that differentiated into FC . For example, NGF , 48 ITM2B , 49 and RUNX1 , 50 being reported associated with chondrocyte differentiation and proliferation, were highly expressed at the beginning of the trajectory . While THBS1 , 51 COL1A2 , 52 and GREM1 , 53 being reported associated with chondrocyte fibrosis, were highly expressed at the end of the trajectory . Trajectory #2 is the process of chondrocyte development, degradation, and apoptosis , which has not been reported at the single-cell level. The genes associated with chondrocyte degradation and apoptosis, such as NFIA , 54 SERPINE1 , 55 and CAP2 , 56 were highly expressed in the later stage of trajectory #2 . Although it is reported that precisely regulated apoptosis plays an important role in the homeostasis of cartilage degradation in vitro , 47 , 57 , 58 the trajectory of HTC apoptosis provides novel insight into the process of chondrocyte apoptosis and cartilage degradation. We conducted a comparative analysis of the single-cell landscape of chondrocytes between healthy cartilage and OA cartilage . 
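As a hedged illustration of how the subset proportions compared below can be computed from the annotated object, the following sketch assumes an integrated AnnData whose obs table carries hypothetical 'condition' (healthy/OA) and 'subset' columns; these names and labels are assumptions for exposition, not taken from the study's code.

```python
# Hypothetical sketch: per-condition chondrocyte subset composition.
import pandas as pd

def subset_composition(adata):
    """Return the fraction of each chondrocyte subset within each condition."""
    counts = pd.crosstab(adata.obs["condition"], adata.obs["subset"])
    return counts.div(counts.sum(axis=1), axis=0)

composition = subset_composition(adata)
print(composition.round(3))

# Example: relative change of each subset in OA versus healthy cartilage.
fold_change = composition.loc["OA"] / composition.loc["healthy"]
print(fold_change.sort_values(ascending=False))
```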
After quality control, we had a total of 34,941 single-cell transcriptomes, comprising 13,363 cells from healthy individual and 21,578 cells from OA patients ( Table S1 ). We identified ten chondrocyte subsets , nine of which were consistent with that in our constructed single-cell atlas . PreHTC and PreFC were abundant, comprising 28% and 15% of the total cells, respectively, while ProFC and HomC were relatively scarce, accounting for only 1% and 2% of the total cells, respectively . Notably, we discovered a new chondrocyte subset, termed ProFC-2 that specifically expressed CCNB1 and MYLK . Remarkably, ProFC-2 was exclusively present in OA cartilage, while the other clusters contained cells from both the healthy individual and OA patients . The expression of cluster-specific genes showed there were some genes expressed differently between healthy chondrocytes and OA chondrocytes . Furthermore, the proportions of PreFC, RegC, ProFC, and HTC-1 in OA patients were increased compared with those in the healthy individual, whereas the proportions of HomC, ProC, PreHTC, and HTC-2 in OA patients were decreased compared with those in the healthy individual . In particular, PreFC and HTC-1 were almost dominant by cells from OA patients, while HomC and ProC were almost dominant by cells from healthy individual , essentially consistent with independent analyses of Sw_data . Figure 4 Comparison of the landscape of chondrocytes between healthy cartilage and OA cartilage. (A) UMAP visualization of 34,941 chondrocytes in healthy and OA cartilage. (B) UMAP visualization of chondrocytes in healthy cartilage (left) and OA cartilage (right). (C) Comparison of the expression of chondrocyte subset-specific genes between healthy cartilage and OA cartilage. Dot size and color intensity represent the fraction of cells expressing the genes and the average expression level, respectively. (D) Cell compositions of chondrocytes in healthy and OA cartilage. (E) The bar plot displaying the cell compositions of each chondrocyte subset based on cell sources. OA, osteoarthritis. Figure 4 Both ProFC and ProFC-2 highly expressed cell cycle genes including TOP2A and STMN1 . Gene set enrichment analysis of ProFC-specific genes and ProFC-2-specific genes revealed that both cell subsets enriched in the mitotic process , indicating that both ProFC and ProFC-2 are in an active state of cell proliferation. A total of 178 ProFC-specific genes and 329 ProFC-2-specific genes were identified by differential analysis . ProFC-specific genes included GINS2 , HELLS , and MCM3 , while ProFC-2-specific genes included CENPA , CDKN3 , and AURKA . GO enrichment analysis suggested that ProFC were enriched in extracellular matrix organization, skeletal system development, and cell cycle; while ProFC-2-specific genes were enriched in cytokine signaling, inflammatory response, and cellular responses to stimuli . Therefore, ProFC-2 might contribute to OA via inflammation since inflammation is thought to be associated with the development of OA. 59 We identified three significantly expanded chondrocyte subpopulations in OA cartilage, namely ProFC, ProFC-2, and HTC-1. First, the proportion of ProFC in OA cartilage was significantly higher than that in healthy cartilage , indicating the increase of ProFC may be associated with or contribute to the occurrence and development of OA. Differential analysis of ProFC between healthy and OA cartilage identified 321 OA-specific genes . 
These OA-specific genes include CEMIP , 60 , 61 ACAN , 62 and HMOX1 63 which are associated with chondrocyte inflammation, degradation, or fibrosis. We also identified 437 healthy specific genes ( Table S4 ) including BDNF , IGFBP2 , and WNT5A . GO enrichment analysis of OA cartilage-specific genes in ProFC enriched in extracellular matrix organization, collagen fibril organization, and ossification, while healthy cartilage-specific genes in ProFC enriched in the cellular response to cytokine stimulus, cell activation, and cell population proliferation , indicating that ProFC in OA cartilage have increased extracellular matrix and collagen than in healthy cartilage. Intriguingly, ProFC-2 represented a small cell population predominantly in OA cartilage , implying that ProFC-2 have a unique effect on the occurrence and development of OA. Figure 5 Expansion of ProFC, ProFC-2, and HTC-1 in OA cartilage and change of gene expression. (A) Highlighting of ProFC on UMAP plot of chondrocytes in healthy cartilage (left) and OA cartilage (right). (B) Highlighting of ProFC-2 on UMAP plot of chondrocytes in healthy cartilage (left) and OA cartilage (right). (C) The proportions of ProFC and ProFC-2 in healthy cartilage and OA cartilage. (D) The heatmap of differentially expressed genes (DEGs) between healthy cartilage and OA cartilage in ProFC. (E) Enrichment analysis of healthy specific genes and OA-specific genes in ProFC. (F) Highlighting of HTC-1 on UMAP plot in healthy cartilage (left) and OA cartilage (right). (G) The proportion of HTC-1 in healthy and OA cartilage. (H) The heatmap of the expression level of DEGs between healthy and OA cartilage in HTC-1. (I) Enrichment analysis of healthy specific genes and OA-specific genes in HTC-1. OA, osteoarthritis; HTC, hypertrophic chondrocytes; ProFC, proliferate fibrochondrocytes. Figure 5 The proportion of HTC-1 in OA cartilage was significantly higher than that in healthy cartilage , which was consistent with the result of Sw_data . Differential analysis of HTC-1 between healthy and OA cartilage identified 230 OA-specific genes ( Table S4 ) including CEMIP , SFRP4 , and CXCL12 . We also identified 333 healthy specific genes ( Table S4 ) including IGFBP2 , WISP3 , and IFIT3 . GO enrichment analysis of OA cartilage-specific genes in HTC-1 enriched in vasculature development, degradation of the extracellular matrix, and ossification, while healthy cartilage-specific genes in HTC-1 enriched in cellular response to stress, proteasome degradation, and regulation of apoptosis , implying that HTC-1 might be stimulated into apoptosis via degradation of the extracellular matrix, and the increase of HTC-1 might trigger OA. Although we found several chondrocyte subsets expanded in OA cartilage, the proportion of HomC in OA cartilage was significantly lower than that in healthy cartilage , which was consistent with independent analyses of Sw_data . HomC have been reported for their protective role in preventing cartilage degeneration and exhibit high expression of human circadian clock rhythm genes, 19 and its decrease may indicate weaker regulation in OA cartilage. Differential analysis of HomC between healthy and OA cartilages identified 454 OA-specific genes ( Table S4 ) including COL1A1 and BGN . We also identified 850 healthy specific genes ( Table S4 ) including WARS and ISG20 . 
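A minimal sketch of how such condition-specific gene lists within one subset (HomC is used here) can be derived is shown below, using the Wilcoxon test and the log2 fold-change and adjusted-P thresholds stated in the Methods. The obs column names are assumptions, and note that scanpy's default multiple-testing correction is Benjamini–Hochberg, whereas the study reports Bonferroni correction.

```python
# Sketch of the within-subset healthy-versus-OA differential analysis
# (|log2FC| >= 0.25, adjusted P < 0.01, as stated in the Methods).
import scanpy as sc

homc = adata[adata.obs["subset"] == "HomC"].copy()   # column name assumed
sc.tl.rank_genes_groups(homc, groupby="condition", groups=["OA"],
                        reference="healthy", method="wilcoxon")

df = sc.get.rank_genes_groups_df(homc, group="OA")
oa_specific = df[(df["logfoldchanges"] >= 0.25) & (df["pvals_adj"] < 0.01)]
healthy_specific = df[(df["logfoldchanges"] <= -0.25) & (df["pvals_adj"] < 0.01)]
print(len(oa_specific), "OA-specific and", len(healthy_specific),
      "healthy-specific genes in HomC")
```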
GO enrichment analysis showed healthy specific genes in HomC enriched in cellular response to protein processing, immune system function, and maintenance of cellular homeostasis, indicative of their regulatory effect on cartilage homeostasis . However, OA-specific genes in HomC enriched in skeletal system development, degradation of the extracellular matrix, and ossification, implying their potential involvement in OA progression and pathological remodeling of the joint . These results indicated that HomC in OA cartilage decreased in number and were dysfunctional. Figure 6 Reduction of HomC in OA cartilage and change of gene expression. (A) Highlighting of HomC on UMAP plot in healthy cartilage (left) and OA cartilage (right). (B) The proportion of HomC in healthy cartilage and OA cartilage. (C) The dotplot of healthy specific genes and OA-specific genes in HomC. (D) The violin plot of representative OA-specific genes. (E) The violin plot of representative healthy specific genes. (F) Enrichment analysis of healthy specific genes and OA-specific genes in HomC. OA, osteoarthritis; HomC, homeostatic chondrocytes. Figure 6 Here, we employed scRNA-seq to construct a single-cell transcriptomic atlas of chondrocytes in healthy human cartilage. We identified two HTC subpopulations with distinct functions and disparate terminal fates, namely HTC-1 and HTC-2. HTC-2 is involved in skeletal system development, which differentiate into PreFC and then FC. It is worth noting that HTC-1 specifically expresses genes related to apoptosis and programmed cell death and is the terminal of chondrocyte apoptosis trajectory at single-cell resolution. Importantly, we observed the expansion of the HTC-1 population in the cartilage of OA patients compared with the healthy individual. Compared with healthy cartilage, the OA-specific genes of HTC-1 showed weaker cellular response to stress and regulation of apoptosis, and are more likely to participate in vasculature development, degradation of the extracellular matrix, and ossification. These significant findings offer compelling clues indicating that an increased presence of HTC-1 and decreased chondrocyte apoptosis play pivotal roles in the pathogenesis of OA. It is reported that the change in chondrocyte subpopulations and the cellular states may contribute to the occurrence of OA. 19 , 64 Notably, the population size of ProFC in OA cartilage has significantly expanded compared with healthy cartilage. ProFC highly expressed cell cycle genes and played an important role in extracellular matrix organization, collagen formation, and collagen fibril organization. Compared with ProFC in healthy cartilage, cellular response to cytokine stimulus and angiogenesis signals decreased in OA cartilage, while extracellular matrix organization, collagen fibril organization, and ossification increased in OA cartilage, indicating the dysfunction of ProFC. Interestingly, not only ProFC has significantly expanded, but also a new subset, namely ProFC-2 showed up in OA cartilage. Different from ProFC, ProFC-2 showed increased cytokine signaling, inflammatory response, and cellular responses to stimuli. Thus ProFC-2 is an OA cartilage-specific subpopulation and may contribute to the development of OA via inflammation. HomC is known for its protective effects against cartilage degeneration and its pronounced expression of human circadian clock rhythm genes. 
Here, we successfully identified well-defined gene markers for HomC, including ATF3, DDIT3, and GDF15, 21,65 all of which have been linked to collagen synthesis, chondrocyte proliferation, and chondrocyte differentiation. Interestingly, our results showed that the proportion of HomC in OA cartilage was significantly lower than that in healthy cartilage, providing an interesting insight into the molecular mechanism of OA.

In summary, this study provided a single-cell transcriptomic atlas of chondrocytes in healthy cartilage. In particular, we identified a novel HTC subset, namely HTC-1, that specifically expressed genes associated with cell apoptosis and programmed cell death. We identified two main trajectories of chondrocytes, one of which differentiates into FC, while the other terminates in apoptosis. A comparison of chondrocyte subsets between healthy cartilage and OA cartilage showed that the ProFC and HTC-1 populations expanded in OA patients, whereas the HomC population decreased. Interestingly, we also discovered an OA-specific ProFC subset, namely ProFC-2, which showed enhanced cytokine signaling and inflammatory response. Therefore, ProFC-2 may contribute to the development of OA via inflammation signaling. In short, our study promotes a better understanding of chondrocyte heterogeneity in articular cartilage and also provides new insight into the mechanisms underlying the progression of OA.

Wenfei Jin and Li Duan conceived and designed the project. Qi Zhang, Bin Zeng, and Guanming Chen performed the experiments. Changyuan Huang analyzed the scRNA-seq data with contributions from Wenhong Hou and Bo Zhou. Wenfei Jin, Ni Hong, and Guozhi Xiao supervised this project and interpreted the results. Changyuan Huang and Wenfei Jin wrote and revised the manuscript, with input from other authors. All authors read and approved the manuscript. All authors declare no conflicts of interest with the content of this manuscript. The authors declare no affiliation with or financial involvement in organizations or entities with a direct financial interest in the subject matter or materials discussed in the manuscript. This study was supported by the National Key R&D Program of China, the National Natural Science Foundation of China, the Guangdong Basic and Applied Basic Research Foundation (China), the Key-Area Research and Development Program of Guangdong Province, China, the International Science and Technology Cooperation Program of Guangdong, China, the Shenzhen Innovation Committee of Science and Technology (Guangdong, China), the Shenzhen Science and Technology Program (Guangdong, China), and the Shenzhen High-level Hospital Construction Fund (Guangdong, China). The raw single-cell RNA sequencing data generated for this study can be accessed from the Genome Sequence Archive of the Beijing Institute of Genomics (BIG) Data Center, BIG, Chinese Academy of Sciences, under accession number HRA004154 at http://bigd.big.ac.cn/gsa-human . The scRNA-seq data of chondrocytes, Sw_data, are available in GEO. | Review | biomedical | en | 0.999997
PMC11697247 | CDEs are prominent in quantum research and engineering. When a mathematical model is created to address a real-world physical phenomenon, it often takes the shape of a CDE. By way of illustration, differential equations with a complex dependent variable are commonly used to explain the vibrations of a one-mass system with two degrees of freedom. An overview of the various applications of differential equations with complex dependent variables is available in the literature. However, analytic techniques alone cannot provide a perfect solution; to overcome this challenge, numerical methods must be applied. In the last few years, extensive research has been conducted on CDEs. Some of these studies include a geometric method in any domain based on meromorphic functions, a topological explanation of certain CDE solutions involving multi-valued coefficients, the complex oscillation of some linear CDEs, growth estimates of linear CDEs, polynomial and rational approximations of analytic functions in the complex plane, meromorphic solutions of linear differential equations, [ p , q ]-order linear differential equations in the complex plane, a higher-order periodic linear differential equation problem, the solution of initial value problems for retarded differential equations on a complex domain, an analytic method for nonlinear CDEs, the solution growth of algebraic systems of nonlinear CDEs, and meromorphic and entire solutions characterized by Julia limiting directions, as well as solutions of nonlinear CDEs in some function spaces.

The following equation is a representation of the generalized m th order CDE with complex variable coefficients: (1) ∑ k = 0 m Q k ( z ) f ( k ) ( z ) = h ( z ) ; m ∈ N, where z is the complex variable and Q k ( z ) and h ( z ) are analytic functions in the following rectangular domain D of the complex plane C: D = { z ∈ C , z = x + i y , i = √−1 ; a ≤ x ≤ b , c ≤ y ≤ d ; a , b , c , d ∈ R } . Eq. (1) is a general CDE written in derivative form, considered together with the following mixed conditions: (2) ∑ k = 0 m − 1 ∑ l = 0 L [ b r k f ( k ) ( ξ l ) + c r k f ( k ) ( z 0 ) ] = λ r ; L ∈ N and r = 0 , 1 , 2 , ⋯ , ( m − 1 ) , where b r k , c r k , and λ r are suitable real or complex constants and ξ l , z 0 ∈ D .

Recently, solutions of CDEs have been estimated using numerous numerical techniques, for instance the Taylor collocation approach and operational matrix methods, and methods based on Bessel, Legendre, Euler, orthogonal Bernstein, Fibonacci, Bernoulli, and Hermite polynomials. One of the most renowned weighted residual numerical methods for solving differential equations is the Galerkin method, which plays a significant role in the numerical solution of differential equations. For instance, Galerkin methods based on wavelets and on Chebyshev, Taylor, Petrov, Legendre, Hermite, Bernstein, exponential B-spline, and Bernoulli bases have been used to solve differential equations, integral and integro-differential equations, Volterra integro-differential equations, Fredholm integro-differential equations, eigenvalue problems, delay differential equations, Burgers' equations, KdV equations, nonlinear partial differential equations, and perturbed partial differential equations.
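Before the method is described, it may help to fix how the abstract data of Eqs. (1) and (2) translate into concrete inputs. The snippet below is a minimal illustration (not taken from the paper) of one way to encode the coefficient functions Q_k, the right-hand side h, the rectangular domain D, and the conditions in Python; the concrete values correspond to Example 1 treated later in the paper (f″ + z f = e^z + z e^z with f(0) = f′(0) = 1) and are included only as an assumed illustration.

```python
# Illustrative encoding (not from the paper) of the problem data of Eqs. (1)-(2).
import cmath

m = 2                                         # order of the CDE in Eq. (1)
Q = [lambda z: z,                             # Q_0(z)
     lambda z: 0.0 + 0.0j,                    # Q_1(z)
     lambda z: 1.0 + 0.0j]                    # Q_2(z)
h = lambda z: cmath.exp(z) + z * cmath.exp(z)         # right-hand side h(z)
a, b, c, d = -1.0, 1.0, -1.0, 1.0             # D = {x + iy : a <= x <= b, c <= y <= d}
z0 = 0.0 + 0.0j                               # expansion point of the Taylor ansatz
initial_values = [1.0 + 0.0j, 1.0 + 0.0j]     # f(z0), f'(z0): a special case of Eq. (2)
```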
It has come to our attention that previous research has not applied the Galerkin method to the numerical solution of CDEs. To fill this research gap, we present a new technique for solving CDEs called the Taylor Galerkin Method (TGM). TGM uses Taylor series expansions for discretization, resulting in higher-order precision in temporal integration, which is especially useful for applications needing precise temporal resolution. TGM may successfully address nonlinear differential equations by adding Taylor expansions, which better represent the behavior of nonlinear terms. While TGM can achieve high accuracy, it may incur additional computational costs due to the need for higher-order derivatives and the complexity of the numerical implementation. The collocation method, by contrast, directly evaluates the governing equations at specific collocation points. It can have lower computational overhead and cost for certain problems, especially when using fewer basis functions. However, it can struggle with problems that involve sharp gradients or discontinuities, as the collocation points may not adequately capture local behavior; poorly chosen points can lead to suboptimal convergence and inaccuracies, and the method can produce non-unique solutions, particularly when dealing with ill-posed problems or insufficiently defined basis functions. Both methods may encounter difficulties with boundary conditions, particularly for complicated geometries or where exact enforcement is required. However, TGM can be combined with a variety of spatial discretization techniques, including both structured and unstructured domains, extending its applicability to a wide range of problems. When dealing with complex shapes or domains, TGM uses a finite element discretization, which is highly flexible in terms of geometry: the domain is subdivided into smaller elements, and the method solves the problem over these elements using basis functions that are defined locally on each element. In this method, boundary conditions must be incorporated into both the time-stepping scheme (via the Taylor expansion) and the spatial discretization (Galerkin method). A sample error analysis illustrating this behavior is shown in Fig. 1. For a given problem and the same number of basis functions, TGM achieves higher-order accuracy by incorporating derivatives of the solution, which can significantly reduce the approximation error, especially in smooth regions of the solution. On the other hand, since the collocation method directly evaluates the governing equations at specific collocation points, an insufficient number or poor distribution of points, particularly near discontinuities or sharp gradients, leads to larger errors.

Fig. 1 Visual depiction of the absolute error analysis for N = 5 ( ℑ m part).

The objective of the present research is to obtain an approximate solution f ˜ N ( z ) of Eq. (1) subject to the conditions (2), expressed as a Taylor polynomial of degree N about the point z = z 0 : (3) f ˜ N ( z ) = ∑ j = 0 N a j ( z − z 0 ) j j ! ; z , z 0 ∈ D and N ≥ m , where the unknown Taylor coefficients a j , j = 0 , 1 , 2 , ⋯ , N are to be determined. These coefficients will be found by the proposed TGM. Since the proposed method is based on a Taylor series expansion, it may inherently handle higher-order derivatives more naturally than some other methods.
This could lead to increased accuracy when approximating solutions that involve high spatial or temporal gradients. Computational complexity for the present method can grow rapidly, especially for multi-dimensional problems or when higher-order derivatives need to be accounted for. This could lead to a significant increase in memory allocation and high process time requirements, making TGM less efficient for very large-scale or real-time applications. TGM needs to carefully handle boundary conditions (e.g., Dirichlet, Neumann, or mixed) for CDEs. Since the solution is complex-valued, both the real ( ℜ e ) and imaginary ( ℑ m ) parts must satisfy the boundary conditions. At first, we consider the approximate solution f ˜ N ( z ) and its k th derivative f ˜ N ( k ) ( z ) in the following form , , (4) f ˜ N ( z ) ≡ f ˜ N ( 0 ) ( z ) = ∑ j = 0 N a j ( z − z 0 ) j j ! = ∑ j = 0 N a j θ j ( z ) and (5) f ˜ N ( k ) ( z ) ≡ d k [ f ˜ N ( z ) ] d z k = d k d z k [ ∑ j = 0 N a j θ j ( z ) ] = ∑ j = 0 N a j d k d z k [ θ j ( z ) ] ; k ∈ N . where θ j ( z ) = ( z − z 0 ) j j ! are considering the basis function for TGM. We first gather every term in the CDE (1) on the left side to get the residual function , . Now we substitute the relation (5) into the Eq. (1) , then we obtain the corresponding residual function of f ˜ N ( z ) as follows, (6) R ( f ˜ N ) = ∑ k = 0 m Q k ( z ) f ˜ N ( k ) ( z ) − h ( z ) ⇒ R ( f ˜ N ) = ∑ k = 0 m Q k ( z ) [ ∑ j = 0 N a j d k d z k [ θ j ( z ) ] ] − h ( z ) . We have to do to get a weighted residual is multiply the integral over the residual’s domain D by a weighting function w ( z ) , i . e . ∫ D R ( f ˜ N ) w ( z ) d z . By choosing ( N + 1 ) weight functions, w i ( z ) for i = 0 , 1 , 2 , ⋯ , N ; and to find out the ( N + 1 ) unknown coefficients a j of Eq. (3) , we have to solve ( N + 1 ) equations that result from putting these ( N + 1 ) weighted residuals to zero. The ( N + 1 ) weighted residual for w i ( z ) is defined as follows (7) R i ( f ˜ N ) ≡ ∫ D R ( f N ˜ ) w i ( z ) d z for i = 0 , 1 , ⋯ , N . Since the weighted residual method requires , , , R i ( f ˜ N ) = 0 for i = 0 , 1 , ⋯ , N . That implies (8) ∫ D R ( f N ˜ ) w i ( z ) d z = 0 for i = 0 , 1 , ⋯ , N . The main task of the TGM is to match the weight functions to the basis functions of the approximate solution f ˜ N ( z ) . That is, (9) w i ( z ) = θ i ( z ) for i = 0 , 1 , ⋯ , N . Now, by substituting the Eq. (6) and (9) into the Eq. (8) , then the Galerkin weighted residual equations or simply the Galerkin equations are , (10) ∫ D [ ( ∑ k = 0 m Q k ( z ) ( ∑ j = 0 N a j d k d z k [ θ j ( z ) ] ) − h ( z ) ) θ i ( z ) ] d z = 0 ⇒ ∫ D [ ∑ k = 0 m Q k ( z ) ( ∑ j = 0 N a j d k d z k [ θ j ( z ) ] ) θ i ( z ) ] d z = ∫ D h ( z ) θ i ( z ) d z ⇒ ∑ j = 0 N a j [ ∫ D ( ∑ k = 0 m Q k ( z ) ( d k d z k [ θ j ( z ) ] ) θ i ( z ) ) d z ] = ∫ D h ( z ) θ i ( z ) d z for i = 0 , 1 , ⋯ , N . The Eq. (10) can be written in conventional matrix form as follows, (11) [ K ] { A } = { C } where (12) K = [ k i , j ] ; k i , j = ∫ D [ ∑ k = 0 m Q k ( z ) ( d k d z k [ θ j ( z ) ] ) θ i ( z ) ] d z for i , j = 0 , 1 , ⋯ , N and (13) C = [ c i ] T ; c i = ∫ D h ( z ) θ i ( z ) d z for i = 0 , 1 , ⋯ , N and (14) A = [ a j ] T for j = 0 , 1 , 2 , ⋯ , N . 
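As a concrete illustration of Eqs. (10)–(14), the following is a minimal Python sketch (not the authors' code) of how the Galerkin matrix K and the load vector C can be assembled numerically for a problem of the form (1). The worked matrix reported later for Example 1 is consistent with reading the Galerkin integral over D as a complex line integral taken from the corner a + ic to the corner b + id of the rectangle (the integrands are entire, so the value is path independent); that reading, the Gauss–Legendre quadrature, and all names are assumptions made only for this sketch.

```python
# Minimal sketch (assumption-laden, not the authors' code): assembly of the
# Galerkin system K a = C of Eqs. (11)-(14) with basis theta_j(z) = (z - z0)^j / j!.
import numpy as np
from math import factorial

def theta(j, k, z, z0=0.0 + 0.0j):
    """k-th derivative of theta_j evaluated at z (cf. Eq. (5)); zero for k > j."""
    if k > j:
        return 0.0 + 0.0j
    return (z - z0) ** (j - k) / factorial(j - k)

def line_integral(F, z_start, z_end, n_nodes=64):
    """Gauss-Legendre approximation of the complex line integral of F from z_start to z_end."""
    t, w = np.polynomial.legendre.leggauss(n_nodes)
    z = z_start + (z_end - z_start) * (t + 1.0) / 2.0
    return (z_end - z_start) / 2.0 * np.sum(w * np.array([F(zi) for zi in z]))

def assemble(Q, h, N, z_start, z_end, z0=0.0 + 0.0j):
    """Galerkin matrix K (Eq. (12)) and load vector C (Eq. (13))."""
    m = len(Q) - 1
    K = np.zeros((N + 1, N + 1), dtype=complex)
    C = np.zeros(N + 1, dtype=complex)
    for i in range(N + 1):
        for j in range(N + 1):
            integrand = lambda z, i=i, j=j: sum(
                Q[k](z) * theta(j, k, z, z0) for k in range(m + 1)) * theta(i, 0, z, z0)
            K[i, j] = line_integral(integrand, z_start, z_end)
        C[i] = line_integral(lambda z, i=i: h(z) * theta(i, 0, z, z0), z_start, z_end)
    return K, C
```

With the Example 1 data sketched earlier (Q = [z, 0, 1], h(z) = e^z + z e^z, z_start = −1 − i, z_end = 1 + i), this assembly reproduces entries of the kind quoted in that example, e.g. k_{0,1} ≈ −4/3 + 4i/3 and c_0 ≈ −0.3103 + 3.6453i, which is what motivates the line-integral reading assumed above.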
The appropriate matrix configuration for the mixed conditions (2) can be obtained in the following manner , , (15) ∑ k = 0 m − 1 ∑ l = 0 L [ b r k d k d z k ( ∑ j = 0 N a j θ j ( ξ l ) ) + c r k d k d z k ( ∑ j = 0 N a j θ j ( z 0 ) ) ] = λ r ⇒ ∑ j = 0 N a j [ ∑ k = 0 m − 1 ∑ l = 0 L ( b r k d k θ j ( ξ l ) d z k + c r k d k θ j ( z 0 ) d z k ) ] = λ r for r = 0 , 1 , 2 , ⋯ , ( m − 1 ) . The Eq. (15) can be written in conventional matrix form as follows, (16) [ U ] { A } = { λ r } where (17) U = [ u r , j ] ; u r , j = ∑ k = 0 m − 1 ∑ l = 0 L ( b r k d k d z k θ j ( ξ l ) + c r k d k d z k θ j ( z 0 ) ) for r = 0 , 1 , ⋯ , ( m − 1 ) ; j = 0 , 1 , ⋯ , N . The augmented matrix form of Eq. (11) becomes , (18) [ K : C ] = [ k i , j : c i ] ; i , j = 0 , 1 , 2 , ⋯ , N ⇒ [ K : C ] = [ k 0 , 0 k 0 , 1 ⋯ k 0 , N : c 0 k 1 , 0 k 1 , 1 ⋯ k 1 , N : c 1 ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ k N − 1 , 0 k N − 1 , 1 ⋯ k N − 1 , N : c N − 1 k N , 0 k N , 1 ⋯ k N , N : c N ] which contains ( N + 1 ) rows. The augmented matrix form of Eq. (16) yields (19) [ U : λ ] = [ u r , j : λ r ] ; r = 0 , 1 , 2 , ⋯ , ( m − 1 ) ; j = 0 , 1 , ⋯ , N ⇒ [ U : λ ] = [ u 0 , 0 u 0 , 1 ⋯ u 0 , N : λ 0 u 1 , 0 u 1 , 1 ⋯ u 1 , N : λ 1 ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ u m − 1 , 0 u m − 1 , 1 ⋯ u m − 1 , N : λ m − 1 ] which contains m rows. Thus, it is possible to determine the unidentified Taylor coefficients a j ; j = 0 , 1 , 2 , ⋯ , N , associated with the equivalent solution of the problem (1) , which is composed of Eq. (11) and conditions (16) , by swapping the m row matrices (19) out the last m rows of the augmented matrix (18) . We have the new augmented matrix form as follows , , (20) [ K * : C * ] = [ k 0 , 0 k 0 , 1 ⋯ k 0 , N : c 0 k 1 , 0 k 1 , 1 ⋯ k 1 , N : c 1 ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ k N − m , 0 k N − m , 1 ⋯ k N − m , N : c N − m u 0 , 0 u 0 , 1 ⋯ u 0 , N : λ 0 ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ u m − 1 , 0 u m − 1 , 1 ⋯ u m − 1 , N : λ m − 1 ] or, the equivalent matrix equation (21) [ K * ] { A } = { C * } . If det K * ≠ 0 , we can rewrite the Eq. (21) as (22) { A } = [ K * ] − 1 { C * } and it is unique to determining the column matrix A which is the unknown coefficient of the Taylor polynomial (3) . Therefore, there exists a unique solution to the m th order linear CDE with variable coefficients under the given conditions. For nonlinear CDE, we construct a nonlinear system of equations with undetermined Taylor coefficients. We can solve this nonlinear system of equations numerically by well-known iterative techniques such as Newton’s Method, Levenberg-Marquardt, and Broyden’s Method , . For a better approximation, we have to increase the degree of N polynomial (3) . In this section, by using the residual function of the m th order CDE provided by Eq. (1) , we estimate the error for the proposed method. Next, we demonstrate how to use this estimation to improve the approximate solution of the equation, known as the corrected solution. Finally, the Taylor theorem is utilized to determine an error bound on the corrected solution’s error , , . Let us consider the residual function of Eq. (1) as follows: (23) R ( z ) = ∑ k = 0 m Q k ( z ) f ( k ) ( z ) − h ( z ) = 0 . Now substitute the approximate solution f ˜ N ( z ) in place of f ( z ) to the Eq. (23) , we get (24) R N ( z ) = ∑ k = 0 m Q k ( z ) f ˜ N ( k ) ( z ) − h ( z ) as the residual function of f ˜ N ( z ) . Subtracting Eq. (23) from Eq. (24) , we obtain (25) ∑ k = 0 m Q k ( z ) E N ( k ) ( z ) = − R N ( z ) which is just as Eq. 
(1) with non-homogeneous term − R N ( z ) instead of h ( z ) and f ( z ) − f N ˜ ( z ) is restored by E N ( z ) . Since the approximate solution f ˜ N ( z ) also assure the mixed condition (2) , we obtain the corresponding homogeneous condition (26) ∑ k = 0 m − 1 ∑ l = 0 L [ b r k E N ( k ) ( ξ l ) + c r k E N ( k ) ( z 0 ) ] = 0 ; r = 0 , 1 , 2 , ⋯ , ( m − 1 ) . This is the mixed condition of the Eq. (25) . Now using the solution method described in Method details: Phase 1 and Method details: Phase 2 section to obtain an approximation solution E N , M ( z ) to Eq. (25) , where M is any positive integer. Lastly, we apply this approximation to obtain the approximate corrected solution for Eq. (1) , which is given by (27) f ˜ N , M ( z ) = f ˜ N ( z ) + E N , M ( z ) where the actual error of f ˜ N , M ( z ) is given by f ( z ) − f ˜ N , M ( z ) . In the following theorem, the truncation error of the Taylor expansion for the exact solution of Eq. (1) is used to evaluate the error bound for the approximate solution f ˜ N ( z ) . Theorem 1 Let f ˜ N ( z ) be the approximate solution and f ( z ) be the exact solutions of Eq. (1) . If f ( z ) has ( N + 1 ) times continuous derivative, then the error bound for the absolute error is given by | f ( z ) − f ˜ N ( z ) | ≤ | R N T ( z ) | + | f N T ( z ) − f ˜ N ( z ) | . Where f N T ( z ) denotes the N t h degree Taylor polynomial of f ( z ) around the point z = z 0 ∈ D and R N T ( z ) represents its Cauchy form remainder term . Proof The Taylor series can be rewritten with reminder term of f ( z ) around the point z 0 ∈ D as f ( z ) = ∑ j = 0 N ( z − z 0 ) j j ! f ( j ) ( z 0 ) + R N T ( z ) . Where R N T ( z ) = 1 2 π i ∮ γ ( z − z 0 t − z 0 ) ( N + 1 ) f ( t ) t − z d t is the Cauchy form reminder term of the Taylor expansion of f ( z ) and this contour integral is evaluated around the circle γ which centered at z 0 , such that γ ⊂ D . Consequently, R N T ( z ) = f ( z ) − f N T ( z ) . By using this in conjunction with the triangle inequality, we get | f ( z ) − f ˜ N ( z ) | = | f ( z ) − f ˜ N ( z ) + f N T ( z ) − f N T ( z ) | ≤ | f ( z ) − f N T ( z ) | + | f N T ( z ) − f ˜ N ( z ) | = | R N T ( z ) | + | f N T ( z ) − f ˜ N ( x ) | . As a result, we have located an upper bound for the absolute error based on the Taylor truncation error of the exact solution. □ This section will demonstrate the numerical solution of three linear and two nonlinear CDEs by applying the proposed method. Nonlinear CDEs often involve terms like products of the unknown solution or its derivatives, which complicate both the iterative solution and error correction. The Newton-Raphson or Picard iteration methods are used to decouple these nonlinear terms at each iteration. All results are presented numerically, along with the exact solution and comparison. Since the N degree polynomial (3) is an approximate solution of Eq. (1) , when the approximation solutions f ˜ N ( z ) and exact solution f ( z ) are substituted in the following equation, we can evaluate the absolute errors E N ( z ) at the subsequent particular points within the specified domain; that is, for z = z j ∈ D , (28) E N ( z j ) = | f N ˜ ( z j ) − f ( z j ) | . The absolute error E N ( z ) diminishes when N grows to a significant size. We can also evaluate the maximum absolute error L ∞ n o r m as follows: (29) L ∞ n o r m = max [ E N ( z j ) ] . 
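Continuing the assembly sketch given earlier, the hypothetical functions below illustrate the remaining steps of the scheme: replacing the last m rows of the system by condition rows, as in Eqs. (18)–(21) (here specialised to simple initial conditions f^(r)(z0) = λ_r), solving Eq. (22) for the Taylor coefficients, and evaluating the absolute errors of Eqs. (28)–(29). The helper theta is repeated so that the block is self-contained; the restriction to initial conditions and all names are assumptions of this sketch.

```python
# Minimal sketch (not the authors' code): impose conditions, solve, measure error.
import numpy as np
from math import factorial

def theta(j, k, z, z0=0.0 + 0.0j):
    if k > j:
        return 0.0 + 0.0j
    return (z - z0) ** (j - k) / factorial(j - k)

def solve_with_initial_conditions(K, C, initial_values, z0=0.0 + 0.0j):
    """Swap the last m rows of [K : C] for the condition rows theta_j^(r)(z0) = lambda_r."""
    N = K.shape[0] - 1
    m = len(initial_values)
    K_star, C_star = K.astype(complex).copy(), C.astype(complex).copy()
    for r, lam in enumerate(initial_values):
        K_star[N - m + 1 + r, :] = [theta(j, r, z0, z0) for j in range(N + 1)]
        C_star[N - m + 1 + r] = lam
    return np.linalg.solve(K_star, C_star)      # Taylor coefficients a_0 ... a_N of Eq. (3)

def f_approx(coeffs, z, z0=0.0 + 0.0j):
    """Evaluate the degree-N Taylor approximation of Eq. (3)."""
    return sum(a_j * theta(j, 0, z, z0) for j, a_j in enumerate(coeffs))

def abs_errors(coeffs, exact, points, z0=0.0 + 0.0j):
    """E_N(z_j) of Eq. (28) and the L-infinity norm of Eq. (29) at the given test points."""
    E = np.array([abs(f_approx(coeffs, zj, z0) - exact(zj)) for zj in points])
    return E, E.max()
```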
Example 1 Let us examine the second-order non-homogeneous CDE that is linear and has variable coefficients , , (30) f ″ ( z ) + z f ( z ) = e z + z e z ; z ∈ C . Where m = 2 , Q 0 ( z ) = z , Q 1 ( z ) = 0 , Q 2 ( z ) = 1 , h ( z ) = e z + z e z and subject to the initial conditions are (31) f ( 0 ) = 1 , f ′ ( 0 ) = 1 . The corresponding transcendental entire solution of Eq. (30) is f ( z ) = e z and now consider an approximate solution f ˜ 5 ( z ) by the N = 5 degree Taylor polynomial at z 0 = 0 in the following form (32) f ˜ 5 ( z ) = ∑ j = 0 5 a j z j j ! . Thus, we have θ j ( z ) = z j j ! for j = 0 , 1 , ⋯ , 5 and θ i ( z ) = z i i ! for i = 0 , 1 , ⋯ , 5 Assume the Galerkin integral domain D = { z ∈ C , z = x + i y , i = − 1 ; − 1 ≤ x ≤ 1 , − 1 ≤ y ≤ 1 } . From Eq. (18) , we obtain the augmented matrix by using Eq. (12) and (13) as follows: [ K : C ] = [ 0 − 4 3 + 4 i 3 2 + 2 i − 4 15 − 4 i 15 − 2 3 + 2 i 3 2 105 − 2 i 105 : − 0.3103 + 3.6453 i − 4 3 + 4 i 3 0 − 4 5 − 4 i 5 − 4 3 + 4 i 3 2 21 − 2 i 21 − 4 15 − 4 i 15 : − 3.6136 + 1.4915 i 0 − 4 5 − 4 i 5 − 2 3 + 2 i 3 4 21 − 4 i 21 − 2 5 − 2 i 5 2 135 + 2 i 135 : − 1.6120 − 0.7535 i − 4 15 − 4 i 15 0 4 21 − 4 i 21 − 4 15 − 4 i 15 2 81 + 2 i 81 4 63 − 4 i 63 : − 0.2513 − 0.7562 i 0 2 21 − 2 i 21 − 1 15 − i 15 2 81 + 2 i 81 1 21 − i 21 − 1 495 + i 495 : 0.1046 − 0.1764 i 2 105 − 2 i 105 0 2 235 + 2 i 235 2 105 − 2 i 105 − 1 495 + i 495 2 405 + 2 i 405 : 0.0553 − 0.0161 i ] . From Eq. (19) , the augmented matrix form for the initial condition (31) is [ U : λ ] = [ 1 0 0 0 0 0 : 1 0 1 0 0 0 0 : 1 ] . From Eq. (20) , we obtain the new augmented matrix form by applying the initial condition as follows: [ K * : C * ] = [ 0 − 4 3 + 4 i 3 2 + 2 i − 4 15 − 4 i 15 − 2 3 + 2 i 3 2 105 − 2 i 105 : − 0.3103 + 3.6453 i − 4 3 + 4 i 3 0 − 4 5 − 4 i 5 − 4 3 + 4 i 3 2 21 − 2 i 21 − 4 15 − 4 i 15 : − 3.6136 + 1.4915 i 0 − 4 5 − 4 i 5 − 2 3 + 2 i 3 4 21 − 4 i 21 − 2 5 − 2 i 5 2 135 + 2 i 135 : − 1.6120 − 0.7535 i − 4 15 − 4 i 15 0 4 21 − 4 i 21 − 4 15 − 4 i 15 2 81 + 2 i 81 4 63 − 4 i 63 : − 0.2513 − 0.7562 i 1 0 0 0 0 0 : 1 0 1 0 0 0 0 : 1 ] . Here, det ( K * ) ≠ 0 and so by solving the linear system of equations [ K * ] { A } = { C * } , the unknown Taylor coefficients a j become [ a 0 a 1 a 2 a 3 a 4 a 5 ] = [ 1.000000000000000 + 0.000000000000000 i 1.000000000000000 + 0.000000000000000 i 1.014201842129353 + 0.001179180289414 i 1.008159146912473 − 0.001763403006139 i 0.990894060694918 + 0.141332871646214 i 0.995534591695822 + 0.111337779580632 i ] . Therefore, the approximate solution (32) of Eq. (30) is f 5 ˜ ( z ) = 1 + z + z 2 2 ! + z 3 3 ! + z 4 4 ! + z 5 5 ! . Similarly, we can also calculate the approximate solution of Eq. (30) for N = 9 . That is f 9 ˜ ( z ) = 1.0 + z + z 2 2 ! + z 3 3 ! + z 4 4 ! + z 5 5 ! + z 6 6 ! + z 7 7 ! + z 8 8 ! + z 9 9 ! . For N = 5 , 9 the tabular comparison and for N = 5 the graphical comparison, the absolute error produced by the present method is compared with the outcomes generated by the Taylor Collocation method and the Bessel Collocation method are shown in Table 1 and in Fig. 2 for ℜ e part, and in Table 2 and in Fig. 3 for ℑ m part. Example 2 Let us examine the second-order non-homogeneous CDE that is linear and has variable coefficients , , (33) f ″ ( z ) + z f ′ ( z ) + 2 z f ( z ) = 2 z sin z + z cos z − sin z ; z ∈ C . Where m = 2 , Q 0 ( z ) = 2 z , Q 1 ( z ) = z , Q 2 ( z ) = 1 , h ( z ) = 2 z sin z + z cos z − sin z and subject to the initial conditions are (34) f ( 0 ) = 0 , f ′ ( 0 ) = 1 . 
The corresponding transcendental entire solution of Eq. (33) is f ( z ) = sin z . Assume the Galerkin integral domain D = { z ∈ C , z = x + i y , i = − 1 ; − 1 ≤ x ≤ 1 , − 1 ≤ y ≤ 1 } . For N = 5 , by applying the proposed method discussed in Method details: Phase 1 and Method details: Phase 2 section, we obtain the approximate solution of the problem (33) is Table 1 Absolute error E N ( z ) analysis of Example 1 ( ℜ e part) for N = 5 , 9 . Table 1 z j Taylor Collocation E 5 ( z j ) ( ℜ e part) Bessel Collocation E 5 ( z j ) ( ℜ e part) Present method E 5 ( z j ) ( ℜ e part) − 1.00 − 1.00 i 1.5455878 × 10 − 1 4.56461167157 × 10 − 2 1.55825712363 × 10 − 4 − 0.60 − 0.60 i 4.306944 × 10 − 2 4.69890942146 × 10 − 3 2.92740621909 × 10 − 5 − 0.20 − 0.10 i 7.56259 × 10 − 3 3.53588372881 × 10 − 6 1.70147084216 × 10 − 4 − 0.20 − 0.20 i 2.021649 × 10 − 3 5.28547892122 × 10 − 5 2.88995538694 × 10 − 5 − 0.10 − 0.20 i 6.3395028 × 10 − 3 1.17953217994 × 10 − 5 2.07165233691 × 10 − 4 − 0.10 + 0.20 i 6.3395028 × 10 − 3 1.17953217994 × 10 − 5 1.88735059177 × 10 − 4 − 0.10 − 0.10 i 2.685866 × 10 − 4 4.08981352739 × 10 − 6 9.54656886026 × 10 − 6 − 0.10 + 0.10 i 2.685866 × 10 − 4 4.08981352739 × 10 − 6 1.52868641185 × 10 − 5 0.00 + 0.00 i 0 0 0 0.10 + 0.10 i 3.00956 × 10 − 4 5.34405858898 × 10 − 7 1.37335135544 × 10 − 5 0.10 − 0.10 i 3.00956 × 10 − 4 5.34405858898 × 10 − 7 8.60026504344 × 10 − 6 0.10 − 0.20 i 9.461532 × 10 − 3 5.89808309592 × 10 − 6 2.18213126268 × 10 − 4 0.10 + 0.20 i 9.461532 × 10 − 3 5.89808309592 × 10 − 6 2.37584226249 × 10 − 4 0.20 + 0.20 i 2.545070 × 10 − 3 4.02982871539 × 10 − 6 6.05804000660 × 10 − 5 0.20 + 0.10 i 8.12515 × 10 − 3 3.01135500091 × 10 − 6 1.81330834974 × 10 − 4 0.60 + 0.60 i 8.494471 × 10 − 2 1.04420502024 × 10 − 4 4.39688872244 × 10 − 4 1.00 + 1.00 i 4.774621 × 10 − 1 1.08853442660 × 10 − 2 3.93946395961 × 10 − 5 L ∞ n o r m → 4.774621 × 10 − 1 4.56461167157 × 10 − 2 4.39688872244 × 10 − 4 z j Taylor Collocation E 9 ( z j ) ( ℜ e part) Bessel Collocation E 9 ( z j ) ( ℜ e part) Present method E 9 ( z j ) ( ℜ e part) − 1.00 − 1.00 i 1.108733182 × 10 − 1 3.76537324165 × 10 − 4 2.57859638130 × 10 − 9 − 0.60 − 0.60 i 1.45548425 × 10 − 2 6.06258539787 × 10 − 6 7.58469130873 × 10 − 9 − 0.20 − 0.10 i 3.294138 × 10 − 4 2.45158049416 × 10 − 9 2.26350818464 × 10 − 8 − 0.20 − 0.20 i 1.061492 × 10 − 4 4.87937790172 × 10 − 9 9.44547400322 × 10 − 10 − 0.10 − 0.20 i 3.063314 × 10 − 4 1.05705222352 × 10 − 9 2.49267239421 × 10 − 8 − 0.10 + 0.20 i 3.063314 × 10 − 4 1.05705244557 × 10 − 9 3.61462878580 × 10 − 8 − 0.10 − 0.10 i 1.9201 × 10 − 6 3.5108893570 × 10 − 10 6.30457370627 × 10 − 10 − 0.10 + 0.10 i 1.9201 × 10 − 6 3.5108893570 × 10 − 10 1.51172321082 × 10 − 9 0.00 + 0.00 i 0 0 0 0.10 + 0.10 i 3.429 × 10 − 5 1.4785728197 × 10 − 11 1.29170784506 × 10 − 9 0.10 − 0.10 i 3.429 × 10 − 5 1.4785728197 × 10 − 11 6.39311490701 × 10 − 10 0.10 − 0.20 i 4.95029 × 10 − 4 1.2836420815 × 10 − 10 4.06109024591 × 10 − 8 0.10 + 0.20 i 4.95029 × 10 − 4 1.2836420815 × 10 − 10 2.88975497779 × 10 − 8 0.20 + 0.20 i 4.11766 × 10 − 4 1.7463808177 × 10 − 11 5.50078312939 × 10 − 9 0.20 + 0.10 i 3.58652 × 10 − 4 3.4350300381 × 10 − 11 2.34756393087 × 10 − 8 0.60 + 0.60 i 2.7396252 × 10 − 2 1.5724670343 × 10 − 7 1.08960406759 × 10 − 8 1.00 + 1.00 i 2.12823672 × 10 − 1 1.8848182780 × 10 − 5 7.60426705103 × 10 − 10 L ∞ n o r m → 2.12823672 × 10 − 1 3.76537324165 × 10 − 4 4.06109024591 × 10 − 8 Fig. 2 Visual depiction of the absolute error analysis of Example 1 for N = 5 ( ℜ e part). Fig. 
2 Table 2 Absolute error E N ( z ) analysis of Example 1 ( ℑ m part) for N = 5 , 9 . Table 2 z j Taylor Collocation E 5 ( z j ) ( ℑ m part) Bessel Collocation E 5 ( z j ) ( ℑ m part) Present method E 5 ( z j ) ( ℑ m part) − 1.00 − 1.00 i 2.6405612 × 10 − 1 3.20625005945 × 10 − 2 4.61134911108 × 10 − 4 − 0.60 − 0.60 i 1.296049635 × 10 − 1 1.93852368188 × 10 − 3 2.09682236332 × 10 − 3 − 0.20 − 0.10 i 9.025824 × 10 − 3 2.46106053944 × 10 − 5 2.82627329290 × 10 − 4 − 0.20 − 0.20 i 1.8727366 × 10 − 2 7.97723721643 × 10 − 6 5.05755573585 × 10 − 4 − 0.10 − 0.20 i 1.0687015 × 10 − 2 1.50108726619 × 10 − 5 2.62174108542 × 10 − 4 − 0.10 + 0.20 i 1.0687015 × 10 − 2 1.50108726619 × 10 − 5 3.13020553920 × 10 − 4 − 0.10 − 0.10 i 4.964118 × 10 − 3 1.80962075498 × 10 − 6 1.36401933341 × 10 − 4 − 0.10 + 0.10 i 4.964118 × 10 − 3 1.80962075498 × 10 − 6 1.42214405880 × 10 − 4 0.00 + 0.00 i 0 0 0 0.10 + 0.10 i 5.532334 × 10 − 3 1.831842975447 × 10 − 6 1.42946035745 × 10 − 4 0.10 − 0.10 i 5.532334 × 10 − 3 1.831842975447 × 10 − 6 1.46555754649 × 10 − 4 0.10 − 0.20 i 1.011738 × 10 − 2 6.199232248094 × 10 − 6 3.00371756593 × 10 − 4 0.10 + 0.20 i 1.011738 × 10 − 2 6.199232248094 × 10 − 6 2.63978549581 × 10 − 4 0.20 + 0.20 i 2.325711 × 10 − 2 9.399457632647 × 10 − 6 5.56436485635 × 10 − 4 0.20 + 0.10 i 1.2161465 × 10 − 2 3.156056383443 × 10 − 6 3.10634103809 × 10 − 4 0.60 + 0.60 i 2.4723143 × 10 − 1 9.018303218719 × 10 − 4 3.05961727471 × 10 − 3 1.00 + 1.00 i 7.63417273 × 10 − 1 9.857912120237 × 10 − 3 3.03618060646 × 10 − 3 L ∞ n o r m → 7.63417273 × 10 − 1 3.20625005945 × 10 − 2 3.05961727471 × 10 − 3 z j Taylor Collocation E 9 ( z j ) ( ℑ m part) Bessel Collocation E 9 ( z j ) ( ℑ m part) Present method E 9 ( z j ) ( ℑ m part) − 1.00 − 1.00 i 4.026018 × 10 − 2 9.03074377008 × 10 − 5 1.88617719668 × 10 − 9 − 0.60 − 0.60 i 7.679373 × 10 − 3 5.48651788112 × 10 − 6 1.46971560696 × 10 − 8 − 0.20 − 0.10 i 4.924492 × 10 − 4 2.79223615062 × 10 − 9 3.99808868758 × 10 − 8 − 0.20 − 0.20 i 8.613902 × 10 − 4 7.48862591382 × 10 − 9 6.44273901376 × 10 − 8 − 0.10 − 0.20 i 4.202928 × 10 − 4 1.40807795978 × 10 − 9 3.95198737399 × 10 − 8 − 0.10 + 0.20 i 4.202928 × 10 − 4 1.40807795978 × 10 − 9 4.78513258255 × 10 − 8 − 0.10 − 0.10 i 2.3079652 × 10 − 4 8.6068374649 × 10 − 11 1.98854405035 × 10 − 8 − 0.10 + 0.10 i 2.3079652 × 10 − 4 8.6068374649 × 10 − 11 2.25097428782 × 10 − 8 0.00 + 0.00 i 0 0 0 0.10 + 0.10 i 2.656783 × 10 − 4 3.0673394380 × 10 − 11 2.06909033500 × 10 − 8 0.10 − 0.10 i 2.656783 × 10 − 4 3.0673394380 × 10 − 11 2.32382756256 × 10 − 8 0.10 − 0.20 i 3.784386 × 10 − 4 8.5109169711 × 10 − 11 4.63237345777 × 10 − 8 0.10 + 0.20 i 3.784386 × 10 − 4 8.5109169711 × 10 − 11 3.94731838006 × 10 − 8 0.20 + 0.20 i 1.124509 × 10 − 3 1.5459328261 × 10 − 11 7.01385834213 × 10 − 8 0.20 + 0.10 i 6.947382 × 10 − 4 1.4565446071 × 10 − 11 4.38067448607 × 10 − 8 0.60 + 0.60 i 1.0193823 × 10 − 2 6.6743245152 × 10 − 8 6.10843735118 × 10 − 8 1.00 + 1.00 i 9.404928 × 10 − 3 4.4066271108 × 10 − 5 1.21583262896 × 10 − 7 L ∞ n o r m → 4.026018 × 10 − 2 9.03074377008 × 10 − 5 1.21583262896 × 10 − 7 Fig. 3 Visual depiction of the absolute error analysis of Example 1 for N = 5 ( ℑ m part). Fig. 3 f 5 ˜ ( z ) = z + z 2 2 ! + z 3 3 ! + z 4 4 ! + z 5 5 ! . Similarly, we can also calculate the approximate solution of Eq. (33) , we have f 9 ˜ ( z ) = z + z 2 2 ! + z 3 3 ! + z 4 4 ! + z 5 5 ! + z 6 6 ! + z 7 7 ! + z 8 8 ! + z 9 9 ! . 
For N = 5 , 9 the tabular comparison and for N = 5 the graphical comparison, the absolute error produced by the present method is compared with the outcomes generated by the Taylor Collocation method and the Bessel Collocation method are shown in Table 3 and in Fig. 4 for ℜ e part, and in Table 4 and in Fig. 5 for ℑ m part. From the above discussion, the tables and the figures claim more approximation accuracy of the proposed method than the mentioned methods. Example 3 Let us examine the fourth-order non-homogeneous CDE that is linear and has variable coefficients , (35) f ″ ″ ( z ) − 2 z f ″ ( z ) + z f ( z ) = 24 + 19 z + 2 z 2 − 29 z 3 + z 5 z ∈ C . Where m = 4 , Q 0 ( z ) = z , Q 1 ( z ) = 0 , Q 2 ( z ) = − 2 z , Q 3 ( z ) = 0 , Q 4 ( z ) = 1 , h ( z ) = 24 + 19 z + 2 z 2 − 29 z 3 + z 5 and subject to the conditions are (36) f ( 0 ) = − 1 , f ′ ( 0 ) = 2 , f ( 1 ) = − 3 , f ′ ( 1 ) = − 4 . The corresponding exact solution is a fourth-degree polynomial that is f ( z ) = − 1 + 2 z − 5 z 2 + z 4 . Assume the Galerkin integral domain D = { z ∈ C , z = x + i y , i = − 1 ; − 1 ≤ x ≤ 1 , − 1 ≤ y ≤ 1 } . Table 3 Absolute error E N ( z ) analysis of Example 2 ( ℜ e part) for N = 5 , 9 . Table 3 z j Taylor Collocation E 5 ( z j ) ( ℜ e part) Bessel Collocation E 5 ( z j ) ( ℜ e part) Present method E 5 ( z j ) ( ℜ e part) − 1.00 − 1.00 i 2.4326713 × 10 − 1 5.89986041904 × 10 − 2 2.85970519951 × 10 − 4 − 0.60 − 0.60 i 5.740202 × 10 − 2 8.45000949478 × 10 − 3 2.10298845644 × 10 − 4 − 0.20 − 0.10 i 2.81797 × 10 − 4 1.64846642128 × 10 − 4 7.01455471542 × 10 − 6 − 0.20 − 0.20 i 2.212166 × 10 − 3 1.90988842115 × 10 − 4 9.74400536351 × 10 − 6 − 0.10 − 0.20 i 1.532345 × 10 − 3 2.54330433371 × 10 − 4 1.00394364354 × 10 − 5 − 0.10 + 0.20 i 1.532345 × 10 − 3 2.54330433371 × 10 − 4 1.66588489879 × 10 − 5 − 0.10 − 0.10 i 2.775133 × 10 − 4 2.06556950317 × 10 − 5 4.54004234077 × 10 − 7 − 0.10 + 0.10 i 2.775133 × 10 − 4 2.06556950319 × 10 − 5 4.89361311986 × 10 − 6 0.00 + 0.00 i 0 8.5541519885 × 10 − 13 0 0.10 + 0.10 i 2.775107 × 10 − 4 1.4879698500 × 10 − 5 3.72562526995 × 10 − 6 0.10 − 0.10 i 2.775107 × 10 − 4 1.4879698500 × 10 − 5 1.48814842361 × 10 − 6 0.10 − 0.20 i 1.530695 × 10 − 3 6.2325482521 × 10 − 5 1.20027715801 × 10 − 5 0.10 + 0.20 i 1.530695 × 10 − 3 6.2325482521 × 10 − 5 1.92300221186 × 10 − 5 0.20 + 0.20 i 2.212125 × 10 − 3 9.8572923268 × 10 − 5 2.20274275447 × 10 − 5 0.20 + 0.10 i 2.80156 × 10 − 4 1.3159328847 × 10 − 4 3.08519534006 × 10 − 6 0.60 + 0.60 i 5.739872 × 10 − 2 9.6432020506 × 10 − 4 2.43755696897 × 10 − 4 1.00 + 1.00 i 2.432416 × 10 − 1 1.2386559787 × 10 − 3 4.93934953302 × 10 − 5 L ∞ n o r m → 2.4326713 × 10 − 1 5.89986041904 × 10 − 2 2.85970519951 × 10 − 4 z j Taylor Collocation E 9 ( z j ) ( ℜ e part) Bessel Collocation E 9 ( z j ) ( ℜ e part) Present method E 9 ( z j ) ( ℜ e part) − 1.00 − 1.00 i 1.2078499 × 10 − 2 4.115916475067 × 10 − 5 1.74967622014 × 10 − 8 − 0.60 − 0.60 i 2.484785 × 10 − 3 9.820587112296 × 10 − 6 1.20111606850 × 10 − 8 − 0.20 − 0.10 i 1.1912 × 10 − 5 4.429130137650 × 10 − 8 4.51597524796 × 10 − 10 − 0.20 − 0.20 i 8.97123 × 10 − 5 1.804838386798 × 10 − 7 2.22042272375 × 10 − 9 − 0.10 − 0.20 i 6.22013 × 10 − 5 1.093696364723 × 10 − 7 1.95549871291 × 10 − 9 − 0.10 + 0.20 i 6.22013 × 10 − 5 1.093696364723 × 10 − 7 2.23261008978 × 10 − 9 − 0.10 − 0.10 i 1.11861 × 10 − 5 1.616685911531 × 10 − 8 3.17901463180 × 10 − 10 − 0.10 + 0.10 i 1.11861 × 10 − 5 1.616685925409 × 10 − 8 4.41483525557 × 10 − 10 0.00 + 0.00 i 0 6.49920795016 × 10 − 13 0 0.10 + 0.10 i 1.11835 × 
10 − 5 7.01703836702 × 10 − 9 3.25059744464 × 10 − 10 0.10 − 0.10 i 1.11835 × 10 − 5 7.01703836702 × 10 − 9 4.33283141427 × 10 − 10 0.10 − 0.20 i 6.05515 × 10 − 5 2.45374184859 × 10 − 9 2.13889327312 × 10 − 9 0.10 + 0.20 i 6.05515 × 10 − 5 2.45374230656 × 10 − 9 1.94741138691 × 10 − 9 0.20 + 0.20 i 8.96715 × 10 − 5 3.44256884110 × 10 − 8 2.24201153179 × 10 − 9 0.20 + 0.10 i 1.0271 × 10 − 5 3.54173038119 × 10 − 8 4.14418753795 × 10 − 10 0.60 + 0.60 i 2.481483 × 10 − 3 1.98254734962 × 10 − 7 1.10180060964 × 10 − 8 1.00 + 1.00 i 1.2053017 × 10 − 2 4.60677713531 × 10 − 7 1.20032742658 × 10 − 9 L ∞ n o r m → 1.2078499 × 10 − 2 4.115916475067 × 10 − 5 1.74967622014 × 10 − 8 Fig. 4 Visual depiction of the absolute error analysis of Example 2 for N = 5 ( ℜ e part). Fig. 4 Table 4 Absolute error E N ( z ) analysis of Example 2 ( ℑ m part) for N = 5 , 9 . Table 4 z j Taylor Collocation E 5 ( z j ) ( ℑ m part) Bessel Collocation E 5 ( z j ) ( ℑ m part) Present method E 5 ( z j ) ( ℑ m part) − 1.00 − 1.00 i 3.09312 × 10 − 1 1.85893527304 × 10 − 2 4.53596288293 × 10 − 4 − 0.60 − 0.60 i 6.25585 × 10 − 2 6.68868421674 × 10 − 3 3.55780327721 × 10 − 4 − 0.20 − 0.10 i 1.52585 × 10 − 3 3.19933894691 × 10 − 4 9.73150983529 × 10 − 6 − 0.20 − 0.20 i 2.23548 × 10 − 3 5.46524065895 × 10 − 4 2.04215934919 × 10 − 5 − 0.10 − 0.20 i 2.73608 × 10 − 4 1.68336224113 × 10 − 4 1.21921823707 × 10 − 6 − 0.10 + 0.20 i 2.73608 × 10 − 4 1.68336224113 × 10 − 4 1.01332516060 × 10 − 5 − 0.10 − 0.10 i 2.78721 × 10 − 4 1.19731980447 × 10 − 4 2.29471931160 × 10 − 6 − 0.10 + 0.10 i 2.78721 × 10 − 4 1.19731980447 × 10 − 4 1.38653178968 × 10 − 6 0.00 + 0.00 i 0 2.21437262029 × 10 − 28 0 0.10 + 0.10 i 2.77624 × 10 − 4 8.46333017160 × 10 − 5 3.92533850752 × 10 − 6 0.10 − 0.10 i 2.77624 × 10 − 4 8.46333017160 × 10 − 5 2.93506904779 × 10 − 6 0.10 − 0.20 i 2.75787 × 10 − 4 2.05738371287 × 10 − 4 1.61657156734 × 10 − 6 0.10 + 0.20 i 2.75787 × 10 − 4 2.05738371287 × 10 − 4 1.36086931283 × 10 − 7 0.20 + 0.20 i 2.2311 × 10 − 3 2.70937062759 × 10 − 4 2.74365619024 × 10 − 5 0.20 + 0.10 i 1.52364 × 10 − 3 1.23452638561 × 10 − 4 1.75887713028 × 10 − 5 0.60 + 0.60 i 6.2519 × 10 − 2 6.68465941159 × 10 − 4 4.66194239589 × 10 − 4 1.00 + 1.00 i 3.09202 × 10 − 2 1.84717548590 × 10 − 3 1.02296380001 × 10 − 3 L ∞ n o r m → 3.09312 × 10 − 1 1.85893527304 × 10 − 2 1.02296380001 × 10 − 3 z j Taylor Collocation E 9 ( z j ) ( ℑ m part) Bessel Collocation E 9 ( z j ) ( ℑ m part) Present method E 9 ( z j ) ( ℑ m part) − 1.00 − 1.00 i 1.03425 × 10 − 2 2.48152172653 × 10 − 4 1.85987290531 × 10 − 8 − 0.60 − 0.60 i 2.36461 × 10 − 3 9.64545246917 × 10 − 6 1.70240876063 × 10 − 8 − 0.20 − 0.10 i 6.26632 × 10 − 5 1.61341194057 × 10 − 7 1.91601385659 × 10 − 9 − 0.20 − 0.20 i 9.13166 × 10 − 5 1.90064818800 × 10 − 7 2.85283863781 × 10 − 9 − 0.10 − 0.20 i 1.0171 × 10 − 5 2.27097510518 × 10 − 8 3.21724716929 × 10 − 11 − 0.10 + 0.20 i 1.0171 × 10 − 5 2.27097510518 × 10 − 8 7.67970593828 × 10 − 10 − 0.10 − 0.10 i 1.17155 × 10 − 5 4.33176848557 × 10 − 8 4.15969033987 × 10 − 10 − 0.10 + 0.10 i 1.17155 × 10 − 5 4.33176844949 × 10 − 8 3.71587874864 × 10 − 10 0.00 + 0.00 i 0 4.44685976728 × 10 − 27 0 0.10 + 0.10 i 1.06186 × 10 − 5 2.27537023805 × 10 − 8 3.87344639278 × 10 − 10 0.10 − 0.10 i 1.06186 × 10 − 5 2.27537023805 × 10 − 8 3.33844698485 × 10 − 10 0.10 − 0.20 i 1.23496 × 10 − 5 5.64710605222 × 10 − 8 8.27722774918 × 10 − 10 0.10 + 0.20 i 1.23496 × 10 − 5 5.64710598283 × 10 − 8 3.43136409355 × 10 − 11 0.20 + 0.20 i 8.6929 × 10 − 5 5.89471798040 × 10 − 8 2.79208080708 × 10 − 9 
0.20 + 0.10 i 6.04542 × 10 − 5 2.75819321549 × 10 − 8 1.86596570940 × 10 − 9 0.60 + 0.60 i 2.31512 × 10 − 3 1.56472432677 × 10 − 7 2.10176396896 × 10 − 8 1.00 + 1.00 i 1.02328 × 10 − 2 2.25562201583 × 10 − 7 4.69233857440 × 10 − 8 L ∞ n o r m → 1.03425 × 10 − 2 2.48152172653 × 10 − 4 4.69233857440 × 10 − 8 Fig. 5 Visual depiction of the absolute error analysis of Example 2 for N = 5 ( ℑ m part). Fig. 5 For N = 4 , by applying the proposed method discussed in Method details: Phase 1 and Method details: Phase 2 section, we obtain the required augmented matrix as follows: [ K * : C * ] = [ 0 − 4 3 + 4 i 3 0 12 5 − 44 i 15 2 + 2 i : 163 3 + 152 i 3 1 0 0 0 0 : − 1.00 0 1 0 0 0 : 2.00 1 1 1 2 1 6 1 24 : − 3.00 0 1 1 1 2 1 6 : − 4.00 ] . By solving the above matrix for unknown coefficients, then the coefficients a j become a 0 = − 1.00 , a 1 = 2.00 , a 2 = − 10.00 , a 3 = 0.00 , and a 4 = 24.00 . Therefore, the approximate solution of the Eq. (35) becomes f ˜ 4 ( z ) = − 1.00 + 2.00 z − 5.00 z 2 + z 4 = f ( z ) . Example 3 demonstrates how the approximate outcomes generated by the proposed method match perfectly the exact solution of the CDE if the exact solution of the CDE is in the N degree or less than N degree polynomial form. Example 4 Let us examine the second-order non-linear non-homogeneous CDE , (37) f ″ + ( e z − e − z ) ( f ′ ) 2 − ( e z + 1 ) f = 1 ; z ∈ C where subject to the initial conditions are (38) f ( 0 ) = − 1 / 2 , f ′ ( 0 ) = 1 / 4 . Corresponding exact transcendental entire solution is f ( z ) = − 1 / ( e z + 1 ) . Now consider an approximate solution f ˜ 3 ( z ) by the N = 3 degree Taylor polynomial at z 0 = 0 in the following form (39) f ˜ 3 ( z ) = ∑ j = 0 3 a j z j j ! . Thus, we have θ j ( z ) = z j j ! for j = 0 , 1 , 2 , 3 and θ i ( z ) = z i i ! for i = 0 , 1 , 2 , 3 . Assume the Galerkin integral domain, D = { z ∈ C , z = x + i y , i = − 1 ; − 1 ≤ x ≤ 1 , − 1 ≤ y ≤ 1 } . Since the problem (37) is a non-linear CDE, from Eq. (11) we obtain the corresponding system of a nonlinear equation in the matrix equation as (40) [ K 1 ] { A 1 } + [ K 2 ] { A 2 } = { C } where K 1 = [ − 3.2699 − 4.5969 i 1.5803 − 1.0483 i 3.6833 + 1.1117 i 0.1984 + 0.3250 i 1.5803 − 1.0483 i 3.3666 − 1.7765 i 0.5953 + 0.9751 i − 1.0138 + 2.0311 i 1.6833 − 0.8883 i 0.5953 + 0.9751 i − 0.1873 + 1.7134 i − 0.2357 + 0.1372 i 0.1984 + 0.3250 i 0.3196 + 0.6978 i − 0.2357 + 0.1372 i − 0.4359 − 0.1954 i ] A 1 = [ a 0 a 1 a 2 a 3 ] T K 2 = [ − 6.3211 + 4.1934 i 0.0000 + 0.0000 i − 2.3812 − 3.9004 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i − 2.3812 − 3.9004 i 0.0000 + 0.0000 i − 3.1605 + 2.0967 i − 2.3812 − 3.9004 i 1.4140 − 0.8233 i − 2.3812 − 3.9004 i 0.0000 + 0.0000 i 2.8279 − 1.6466 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.9426 − 0.5489 i 0.0000 + 0.0000 i − 0.3969 − 0.6501 i 0.9426 − 0.5489 i 0.2089 + 0.3700 i ] A 2 = [ a 1 a 2 a 1 a 3 a 2 a 3 a 1 2 a 2 2 a 3 2 ] T and C = [ 2.0000 + 2.0000 i 0.0000 + 0.0000 i − 0.6667 + 0.6667 i 0.0000 + 0.0000 i ] T . By applying the initial condition (38) on the Eq. (40) with the help of Eq. 
(16) , we obtain (41) [ K 1 * ] { A 1 } + [ K 2 * ] { A 2 } = { C * } where K 1 * = [ − 3.2699 − 4.5969 i 1.5803 − 1.0483 i 3.6833 + 1.1117 i 0.1984 + 0.3250 i 1.5803 − 1.0483 i 3.3666 − 1.7765 i 0.5953 + 0.9751 i − 1.0138 + 2.0311 i 1.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 1.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i ] K 2 * = [ − 6.3211 + 4.1934 i 0.0000 + 0.0000 i − 2.3812 − 3.9004 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i − 2.3812 − 3.9004 i 0.0000 + 0.0000 i − 3.1605 + 2.0967 i − 2.3812 − 3.9004 i 1.4140 − 0.8233 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i 0.0000 + 0.0000 i ] and C * = [ 2.0000 + 2.0000 i 0.0000 + 0.0000 i − 0.500 + 0.0000 i 0.2500 + 0.0000 i ] T . By solving the system of nonlinear Eq. (41) , the unknown Taylor coefficients a j become a 0 = − 0.500 − 1.722344475 × 10 − 16 i . a 1 = 0.2500 + 8.978526104 × 10 − 16 i , a 2 = 0.4501617132 + 0.2833277909 i , and a 3 = 0.7341905712 − 0.2539949629 i . Therefore, the approximate solution (39) of Eq. (37) is f 3 ˜ ( z ) = − 0.500 − 1.722344475 × 10 − 16 i + z + z 2 2 ! + z 3 3 ! . Similarly, we can also calculate the approximation solution of Eq. (37) for N = 4 , 5 . These are f 4 ˜ ( z ) = − 0.5 − 6.498084614 × 10 − 17 i + z + z 2 2 ! + z 3 3 ! + z 4 4 ! and f 5 ˜ ( z ) = − 0.5 − 1.181916589 × 10 − 18 i + z + z 2 2 ! + z 3 3 ! + z 4 4 ! + z 5 5 ! . For N = 3 , 4 , 5 the absolute error E N ( z ) generated by the present method are shown in Table 5 and in Fig. 6 for ℜ e part, and in Table 6 and in Fig. 7 for ℑ m part. Example 5 Let us examine the second-order non-linear non-homogeneous CDE , (42) f 3 f ′ + ( f ′ ) 3 + f f ″ + 3 f 2 ( f ″ ) 2 = 64 e 4 z + 8 e 3 z + 4 e 2 z ; z ∈ C where subject to the initial conditions are (43) f ( 0 ) = 2 , f ′ ( 0 ) = 2 . Table 5 Absolute error E N ( z ) analysis of Example 4 ( ℜ e part) for N = 3 , 4 , 5 . 
Table 5 z j Absolute error E N ( z ) analysis ( ℜ e part) E 3 ( z j ) ( ℜ e part) E 4 ( z j ) ( ℜ e part) E 5 ( z j ) ( ℜ e part) -1.00-1.00i 9.12210175 × 10 − 2 4.31795313 × 10 − 1 1.94703639 × 10 − 4 -0.90-0.90i 8.80103073 × 10 − 2 4.08792338 × 10 − 1 1.70246213 × 10 − 4 -0.80-0.80i 8.10764674 × 10 − 2 3.61981716 × 10 − 1 1.58242201 × 10 − 4 -0.70-0.70i 7.11610504 × 10 − 2 3.01161415 × 10 − 1 1.43632273 × 10 − 4 -0.60-0.60i 5.91154570 × 10 − 2 2.34654142 × 10 − 1 1.19582451 × 10 − 4 -0.50-0.50i 4.58883571 × 10 − 2 1.69319919 × 10 − 1 8.75903101 × 10 − 5 -0.40-0.40i 3.25096110 × 10 − 2 1.10572163 × 10 − 1 5.40924242 × 10 − 5 -0.30-0.30i 2.00733527 × 10 − 2 6.23946022 × 10 − 2 2.62331769 × 10 − 5 -0.20-0.20i 9.72194476 × 10 − 3 2.73573224 × 10 − 2 8.50546983 × 10 − 6 -0.10-0.10i 2.63162954 × 10 − 3 6.63111334 × 10 − 3 1.08762437 × 10 − 6 0.00+0.00i 0.00000 0.00000 0.00000 0.10+0.10i 3.03492628 × 10 − 3 5.87231509 × 10 − 3 1.28267091 × 10 − 6 0.20+0.20i 1.29442785 × 10 − 2 2.12909765 × 10 − 2 9.42271958 × 10 − 6 0.30+0.30i 3.09256497 × 10 − 2 4.19437644 × 10 − 2 2.88109773 × 10 − 5 0.40+0.40i 5.81552821 × 10 − 2 6.21743947 × 10 − 2 5.99544406 × 10 − 5 0.50+0.50i 9.57755383 × 10 − 2 7.49950482 × 10 − 2 9.93196530 × 10 − 5 0.60+0.60i 1.44880552 × 10 − 1 7.21007195 × 10 − 2 1.40995803 × 10 − 4 0.70+0.70i 2.06500185 × 10 − 1 4.38852617 × 10 − 2 1.80054017 × 10 − 4 0.80+0.80i 2.81583105 × 10 − 1 2.05416962 × 10 − 2 2.16778545 × 10 − 4 0.90+0.90i 3.70980714 × 10 − 1 1.33338671 × 10 − 1 2.60059321 × 10 − 4 1.00+1.00i 4.75434564 × 10 − 1 3.07919747 × 10 − 1 3.27285753 × 10 − 4 L ∞ n o r m → 4.75434564 × 10 − 1 4.31795313 × 10 − 1 3.27285753 × 10 − 4 Fig. 6 Visual depiction of the absolute error functions with the present method of Example 4 for N = 3 , 4 , 5 ( ℜ e part). Fig. 6 Table 6 Absolute error E N ( z ) analysis of Example 4 ( ℑ m part) for N = 3 , 4 , 5 . 
Table 6 z j Absolute error E N ( z ) analysis ( ℑ m part) E 3 ( z j ) ( ℑ m part) E 4 ( z j ) ( ℑ m part) E 5 ( z j ) ( ℑ m part) -1.00-1.00i 7.27147628 × 10 − 2 5.58946462 × 10 − 1 7.43492113 × 10 − 4 -0.90-0.90i 9.01205576 × 10 − 2 3.82972581 × 10 − 1 6.96983381 × 10 − 4 -0.80-0.80i 9.57821070 × 10 − 2 2.49380915 × 10 − 1 6.54712150 × 10 − 4 -0.70-0.70i 9.20556334 × 10 − 2 1.51678245 × 10 − 1 5.70944575 × 10 − 4 -0.60-0.60i 8.13112453 × 10 − 2 8.36991376 × 10 − 2 4.47654187 × 10 − 4 -0.50-0.50i 6.59110889 × 10 − 2 3.96277885 × 10 − 2 3.08486537 × 10 − 4 -0.40-0.40i 4.81954325 × 10 − 2 1.40119408 × 10 − 2 1.80656303 × 10 − 4 -0.30-0.30i 3.04760098 × 10 − 2 1.76954077 × 10 − 3 8.41035673 × 10 − 5 -0.20-0.20i 1.50353289 × 10 − 2 1.81057099 × 10 − 3 2.66155254 × 10 − 5 -0.10-0.10i 4.13047212 × 10 − 3 1.07245975 × 10 − 3 3.43966990 × 10 − 6 0.00+0.00i 1.72234447 × 10 − 16 6.49808461 × 10 − 17 1.18191659 × 10 − 18 0.10+0.10i 4.87276214 × 10 − 3 2.22568680 × 10 − 3 3.47874091 × 10 − 6 0.20+0.20i 2.09776082 × 10 − 2 1.10403465 × 10 − 2 2.64474268 × 10 − 5 0.30+0.30i 5.05530986 × 10 − 2 2.94028477 × 10 − 2 8.25089107 × 10 − 5 0.40+0.40i 9.58563157 × 10 − 2 5.99489123 × 10 − 2 1.74793787 × 10 − 4 0.50+0.50i 1.59169768 × 10 − 1 1.04998019 × 10 − 1 2.93244181 × 10 − 4 0.60+0.60i 2.42805188 × 10 − 1 1.66557203 × 10 − 1 4.15000569 × 10 − 4 0.70+0.70i 3.49102846 × 10 − 1 2.46320368 × 10 − 1 5.09280062 × 10 − 4 0.80+0.80i 4.80424886 × 10 − 1 3.45661622 × 10 − 1 5.48220135 × 10 − 4 0.90+0.90i 6.39141418 × 10 − 1 4.65621372 × 10 − 1 5.24981520 × 10 − 4 1.00+1.00i 8.27608664 × 10 − 1 6.06884467 × 10 − 1 4.79783556 × 10 − 4 L ∞ n o r m → 8.27608664 × 10 − 1 6.06884467 × 10 − 1 7.43492113 × 10 − 4 Fig. 7 Visual depiction of the absolute error functions with the present method of Example 4 for N = 3 , 4 , 5 ( ℑ m part). Fig. 7 Corresponding exact transcendental entire solution is f ( z ) = 2 e z . Assume the Galerkin integral domain, D = { z ∈ C , z = x + i y , i = − 1 ; − 1 ≤ x ≤ 1 , − 1 ≤ y ≤ 1 } . Since all the left terms of the problem (42) are non-linear, for better approximation we have to apply a higher degree Taylor polynomial (3) . By applying the proposed method and working like the previous problem Example 4 , we obtain the approximate outcomes for various values of N = 3 , 5 , 7 as follows, f ˜ 3 ( z ) = 2.0 + 1.35958319 × 10 − 16 i + z + z 2 2 ! + z 3 3 ! f ˜ 5 ( z ) = 2.0 − 1.367168896 × 10 − 17 i + z + z 2 2 ! + z 3 3 ! + z 4 4 ! + z 5 5 ! and f 7 ˜ ( z ) = 2.0 − 1.921605847 × 10 − 14 i + z + z 2 2 ! + z 3 3 ! + z 4 4 ! + z 5 5 ! + z 6 6 ! + z 7 7 ! . Now for N = 3 , 5 , 7 the absolute error E N ( z ) generated by the present method are shown in Table 7 and in Fig. 8 for ℜ e part, and in Table 8 and in Fig. 9 for ℑ m part. Table 7 Absolute error E N ( z ) analysis of Example 5 ( ℜ e part) for N = 3 , 5 , 7 . 
Table 7 z j Absolute error E N ( z ) analysis ( ℜ e part) E 3 ( z j ) ( ℜ e part) E 5 ( z j ) ( ℜ e part) E 7 ( z j ) ( ℜ e part) -1.00-1.00i 4.04211484 × 10 − 1 5.46973308 × 10 − 2 8.17365239 × 10 − 4 -0.90-0.90i 2.43011014 × 10 − 1 5.13235575 × 10 − 2 7.10864263 × 10 − 4 -0.80-0.80i 1.27495547 × 10 − 1 4.47224516 × 10 − 2 5.91143240 × 10 − 4 -0.70-0.70i 5.00375561 × 10 − 2 3.64450535 × 10 − 2 4.52687224 × 10 − 4 -0.60-0.60i 3.26081238 × 10 − 3 2.77220908 × 10 − 2 3.11176527 × 10 − 4 -0.50-0.50i 1.99062140 × 10 − 2 1.94813171 × 10 − 2 1.86293963 × 10 − 4 -0.40-0.40i 2.61718651 × 10 − 2 1.23699396 × 10 − 2 9.21692603 × 10 − 5 -0.30-0.30i 2.18178852 × 10 − 2 6.78196884 × 10 − 3 3.35881162 × 10 − 5 -0.20-0.20i 1.26314005 × 10 − 2 2.89017748 × 10 − 3 6.23900354 × 10 − 6 -0.10-0.10i 3.83277158 × 10 − 3 6.82186967 × 10 − 4 5.58258273 × 10 − 7 0.00+0.00i 0.000000 0.0000000 0.0000000 0.10+0.10i 4.99059643 × 10 − 3 5.82071389 × 10 − 4 3.95317573 × 10 − 6 0.20+0.20i 2.18620755 × 10 − 2 2.10675218 × 10 − 3 1.99906117 × 10 − 5 0.30+0.30i 5.27925309 × 10 − 2 4.23565379 × 10 − 3 4.98925287 × 10 − 5 0.40+0.40i 9.90030631 × 10 − 2 6.65515886 × 10 − 3 8.98798595 × 10 − 5 0.50+0.50i 1.60684186 × 10 − 1 9.11395386 × 10 − 3 1.32493224 × 10 − 4 0.60+0.60i 2.36928720 × 10 − 1 1.14540761 × 10 − 2 1.69648901 × 10 − 4 0.70+0.70i 3.25674085 × 10 − 1 1.36325560 × 10 − 2 1.95991325 × 10 − 4 0.80+0.80i 4.23657367 × 10 − 1 1.57302960 × 10 − 2 2.11223200 × 10 − 4 0.90+0.90i 5.26386952 × 10 − 1 1.79443689 × 10 − 2 2.19632988 × 10 − 4 1.00+1.00i 6.28135051 × 10 − 1 2.05594357 × 10 − 2 2.24560538 × 10 − 4 L ∞ n o r m → 6.28135051 × 10 − 1 5.46973308 × 10 − 2 8.17365239 × 10 − 4 Fig. 8 Visual depiction of the absolute error functions with the present method of Example 5 for N = 3 , 5 , 7 ( ℜ e part). Fig. 8 Table 8 Absolute error E N ( z ) analysis of Example 5 ( ℑ m part) for N = 3 , 5 , 7 . 
Table 8 z j Absolute error E N ( z ) analysis ( ℑ m part) E 3 ( z j ) ( ℑ m part) E 5 ( z j ) ( ℑ m part) E 7 ( z j ) ( ℑ m part) -1.00-1.00i 1.03611560 × 10 0 6.01045938 × 10 − 2 1.41528255 × 10 − 4 -0.90-0.90i 8.06736405 × 10 − 1 4.54161578 × 10 − 2 2.00690619 × 10 − 4 -0.80-0.80i 6.12934359 × 10 − 1 3.20963001 × 10 − 2 2.70031006 × 10 − 4 -0.70-0.70i 4.51415193 × 10 − 1 2.09823648 × 10 − 2 3.04581051 × 10 − 4 -0.60-0.60i 3.19141665 × 10 − 1 1.24304057 × 10 − 2 2.93049274 × 10 − 4 -0.50-0.50i 2.13331451 × 10 − 1 6.42316081 × 10 − 3 2.43897582 × 10 − 4 -0.40-0.40i 1.31448920 × 10 − 1 2.67192486 × 10 − 3 1.74573442 × 10 − 4 -0.30-0.30i 7.11896499 × 10 − 2 7.11149655 × 10 − 4 1.03758386 × 10 − 4 -0.20-0.20i 3.04564010 × 10 − 2 1.54952933 × 10 − 5 4.63952780 × 10 − 5 -0.10-0.10i 7.32518528 × 10 − 3 7.89612721 × 10 − 5 1.11648744 × 10 − 5 0.00+0.00i 1.35958319 × 10 − 16 1.36716890 × 10 − 17 1.92160585 × 10 − 14 0.10+0.10i 6.75470769 × 10 − 3 1.97194680 × 10 − 4 9.15710610 × 10 − 6 0.20+0.20i 2.58605042 × 10 − 2 9.49403187 × 10 − 4 3.13146474 × 10 − 5 0.30+0.30i 5.54973870 × 10 − 2 2.37420107 × 10 − 3 5.81407275 × 10 − 5 0.40+0.40i 9.36480383 × 10 − 2 4.42390989 × 10 − 3 8.27745943 × 10 − 5 0.50+0.50i 1.37972574 × 10 − 1 6.90075890 × 10 − 3 1.01704221 × 10 − 4 0.60+0.60i 1.85662695 × 10 − 1 9.49264756 × 10 − 3 1.15602408 × 10 − 4 0.70+0.70i 2.33273893 × 10 − 1 1.18308464 × 10 − 2 1.28814242 × 10 − 4 0.80+0.80i 2.76534578 × 10 − 1 1.35707846 × 10 − 2 1.47377576 × 10 − 4 0.90+0.90i 3.10131223 × 10 − 1 1.44968166 × 10 − 2 1.75714220 × 10 − 4 1.00+1.00i 3.27468967 × 10 − 1 1.46515281 × 10 − 2 2.12461960 × 10 − 4 L ∞ n o r m → 1.03611560 × 10 0 6.01045938 × 10 − 2 3.04581051 × 10 − 4 Fig. 9 Visual depiction of the absolute error functions with the present method of Example 5 for N = 3 , 5 , 7 ( ℑ m part). Fig. 9 To solve the high-order linear and nonlinear CDEs analytically, is a challenging task. To solve numerically, we provide the TGM in a rectangular domain, and based on the Taylor polynomials. A fascinating feature of the suggested method is its ability to yield precise results in instances when the linear CDE has an exact solution that is represented by a polynomial of degree N or less than N . For the linear CDE, the tabular and graphical comparisons reveal that the method we suggested is more accurate and stable than the existing Collocation method. For the nonlinear CDE, our proposed method goes to the accurate solution when N is sufficiently large enough. Those reveal the validity of our proposed method but it comes with greater computational complexity due to the need to compute higher-order terms. In the future, we will demonstrate the practical use of the TGM for solving CDEs in real-life problems. For example, the well-known Schrödinger equation in quantum mechanics is a CDE used to describe the behavior of particles at the atomic level. Present work can also be completed utilizing the Haar Wavelet, Petrov-Galerkin, finite difference, and compact finite difference computations. With a few adjustments, the suggested method can be applied to fractional order CDEs and the system of CDEs with variable coefficients. This article does not contain any studies with human or animal participants. The author reviewed the results and approved the final version of the manuscript. This research received no external funding. The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. 
| Other | other | en | 0.999997 |
PMC11697261 | The use of CO2 as a feedstock for producing chemicals through carbon capture and utilization (CCU) strategies is currently receiving a great deal of attention as a way to move towards a low carbon and circular economy. 1 The great value of this approach is the possibility of valorising waste CO2 by converting it into industrially useful chemicals and polymers and, at the same time, reducing greenhouse gas emissions. 2 One of the most interesting chemicals that can be obtained from CO2 is acetic acid (AA), a commodity chemical with many current uses in different industrial sectors (e.g., textiles, fibres, pharma, foods). 3,4 The major consumption of AA comes from the synthesis of vinyl acetate monomer, used in the production of different polymers with applications as emulsifiers, resins, coatings, fibers and polymer wires. The other main use of AA is the production of cellulose acetate esters (through acetic anhydride). Moreover, glacial AA and some acetate esters are used extensively as solvents. Industrial production of AA is dominated by thermocatalytic processes, including carbonylation of methanol and oxidation of acetaldehyde and hydrocarbons. 3 Different chemo-catalytic methods for the conversion of CO2 into AA have been proposed, using either H2, 5 methanol + H2, 6 methane, 7 or electricity 8 to drive CO2 reduction. An interesting alternative to the chemical methods is based on biotechnology and involves the use of microorganisms or enzymes to catalyze that conversion. It is generally accepted that biotechnological methods show some advantageous properties when compared with chemical ones, such as operation at ambient temperature and pressure, which reduces energy costs, and high selectivity and specificity, which avoids byproduct generation. The main biological process for the conversion of CO2 into AA is carried out by autotrophic acetogenic bacteria and involves the Wood–Ljungdahl pathway. 9 The main electron donors used by acetogens to drive the reduction of CO2 are H2 and CO, but a wide range of organic compounds can also be used. 9 Alternatively, electrons can also be supplied by electricity, through so-called microbial electrosynthesis, 10 or by light, by means of organic semiconductor–bacteria biohybrid photosynthetic systems. 11 Unfortunately, the industrial implementation of a biotechnological process for the conversion of CO2 into AA is currently a great challenge. The slow growth and low productivity of acetogens under autotrophic conditions, resulting from metabolic energy limitations, and the low solubility of gaseous substrates are important hurdles to overcome. 12 A key factor to consider in the production of AA, and of any other chemical in general, is the need to recover and purify the product through so-called downstream processing. The aim of downstream processing is the efficient, reproducible, and safe recovery of the targeted product to the required specification (biological activity, purity, etc.), while maximizing recovery yield and minimizing costs. Product separation and purification from bioprocess media is often a complex task that accounts for a significant share of the production costs (around 50–70% of the total), 13 mainly due to the low concentrations of the target molecules in the production media and the complexity of these media.
This is of special relevance for CO2 (gas)-derived products, which are usually present at much lower concentrations than, for example, their sugar-derived counterparts, thus increasing downstream complexity and costs. 14 Therefore, the development of efficient and cost-effective downstream processes for product recovery and purification is mandatory for industrial feasibility and, accordingly, efficient and non-energy-intensive downstream technologies are preferred. Recovery of AA from fermentation media poses several key challenges, derived from its high solubility in aqueous media and its relatively low concentration. The concentration of AA in typical fermentation broths may vary over a wide range but is generally less than 10% by weight. 15 Therefore, its recovery in pure form involves separation from a large quantity of water. Different methods have been used for AA separation from fermentation broths, including distillation (simple, reactive, azeotropic, extractive), extraction or reactive extraction, supercritical fluid extraction, precipitation, crystallization, adsorption, ion exchange, electrodialysis/electrodeionization, pressure-driven membrane methods, and pervaporation. 16,17 Distillation and precipitation are the most conventional industrial methods, but they are neither economically nor environmentally feasible at the low concentrations found in fermentation broths. Furthermore, the presence of various ions and other solutes (phosphate, chloride, sulfate, proteins) in significant amounts must be considered when designing a downstream separation process, since they could strongly interfere with the purification of AA. Ion exchange (IEX) is found among the non-energy-intensive technologies often used in downstream processing. IEX is a separation technique based on insoluble polymers bearing positively or negatively charged functional groups, called IEX resins. These resins, normally supplied in the form of porous microbeads, membranes, or granules, have the potential to bind ions of the opposite charge. IEX, a separation process that does not require high power input, is widely used in bioseparations for the recovery of organic acids, including AA, from aqueous fermentation media. 16–18 Very often these IEX separation processes claim to address the recovery and purification of AA from diluted solutions but, when they refer to "diluted solutions", they mean AA concentrations in the range 1–10 g L−1, that is, one to two orders of magnitude higher than the concentrations usually available in CO2-derived bioprocesses, which gives an idea of the extreme difficulty of the task. In addition, purification of diluted carboxylic acids from bioprocess media using separation technologies based on electrical charge, such as IEX, poses a great challenge, not only because of the extremely low concentration of the target product, but also because of the complex composition of the media. Many of the different chemicals present in these media, mainly inorganic salts and low molecular weight charged organic compounds, often at concentrations higher than those of the carboxylic acid to be purified, can potentially interfere, by competition, with its purification, resulting in a lower recovery yield and a product with higher levels of impurities.
In the framework of the Horizon Europe Photo2Fuel project ( https://www.photo2fuel.eu/ ), an artificial photosynthesis process for the conversion of CO 2 into AA using a hybrid system of non-photosynthetic bacteria and organic photosensitisers is addressed, with sunlight as the only energy source. 11 As the effluents obtained are characterized by the extremely low concentrations of AA, a suitable downstream processing should be developed not only to efficiently recover and purify the acid, but also to concentrate it. In this paper, such a downstream process is presented, involving the use of mixed bed IEX resins. The process is carried out in two steps: a first step to remove the contaminating mineral anions from the medium (demineralization), while AA remains in solution, and a second step to recover and concentrate AA from the mineral anions-free medium. AA separation and purification experiments were carried out starting from DPM medium, a model solution with the following composition: 0.1 g L −1 AA, 0.4 g L −1 NaCl, 0.64 g L −1 K 2 HPO 4 , 1.5 g L −1 KH 2 PO 4 , 0.4 g L −1 NH 4 Cl, 0.33 g L −1 MgSO 4 ·7H 2 O, 0.05 g L −1 CaCl 2 , 0.25 g L −1 KCl and 2.5 g L −1 NaHCO 3 . This model solution was based on defined photosynthesis medium (the actual DPM medium), 11 the medium where artificial photosynthesis would be performed, but only containing its main inorganic salts. The amount of AA supplemented to this medium reflects the target concentration expected to be reached upon artificial photosynthesis. The minor components of the original DPM medium, trace mineral mix and Wolfe's vitamin mix, were omitted as they were considered not relevant for the different separation procedures to be applied. Although this model solution was slightly different from the original DPM medium this name was maintained in this work. IEX experiments were carried out either under batch mode or in column, using the IEX resins shown in Table 1 . The resins were used as supplied, without any pretreatment. The quantities used refer to the mass as received (wet weight). Concentration of AA (acetate) and inorganic anions (chloride, sulfate, and phosphate) was quantified by ion chromatography, using a Metrohm 930 Compact IC Flex ion chromatograph equipped with a conductivity detector. Anion separation was carried out in sequential suppressor mode on a Metrosep A Supp 19 – 250/4.0 analytical column connected in series with a Metrosep A Supp 19 Guard/4.0 precolumn. A gradient elution with the eluents A (4 mM Na 2 CO 3 ) and B (20 mM Na 2 CO 3 ) was used in the chromatographic separation as follows (flow rate, 0.7 mL min −1 ): eluent 100% A was initially held for 15 min, then this proportion was reduced to 20% in 25 min while that of B was increased from 0 to 80% and held for 10 min; finally, the proportion of B was reduced to zero while that of A increased to 100% in the next 0.1 min and held for 10 min. A solution of 500 mM H 2 SO 4 /100 mM oxalic acid/20% acetone was used as the regenerant. Column temperature was set at 35 °C and sample volume was 20 μL. DPM medium, the medium from which AA is to be purified, without being a very complex medium, contains a mixture of salts in the form of bicarbonates, phosphates, sulfates, and chlorides. And, actually, AA is in minority with respect to the mineral anions: 100 mg L −1 AA, 1816 mg L −1 bicarbonate, 129 mg L −1 sulfate, 659 mg L −1 chloride and 1411 mg L −1 phosphate. 
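To put these figures in perspective, the mass fraction of AA relative to the total anion load of DPM medium can be estimated directly from the concentrations listed above (a minimal illustrative calculation, not part of the original experimental work; the variable names are ours):

```python
# Approximate anion composition of DPM medium (mg/L), as listed above
anions = {"acetate (AA)": 100, "bicarbonate": 1816, "sulfate": 129,
          "chloride": 659, "phosphate": 1411}

total = sum(anions.values())                      # ~4115 mg/L of anions overall
aa_purity = 100 * anions["acetate (AA)"] / total  # mass % of AA among the anions

print(f"Total anion load: {total} mg/L")
print(f"Initial AA purity: {aa_purity:.1f}%")     # roughly 2-3%
```

This starting purity of roughly 2–3% is the baseline against which the purity increase and enrichment factor reported later are expressed.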
As these inorganic anions could presumably interfere with the purification of AA through IEX, 19 a demineralization pretreatment of DPM medium was considered necessary to remove them and improve the subsequent purification of AA. Demineralization of DPM medium was first addressed by IEX, using the Amberlite MB20 resin, a mixed bed resin containing both a strong acid cation exchange resin and a strong base anion exchange resin, supplied in the H and OH forms, respectively. This resin would allow the removal of both cations and anions in only one step. The chemical forms of the resin mean that cations in solution would be exchanged for protons (H + ) in the resin and anions in solution would be exchanged for hydroxyl anions (OH − ) in the resin. So, cation and anion binding to the resin would result in acidification and alkalinization of the solution, respectively. If the numbers of cation and anion equivalents bound to the resin are the same, so will be the numbers of H + and OH − ions released, which would neutralize each other, and the pH of the solution should not be altered. The key to demineralizing the DPM medium with the IEX resin without, at the same time, also removing the AA was to adjust the medium to an acidic pH value well below its p K a , so that AA (a weak acid) is undissociated and, therefore, uncharged, while the mineral anions remain charged. This would consequently allow the mineral anions to bind to the resin, but not AA, which would remain free in solution. Such a demineralization process was previously proposed for the purification of lactic acid from fermentation broths. 20 According to its p K a (4.76), 99.45% of AA would be undissociated at pH 2.5, so this pH was selected to perform the demineralization tests. The mineral anions would remain charged at this pH according to their p K a values: phosphoric acid p K a1 2.12, sulfuric acid p K a2 1.92, and hydrochloric acid p K a −6.3. A very relevant difference would occur for carbonic acid (p K a1 6.35), however, as will be explained at the end of this section. Consequently, HCl-acidified DPM medium (pH 2.5) was treated, in batch, with the Amberlite MB20 resin at different resin to medium ratios ranging from 10 to 200 g of resin per L of DPM medium. Results are shown in Fig. 1 . Two parameters, pH and conductivity at equilibrium, were directly measured to evaluate the effect of the treatments. pH remained practically unchanged, around the initial pH of the acidified DPM medium, for ratios up to 75 g L −1 . At higher ratios the medium pH increased, to around 4.1 at 100 g L −1 and near 9 at 200 g L −1 . As previously explained, if the same number of equivalents of cations and anions are bound to the resin, the pH of the medium was expected to remain unchanged because the released H + and OH − ions would neutralize each other. Therefore, such a pH increase would be the result of an unbalanced binding of cations and anions to Amberlite MB20, with anion binding being the higher of the two. According to the manufacturer, the percent volume of the anion exchange resin in Amberlite MB20 exceeds that of the cation exchange resin (56–62% vs. 38–44%). So, when the cation binding sites of the resin are saturated, more anions can still be bound, which would result in a net alkalinization of the solution. Conductivity of DPM medium strongly decreased from its initial value of 9.5 mS cm −1 to virtually zero (60 μS cm −1 ) as the resin to medium ratio was increased to 100 g L −1 . This decrease was quite linear and would reflect the removal of charged ions from the solution. 
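The 99.45% figure quoted above follows directly from the Henderson–Hasselbalch equation. The short sketch below (our own illustration, not code from the study) shows how the undissociated fraction of AA varies over the pH values discussed in this section:

```python
def neutral_fraction(pH: float, pKa: float) -> float:
    """Fraction of a monoprotic weak acid present in its undissociated
    (uncharged) form, from the Henderson-Hasselbalch equation."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

pKa_AA = 4.76
for pH in (2.5, 4.76, 9.0):
    print(f"pH {pH}: {100 * neutral_fraction(pH, pKa_AA):.2f}% of AA undissociated")
# pH 2.5  -> ~99.4% neutral, so AA is essentially invisible to the anion exchanger
# pH 4.76 -> 50% neutral (pH = pKa)
# pH 9.0  -> <0.01% neutral, so acetate binds readily, as observed at 200 g/L resin
```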
So, apparently, the resin was very efficient in removing the salts. When the salt composition of DPM medium after IEX resin treatment was analyzed several conclusions could be extracted. First, the efficient removal of all the anions suggested by conductivity measurements was confirmed. All the anions, including acetate, were totally removed from DPM medium at 100 g L −1 resin. At a slightly lower resin to medium ratio, 75 g L −1 , 97% of sulfate, 96% of chloride and 80% of phosphate were removed, while 86% of AA remained in solution. Second, the binding selectivity sequence of the anions to the resin, which reflects the affinity of the resin to them, was sulfate > chloride > phosphate (dihydrogen) > acetate, which agreed with the data reported in literature, 21 so suggesting that this treatment could be very suitable to carry out demineralization of DPM medium because among the anions present the affinity of the resin for acetate (as free AA) was the lowest one. However, as explained above, when the resin to medium ratio was higher than 100 g L −1 , all the acetate was removed from the solution, which likely was a result of the parallel pH increase observed. The pH increase approaching and surpassing the p K a of AA would displace equilibrium to the formation of the dissociated and charged acetate form, which could bind to the resin. So, it was very important that the pH of the medium during treatment with the IEX resin is maintained as far as possible in the acidic side from the p K a of AA to avoid its removal. Regarding the mineral anions, sulfate and chloride were almost totally removed at 75 g L −1 resin (97 and 96%, respectively). However, note that this value for the removal of chloride is related to an initial concentration of 2464 mg L −1 , considerably higher than that contained in the original DPM medium (659 mg L −1 ), which is explained by the addition of HCl used to acidify DPM medium to the starting pH of 2.5. So, if the original chloride concentration is considered, the final 96 mg L −1 attained after resin treatment would represent a lower actual removal of 86%. The most reluctant mineral anion to be removed was phosphate, remaining still 20% in solution at 75 g L −1 resin. By slightly increasing the ratio to 80 g L −1 , the removal of phosphate increased to 88%, but then the AA remaining in solution decreased to 64% of the initial value, a loss that was considered unacceptable. The reason under the incomplete removal of phosphate is likely related to the different species found at equilibrium at the pH used (2.5). At this pH, the main species present in solution would be the monovalent dihydrogen phosphate anion (p K a1 2.14), but around 30% of the undissociated and, therefore, uncharged form would be also present, and this later form could not bind to the resin. Finally, some comments about bicarbonate, the most important mineral anion, in concentration terms, found in DPM medium. The ion chromatography method used to quantify the anions did not allow quantification of bicarbonate because the eluent used was Na 2 CO 3 . So, no direct information regarding this anion was available. However, we can suppose with a high degree of accuracy what happens with it. At the pH of the original DPM medium, around 7.1, bicarbonate is the main species found in solution (p K a1 6.35), also appearing a fraction of undissociated carbonic acid. 
When the pH is acidified to 2.5 before the IEX resin treatment, the equilibrium would be totally shifted to the formation of carbonic acid, which, in turn, would decompose to CO 2 and be released from solution as a gas. Therefore, it was expected that at the initial pH of the IEX resin treatment most of the bicarbonate anions would have already been removed. In conclusion, the strategy used to demineralize DPM medium while hardly affecting AA, using the mixed bed IEX resin Amberlite MB20 at a resin to medium ratio of 75 g L −1 and acidic pH, appeared to be quite successful. The previous experiment, involving the treatment of DPM medium with the mixed bed IEX resin Amberlite MB20 at pH 2.5, allowed the almost complete removal of sulfate (and probably bicarbonate) anions, and of most of the chloride, with around 86% of the AA still remaining in solution. The problem was that more than 20% of the phosphate still remained in the medium. So, a new strategy was planned to improve the extent of demineralization, involving a calcium treatment of DPM medium. It is known that phosphate forms very insoluble salts with calcium. Therefore, treatment of phosphate-containing solutions with Ca 2+ was expected to result in the precipitation of different calcium phosphate salts, thus removing this anion from solution. The best option as a calcium source was considered to be CaO (calcium oxide or quicklime), which is converted into Ca(OH) 2 upon dissolution in water, because its use would have a double benefit. First, no extra anions would be added to the medium, avoiding the need to remove them later. And second, the medium pH would become very alkaline, favouring not only phosphate precipitation but also bicarbonate removal, because at such pH values the equilibrium would be displaced to the formation of the carbonate anion (p K a2 10.32), which would precipitate as the very insoluble CaCO 3 salt. Consequently, an amount of CaO sufficient to achieve 60 mM Ca 2+ was added to the DPM medium and the mixture was left stirring overnight. This concentration of calcium is in excess of the bicarbonate and phosphate content of the medium, around 30 and 15 mM, respectively. From the solubility data of both calcium salts it was expected that calcium carbonate would precipitate first, and then calcium phosphate. Following CaO addition a dense white precipitate appeared, raising the solution pH from an initial value of 7.14 to 12.40, and also increasing its conductivity from 6.24 to 9.92 mS cm −1 . The medium was then filtered to remove the precipitate, resulting in a clear filtrate with no trace of phosphate. Regarding bicarbonate, although it could not be quantified as explained before, it was also expected to be completely absent from the calcium-treated solution. Around half of the sulfate, precipitated as gypsum, was also removed by the calcium treatment. The other anions, chloride and acetate, remained unchanged in solution. So, once the complete removal of phosphate, the mineral anion most difficult to remove with the Amberlite MB20 resin at acidic pH, had been achieved, the demineralization experiments with this resin could be resumed. As explained before, the best pH to demineralize the medium with this resin, affecting AA as little as possible, was an acidic pH well below its p K a value (4.76). So, the calcium-treated medium had to be acidified from its very alkaline pH of 12.40 to around 2.5 or less. One possibility to achieve such a strong acidification was to add strong mineral acids ( e.g. 
, HCl or H 2 SO 4 ), but this would result in an increase in the mineral anion content of the solution, which would complicate further treatment with the IEX resin. The most feasible alternative was the use of a strong acid cation exchange resin in the H form. 20 Treatment of the calcium-precipitated DPM medium (DPM-Ca medium) with this type of resin would allow the removal of the cations originally present in it and of the excess calcium likely still remaining after precipitation. And most importantly, the binding of these cations to the resin would be coupled to the release of an equivalent amount, in terms of charge, of protons, thus resulting in a pH decrease of the solution. According to these assumptions, DPM-Ca medium was treated with the strong acid cation exchange resin Amberlite IR-120 at a resin to medium ratio of 25 g L −1 , a ratio sufficient to decrease the solution pH and conductivity to 2.30 and 3.72 mS cm −1 , respectively. This treatment, as expected, did not affect the concentration of the anions still present in solution, which remained unchanged, but strongly decreased the concentration of cations (results not shown). Regarding the anions, in the very unlikely event that there was still some trace of bicarbonate remaining in solution after calcium precipitation, the strong acidification of the medium would have completed its removal, as it would have been released as CO 2 gas. Finally, this resin-acidified DPM-Ca medium was treated with the mixed bed IEX resin Amberlite MB20 to remove the remaining mineral anions (sulfate and chloride). When the resin to medium ratio was 20 g L −1 , the pH increased from 2.30 to 3.15 and the conductivity decreased from 3.72 to 0.44 mS cm −1 . Under these conditions, while the AA concentration in solution was still kept at 91 mg L −1 (from the 100 mg L −1 in the original DPM medium), sulfate was completely removed, and the chloride concentration decreased to 63 mg L −1 (from the initial 659 mg L −1 ). So, as a result of the combined treatments (calcium precipitation, acidification with the Amberlite IR-120 resin and treatment with the IEX resin Amberlite MB20), an almost complete demineralization of the DPM medium was achieved, with total removal of phosphate and sulfate (and likely bicarbonate) and 90% removal of chloride, while 91% of the AA still remained in solution. A comparative summary of the results obtained with both demineralization treatments, with Amberlite MB20 alone and with the above combined treatment, is shown in Table 2 . Results are shown in terms of the concentration remaining in solution for four of the five main anions present in DPM medium. The values for the fifth anion, bicarbonate, are not shown because it could not be quantified by the ion chromatography method used, but they are expected to be zero or near zero according to the known behaviour of this anion under the different conditions applied to the medium (calcium precipitation at alkaline pH and/or strong acidification). Although both treatments were very efficient in demineralizing DPM medium while maintaining most of the AA in solution, the combined treatment was somewhat better ( Table 2 ). Based on these results of the demineralization of DPM medium, a new simplified synthetic solution was prepared to be used in the subsequent steps to purify AA. This simplified synthetic solution, called Demineralized-DPM (DM-DPM) medium, had the following composition: 91 mg L −1 AA and 63 mg L −1 chloride (as NaCl, 104 mg L −1 ), pH 3.15. 
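As a quick consistency check (our own calculation, not part of the original work), the NaCl equivalent quoted for the DM-DPM composition can be reproduced from the chloride concentration and the molar masses of the species involved:

```python
M_Cl, M_Na = 35.45, 22.99          # molar masses, g/mol
M_NaCl = M_Cl + M_Na               # 58.44 g/mol

chloride = 63                      # mg/L chloride remaining in DM-DPM medium
nacl = chloride * M_NaCl / M_Cl    # mg/L of NaCl supplying that much chloride
print(f"{chloride} mg/L chloride corresponds to ~{nacl:.0f} mg/L NaCl")   # ~104 mg/L
```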
Once DPM medium was almost completely demineralized, yielding the DM-DPM medium as explained in the previous section, the recovery of AA from such an extremely diluted solution using IEX resins was addressed. For that, four different anion exchange resins were initially tested, covering all the types available on the market: two weak base anion (Amberlite IRA-67 and Lewatit VP OC 1065), one strong base anion (Amberlite IRN78) and one mixed bed strong acid cation and base anion (Amberlite MB20) resins. The resins were used as received, without any prior conditioning. DM-DPM medium was treated in batch with increasing concentrations of the single resins, ranging from 0 to 10 g L −1 , and their capacity to remove AA and chloride from the solution was determined. In addition, changes in pH and conductivity were also recorded. The results of such assays are shown in Fig. 3 . First, the weak base anion resins Amberlite IRA-67 and Lewatit VP OC 1065 behaved very similarly. Initially, by increasing the concentrations of the resins up to 2 g L −1 , AA was increasingly removed from the starting 100 mg L −1 , reaching its lowest concentration in the medium, around 35 mg L −1 . From that point no additional AA was removed from the solution despite the increase in resin concentration to 10 g L −1 . A similar effect was observed for chloride, which reached a minimum concentration of around 10 mg L −1 at 2 g L −1 resin. Regarding pH, it increased continuously with resin concentration from the initial value of 3.15 to around 7.00 at 10 g L −1 resin. Conductivity, in turn, strongly decreased to less than 100 μS cm −1 at the lowest concentration of resins assayed, stabilizing thereafter. The reason behind the limited removal of AA by the weak base anion exchange resins can probably be found in the alkalinization of the medium resulting from the anion exchange activity of the resins, which increased the pH to values higher than the p K a of AA, so that its acid–base equilibrium was shifted to the formation of the charged acetate anion. And these weak base anion exchange resins, supplied in free base form, are known to only bind carboxylic acids as charge-neutral units (either through hydrogen bonding or via proton transfer) to maintain the charge neutrality of the adsorbent phase. 19,21 In other words, they can only bind undissociated carboxylic acids, hence the importance of the pH being below the p K a of the acid for its proper removal. Moreover, it should be noted that the mere presence of the resins in pure water caused a strong alkalinization to a pH close to 9.0 (results not shown), which would be a consequence of the weak base behaviour of their functional groups (free amines). The Amberlite IRN78 strong base anion exchange resin was more efficient for anion removal than the weak base resins, achieving removal values higher than 90 and 98% for AA and chloride, respectively, at resin concentrations higher than 3.5 g L −1 . Medium pH rapidly rose to very alkaline values, higher than 10.0. As the functional group of the resin is in the OH form, the binding of anions results in the equivalent release of hydroxyl groups, the source of the alkalinization observed. However, unlike what happened with the weak base resins, in this case that alkalinization hardly affected the extent of anion binding. The functional group of this resin is trimethylammonium, so it only binds dissociated, negatively charged carboxylic acids, which are mainly found at alkaline pH values, when pH > p K a . 
The comparison between weak and strong base anion exchange resins showed a higher AA removal capacity for the strong base resins, in agreement with other results found in the literature. 16 However, the opposite trend has also been reported, i.e. , better performance of weak base resins compared to strong base ones. 22 This discrepancy can probably be attributed to the different counter-ion present in the strong base anion exchange resins used in those studies. While in ref. 16 and the present work the resins were in the OH form, in ref. 22 they were in the Cl form. And it has been reported that the nature of the counter-ion in the resin influences strongly the exchange equilibrium, suggesting that the OH counter-ion is more easily displaced by the carboxylate anions than the Cl anion. 23 Finally, Amberlite MB20, a mixed bed strong acid cation and base anion resin, behaved similarly to Amberlite IRN78, but with some relevant differences. The removal of AA and chloride was significantly lower with MB20 than with IRN78 at resin concentrations lower than 5 g L −1 , but from this point on the removal was virtually complete with the former, while with the latter it was near but never reached. This difference could be explained by the fact that MB20 is a mixed bed resin, where only about half of it is a base anion resin. Therefore, at the same concentration of resin, the binding capacity of anions by MB20 would be lower (half, approximately) than by IRN78. This means that a two-fold concentration of MB20 would be needed, with respect to IRN78, to get the same result. However, although this explanation might be correct for chloride removal , it does not appear to be correct for AA . The key for this discrepancy would be in the different pH evolution observed with both resins. As explained above, the use of the IRN78 resin resulted in a strong alkalinization of the medium upon anion binding. With the MB20 resin, however, the binding of the anions did not entail such strong pH increase, but it was better controlled . For resin concentrations lower than 5 g L −1 , pH was maintained at values lower than the p K a of AA, so that the acid was mainly in its undissociated uncharged form, which would not bind to the resin. Only from 5 g L −1 of resin the pH rose above the p K a of the acid and, consequently, the concentration of the dissociated charged acetate anion, the species that actually binds to the resin, increased. The pH buffering capacity of the MB20 resin would result from the concerted activity of the base anion and acid cation resins present in it, so that the simultaneous binding of anions and cations would release hydroxyl groups and protons, respectively, that would neutralize each other. Therefore, the binding of AA, a weak acid, would depend on pH, which controls the proportion of acetate available to bind to the resin. On the contrary, as HCl is a strong acid, it is always completely dissociated and available to bind (chloride) independently of the pH. It was previously mentioned that the removal of AA and chloride with the IRN78 resin was near to be complete but was never reached. This effect could result from the strong alkalinization induced upon anion binding, which means that the concentration of OH − anions in the medium increased to such an extent that ultimately could compete for binding sites with the other anions. 
With the MB20 resin, as the pH was better controlled, the concentration of OH − anions would be very low and would not represent real competition for the other anions, which could be removed completely. In conclusion, from the above results it was considered that the best resin to address the recovery and purification of AA from the DM-DPM medium was the mixed bed resin Amberlite MB20 and, consequently, the next experiments were carried out using it. When the results from Fig. 3 related to the recovery of AA and chloride using the Amberlite MB20 resin are selected and represented in the same graph, some interesting features can be observed. A detailed analysis of the graph allowed two scenarios to be differentiated. In the first one, occurring at a resin concentration of 2 g L −1 , the concentration of chloride decreased from 67 mg L −1 (the concentration in DM-DPM medium) to 7.3 mg L −1 , that is, it was decreased by 89% or, in other words, only 11% of the original chloride remained in solution. Meanwhile, the AA concentration only decreased from 99 to 91 mg L −1 , with 92% of the initial acid remaining in solution. This means that at that resin concentration most of the chloride was removed from DM-DPM medium, while most of the AA remained in solution. So, in this scenario a “cleanup” of DM-DPM medium occurred, selectively removing chloride. As a result, the solution was relatively enriched in AA, so that its purity increased. In the second scenario, occurring at a resin concentration of 5 g L −1 , both AA and chloride were totally removed from DM-DPM medium, so that it could be the starting point for alternative purification strategies where, following this first step, AA would be selectively released from the resin to separate it from the remaining chloride. Several parameters can be used to evaluate the performance of the AA purification process: (a) Yield ( Y ): mass percent of AA recovered in the process. (b) Purity ( P ): mass percent of AA with respect to all the anions present in the medium. (c) Purity increase (PI): ratio of the purity of AA obtained after any separation process to its purity in the original DPM medium. (d) Enrichment factor (EF): ratio of the concentration (in mg L −1 ) of AA to that of the rest of the anions in a sample, relative to the same ratio in DPM medium. A summary of the performance of the AA purification process following scenario 1 of the treatment of DM-DPM medium with the Amberlite MB20 resin in batch is shown in Table 3 . Therefore, after the treatment of DM-DPM medium with the Amberlite MB20 resin according to the conditions of scenario 1, a solution containing 83.7% of the AA present in the original DPM medium was obtained, with a purity of 92.6%, which represents a 38.6-fold purity increase and a 513-fold enrichment. In all the previous experiments, DM-DPM medium was treated with the Amberlite MB20 resin in batch. This operation mode involved the addition of a certain amount of resin to the medium, mixing for a contact time sufficient to achieve anion–resin binding equilibrium, and separation of the resin and liquid fractions. There is an alternative operation mode in which the medium is passed through the resin packed in a column, which might allow a better separation of AA and chloride. The resin binds chloride with higher affinity than AA, but under batch mode some AA is still removed, probably as a result of the pH increase observed. It is possible that under column mode these pH changes could be better controlled, thus improving AA separation. 
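Before moving on to the column experiments, the scenario 1 figures above can be approximately reproduced from the listed concentrations. The following sketch (our illustration; small deviations from Table 3 reflect rounding of the measured concentrations) computes the purity, purity increase, enrichment factor and aggregate yield:

```python
def purity(aa, others):
    """Mass percent of AA with respect to all anions present."""
    return 100 * aa / (aa + sum(others))

# Original DPM medium (mg/L): AA plus the four main mineral anions
aa0, others0 = 100, [1816, 129, 659, 1411]
# Scenario 1 (batch, 2 g/L MB20 on DM-DPM medium): AA plus residual chloride
aa1, others1 = 91, [7.3]

P0, P1 = purity(aa0, others0), purity(aa1, others1)
EF = (aa1 / sum(others1)) / (aa0 / sum(others0))   # enrichment factor
Y = 0.91 * 0.92 * 100       # 91% kept through demineralization x 92% kept in this step

print(f"Purity: {P0:.1f}% -> {P1:.1f}% (purity increase ~{P1 / P0:.0f}-fold)")
print(f"Enrichment factor ~{EF:.0f}")              # ~500, close to the 513 reported
print(f"Aggregate yield ~{Y:.1f}%")                # ~83.7%
```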
So, that column mode was tested aiming to selectively remove chloride from DM-DPM medium. As medium pH was acidic (2.95), a pH value where AA is undissociated and, therefore, uncharged, it was expected that it was unable to bind to the resin, while chloride, negatively charged, could do it. Therefore, the eluate could contain AA at the original concentration and be free of chloride. Once the resin had reached its maximum chloride binding capacity, it would begin eluting from the column. A column containing 0.65 g of Amberlite MB20, with a bed volume (BV) of 1 mL was prepared and the DM-DPM medium was passed through it with a flow rate of 1 mL min −1 , equivalent to 60 BV per h. Fractions of 25 mL (25 BV) of the eluate were taken in the course of the experiment and characterized for the pH, conductivity and AA and chloride content. Results are shown in Fig. 5 . Initially, the eluate was almost free of AA. After a few BV, its concentration in the eluate started to increase, reaching a maximum value close to 200 mg L −1 , double than in the feeding DM-DPM medium, at around 250 BV. Then, the AA concentration decreased to reach finally at about 400 BV the concentration present in the feeding solution and being unchanged thereafter. Most of chloride was, in turn, removed by the resin in the first 250 BV, maintaining a concentration in the eluate below 10 mg L −1 , and then increased slowly to reach its feeding concentration by 400 BV. From that number of BV, the concentration of both anions in the eluate was exactly the same as in the feed, so reaching the breakthrough point. The selective removal of chloride with respect to AA depends on two factors, the intrinsic affinity of the resin for them and the pH. As the affinity of the resin for chloride is higher than for acetate, chloride can displace acetate anions bound to the resin. So, as the liquid front moves through the column, when it finds free binding sites, both chloride and acetate can be bound. However, when the liquid behind the front finds that the binding sites are occupied, acetate can not bind and continues its way to the next free sites, but chloride can displace the acetate previously bound to the resin. As a result, the liquid front would be depleted in chloride and enriched in acetate, which explains the elution pattern showing an overshooting of acetate after the breakthrough of chloride. 24 In addition, pH also plays a relevant role in this process. The pH of the first fraction of eluate increased abruptly from the initial pH of the DM-DPM medium (2.95) to 4.7 and then decreased slowly in the next fractions, as the feed passed through the column, to finally reach the pH value of the feed. As repeatedly explained previously, the charged acetate fraction depends on the pH of the medium, so that the higher the pH the more acetate will be present. This means that at the initial BV, when the pH is higher, more acetate molecules are available for binding. Later, as the pH decreased, most of AA molecules would be undissociated (uncharged) and unable to bind to the resin. Therefore, this pH effect would enhance the displacement of acetate by chloride. If all the fractions eluted to the breakthrough point of chloride are pooled, the resulting solution would contain 92% of the AA fed to the column and one third of the chloride, so obtaining a solution enriched in AA with respect to the DM-DPM medium, but the purification parameters ( Table 3 ) would not improve the results obtained in batch mode with the same resin. 
By discarding some of the fractions eluted at both extremes of the breakthrough curve, the purity and enrichment factor of the process would improve, but this would be detrimental to the recovery yield, which would decrease to an unacceptable degree. The reason behind the worse performance obtained in column mode compared to batch mode might be related to the influence of the flow rate of the liquid through the column on the binding of the anions to the resin. The “contact time” between the anions and the binding sites in the column decreases at higher flow rates, eventually surpassing the kinetic capabilities of the resin, making it more difficult to reach equilibrium and likely negatively affecting anion separation. 25 So, a low flow rate would be preferred. In the column experiment the residence time was 1 min, so the “contact time” was quite short. A lower flow rate could also be applied, but the time required to pass the liquid would be extremely long. For example, in the column system used in this study, 500 min (more than 8 h) would be necessary to pass 500 mL of liquid. If the flow rate were reduced by half, the time would double, to 1000 min (more than 16 h), which would be operationally impractical. Under batch mode, conversely, the “contact time” is longer, long enough to allow equilibrium to be reached, and independent of the liquid volume to be treated. Another factor that could be involved in the lower performance achieved under column mode could be the nature of the mixed bed resin. This kind of IEX resin contains a mixture of strong acid cation and base anion exchange resins and, according to the manufacturer, the densities of the two resins are quite different, with that of the latter being lower. This means that during the formation of the resin bed in the column some degree of separation of the resins could have occurred, resulting in an uneven distribution of both types of resin, with the base anion exchange resin enriched towards the top of the column and vice versa. And this uneven distribution of the resins could locally affect both the binding of ions and the pH, which are closely linked, thus impairing the beneficial pH-buffering effect of the mixed bed resin and resulting in a lower separation performance than would be expected if the resins had been distributed homogeneously throughout the column. Under batch mode, however, this phenomenon would not occur and the separation achieved with the mixed bed resin would be better. Although the purification of AA under the conditions of scenario 1 was considerably improved, particularly under batch mode, the acid still remained in solution at a very dilute concentration, even lower than in the original DPM medium. Accordingly, it would be necessary to apply additional treatments to fully recover and concentrate AA, which would further reduce the recovery yield and make the process unfeasible. So, a different approach was required to further improve purification, and this is where scenario 2 comes in. The previous experiments showed a better performance for the Amberlite MB20 resin in batch mode than in column mode. So, an AA recovery and purification strategy based on scenario 2, described in Section 3.2.2, was assessed under batch mode. The idea was to first remove AA and chloride completely from DM-DPM medium with Amberlite MB20 in batch (5 g L −1 ) and then selectively elute AA using a small volume of a dilute sulfuric acid solution. 
Elution was carried out in batch, by successively applying small volumes of the eluent, so that it would be a step-elution. There were several reasons to apply such kind of step-elution with sulfuric acid. First, considering the affinity order of the resin for the anions (sulfate > chloride > acetate), it was expected that sulfate eluent would first displace acetate from the resin and later chloride. Second, the acidic pH of the sulfuric acid solution would shift the AA/acetate equilibrium to the formation of undissociated AA, which would enhance its release from the resin binding sites. Third, the step-elution under batch mode would allow to reach the binding equilibrium of all the anionic species involved by simply extending the “contact time” sufficiently (a “contact time” of 30 min was found to be enough to reach equilibrium). Fourth, the step-elution would allow precise control of the extent of the acetate displacement and elution, allowing the elution to be finished when chloride or sulfate began to appear in the eluate. And fifth, the use of small volumes of eluent would allow to obtain a more concentrated solution of AA in the eluate compared with that in the feeding. A 500 mL solution of DM-DPM medium was treated with Amberlite MB20 at a rate of 5 g L −1 at room temperature for 2 h with gentle stirring to remove completely AA and chloride. Then, the anion-loaded resin was separated from the anion-depleted liquid by filtration. The anion-loaded resin was finally step-eluted with 20 mM H 2 SO 4 applied in eight 5 mL steps. Each elution step involved the addition of 5 mL of the eluent to the resin, stirring for 30 min, and separation of resin and liquid by filtration. The results of this process are shown in Fig. 6 . Treatment of the DM-DPM medium with the resin resulted in the complete removal of both AA and chloride, leaving a liquid that was essentially pure water, that could be further reused supporting the sustainability of the process. The anion-loaded resin was then step-eluted. In the first elution fraction (1) no anions were detected, which suggest that sulfate anions had bound to free binding-sites still present in the resin. Thereafter, in the next four elution steps (2–5), the AA concentration in the elution fractions steadily increased, reaching a maximum value as high as nearly 2200 mg L −1 in the step 5, that is, more than 20 times more concentrated than in DM-DPM medium. Chloride, in turn, was hardly detected in these fractions, with concentrations lower than 15 mg L −1 in all of them, and sulfate was totally undetectable. From step 6, AA concentration began to decrease and, at the same time, concentration of chloride, first, and sulfate, later, increased. If fractions 2 to 7 (2–7) are pooled the resulting solution would contain 1520 mg L −1 of AA and only 42 mg L −1 of chloride, with a recovery yield for AA of 90.3%. This means that AA would have been concentrated by around 15 times, while chloride levels would be 37% lower than in the original DM-DPM medium, so having considerably improved its purity. Instead, if those that are pooled are fractions 2 to 6 (2–6), the AA concentration would be the same, 1520 mg L −1 , and that of chloride lower, 12 mg L −1 , that is, a greater purity would be obtained, but with a lower recovery yield of 75%. 
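The pooled-eluate figures can be cross-checked from the volumes involved: six pooled 5 mL elution steps against the 500 mL of medium loaded onto the resin. A rough sketch with the values given above (our illustration; the small difference from the 90.3% yield reported reflects rounding of the measured concentrations):

```python
pool_volume_ml = 6 * 5        # fractions 2-7 pooled, 5 mL of 20 mM H2SO4 each
pool_aa = 1520                # mg/L AA in the pooled eluate
feed_volume_ml = 500          # mL of DM-DPM medium treated with the resin
feed_aa = 99                  # mg/L AA in the feed

aa_recovered_mg = pool_aa * pool_volume_ml / 1000    # ~45.6 mg eluted
aa_fed_mg = feed_aa * feed_volume_ml / 1000          # ~49.5 mg loaded
print(f"Recovered {aa_recovered_mg:.1f} of {aa_fed_mg:.1f} mg AA "
      f"(~{100 * aa_recovered_mg / aa_fed_mg:.0f}%)")
print(f"Concentration factor vs. the feed: ~{pool_aa / feed_aa:.0f}x")   # ~15x
```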
The increase in the concentration of AA in the pooled elution fractions compared to that in the original DM-DPM medium results from the strong decrease in the volume of the solutions, from 500 mL to 30 or 25 mL for pooled fractions 2–7 or 2–6, respectively. The purification parameters of this process are shown in Table 4 . The AA purity of the pooled fractions 2–7 and 2–6 would be 96.9 and 99.2%, respectively, so clearly improving the values obtained in the previous processes, involving treatments with the same resin (scenario 1) under batch or column modes. Moreover, as a result of the purity improvement, the enrichment factor shot up to values as high as 1256 and 5086, respectively. The aggregate recovery yield was the only parameter with lower data: slightly lower, but not significantly different, for the pooled fractions 2–7 (82.2 vs. 83.7%), and 18% lower for pooled fractions 2–6. Furthermore, apart from the better results regarding purity and enrichment, the purification process described in this section had an additional and very relevant benefit: the final AA solution was concentrated by 15–16 times compared to the original DPM medium, while in the other two processes its concentration was around 10% lower. Therefore, further concentration of AA to industry-demanding levels using conventional technologies, preferably non-energy intensive technologies such as liquid–liquid reactive extraction 26 or IEX resins again, would be easier by applying this process. A scheme of the whole recovery and purification process proposed in this work is presented in Fig. 7 . In this paper, a case study dealing with the technical feasibility of a downstream process for the recovery and purification of AA from extremely diluted solutions (100 mg L −1 or 0.01% w/w) containing contaminating inorganic salts is presented. The process is based on two successive steps using of IEX resins, that is, a non-energy intensive separation technology. The first step, demineralization, involved a combined treatment of calcium precipitation, acidification with the Amberlite IR-120 resin and treatment with the mixed bed Amberlite MB20 resin, which allowed the total removal of phosphate and sulfate (and likely bicarbonate) and 90% removal of chloride, while still remaining 91% of AA in solution. The demineralized medium resulting from this first step was, in the second step, treated again with the mixed bed Amberlite MB20 resin in batch to remove all AA and chloride remaining in solution and, finally, the anion-loaded resin was step-eluted with a low volume of diluted H 2 SO 4 to selectively elute AA. The recovery yield and purity of AA in the final solution obtained showed an inverse relationship depending on the number of eluted fractions pooled. The greater the number of fractions pooled (2–7 vs. 2–6), the greater the recovery yield (82.2 vs. 68.5%) but the lower the purity (96.9 vs. 99.2%). In any case, the values of both parameters appear to be good, especially considering the final solution of AA obtained, which was 15-fold more concentrated than the original medium . Two issues should be highlighted to support the novelty of this work. On the one hand, the vast majority of downstream processes dealing with the recovery of AA, or carboxylic acids in general, from fermentation media are applied to solutions with concentrations of AA, at least, one to two orders of magnitude higher than the concentration available in this work. 
On the other hand, a mixed bed ion exchange resin is used in this work to both demineralize the AA solution and purify it, instead of the commonly used single strong or weak base anion exchange resins. As far as we know there are no reports in the literature addressing the recovery and purification of AA (or other short-medium chain length fatty acids) either from extremely diluted solutions nor using mixed bed ion exchange resins. It is worth mentioning that although the experimentation has been done with synthetic solutions the results can be fully extrapolated to real samples such as broths resulting from CO 2 fermentation processes to AA, characterized by the very low content of the acid. The microbial biomass present in the broth would be easily removed by microfiltration or centrifugation, and the macromolecular compounds contained in the clarified broth by ultrafiltration. The resulting broth would mainly contain AA and the inorganic salts, so it would be very similar to the DPM synthetic medium used in this work. Other compounds potentially present in the broth, such as trace elements and vitamins, would be at so low concentrations that would hardly interfere with the purification process. The data associated with this article have been included in the manuscript. Tomás Roncal: conceptualization, methodology, investigation, formal analysis, data curation, writing – original draft, writing – review & editing, visualization, supervision, project administration. Ainhoa Aguirre: investigation, resources, data curation. Yolanda Belaustegui: writing – review & editing, funding acquisition, project administration. Elisabet Andrés: resources, project administration. There are no conflicts of interest to declare. | Other | other | en | 0.999996 |
PMC11697287 | Sentiment analysis is a significant task in natural language processing that aims to mine the emotional tendencies of given texts, thereby helping to gain a deeper understanding of the text content and its potential impact. Currently, with the rise of public social media platforms such as Sina Weibo and Twitter, sentiment analysis techniques have come to play important roles in social sentiment analysis and event tracking . Researchers use sentiment analysis algorithms to identify the emotional tendencies of massive numbers of social media posts, thereby comprehensively analyzing trends in public opinion and enabling corresponding measures to be taken. Sentiment analysis technology itself has also expanded from the traditional simple binary classification task to the multi-classification task, that is, identifying the specific emotions contained in the text, such as happy, sad, like, and angry. However, compared to traditional binary sentiment analysis, the multi-label sentiment analysis task faces challenges such as data sparsity, class imbalance, and difficulty in modeling emotional semantics. To this end, researchers have proposed various multi-label text emotion classification models based on statistics, machine learning, and deep learning techniques. For example, sentiment analysis models based on emotional dictionaries identify the emotional categories of texts by matching retrieved words against the emotion dictionary. Text emotion classification models based on Naive Bayes and support vector machines use statistical learning methods to analyze and model word-frequency features and thereby estimate the probability of each text emotion. With the widespread application of deep learning in the field of natural language understanding , deep learning text emotion recognition models, represented by recurrent neural networks (RNNs) and large-scale pre-trained models, have made significant progress in identifying specific text emotion categories by relying on the powerful semantic representation capabilities of deep learning. To efficiently mine and utilize the semantic correlation between emotions to enhance multi-label sentiment analysis, in this study we propose an emotion correlation-enhanced sentiment analysis model (ECO-SAM). Inspired by the self-attention mechanism widely used for language modeling and by the basic emotion theory, we first design a novel attention-based emotion correlation modeling module that can automatically learn the semantic correlation between emotions from data and obtain correlation-enhanced emotion embedding representations. Next, we transform the multi-label sentiment analysis problem into an information retrieval problem, which aims to find the most suitable emotions from the emotion candidate list for a given query text. Then, we design an emotion-matching module that uses neural networks to learn the matching function between emotion and text embeddings from data. Finally, we demonstrate the effectiveness of ECO-SAM via extensive experiments on two public sentiment analysis datasets. The experimental results show that ECO-SAM improves the precision score by up to 13.33%, the recall score by up to 3.69%, and the F1 score by up to 8.44%. Meanwhile, the modeled sentiment semantics are interpretable. The basic emotion theory was proposed by American psychologist Ekman . 
The theory believes that humans have six basic emotions: happiness, sadness, fear, anger, surprise, and disgust. These basic emotions are considered to be universally present across cultures and species. Based on the basic emotion theory, Ekman found some universality of emotional expressions through observing the facial expressions of people in different cultures. Izard expanded the basic emotion theory, discussing the relationship between basic emotions and the relationship between emotion and cognition. The study proposed a model of the emotional system, describing the relationships between basic emotions and how they interact and regulate each other. For example, the author pointed out that there is a close relationship between “anger” and “disgust,” while “happiness” and “sadness” have an antagonistic relationship. Russell proposed the circular emotion theory, which expanded the basic emotion theory and emphasized the construction and subjective experience of emotions, implying the idea of modeling the association between emotions. Cowen and Keltner explored how people describe and distinguish different emotional experiences in self-reports. The study found more fine-grained emotional experiences compared to the basic emotion theory, expanding the understanding of emotions and breaking through the traditional concept of basic emotions. It shows that emotions are complex and diverse and can be described and captured through multiple discrete emotion categories and continuous gradients. In summary, the basic emotion theory first proposed the six basic elements of emotion. Relevant scholars have delved deeper into the construction of emotions and the relationships between emotions based on the basic emotion theory and developed a gradually more comprehensive emotional theory framework. Sentiment analysis is a text classification task that aims to identify the emotional category of a text based on its semantic features. According to the different distribution of emotion labels, sentiment analysis can be divided into emotion polarity classification (binary), emotion category classification (multi-class), and emotion label classification (multi-label). Sentiment analysis models include rule-based emotion dictionary methods , statistical machine learning-based methods , and deep learning-based methods . The rule-based emotion dictionary method is an unsupervised approach that uses emotion dictionaries to obtain the emotion values of emotional words in the document and then determines the overall emotional tendency of the document through weighted calculation. This method does not consider the connections between words, nor does it consider the changes in the emotional tendency of words due to the context. Common emotion dictionaries include English dictionaries such as General Inquirer, SentiWordNet, Opinion Lexicon, and MPQA , as well as Chinese dictionaries such as HowNet , NTUSD , and the Chinese emotion lexicon ontology . The statistical machine learning-based method is a supervised approach that trains machine learning classification models on text data with emotion labels and then applies the trained machine learning classification models to text emotion prediction tasks. For example, Gaye et al. proposed a text emotion recognition model based on support vector machines (SVMs), dividing the emotion analysis process into two strategies and four methods. Ghourabi et al. 
proposed a text emotion recognition method based on Naive Bayes, establishing a three-layer, tree-structured emotion recognition architecture. In addition, Patel and Urry proposed a text emotion recognition method that combines deep semantic features with surface-level grammar, applicable to aspect-level sentiment analysis. The deep learning-based method is also a supervised approach, training neural network classification models on text data with emotion labels and utilizing the strong fitting ability of neural networks to accurately predict text emotion categories. For example, Grandjean et al. proposed a sentiment analysis model based on convolutional neural networks, where the dual convolutional layer structure can extract features from sentences of any length. Ji et al. proposed a sentiment analysis model based on deep belief networks, solving the problem of sparse text features. With the rise of large language models (LLMs) , pre-trained LLM-based methods have emerged in sentiment analysis and achieved excellent performance on large-scale datasets. For instance, Valderrama et al. used the BERT model to obtain more complete text semantic representations, thereby more accurately predicting text emotion categories. Sailunaz et al. compared the sentiment analysis capabilities of various large language models in research on user behaviors related to spreading others' private information on social networks. Gao et al. proposed using prompt learning to enhance the classification performance of pre-trained models when the data volume is relatively small. In the multi-modal emotion recognition scenario, Zhu et al. proposed a sentiment analysis model based on an improved ResNet to improve the accuracy of image emotion classification. Currently, deep learning models play a pivotal role in accurate sentiment analysis. As shown in Table 1 , we summarize current state-of-the-art sentiment analysis methods based on previous research. The attention mechanism, first proposed by Bahdanau et al. , is a deep learning technique used to model the semantic associations and related representations between different parts of a semantic sequence. In natural language processing, the attention mechanism is often used to model the semantic associations within the context of a corpus, thereby establishing the correspondence between the model output and the context in tasks such as text generation and text classification. The transformer model proposed by Vaswani et al. is a representative model using a self-attention mechanism. The transformer model has strong semantic representation and text output capabilities and is the foundation of many text classifiers and text sentiment recognition methods. Existing sentiment analysis methods struggle to model the important role of emotion correlation in emotion recognition. Therefore, this study first proposes a text sentiment analysis method based on emotion correlation modeling (ECO-SAM). Subsequently, the superiority of the ECO-SAM in sentiment analysis and emotion correlation modeling is demonstrated on the Weibo text sentiment analysis dataset. Finally, the ECO-SAM is applied to text emotion analysis under a given topic. The framework of the proposed ECO-SAM algorithm is shown in Figure 1 . The framework consists of three modules: the text encoder module, the attention-based emotion correlation modeling module, and the emotion matching module. 
The text encoder module uses the large-scale pre-trained model BERT to encode the text input into a high-dimensional text semantic vector. The attention-based emotion correlation modeling module uses the attention mechanism to transform the trainable inherent emotion feature vectors and output feature vectors containing emotion correlations, while also producing an emotion correlation matrix. The emotion classification neural network module matches the text semantic vector with each correlation-enhanced emotion feature vector and calculates the probability that the text contains that emotion. The algorithm finally outputs these emotion-containment probabilities. In the training stage, the model's inherent emotion feature vectors, attention-based emotion correlation modeling module, and emotion classification neural network are trained using a multi-label text emotion recognition dataset. In the inference stage, the parameters of ECO-SAM are frozen to achieve end-to-end sentiment analysis. The BERT text encoder in the ECO-SAM is a large-scale pre-trained text encoding model based on BERT . This module utilizes a masked language model (MLM) to generate deep bidirectional language representations. Experiments in the original BERT study demonstrated that BERT achieved state-of-the-art performance on 11 natural language processing tasks, which substantiates the efficacy of the BERT module in text semantic representation. Formally, let the original text input be a character sequence s = w 1 w 2 … w N ; the encoding process of BERT can then be formalized as shown in Equation 1 , where v s senti ∈ ℝ D is the text semantic representation vector and D is the dimension of the text semantic representation vector defined by BERT. In general, D = 1,024. The attention-based emotion correlation modeling module uses the self-attention mechanism to model the semantic correlation of emotions, thereby addressing the lack of research on emotion correlation in existing studies. Specifically, the self-attention mechanism adopts the query-key-value (QKV) pattern. Each emotion in the framework has a trainable query vector, key vector, and value vector. First, for a target emotion, its query vector is obtained, and the cosine similarity between this query vector and the key vector of every other emotion is calculated. The similarity with each other emotion reflects the semantic dependence of the target emotion, i.e., the extent to which the semantic representation of the target emotion depends on that particular emotion. Then, the feature vector containing the emotion correlation of the target emotion is calculated. This vector is the weighted average of the inherent feature vectors (value vectors) of all emotions, with the weights being the calculated semantic dependences. Finally, Pearson's correlation coefficient between the feature vectors containing emotion correlations is calculated and the emotion correlation matrix is output. Formally, one-hot encoding is used to mark each emotion. Let S = [ s j k ] D × K denote the inherent emotion feature vector matrix, Q = [ q j k ] D × K denote the emotion query vector matrix, and Z = [ z j k ] D × K denote the emotion key vector matrix. In the prediction of emotion probabilities, this module first calculates the emotion feature e k using S and the one-hot emotion vector x k , as shown in Equation 2 . 
Meanwhile, it obtains the query and key vectors for each emotion, as shown in Equation 3 (q_k = Q x_k) and Equation 4 (z_k = Z x_k). For a target emotion k, this module therefore takes three inputs: the emotion's inherent feature vector, its query vector, and the key vectors of all emotions. Subsequently, the semantic dependence similarity between the target emotion and each emotion is calculated using Equation 5, which takes the cosine similarity between the target emotion's query vector and every key vector. Finally, the emotion-semantic embedding with correlation modeling for the target emotion is calculated as presented in Equation 6, i.e., the similarity-weighted average of the value vectors. The resulting vector e_k^att is the emotion vector representation that contains the emotion dependence relationships, and it is used in the subsequent steps to recognize the emotion of the text. The emotion matching module uses a neural network to compute the degree of matching between the text semantic representation and the emotion-semantic representation, thereby predicting the probability of each emotion in the text. Specifically, given the semantic representation vector of a sentence and the semantic representation vector of an emotion, this module uses a quadratic-form neural network to predict the probability of the text emotion, as shown in Equation 7, where W = OᵀΛO is the eigenvalue decomposition of the semantic matching matrix W ∈ ℝ^{D×D}. This eigenvalue decomposition implies that the prediction process is equivalent to applying the same linear transformation to the text semantic vector and the emotion-semantic vector and then taking an element-wise weighted average, with the weights being the eigenvalues. The training process of the neural network is equivalent to optimizing the linear transformation and the eigenvalues, so that the predicted probability of text emotion is close to the true data label. Since the sentiment analysis problem addressed by the ECO-SAM is a multi-label classification problem, the cross-entropy loss is used as the loss function, as shown in Equation 8. During the model training process, the training objective of the ECO-SAM is to minimize the loss function value, where Ω represents all the trainable parameters in the ECO-SAM, N represents the number of samples (text samples in the training set), C represents the number of possible emotion categories, y_{i,k} indicates whether text i contains emotion k (y_{i,k} = 1 if it does and y_{i,k} = 0 otherwise), and p_{i,k} is the probability predicted by the ECO-SAM that text i contains emotion k (a compact code sketch of these modules and the loss is given below). This experiment compares the proposed multi-label sentiment analysis model, ECO-SAM, with various baseline text emotion prediction models using the public Weibo dataset. The goal is to verify the accuracy of the ECO-SAM in sentiment analysis and its ability to model emotion feature correlations. For the experimental datasets, this study used two publicly available datasets: NLPCC2014 and GoEmotions. In the NLPCC2014 dataset, each text contains up to two emotions. The GoEmotions dataset consists of 58,000 text samples from the English forum Reddit, with the original data containing 27 fine-grained emotion categories. Based on basic emotion theory, we selected the 7 emotions consistent with the NLPCC2014 dataset, as well as the neutral case, as the targets of sentiment analysis and selected 32,445 valid samples. Next, we split each dataset into training, validation, and test sets in the ratio of 70%:10%:20%, respectively.
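Below is a compact, non-authoritative sketch of how the attention-based emotion correlation module, the quadratic-form matching module, and the multi-label loss (Equations 2–8) could be implemented in PyTorch. Details the paper leaves open are filled with explicit assumptions: the softmax normalization of the cosine-similarity weights, the sigmoid used to turn matching scores into probabilities, and the use of an unconstrained matrix W rather than an explicit OᵀΛO parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EcoSamSketch(nn.Module):
    """Illustrative sketch of the ECO-SAM emotion modules (Equations 2-8)."""

    def __init__(self, dim: int = 1024, num_emotions: int = 8):
        super().__init__()
        # Trainable per-emotion value (S), query (Q) and key (Z) matrices, each D x K.
        self.S = nn.Parameter(torch.randn(dim, num_emotions) * 0.02)
        self.Q = nn.Parameter(torch.randn(dim, num_emotions) * 0.02)
        self.Z = nn.Parameter(torch.randn(dim, num_emotions) * 0.02)
        # Semantic matching matrix W of the quadratic form (Equation 7);
        # parameterized directly here rather than as O^T Λ O (assumption).
        self.W = nn.Parameter(torch.eye(dim))

    def emotion_embeddings(self) -> torch.Tensor:
        """Emotion vectors with correlation modeling, e_k^att, for all K emotions."""
        e = self.S.t()  # (K, D): inherent feature vectors, e_k = S x_k
        q = self.Q.t()  # (K, D): query vectors, q_k = Q x_k
        z = self.Z.t()  # (K, D): key vectors,   z_k = Z x_k
        # Cosine similarity between each query and every key (semantic dependence).
        sim = F.cosine_similarity(q.unsqueeze(1), z.unsqueeze(0), dim=-1)  # (K, K)
        weights = F.softmax(sim, dim=-1)  # assumption: dependences normalized via softmax
        return weights @ e                # (K, D): weighted average of the value vectors

    def forward(self, v_s: torch.Tensor) -> torch.Tensor:
        """Probability that each text (rows of v_s) contains each emotion."""
        e_att = self.emotion_embeddings()      # (K, D)
        scores = (v_s @ self.W) @ e_att.t()    # quadratic form v^T W e_k^att -> (B, K)
        return torch.sigmoid(scores)           # assumption: sigmoid yields probabilities

# Multi-label training objective (Equation 8): binary cross-entropy over emotions.
model = EcoSamSketch()
v_batch = torch.randn(4, 1024)                # stand-in for BERT sentence vectors
y_true = torch.randint(0, 2, (4, 8)).float()  # multi-hot emotion labels
loss = F.binary_cross_entropy(model(v_batch), y_true)
loss.backward()
```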
In terms of the experiment setting, all models were implemented using Python 3.8, with PyTorch as the deep learning framework and Linux as the operating system. The hardware configuration for running the experiments is a server with two 2.10 GHz Intel Xeon E5-2620 v4 CPUs and one NVIDIA Tesla A100 GPU. The main experiments in this study include emotion prediction experiments and emotion feature correlation analysis. Finally, the ECO-SAM emotion prediction model is applied to sentiment analysis. In the emotion prediction experiment, the following baseline models are used: Random: Random prediction. For each emotion, the text has a 1/2 probability of being classified into that emotion category. Whether an emotion prediction model performs better than random prediction is a basic criterion for its usability. cnsenti: Chinese Sentiment, an emotion prediction model based on the HowNet emotion dictionary of the Chinese Knowledge Network. SVM: Support Vector Machine, an emotion prediction model based on support vectors. In the experiment, BERT is used to encode the text into semantic vectors, which are then used as input to the SVM. LSTM: Long short-term memory, a type of recurrent neural network (RNN) architecture designed to address the vanishing gradient problem in traditional RNNs. LSTMs are particularly effective at learning long-term dependencies in data, making them well-suited for applications such as sentiment analysis and time series analysis. BiLSTM: An extension of the traditional LSTM architecture that processes input sequences in both forward and backward directions. This bidirectional approach provides a more comprehensive understanding of the sequence. BERT: A pre-trained transformer-based language model. BERT can encode raw texts into semantic vectors with rich information for downstream tasks. For the text emotion prediction task, we use a fully connected neural network as the downstream output layer. T5: A transformer-based language model proposed by Google that unifies various NLP tasks by framing them all as text-to-text problems, where both input and output are text strings. The results of the text emotion prediction experiment are shown in Table 2. Considering the characteristics of the multi-label classification task, the evaluation metrics are Micro Precision, Micro Recall, and Micro F1 Score (a small computational sketch follows below). The higher the score for each of these evaluation metrics, the higher the accuracy of the model’s text emotion prediction. Since GoEmotions is an English dataset, the baseline model cnsenti, which is based on a Chinese dictionary, is unable to recognize the text emotions in this dataset. From the above experimental results, it can be seen that the ECO-SAM proposed in this study outperforms the existing text emotion prediction baseline models in terms of precision, recall, and F1 score, with the highest increase in precision being 13.33%, the highest increase in recall being 3.69%, and the highest increase in F1 score being 8.44%. This demonstrates that the ECO-SAM can predict text emotions more accurately than existing models ( Table 3 ). Furthermore, among the baseline models, the BERT method also significantly outperforms the other existing methods. The comparison between BERT and cnsenti shows that the text emotion prediction model based on BERT pre-trained language encoding has better performance on Weibo emotion prediction than the traditional model based on rules and emotion dictionaries.
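As a concrete illustration of the evaluation metrics reported in Table 2, the snippet below computes micro-averaged precision, recall and F1 for multi-label predictions. The use of scikit-learn and the toy label matrices are assumptions for illustration only; the paper does not state which tooling was used for metric computation.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Toy multi-label data: rows = texts, columns = emotion categories (illustrative only).
y_true = np.array([[1, 0, 0, 1],
                   [0, 1, 0, 0],
                   [1, 1, 0, 0]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 1, 0],
                   [1, 1, 0, 0]])

# Micro-averaging pools true/false positives across all emotion labels before
# computing each metric, which is why it suits the multi-label setting.
print("Micro P :", precision_score(y_true, y_pred, average="micro"))
print("Micro R :", recall_score(y_true, y_pred, average="micro"))
print("Micro F1:", f1_score(y_true, y_pred, average="micro"))
```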
The comparison between BERT and SVM shows that the text emotion prediction algorithm based on neural networks has better performance on Weibo emotion prediction than the algorithm based on SVM. Compared to the best baseline model BERT, our proposed ECO-SAM method further improves the performance of the text emotion prediction model based on BERT pre-trained language encoding through an innovative emotion feature modeling module. This section uses the NLPCC2014 dataset as an example to analyze the ability of the ECO-SAM to model emotional semantic similarity. The ECO-SAM text emotion prediction model improves the accuracy of text emotion prediction by modeling the correlation between emotion features through the attention-based emotion modeling module. This experimental stage mainly focuses on the modeling results of the emotional feature correlation in the ECO-SAM. In the ECO-SAM, emotional features are represented as e_k^att, where k represents the emotion category sequence number. For any two emotions k1 and k2, this experiment uses Pearson’s correlation coefficient of the emotion features as the measure of emotion feature correlation, denoted as Corr_{k1,k2} (a brief computational sketch is given below). This correlation coefficient ranges between −1 and 1. When Corr_{k1,k2} > 0, the two emotion features are positively correlated (similar); when Corr_{k1,k2} ≈ 0, the two emotion features are uncorrelated (independent); when Corr_{k1,k2} < 0, the two emotion features are negatively correlated (semantically opposite). The results of the emotion feature correlation calculation are shown in the following figure, which includes seven emotions: anger, disgust, fear, happiness, like, sadness, and surprise. The brighter the color of each square in the figure, the greater the correlation value, and the stronger the association between the two emotions. From Figure 2, the three emotions most strongly associated with each emotion can be identified. The above results show that different types of emotions, due to their semantic differences, either exhibit strong correlations or are mutually independent of each other. Some emotions, due to the consistency of their semantics, often exhibit a relatively strong clustering feature. For example, “anger” and “disgust” are both negative emotions, and their semantic correlation reaches 0.99. They also have relatively strong correlations with “fear,” indicating that these emotions are similar in semantic connotation, which is consistent with people’s intuition. At the same time, “happiness” and “like” have a relatively strong correlation, indicating that the two intuitively positive emotions also have similar semantic connotations. In addition, “surprise” has a relatively high semantic similarity with positive emotions such as “happiness,” as well as with negative emotions such as “fear.” This suggests that “surprise,” as an emotion that an individual perceives in response to sudden changes, tends to be neutral. In other words, “surprise” can coexist with positive emotions (such as “pleasant surprise”) and also with negative emotions (such as “horrifying surprise”). The significance of this research is as follows: First, at the theoretical level, this study organically combines basic emotion theory and deep learning technology, innovatively proposes a large-scale pre-trained text emotion recognition method (ECO-SAM), and verifies the method’s accurate text emotion recognition and emotion-semantic correlation modeling capabilities through large-scale experiments on real datasets.
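The following snippet sketches how the emotion feature correlation matrix and each emotion's most strongly associated emotions could be computed from the learned e_k^att vectors. The random matrix standing in for the trained emotion features is purely illustrative; in practice the vectors would be taken from the trained ECO-SAM model.

```python
import numpy as np

emotions = ["anger", "disgust", "fear", "happiness", "like", "sadness", "surprise"]
# Stand-in for the learned emotion feature vectors e_k^att (K x D); in practice
# these would come from the trained ECO-SAM model rather than a random generator.
rng = np.random.default_rng(0)
E = rng.normal(size=(len(emotions), 1024))

# Pearson correlation coefficient between every pair of emotion feature vectors.
corr = np.corrcoef(E)  # (K, K) matrix with values in [-1, 1]

for i, name in enumerate(emotions):
    order = np.argsort(corr[i])[::-1]                  # most to least correlated
    top3 = [emotions[j] for j in order if j != i][:3]  # exclude the emotion itself
    print(f"{name:>9}: {', '.join(top3)}")
```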
In the task of sentiment analysis, accuracy is a core issue in related research and is also an important technical guarantee for public opinion monitoring. Therefore, the high performance of ECO-SAM in the experiments is undoubtedly of great significance for enhancing the effectiveness of public opinion monitoring. Second, by leveraging the emotion -semantic correlation modeling capability of ECO-SAM, this study also analyzes the correlation relationships between different emotions within this topic, providing important data references for related public opinion monitoring. At the same time, this research still has some limitations. First, due to the limitations of available data, the training corpus built using the ECO-SAM is still not sufficient to fully unleash the model’s maximum performance, and the data volume needs to be further increased in future research. Second, in terms of text semantic parsing capability, the performance of the ECO-SAM method in recognizing the emotions of texts with large implicit information such as irony and sarcasm still needs to be improved. In the future research plan, on the one hand, we can further improve the text emotion recognition capability through methods such as expanding the dataset and optimizing the model architecture. On the other hand, with the rise of large language models (LLMs) (such as ChatGPT), we can combine the advantages of LLMs in text generation and emergent capabilities, as well as the advantages of ECO-SAM in strong semantic modeling and low computational cost, to develop more efficient sentiment analysis techniques. Furthermore, the topic and user distribution on online social platforms are complex and rich in information. How to leverage the rich topic and user information to assist text emotion recognition and public opinion monitoring, and explore the downstream applications of emotion recognition and emotion-semantic modeling, we also believe, is an important future research direction. Online social platforms are highly susceptible to large-scale controversial network issues, many of which can easily escalate into emotionally charged irrational propagation. Existing sentiment analysis models have difficulty in modeling emotion correlation, and the accuracy of emotion prediction needs to be improved. To solve the above problems, this study first conducted extensive and in-depth-related research and innovatively proposed an emotion correlation-enhanced sentiment analysis model (ECO-SAM) based on basic emotion theory and deep learning technology, to achieve accurate text emotion recognition and emotion correlation modeling on online social platforms. The large-scale comparative experiments on the real text emotion recognition Chinese dataset NLPCC2014 and the English dataset GoEmotions verified the accurate text emotion recognition capability of the ECO-SAM. Emotion recognition comparative experiments showed that the ECO-SAM improved the precision, recall, and F1 score of text emotion recognition by 13.33, 3.69, and 8.44%, respectively, compared to the optimal baseline method BERT, effectively improving the accuracy of text emotion recognition. The emotion feature correlation experiment showed that emotions with similar emotional colors (positive/negative) have relatively strong semantic correlations; the “surprise” emotion has a relatively high semantic correlation with both positive emotions and negative emotions, acting as a bridge between the two in the emotion correlation graph. | Other | other | en | 0.999996 |
PMC11697288 | Magnesium plays a crucial role in the body’s functions. It is involved in over 600 enzymatic reactions that regulate the functioning of the heart, blood vessels, neurons, muscles, and other organs and systems ( 1 ). Most of the magnesium is found in bones and soft tissues, with only 1% in the blood ( 2 ). Therefore, serum magnesium levels correlate poorly with total body magnesium levels or concentrations in specific tissues ( 3 ). Serum magnesium concentrations slightly depend on a child’s age and range from 0.70 to 0.95 mmol/L in children older than 5 months ( 4 , 5 ), and serum levels below 0.7 mmol/L are defined as hypomagnesemia ( 2 ). Symptoms of magnesium deficiency are non-specific and may mask signs of other nutrient deficiencies or non-specific symptoms of chronic diseases ( 6 ). Common causes of magnesium deficiency include insufficient dietary intake, impaired absorption in the gastrointestinal tract, kidney dysfunction, medications (diuretics, calcineurin inhibitors, and certain antibiotics), and genetic factors ( 2 ). Insufficient dietary intake is one of the most common factors of hypomagnesemia in children. Recommended magnesium intake varies by age and sex ( 7 ) and ranges from 75 mg in children aged 7–12 months to 410 mg in boys and 360 mg in girls aged 14–18 years ( 8 ). Several studies have shown insufficient dietary magnesium intake in adult patients in Europe and North America ( 7 , 9 ). Data on magnesium intake from food in the pediatric population are limited, though insufficient dietary intake is noted, particularly in adolescents ( 10 ). Numerous studies have demonstrated the impact of hypomagnesemia on the development of various metabolic disorders, including insulin resistance and diabetes mellitus (DM) ( 11 ). The frequency of hypomagnesemia ranges from 13.5% to 47.7% in patients with type 2 DM ( 12 ). On the other hand, high magnesium intake has been shown to prevent chronic metabolic complications ( 11 ). The positive effects of magnesium in diabetes include improved glucose and insulin metabolism, reduced chronic low-grade inflammation, protection of cells from oxidative stress and damage, improved lipid profile, enhanced endothelium-dependent vasodilation, and neuropathy prevention ( 2 , 11 , 13 ). The aim of our study was to determine dietary magnesium intake, serum magnesium concentration in children with type 1 DM, and their impact on the clinical course of DM. This case-control study included 50 children with type 1 DM (cases) and 67 healthy children (control) aged 6–17 years. The children with DM were examined during hospitalization in the endocrinology department of Ternopil regional children’s hospital, Ukraine. The control group children were examined during routine preventive check-ups at the outpatient department of city and regional children’s hospital in Ternopil, Ukraine. The study was conducted in the spring and autumn of 2021. Inclusion criteria for the control group were the absence of chronic diseases, acute illnesses, and medication intake, along with informed consent from the children and/or their parents to participate in the study. Inclusion criteria for the DM group were a confirmed diagnosis of DM. Exclusion criteria for this group included the presence of other chronic diseases, kidney dysfunction, acute illnesses, and refusal of children and/or their parents to participate in the study. 
Data collection involved a survey to gather basic characteristics (age, gender, place of residence, and parents’ education) and clinical data for patients with DM (complaints, medical history, medication, vitamin, mineral, and supplement intake). In addition to the primary complaints related to DM, attention was paid to other symptoms that might indicate hypomagnesemia, such as headaches, dizziness, attention disorders, memory issues, depression, irritability, sleep disturbances, cramps, muscle weakness, tremors, and involuntary muscle spasms ( 14 ). To assess dietary magnesium intake, a survey was conducted regarding the weekly consumption of specific food items. The list included the major foods for children of different ages, especially those containing magnesium. Each child, under parental supervision, recreated their weekly diet by specifying the number of portions of each food. Portion sizes were standardized (e.g., a cup or half a cup, a teaspoon or tablespoon, a slice, etc.). For younger children (ages 6–9), parents helped reconstruct the weekly diet. Using a questionnaire based on a magnesium content database in food products ( 10 ), the average amount and sources of magnesium intake were determined. The total weekly magnesium intake and the average daily intake from food were calculated and compared with national and international recommendations for daily nutrient requirements in children ( 8 , 10 , 15 ) (a worked example of this calculation is sketched below). Children with DM had certain dietary restrictions, such as a limited intake of baked goods, barley, millet, pearl barley, oats, legumes, and sour cream. Rice, semolina, pasta, salty cheeses, sweet curds, cream, fatty meats and fish, canned foods in oil, and caviar were excluded or significantly limited. All children underwent comprehensive clinical examinations, including anthropometric measurements [weight, height, and body mass index (BMI)]. The level of glycemic control was determined based on glycated hemoglobin (HbA1c) levels. According to the ISPAD Clinical Practice Consensus Guidelines 2022 ( 16 ), HbA1c levels below 7% were considered optimal glycemic control, while levels above 7% indicated poor glycemic control. Additionally, serum magnesium, calcium, and phosphorus concentrations were measured. Blood samples were taken on an empty stomach via venipuncture from the elbow vein using disposable “Vacutainer” systems. Quantitative determination of magnesium, calcium, and phosphorus was performed by a colorimetric method using ELISA kits from Assay Kit Elabscience, USA. All measurements were conducted in the same laboratory for all participants. Written informed consent was obtained from all study participants or their parents before blood collection. The experimental protocol was conducted in accordance with the guidelines of the 1975 Declaration of Helsinki, revised in 2000, and approved by the I. Horbachevsky Ternopil National Medical University Ethics Committee. Statistical analysis was conducted using the STATISTICA 10.0 statistical package and Microsoft Excel 2003. For normally distributed samples, mean values (m) and standard deviation (SD) were calculated. The data were processed using variation statistics methods. Student’s t-test was used to compare mean values. For non-normally distributed samples, data were presented as medians and interquartile ranges (IQR) [25%–75%]. The Mann–Whitney U-test was used to compare indicators in two independent groups. Frequency indicators in the observation groups were compared using the χ² test and Yates’ corrected χ² test.
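To illustrate the dietary calculation described above, the snippet below converts weekly food-frequency answers into an average daily magnesium intake and compares it with a recommended value. All food items, magnesium contents per portion, and the recommended intake shown here are hypothetical placeholders, not data from the study or its magnesium content database.

```python
# Illustrative only: foods, magnesium contents per standard portion, and the
# recommended daily intake below are placeholders, not values from the study.
mg_per_portion = {"oatmeal (half cup)": 30, "rye bread (slice)": 23,
                  "milk (cup)": 24, "banana (piece)": 32}
weekly_portions = {"oatmeal (half cup)": 5, "rye bread (slice)": 14,
                   "milk (cup)": 7, "banana (piece)": 3}

weekly_mg = sum(mg_per_portion[food] * n for food, n in weekly_portions.items())
daily_mg = weekly_mg / 7
recommended_mg = 240  # placeholder age- and sex-specific recommendation

print(f"Average daily intake: {daily_mg:.0f} mg")
print("Below recommended intake" if daily_mg < recommended_mg else "Meets recommended intake")
```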
Odds ratios (ORs) and 95% confidence intervals were determined to explore the influence of potential risk factors. Only statistically significant features were used for this analysis. Correlation analysis was performed by calculating Spearman’s rank correlation coefficient. Differences were considered significant at p < 0.05. Baseline characteristics of observed children with type I DM (cases) and healthy children (control) are presented in Table 1 . Boys predominated among children with DM (62%), while there was no significant gender difference among healthy children. There was no significant difference in place of residence among patients with DM, and most parents (80%) had secondary education . In the group of healthy children, urban residents predominated with high significance, and higher education was observed in 52.2% of parents. There was no significant difference in age and BMI between the groups of children with DM and healthy children. Calcium and phosphorus levels did not differ between the groups. The average duration of DM in children was 4.95 ± 4.38 years, ranging from 1 week to 14 years. The average HbA1c level in children with type I DM was 8.83 ± 2.77%, ranging from 5.5% to 15.8%. Optimal glycemic control was observed in 31.6% of patients, while poor control was noted in 68.4%, among which 10 (26.3%) patients had newly diagnosed diabetes. Specific symptoms of DM, such as polyuria and polydipsia, were present in 11 (22.0%) children at the time of the examination, mostly in those with newly diagnosed or poorly controlled DM. The frequency of non-specific symptoms in children with DM and healthy children is shown in Figure 1 . Among the non-specific complaints in children with DM, irritability (34%), muscle spasms (30%), headache (28%), dizziness (16%), and muscle weakness (16%) were most commonly reported. Healthy children significantly less often reported these non-specific symptoms, with headache being the most common – 9 (13.4%), sleep disturbances – 9 (13.4%), and irritability – 7 (10.4%). Children with DM more frequently reported irritability, muscle spasms, headache, dizziness, and muscle weakness compared to healthy children ( p = 0.002; p < 0.001; p = 0.005; p < 0.001; p = 0.013, respectively). The daily dietary magnesium intake and serum concentrations in patients with type 1 DM and healthy children is shown in Table 2 . The median values of dietary magnesium intake did not differ between the group of children with DM and healthy children. The percentage of children with DM whose magnesium intake was below the recommended age norms was 1.34 times higher than the corresponding percentage of healthy children, although the difference was not statistically significant ( p = 0.201). It should be noted that in both groups (cases and control), insufficient dietary magnesium intake was more frequently observed in the 12–17 age group than in the 6–11 age group . Serum magnesium concentration in healthy children was higher than that in children with DM ( p = 0.011) , although the proportion of children with hypomagnesemia did not differ between the two groups (14.0% and 11.9%, respectively). Based on the serum magnesium concentration, children in both groups were divided into two subgroups: those with normal magnesium concentration and those with hypomagnesemia ( Table 3 ). Baseline characteristics and clinical indicators were determined according to the magnesium concentration in patients with DM and in healthy children. 
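A minimal sketch of the kinds of analyses described in the statistical methods above (group comparison of means, a non-parametric comparison, a frequency comparison, Spearman correlation, and an odds ratio with an approximate Woolf-type 95% confidence interval), run here on synthetic data; the study itself used STATISTICA 10.0 and Excel, and the numeric values below are illustrative rather than study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic serum magnesium values (mmol/L); illustration only, not study data.
mg_dm = rng.normal(0.80, 0.14, 50)       # children with type 1 DM
mg_healthy = rng.normal(0.88, 0.10, 67)  # healthy controls

t_stat, p_t = stats.ttest_ind(mg_dm, mg_healthy)       # Student's t-test (normal data)
u_stat, p_u = stats.mannwhitneyu(mg_dm, mg_healthy)     # Mann-Whitney U (non-normal data)
chi2, p_chi, _, _ = stats.chi2_contingency([[7, 43], [8, 59]])  # e.g. hypomagnesemia yes/no by group
rho, p_rho = stats.spearmanr(mg_dm, rng.normal(8.8, 2.8, 50))   # e.g. magnesium vs HbA1c

# Odds ratio with an approximate 95% CI (Woolf method) from a 2x2 table of
# hypothetical counts (exposure rows, outcome columns).
a, b, c, d = 5, 2, 9, 34
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"t-test p={p_t:.3f}, Mann-Whitney p={p_u:.3f}, chi-square p={p_chi:.3f}")
print(f"Spearman rho={rho:.2f} (p={p_rho:.3f}), OR={odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```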
No effect of gender on magnesium status was found in either group. However, hypomagnesemia was more frequently observed in children from rural areas in both groups: 85.7% in children with DM and 62.5% in healthy children ( p = 0.054 and p = 0.010, respectively). Thus, living in a rural area had an influence on hypomagnesemia. Parental education did not affect the magnesium status in either group. In the group of patients with DM, the mean age of the children did not differ depending on the magnesium status, but the mean age of healthy patients with hypomagnesemia was higher than that of children with normal magnesium levels ( p = 0.027). Accordingly, similar trends were observed for BMI, which was higher in children with hypomagnesemia than in patients with normal serum magnesium concentration, although the difference was statistically significant only in the group of healthy children ( p = 0.031). However, there was no difference in BMI percentiles between the subgroups with hypomagnesemia and normal magnesium concentration in either group. The mean duration of DM did not differ between children with hypomagnesemia and those with normal magnesium concentration. The mean HbA1c level was somewhat higher in patients with hypomagnesemia, but the difference was not statistically significant ( p = 0.313). There was no significant correlation between HbA1c levels and magnesium concentration in children with DM. However, all children with hypomagnesemia had poor DM control, compared to 61.3% of patients with normal magnesium concentration ( p = 0.047). The mean magnesium concentration in children with optimal glycemic control was significantly higher than in children with poor control (0.96 ± 0.09 vs. 0.78 ± 0.14 mmol/L, p = 0.001). Additionally, there was an inverse correlation between serum magnesium levels and glycemic control. The median value of daily magnesium intake in children with DM was higher in those with normal blood magnesium concentration, but the difference was not statistically significant ( p = 0.131). In children with DM and hypomagnesemia, significant decreases in serum calcium and phosphorus concentrations were observed ( p = 0.008 and p = 0.017, respectively). In healthy children, changes in phosphorus and calcium levels due to hypomagnesemia were not significant ( p > 0.05). When the frequency of non-specific symptoms in children with DM was compared according to magnesium status, headache and attention disorders were found to be significantly more frequent in patients with hypomagnesemia (71.4% vs. 20.9%, p = 0.006; 28.6% vs. 4.7%, p = 0.031, respectively). Additionally, ORs were determined for the significant indicators. Hypomagnesemia was found to influence the occurrence of headache [OR = 9.4444; 95% CI ; p = 0.014] and attention disorders [OR = 8.2000; 95% CI ; p = 0.057]. In the group of healthy children, no difference in the frequency of symptoms was observed between children with normal magnesium concentrations and those with hypomagnesemia.
According to the National Health and Nutrition Examination Survey (NHANES) for 2013–2016, 48% of Americans of various ages consume less magnesium from food and beverages than needed ( 10 ). The study also showed low magnesium dietary intake in adolescents. Another study indicated that 66% of adult non-users of dietary supplements had inadequate mineral intakes ( 17 ). Our study also revealed more frequent inadequate magnesium dietary intake in adolescents, both healthy and with DM (70% and 61.8%, respectively). The inadequate magnesium intake observed, especially in adolescents, is concerning given the increased dietary needs during this growth phase ( 8 ). Previous studies on dietary magnesium intake in patients with DM mainly focused on adults with type 2 DM ( 16 , 17 ). Overall, 23.5% of patients with type 2 DM had inadequate magnesium intake ( 18 ). A meta-analysis demonstrated an inverse association between magnesium intake and the risk of type 2 diabetes ( 19 ). The lower serum magnesium concentration in children with DM compared to healthy children is clinically significant ( p = 0.011), suggesting that magnesium deficiency may contribute to complications associated with diabetes ( 20 ). Researchers suggest that there may be an association between impaired antioxidant protection and magnesium deficiency in children with type 1 DM ( 21 , 22 ). The frequency of hypomagnesemia in children with DM was 14% and did not significantly differ from that in healthy patients ( p > 0.05). Other studies reported a hypomagnesemia frequency of 3.4% in children with type 1 DM, also not significantly different from healthy children ( 23 ). Some researchers indicate that about 10% of hospitalized patients have magnesium deficiency ( 5 ). Hypomagnesemia was more frequently observed in rural residents, both in healthy children and in patients with DM, probably due to potential dietary access differences. The OR indicated that living in rural areas may be a risk factor for hypomagnesemia. Separate studies have shown the impact of low magnesium and potassium intake in rural areas on the development of type 2 DM ( 24 ). Hypomagnesemia was more common in children with DM with poor glycemic control, as demonstrated in other studies ( 13 , 22 , 25 , 26 ). Less than a third of patients had optimal glycemic control, while the rest had poor control, consistent with the results of our previous study with a larger number of patients ( 27 , 28 ). The average serum magnesium concentration in children with optimal glycemic control was significantly higher than in children with poor control ( p = 0.001). A negative correlation between serum magnesium levels and glycemic control was also established . This negative correlation indicates that hypomagnesemia may exacerbate glycemic dysregulation in children with DM. Other researchers suggest that hypomagnesemia in adult DM patients is due to insulin resistance, a sign of type 2 diabetes ( 29 ). The authors also did not note a correlation between HbA1c levels and magnesium concentration, which was also demonstrated in our study. Similar trends of hypomagnesemia affecting glycemic control were noted in adults with type 2 DM ( 18 , 30 ). However, another study showed a negative correlation with HbA1c % in children with type 1 DM ( 22 ). In children with DM, hypomagnesemia affected serum calcium and phosphorus levels. 
Lower serum calcium and phosphorus levels in children with DM and hypomagnesemia ( p = 0.008 and p = 0.017, respectively) highlight the potential risk for compromised bone health in this population. Such changes were not observed in healthy children. Another study showed that serum magnesium concentration positively correlated with calcium and phosphorus levels ( 13 ). Magnesium is involved in the transport of potassium and calcium ions and maintains their levels in the blood ( 1 ). Electrolyte imbalance due to hypomagnesemia was most pronounced in patients with DM. Hypomagnesemia results in decreased levels of parathyroid hormone and vitamin D3, which can affect calcium-phosphorus metabolism and impair bone resorption ( 31 ). Magnesium influences bone cell growth and formation, as well as bone strength ( 15 ). The role of hypomagnesemia in the development of osteoporosis is also well-established ( 32 ). The symptoms of hypomagnesemia are not specific and may be associated with the underlying disease and other deficiency states, including hypocalcemia ( 14 , 33 ). While muscle cramps and headaches are common symptoms, their persistence in children with DM may indicate underlying metabolic disturbances that could impact overall health and quality of life ( 32 ). We collected symptoms that may be associated with hypomagnesemia. In children with DM and hypomagnesemia, headaches and attention disorders were more common. Although these symptoms are multifactorial, the OR indicated that hypomagnesemia in children with DM could contribute to headaches and tended to affect attention disorder symptoms. These patterns were observed only in children with DM. Overall, it is suggested that symptomatic magnesium deficiency due to low dietary intake in healthy individuals is rare since the kidneys limit the excretion of the mineral in case of its deficiency ( 32 ). However, insulin resistance and/or type 2 diabetes increase magnesium excretion in the urine. Magnesium loss is considered a secondary cause of poor glycemic control and high glucose concentrations in the kidneys, which increase urine output ( 15 ). Nonetheless, other studies showed that increased magnesium intake reduced the risk of developing DM ( 19 ) and improved glycemic control in DM patients ( 34 , 35 ). Clinical signs and symptoms of hypomagnesemia are thought to appear at serum magnesium levels below 0.5 mmol/L, although this level was not observed in any of our children. Research on magnesium dietary intake in DM patients was conducted for the first time among the Ukrainian pediatric population. The control group of children of different ages allowed for comparison with healthy children, strengthening the study. We determined the contribution of hypomagnesemia to certain symptoms observed in children with DM and other conditions. While this study provides valuable insights into magnesium intake among the Ukrainian pediatric population, the small sample size and single-center design limit the generalizability of our findings. However, the study allowed us to identify certain patterns. Conducting a multicenter study involving more patients and more indicators will help identify other effects of hypomagnesemia on the course of DM in children. Mean serum magnesium concentration in patients with type 1 DM was lower than in healthy children, although there was no difference in daily magnesium intake.
Hypomagnesemia was more frequently observed in rural children, both those with type 1 DM and healthy ones, and was associated with poor glycemic control in children with DM. Additionally, children with type 1 DM and hypomagnesemia had lower serum calcium and phosphorus levels and more frequent symptoms such as headaches and attention deficits. These findings underscore the need for routine screening of magnesium levels in children with DM, particularly those in rural areas, to prevent potential complications associated with hypomagnesemia. Further research is needed to explore additional impacts of hypomagnesemia on the clinical course of DM in children. | Study | biomedical | en | 0.999996
PMC11697290 | Suicide is a global health issue, with over 700,000 people dying by suicide each year ( 1 ). In Australia, approximately nine people are lost to suicide each day ( 2 ). Recent estimates suggest that for each death by suicide 135 people are exposed ( 3 ), indicating the wide-reaching impact of suicide and the potential for further distress for individuals, families and communities. In addition to suicide deaths, one in six Australians aged 16-85 years have experienced suicidal thoughts or behaviours in their lifetime ( 4 ). Suicide prevention interventions can reduce suicide deaths and behaviors ( 5 ), and numerous brief interventions exist to support people experiencing suicide-related distress ( 6 ). One intervention that has been gaining popularity in both clinical and community settings is the Safety Planning Intervention (SPI; 7). The SPI involves developing a personalised list of coping and personal support strategies for use during the onset or worsening of suicide-related distress, typically through six components: a) recognising individual warning signs for an impending suicidal crisis; b) identifying and employing internal coping strategies; c) using social supports to distract from suicidal thoughts; d) contacting trusted family or friends to help address the crisis; e) contacting specific mental health services; f) eliminating or mitigating use of lethal means ( 7 ). Although widely used with US military veterans, the flexibility of the SPI has been demonstrated through its application across diverse age groups ( 8 , 9 ), settings ( 10 ), and with varied populations including refugees ( 11 ), autistic people ( 12 ) and individuals recently incarcerated ( 13 ). The SPI has also been incorporated within or alongside wider therapeutic approaches, such as motivational interviewing ( 14 ). Traditionally completed in hard-copy format, the SPI has more recently been adapted to various digital versions (e.g., 15,16) which can be used in clinical settings or accessed by the public without clinical support. Two recent systematic reviews ( 17 , 18 ) and one meta-analysis ( 19 ) have explored the effectiveness of the SPI and safety planning type interventions. Through narrative synthesis of results, two of the reviews (n = 20 studies, 17; n = 22 studies, 18) concluded that this intervention contributes to reductions in suicidal ideation and behaviour, as well as suicide-related outcomes, such as depression and hopelessness, and improvements in service use and treatment outcomes. While the meta-analysis of six safety planning type studies ( 19 ) also found reduced suicidal behaviour among intervention participants compared to treatment as usual, this study found no evidence for effectiveness on suicidal ideation. Thus, despite the difference in findings related to ideation, current evidence generally supports the efficacy of the SPI in improving people’s coping capacities and safety, with benefits particularly pronounced for reductions in suicidal behavior. However, less emphasis has been dedicated to understanding the underlying processes by which people using the SPI derive benefits ( 20 ). While there is emerging evidence linking the quality and personalisation of safety plans to reduced suicidal behaviour and hospitalisations ( 16 , 21 ), these mechanisms have been quantitatively assessed, rather than qualitatively described from the perspective of those who have used a safety plan. 
Contemporary thinking recognizes the critical role that lived and living experience plays in suicide prevention research yet there has been limited integration of lived experience in the development of existing suicide prevention interventions ( 22 ). Incorporating lived and living experience understandings into all stages of suicide prevention research is essential for ensuring that suicide prevention strategies meet the needs of those they have been designed for. Moreover, a personalized understanding of peoples’ experiences of using the SPI is needed to inform clinical practice, policy, and future research to enhance the effectiveness of the SPI and ultimately reduce the incidence of suicide and suicide-related distress. This review aims to complement quantitative reviews and meta-analysis ( 17 – 19 ) by synthesizing the existing qualitative, peer-reviewed evidence regarding the experiences of diverse stakeholders (consumers, support persons, and clinicians) involved in the SPI. These stakeholder experiences include but are not limited to: what is perceived as helpful and unhelpful about safety planning; what processes facilitate positive effects; the collaborative process regarding how the safety plan is developed, used, accessed, and revised; as well as the perceived impact of the safety plan on suicide-related outcomes and other well-being indicators. This systematic review followed the PRISMA 2020 guidelines ( 23 ) and was conducted according to the Joanna Briggs Institute (JBI) methodology for systematic reviews of qualitative evidence ( 24 ). The review protocol was pre-registered with PROSPERO . The search strategy was developed by MF, based on a previous safety planning systematic review ( 17 ), and refined in consultation with an academic librarian. We conducted searches on 28 November 2023 in seven databases: Embase, Emcare, MEDLINE and PsycInfo, in the Ovid platform; as well as CINAHL, Scopus and Web of Science. The final search strategy was broad, including terms for safety planning and suicide. Additional terms were trialed (e.g., for participant groups and study designs), however these restricted results and were excluded from the final strategy. We limited results to English language and a publication date range of 2000 to present. See Supplementary Data Sheet 1 for the search strategies used in each database. Reference lists of included articles were pearled in duplicate (MF, EO, KR) for potentially relevant studies. Search results were imported to EndNote 21 (Clarivate, Philadelphia, USA) to manually identify and remove duplicates (MF). We screened the remaining results using Covidence (Veritas Health Innovation, Melbourne, Australia) in two stages, in duplicate: 1) title and abstract screening (MF, KR); 2) full-text screening (MF, EO, KR). Reviewers discussed any disagreements until 100% consensus was reached. Eligibility criteria included: published in English language; qualitative in design (or mixed-methods, but where qualitative data were able to be extracted); participants of any age who had direct involvement in safety planning (including consumers, support persons, service providers, clinicians, etc.) in any setting (e.g. emergency department, inpatient, outpatient, community, online, school, etc.); and where it was clear that safety planning was based on the Stanley and Brown ( 7 ) version. Studies could include the SPI as a standalone intervention, or as part of a wider intervention approach. 
Studies were excluded if they: were not published in English; were not primary research; were not qualitative in design (either purely quantitative or where qualitative method and data could not be extracted); participants had no direct involvement in safety planning; or where the type of safety planning intervention was irrelevant or unclear (i.e., no reference to Stanley and Brown, and/or no definition or description of safety planning procedures). We custom-built an electronic survey (LimeSurvey, Hamburg, Germany) to extract key information from the included studies, including: aim; study location and setting; study design; participant characteristics (sample size, population description, age, sex); SPI details (delivery modality, format, other intervention components if relevant); methods of data collection and analysis. Reviewers (MF, EO, KR) extracted data independently, in duplicate. Where necessary, we discussed and consulted the original papers until consensus was reached. As part of the data extraction phase, and to facilitate the meta-aggregation process, we read and re-read included studies in duplicate (MF, EO, KR) to extract individual findings (i.e., authors’ analytic interpretative statements of qualitative data) and accompanying illustrations (i.e., verbatim participant quotation that exemplifies the finding). Any verbatim analytic statement was eligible to be extracted as a finding, provided an accompanying illustration was available. Where an accompanying illustration was not available, the finding was not included in this review. As per JBI guidelines ( 24 ), we (independently and in duplicate) assigned finding and illustration pairings a credibility rating: unequivocal (i.e., illustration supports the finding beyond reasonable doubt and therefore not open to challenge), credible (i.e., illustration lacks clear association with the finding and is therefore open to challenge) or not supported (i.e., illustration does not support the finding). Risk of bias assessment was conducted for each eligible study independently by three reviewers (MF, EO, KR) using the JBI Checklist for Qualitative Research ( 25 ). In this 10-item tool, each item is rated as: yes, no, unclear, or not applicable. We resolved discrepancies via discussion, re-checking the papers together, and discussion with a fourth author (NP) as required. As per recent guidelines for ensuring review results represent the best available evidence ( 26 ) eligible studies were included if they satisfied at least six criteria on the appraisal tool. Qualitative findings were pooled via meta-aggregation ( 24 ). Findings, illustrations, and credibility data were exported and printed for repeated reviewing in hard copy and for discussion in duplicate by two authors (EO, KR). Using butchers paper, we manually grouped the printed findings into categories based on our discussions. We first placed findings into categories based on similarity of meaning. Second, we combined similar categories into ‘synthesized findings’, referring to indicatory statements that convey the whole, inclusive meaning of a collection of categories, and which can be used to develop policy and practice recommendations. We then transferred these hard copy synthesized findings back to an Excel spreadsheet for discussion with the wider team. Following team discussion, we prioritized these synthesized findings into conceptual order for presentation in the manuscript. 
As per JBI guidelines, we used the ConQual approach ( 27 ) to establish confidence in each synthesized finding. ConQual argues that confidence in a meta-synthesized finding is determined by the dependability and credibility of the studies and individual findings that comprise it. Confidence ratings range from high, moderate, low, to very low. By default, qualitative studies are initially given a ‘high’ confidence rating, which can be downgraded based on dependability and credibility. Dependability is determined based on performance of each study on items 2-4 and 6-7 of the JBI Checklist for Qualitative Research, with the overall confidence level unchanging if the majority of individual findings are from studies with 4-5 ‘yes’ responses, downgraded one level for majority 2-3 ‘yes’ responses, and downgraded two levels for majority 0-1 ‘yes’ responses. For credibility, where a synthesized finding contains only unequivocal individual findings, no downgrading penalty is applied; however, confidence is downgraded one level if the synthesized finding comprises a mix of unequivocal and credible individual findings. The overarching qualitative methodology guiding this review was an interpretivist approach, which recognizes subjectivity and reflexivity ( 28 ). This approach makes the perspectives and positioning of the authors explicit, ensuring that the impact of researcher lenses on the synthesis and examination of results is transparent. While the components of the SPI should be universal, we acknowledge our positioning in the Australian context, which is associated with a unique set of cultural factors and policy frameworks that influence SPI practices and implementation. It is also important to acknowledge the authors’ backgrounds. Collectively, the research team brings expertise across lived experience, clinical practice, and research. EO is a postdoctoral researcher with expertise in behavioral science and mental health. KR is an experienced mental health nurse and doctoral level health psychologist working in research and education. NP is a professorial level mental health nurse expert and leader in suicide prevention research and education. ML is a Lived Experience academic. AP is a PhD researcher in health and medical sciences and Expert by Experience with the SPI. J-AR is a mental health nurse expert in clinical and senior management. SP is an experienced mental health nurse. MF is a senior suicide prevention researcher. Database searching yielded 1862 results, reduced to 588 after removal of duplicates. Results were screened at the title/abstract level, leaving 60 eligible for full-text screening. One additional article was identified via a correction that appeared in the search results. No further articles were identified through reference list pearling. Twelve eligible studies were critically appraised; two ( 15 , 29 ) were excluded by the minimum risk of bias threshold, leaving ten studies for inclusion. See Figure 1 for the full screening process, and Supplementary Data Sheet 2 for a list of all ineligible full-text results. Included studies were published between 2015 and 2023 and primarily conducted in the United States (n =7). Results for this review are based on data from n = 243 participants (note: this relates to the total number of participants from eligible phases of the included studies). The mean sample size was 24 (range, n=12-50). 
Across all studies, participants included n = 113 clinicians/staff (n=5 studies), n = 103 adults (including 95 veterans, n=4 studies; and 8 general population, n=1 study), n = 20 adolescents (n=2 studies), and n = 7 support persons (n=2 studies). Eight studies included both female and male participants, two did not report any gender data, and none reported data on other gender identities. Study settings included combined inpatient and outpatient (n=4), outpatient only (n=4), emergency department (n=1), and community services (n=1), with six studies relating to the context of veterans. Six studies were purely qualitative ( 10 , 30 – 34 ), one was mixed methods ( 35 ), and while a further three identified as qualitative they also included some minor quantitative aspects (e.g., quantitative measures to collect participant clinical information, 36, 37; or quantification of time spent creating safety plans, 38) but were not considered mixed methods. Most studies (n=8) collected qualitative data via semi-structured individual interviews but focus groups (n=1) and open-ended survey items (n=1) were also used. Studies analyzed qualitative data using thematic analysis (n=4), content analysis (n=2), interpretive phenomenological analysis (n=1), and matrix analysis (n=1). Two studies did not clearly report an analytic method. There was substantial variability across studies in SPI features, and its role in suicide prevention and mental health care. Studies discussed versions of the SPI including additional components such as text-message and/or telephone follow-up support ( 31 , 35 ), and the inclusion of support persons ( 36 ). Most studies (n=9) used or discussed the SPI as one component of care, alongside other psychological interventions (e.g., individualized, outpatient psychotherapy). The specific format of initial construction, ongoing access, or both, was often unclear. Only three studies described a specific SPI format, including a traditional hard-copy format ( 33 ), a mobile phone app-based version ( 30 ), and either hard copy or electronic versions ( 38 ). There was also a lack of detailed reporting regarding delivery modality, with four studies ( 30 , 31 , 33 , 38 ) clearly indicating in-person creation of the SPI, and one describing a group-based SPI delivered online via telehealth ( 37 ). Eight studies described who the SPI was co-created with – working with a clinician was the most frequent approach ( 10 , 30 , 31 , 33 , 34 , 38 ), with one study describing construction with a study counselor ( 35 ), and another describing a collaborative creation process with other SPI users in a group format ( 37 ). See Table 1 for full characteristics of included studies. Included studies performed well on critical appraisal items related to congruity between research methodology and study methods, as well as ethical research conduct and appropriateness of study conclusions. However, guiding philosophical perspectives were largely unreported, with only one study mentioning this ( 32 ), and studies did not consistently meet criteria for reflexivity, with only one study ( 32 ) locating the researchers culturally or theoretically, and two studies ( 10 , 32 ) discussing the influence of the researcher on the research and vice-versa. See Table 2 for study-level critical appraisal results. Ninety findings (82 unequivocal; 8 credible) related to stakeholders’ experiences of the SPI were extracted and aggregated into 14 unique categories according to similarity of meaning. 
Four synthesized findings (one moderate confidence and three low confidence) were developed via meta-aggregation. See Table 3 for a summary of the findings and categories used to create each synthesized finding, and Supplementary Table 1 for full ConQual results. Complete details of individual findings and illustrations are presented in Supplementary Table 2 . We provide a narrative description of each synthesized finding and associated categories below. This synthesized finding comprises 21 individual findings across two categories, revealing that engaging with the SPI is an acceptable intervention, associated with varied benefits to the consumer in the short- and longer-term. Five findings were located from two studies ( 31 , 38 ) describing stakeholders’ perspectives on the utility of the SPI. The SPI is deemed an acceptable and even essential intervention by clinicians working with suicidal veterans ( 31 , 38 ). Clinicians view the SPI as a useful addition to their repertoire, noting that its structured nature can help to facilitate conversations regarding consumers’ emotional states, early warning signs and risk factors ( 38 ). Despite initial skepticism about the SPI ( 31 ), clinicians describe it as a tool they rely on in everyday practice. For example, one emergency department clinician shared that the SPI assists in engaging individuals with emerging suicidality prior to the onset of suicidal behaviors: Further, clinicians who use the SPI with structured telephone follow up stated that it provides a concrete tool to facilitate reduced risk during the transition between inpatient and outpatient settings. For example: Sixteen findings from six studies ( 10 , 32 – 34 , 37 , 38 ) of adolescent and adult consumers, and clinicians, form this category describing perceived benefits related to consumers’ ongoing engagement with SPI practices. SPI conversations can broaden consumers’ motivations for keeping themselves safe. This can be achieved by harnessing and amplifying consumers’ awareness of existing reasons for living and generating hope for a more positive future ( 10 ), as well as through greater awareness of the emotional pain that would befall consumers’ loved ones in the event of their suicide ( 33 ). SPI processes - supported by reflective, collaborative discussions between consumer and clinician regarding consumers’ lived experiences - helped consumers to develop greater awareness of the character and quality of their emotional states, as well as individual triggers that precipitate the onset and worsening of distress ( 10 , 32 , 33 , 38 ). For example, one clinician described how collaborative conversations occurring during the SPI process could help young people to make connections between current distress and earlier triggers: Another clinician noted that developing greater recognition of their own triggers, warning signs, and effective strategies for emotional regulation allowed consumers to communicate their needs more clearly to supportive others: Creating a non-judgmental therapeutic environment that normalizes the experience of ambient and acute depressive states may foster consumers’ openness to engage in these difficult and deeply personal conversations ( 10 ). Clinicians described how, over time, consumers learned to independently select and engage ‘lower-level’ self-soothing strategies to avoid deeper states of crisis ( 38 ). 
This perspective was also voiced by consumers in multiple studies: Taken together, both clinicians and consumers noted that the SPI supported consumers’ autonomy to identify and effectively manage distress. The second synthesized finding, supported by 32 findings and aggregated into five unique categories, highlights the SPI is perceived to be most effective when it is conducted within a person-centered and collaborative relationship, appropriately involves supportive others, and is integrated in an authentic way within consumers’ ongoing care and personal agency. For both clinicians and consumers, digital technologies may support successful SPI experiences. Five findings from four studies ( 10 , 32 , 33 , 38 ) supported this category. Clinicians cautioned that the SPI should not be prescribed by the service provider nor seen as a risk mitigation strategy, but rather constructed collaboratively ( 10 , 32 , 38 ). As one clinician noted: Clinicians reported taking approximately 30 minutes to co-construct the initial plan in a collaborative way with meaningful involvement ( 38 ). For consumers, the content of the initial plan was arguably less important than the quality of the collaborative therapeutic interaction ( 33 ). This category featured three findings from two studies ( 10 , 34 ). Staff working with refugees and asylum seekers reported needing to be flexible and creative to ensure that the SPI is accessible and culturally appropriate ( 10 ). Clinicians argued that people using the SPI should feel empowered to explore alternative approaches to visualizing and documenting each step, according to the unique consumer needs and preferences ( 10 ). Action planning a range of specific steps to take during future crises can help consumers to feel a sense of control in these scenarios, rather than behaving impulsively: “Plan out what could possibly happen, and the outcomes and you have it written down then you won’t find yourself doing something spur of the moment.” . Four findings from four studies ( 10 , 30 , 33 , 38 ) highlight the benefits of ongoing SPI use. Clinicians reported regularly reviewing and updating safety plans, often after consumers had reported recent suicidal ideation or crisis ( 10 , 38 ). The SPI was seen to provide structure to this process of reflection and, within these discussions, opportunities to adapt the existing plan were explored: Consumers described a similar trajectory of adding to or refining their plans following each suicidal crisis ( 30 ). This approach was described by one consumer as a process of discovery and personal development: This process of addition and refinement may lead to incremental improvements in consumers’ commitment to SPI practices, as well as their capacity to enact safety planning strategies ( 33 ). Three studies of clinicians ( 10 , 31 , 38 ) provided four findings for this category. Clinicians expressed the need for sufficient time, resources, and support to engage in effective safety planning, with their capacity to create collaborative, person-centered safety plans hampered by insufficient time and competing priorities: Clinicians acknowledged difficulties establishing staff acceptance of the SPI, suggesting successful implementation of the SPI requires leadership support and clear organizational policies that support best practice ( 31 ). Additionally, for consumers with limited English language literacy it is essential for organizations to provide translators or employ clinicians who speak the consumer’s first language ( 10 ). 
Sixteen individual findings, extracted from three studies ( 30 , 35 , 37 ) described how digital technologies – specifically, text messages and telehealth – could be used to deliver and/or supplement the SPI. Consumers described the impact of automated, personalized text messages as an adjunct to in-person SPI practices (MYPLAN app, 30; 35). For some, the automated text messages were perceived as impersonal and perhaps insufficient depending on the consumers’ individual circumstances ( 35 ). However, others found benefit from these support text messages. For example, one consumer shared how this version of the SPI eased their transition out of inpatient care: Finally, consumers of a group-based SPI program delivered via telehealth (Project Life Force-telehealth) voiced that this SPI version bypassed several barriers of traditional in-person mental health care ( 37 ). These included practical barriers such as long wait-lists for accessing individual support, as well as social barriers to sharing their lived experiences: For this synthesis, 15 findings were aggregated into three categories, indicating that including support persons in the SPI process is acceptable and beneficial for the consumer. Some drawbacks might be anticipated relating to confidentiality and support persons experiencing secondary distress. Three findings from one study ( 36 ) form this category. Support persons of US military veterans described their concern for consumers’ welfare and a desire to support the consumer. Reflecting on their willingness to attend in-person appointments, one support person shared: Being involved in the safety plan also allowed support persons to better understand consumer behavior and support needs ( 36 ). Four studies ( 32 – 34 , 36 ) provided ten findings related to the benefits of involving supportive others, such as immediate family members ( 32 – 34 ) friends ( 36 ), or a trusted person from extended family, school or broader community ( 32 ). From a consumer perspective, involving trusted others was helpful for alleviating feelings of isolation: Clinicians agreed, noting how involving supportive others could provide evidence to contradict consumer feelings of burdensomeness: Consumers and support persons described how sharing the SPI with supportive others offered an important external source of feedback and support ( 33 , 34 , 36 ). Support persons could help recognize warning signs, external triggers, and consumer affect and behavior. As a result, support persons may reduce the help-seeking burden placed on consumers and can provide positive reinforcement when the consumer is doing well ( 32 , 36 ). Finally, support persons played a vital role in maintaining safer environments, including restricting access to lethal means in the home ( 34 ). Potential drawbacks of involving support persons were articulated in two findings from one study ( 36 ). Consumers noted that support persons may become overbearing and may share private details with other people without consent. Being involved in the SPI also introduced new emotional challenges for support persons, such as increased worry for the consumer, themselves, and other loved ones who may be affected by suicide-related behaviors: The final synthesized finding was supported by 22 findings, aggregated into four categories, describing a range of challenges associated with the SPI. Five findings from four studies ( 10 , 33 , 34 , 38 ) described stakeholder skepticism about the utility of the SPI. 
Clinicians were unsure of the SPI’s effectiveness, both in general and in times of crisis ( 38 ). Clinicians also described their experiences with consumers who decline to engage in safety planning at all, perhaps due to stigma attached to suicide-related phenomena ( 10 ). Some consumers expressed doubt that any intervention could deter a person with suicidal intent ( 34 ). Other consumers doubted the helpfulness of SPI strategies, especially whilst experiencing severe neurovegetative symptoms ( 33 ). Finally, one consumer shared the perspective that the SPI was unnecessary: Barriers to engaging with the SPI were discussed in eight findings across three studies ( 10 , 33 , 37 ). A lack of therapeutic rapport may impair consumer engagement with SPI processes, particularly in situations where consumers lack a regular mental health worker ( 10 ). Lack of privacy in consumers’ home environments may interfere with engagement in SPI-based online therapeutic sessions ( 37 ), and restrict the use of specific strategies (e.g., singing, 30). Ferguson et al. ( 10 ) reported several barriers of relevance to refugee and asylum seeker consumers, particularly related to English language literacy, mental health literacy and/or specific cultural needs. For example: Finally, consumer engagement may be impaired if consumers perceive negative ramifications from disclosing suicidality (e.g., refugee and asylum seeker concerns for visa applications and residency; 10). Seven findings from three studies ( 30 , 33 , 34 ) support this category. There was a common perception that, during episodes of severe distress, suicidal ideation dominated conscious awareness and consumers reported feeling unable to consider or initiate behavioral SPI strategies ( 30 , 33 , 34 ): Given the at-times overwhelming nature of consumers’ distress, some may feel belittled if clinicians suggest ‘simple’ self-care strategies without providing genuine validation of the consumer’s perspective or appropriate justification for strategy suggestions ( 30 ). Other limitations of the SPI were noted in two findings from two studies ( 33 , 37 ). The SPI may be challenging to implement for people with few protective factors (e.g., when consumers cannot identify any support persons or strategies for keeping themselves safe; 37). Finally, the act of formally documenting or reviewing warning signs can itself be a triggering experience for consumers: Featuring rich data from the perspectives of consumers, clinicians and support persons, this qualitative systematic review provides unique insights regarding the practices and processes perceived to impact on consumers’ experiences with the SPI. Through meta-aggregation, four synthesized findings were produced, with the results indicating that the SPI is a beneficial intervention, enhanced through person-centered collaboration and the involvement of supportive others. However, several perceived limitations impact on perceived acceptability and efficacy, which must be considered by organizations and clinicians involved in service delivery. These findings add an important lived experience lens to SPI literature, complementing previous quantitative studies and reviews of SPI efficacy. Consumers, clinicians, and support persons viewed the SPI as broadly acceptable and beneficial for reducing consumers’ suicide risk. These qualitative data concur with previous findings ( 39 ), wherein 95% of veterans endorsed the SPI as both acceptable and helpful. 
In addition, clinicians in the present review perceived SPI practices to be helpful in reducing suicide risk during consumers' transition from inpatient to home or community settings. This is an important finding, as risk of suicide may be most acute following discharge from psychiatric hospitalization, particularly for those with active suicidal ideation, perceived hopelessness, and history of suicidal behavior ( 40 ). Overall, the efficacy of the SPI in helping consumers to reduce suicidal ideation and behavior is supported by both quantitative systematic reviews ( 17 – 19 ) and by the experiences and perspectives synthesized in the present review. People involved in the SPI also perceived a range of specific benefits that may help to explain the effectiveness of SPI practices. First, person-centered safety planning was seen to facilitate greater consumer autonomy, giving individuals a greater sense of ownership over their own health care. Consumers and clinicians also described how SPI practices helped to increase consumers' sense of hope by internalizing and valuing their existing reasons for living. The amplification of reasons for living is an important protective mechanism, with reasons for living associated with reduced suicidal ideation and suicide attempts ( 41 ). In the present results, reasons for living often included loved ones such as children, partners, family, and friends. As such, greater identification of reasons for living appeared to intersect with an improved sense of connection with supportive others. This fundamental need for connection was maximized when support persons were involved in consumers' safety planning. Similarly, ongoing engagement with SPI practices supported individuals' self-efficacy in recognizing early warning signs and engaging self-regulatory coping strategies to interrupt the trajectory of escalating distress. This latter result aligns with recent evidence for growth in suicide-related coping as a key predictor of reduced suicidal ideation during an SPI intervention ( 16 ). In sum, the lived experience data synthesized in this review broadly align with some of the psychological mechanisms of effect for the SPI as theorized by Rogers et al. ( 20 ). Specifically, these findings add support to Rogers et al.'s ( 20 ) suggestions that the SPI promotes autonomy among users, both in initial plan creation and in their choices surrounding whether, when and how to use the plan to keep themselves safe; encourages connection with others (including healthcare services, and friends, family and community), which is a known protective factor against suicide; and builds competence through encouraging individuals to identify personalized support strategies and to practice using these to build confidence over time. Clinicians and consumers strongly recommended a collaborative, person-centered approach to constructing and using the SPI over time. This approach refers to clinicians and consumers working together, sharing decision making and having a balance of power, to develop plans that address the consumer's unique needs and circumstances ( 42 ). Unlike a crisis risk assessment process, which can imply a mechanistic and alienating experience of safety planning, collaborative and person-centered approaches allow a normalizing space for consumers to feel supported and to have a voice in exploring suicide-related feelings.
Recent quantitative evidence suggests that stronger therapeutic alliance established early in psychotherapy is a key predictor of reductions in suicidal ideation and behavior ( 43 ) and this review supports those findings from many consumers using safety plans. Collaborative and person-centered interactions were viewed as essential for helping people in distress to understand and process difficult emotional states, to find meaningful connection with others, and for using their strengths and supports to cope in the future. Most mental health professionals would recognize the importance of person-centered therapeutic engagement. However, our results highlight a range of organizational barriers impairing clinicians’ ability to use the SPI according to these core principles. Time constraints were the primary barrier impacting clinicians’ perceived ability to conduct person-centered safety planning. Thus, without sufficient organizational support, the SPI may be more likely to be delivered instrumentally with a focus on risk mitigation, rather than in a person-centered and collaborative way. Consumers reported experiences of ‘tunnel vision’ or an inability to consider SPI coping strategies, while enduring acute distress. This finding converges with the understanding that the ability to engage cognitive and/or behavioral self-regulatory coping strategies is diminished during heightened periods of crisis ( 44 ). This perceived limitation of SPI utilization further highlights the importance of appropriate and effective methods to work with consumers in deciding to restrict access to lethal means. At an individual level, clinicians and consumers can work collaboratively to make changes to living environments to restrict access to high lethality means should they experience acute and unbearable distress. This part of the planning process should focus on means identified by the consumer that feature in suicidal ideation. Appropriate involvement of support persons may be particularly beneficial in maintaining safe environments and reducing the help seeking burden placed on consumers. In the present results, the SPI was disregarded as unhelpful by some consumers and clinicians. Similar uncertainty regarding the SPI has recently been documented in a quantitative study, with clinicians doubtful of the effectiveness of safety planning in reducing risk of suicidal behavior ( 45 ). As noted by an included study ( 31 ), this hesitancy suggests a need for prior education and training about the efficacy, usability, and acceptability of the SPI. Consumers’ fear of disclosure was another barrier to SPI engagement identified in the present results ( 10 ). Self-stigma and fear of stigmatized responses to disclosure can deter consumers from seeking help for suicide-related concerns ( 46 ), and consumers also report fears of disempowerment from treatment orders under mental health Acts ( 47 ). Similar worries may also deter individuals from engaging with interventions such as the SPI. The four synthesized findings in this review suggest specific recommendations for practice, policy, and future research. For practice, it is recommended that the SPI is developed via a person-centered and compassionate collaboration, where clinicians are afforded sufficient time (minimum 30 minutes) to develop authentic therapeutic rapport for the person to express their suicidal experiences. 
Further, to address the transient nature of suicidal thoughts and maximize effectiveness of the safety plan, the SPI should be viewed as a living document that is shared with others (support persons, care providers) and revised regularly. Given that involving support persons appears to enhance the SPI, practitioners should genuinely explore this involvement during the initial safety plan co-construction and at review appointments. Supportive others should receive SPI education with assistance from the clinician and guidance from the consumer regarding how to best provide support. Regarding policy recommendations, services that use the SPI should include mandatory training for all staff using the SPI, to ensure consistent, evidence-based skill sets and to address the ambivalence of some clinicians identified in this review. Further, there should be clear guidelines and policies for use of the SPI within and across services to ensure continuity of care. For example, the SPI could be proposed as the recommended safety planning instrument in a local context, to be completed before discharge from emergency/inpatient settings and communicated with follow-up care providers as standard practice. Given the diverse contexts in which safety planning is used, there should be flexibility to adapt the SPI to meet diverse consumer needs (e.g., versions in various languages). Further research is required to address gaps in our understanding of the SPI and how best to support the people who use it. First, the specific processes which assist consumers to reduce suicidal ideation and behavior require further examination. Our findings indicate that SPI practices may enhance consumers’ connection, autonomy, and competence: three of the processes of SPI effect proposed by Rogers et al. ( 20 ). Further mixed-methods research is required to investigate causal pathways from specific SPI strategy-use to improved suicide and wellbeing-related outcomes via theorized processes of effect. Greater integration of diverse user experiences is required to inform future SPI adaptations that meet the needs of the specific consumer groups for whom they are designed. In the current review, over half of the included papers related to veterans, their support persons and/or people who work with them. There has been little to no focus on the experiences of safety planning from other priority groups known to experience high rates of suicidality, such as LGBTQIA+ communities ( 48 ). Finally, our results reveal a common perception whereby states of acute and severe distress temporarily impair peoples’ capacity to engage in safety planning behaviors. This perceived barrier should be explored in more depth using rigorous qualitative approaches. Research has begun to illuminate the temporal dynamics of suicidal states, often using digital technologies to monitor suicidal distress in real-time ( 49 ). Lived experience research will be crucial to develop a greater understanding of how consumers experience the fluctuating and dynamic nature of suicidal states, as well as the relationship between current distress severity and specific SPI strategy use. Such understandings may assist consumers, support persons, clinicians, and researchers to adapt SPI practices to mitigate the onset and worsening of distress, and to improve safety during peak distress. Our search strategy, study selection procedures and meta-aggregation approach were systematic and thorough. 
In the JBI approach, findings can only be extracted if accompanied by an illustrative participant quotation. Whilst methodologically rigorous, this may have excluded relevant qualitative data if reported in a different format. There is also substantial scope for improvement in the methodological quality of studies in this area. In the present review, the dependability of included studies was limited due to inconsistent reporting of reflexivity details and guiding methodological frameworks. Three of the four synthesized findings were also downgraded due to a mix of unequivocal and credible findings, resulting in “low” overall confidence ratings. To enhance confidence in future qualitative findings, studies should follow best-practice guidelines for reporting qualitative research. Further, some studies lacked SPI details, such as format and delivery modality. We did not attempt to contact the authors of these papers to seek confirmation of these details. Doing so may have improved the generalizability of findings. However, we do not believe these details to be crucial to the results, as the findings relate more to overall experiences with the SPI, rather than specific features (with the exception that we had one finding category related to digital modalities). Finally, although one included study indicated a mental health lived experience academic as part of the authorship team ( 10 ), none of the included studies explicitly indicate involvement or consultation with people with lived experience of suicidality and/or safety planning in designing or conducting the studies. More high-quality qualitative studies of consumer, support person and clinician perspectives, conceived and conducted collaboratively with people with lived experience of suicidality and safety planning, would advance our understanding of peoples’ experiences of using SPI practices. While there is scope for improving the methodological quality of future qualitative SPI research and a need to better understand the causal pathways between SPI use and suicide-related outcomes, the findings from this review indicate that SPI practices are regarded positively from the qualitative perspectives of consumers, support persons and clinicians. This complements what is known about SPI effectiveness from quantitative research, and indicates that the SPI is perceived as acceptable and beneficial, and can be an important strategy to support people experiencing suicide-related distress. Use of the SPI could be strengthened by ensuring that services have sufficient time and resources (including training) for staff to engage in safety planning, as well as pathways for support persons to be involved, and strategies to ensure the SPI is tailored to individual consumer needs. Continuing to prioritize diverse lived experience perspectives of this suicide prevention approach is critical to ensuring that the SPI meets the needs of those using it. | Other | biomedical | en | 0.999997 |
PMC11697295 | Hydrazine (N 2 H 4 ) has long been widely used in many fields as a useful diamine with strong basic, reducing, and nucleophilic properties. Hydrazine can be used as a reducing agent, 1,2 high-energy rocket propellant, 3 precursor of pharmaceuticals, 4 insecticide, 5 and a raw material for industrial products such as polymers and carbon dioxide sorbents. 6,7 However, despite its usefulness, hydrazine is highly toxic and is known to be hepatotoxic, neurotoxic, and mutagenic. 8–10 Its use poses a risk of exposure to the environment during various stages of the manufacturing process. In humans, endogenous aminoacylase may also induce hydrolysis of hydrazine-containing drugs such as isoniazid, releasing hydrazine and/or acetylated hydrazine as a toxic metabolite. 11 Given the hazardous nature of hydrazine, the sensitive and selective detection of hydrazine in environmental and biological samples is an important issue. A chemiluminescent probe, 12 fluorescent probes, 13–15 and other analytical methods 16–19 have been developed to detect hydrazine selectively. Among them, fluorescent probes with sufficient solubility, suitable lipophilicity, and negligible toxicity are convenient for assessing hydrazine exposure in living cells, but faster probes with better sensitivity and selectivity are necessary for real applications. Therefore, there has been ongoing development of fluorescent probes that operate on novel detection mechanisms. We have designed a novel β-ketoester-type fluorescent probe platform (OB-MU1) for detecting hydrazine. The putative mechanism for hydrazine detection is shown in Scheme 1 . First, the ketone moiety of OB-MU1 reacts with hydrazine to form a hydrazone (1). Subsequently, the amine moiety of the hydrazone reacts with the ester in an intramolecular nucleophilic attack to the ester, releasing 5-methyl-2,4-dihydro-3 H -pyrazol-3-one (2), whereas the fluorophore 4-methylumbelliferone (3) is released and exhibits a fluorescent response. We expected that the combination of the strong nucleophilicity 20 and the adjacent positioning of the two primary amine moieties within the same molecule could be used to distinguish hydrazine from other bisnucleophiles, including ethylenediamine and hydroxylamine. In addition, strong endogenous mononucleophiles such as ammonia and hydrogen sulfide do not undergo such ring closure reactions due to the unfavorable four-membered ring formation. Levulinic acid ester-type probes 21–25 have intrinsically fluorogenic reactivity to mononucleophiles by the formation of a five-membered ring as well as to hydrazine by the formation of a six-membered ring. The detection mechanism of OB-MU1 is similar to that of levulinic acid ester-type probes, but our strategy is characterized by five-membered ring formation upon hydrazine detection, giving our probes an advantage. 26 Another potential research gap with levulinic acid ester-type probes is that it also reacted with sulfite (SO 3 2− ) to form 2-methyl-5-oxotetrahydrofuran-2-sulfonate by the formation of a five-membered ring and the fluorescent product as seen in resorufin levulinate, 27 which could complicate hydrazine detection. On the contrary, such a sulfite reaction is unlikely in the case of β-ketoester-type fluorescent probes because unfavorable four-membered ring formation is required for the transformation. First, we attempted to synthesize OB-MU1 by the condensation reaction of 4-methylumbelliferone (3) and 3-oxobutanoic acid (4a), but the desired ester OB-MU1 was not obtained. 
We attribute this unsuccessful reaction to the reactivity of the active methylene, and condensation with 2,2-disubstituted 3-oxobutanoic acids 4b and 4c gave the desired β-ketoesters OB-MU2 and OB-MU3 ( Scheme 2 ). The reactions of OB-MU2 and OB-MU3 (50 μM) with hydrazine (1 mM, 20 eq.) were monitored by UV-vis spectroscopy in 50 mM HEPES buffer (pH 7.4). Upon addition of hydrazine, the absorption of the OB-MUs around 270 nm, which is considered to be the maximum absorption of β-ketoesters, decreased in a time-dependent manner. Concomitantly, the absorption peak at around 310 nm, which is considered to be the maximum absorption of esterified 3, shifted to 320 nm, the maximum absorption wavelength of free 3. OB-MU2 (apparent k 2 = 27 M −1 min −1 ) reacted with hydrazine more slowly than OB-MU3 (apparent k 2 = 147 M −1 min −1 ) . 28,29 The reaction of OB-MU2 was not complete even after 1 hour, whereas OB-MU3 reacted faster and nearly reached completion by 30 min. Considering that the cyclopropyl moiety is less bulky than the dimethyl moiety, these differences in reaction rates are inconsistent with the Thorpe–Ingold effect for the cyclization reaction; therefore, hydrazone formation with the OB-MUs and hydrazine is likely the rate-determining step. The OB-MUs were subsequently evaluated by fluorescence spectroscopy: the reaction of 10 μM OB-MU derivative with 200 μM (20 eq.) hydrazine was observed by measuring the fluorescence spectrum of 3 . Fluorescence spectroscopy also revealed a slower reaction of OB-MU2 than OB-MU3 with hydrazine, similar to the UV-vis results. These results indicate that OB-MU3 is a better fluorogenic probe for detecting hydrazine in aqueous environments. As the OB-MU3 probe reacted successfully with hydrazine, further studies were conducted to evaluate its selectivity and other properties. The fluorescence intensity was proportional to the concentration of hydrazine , indicating that OB-MU3 is a quantitative probe under these conditions. The detection limit of OB-MU3 was found to be 95.3 nM . The pH-dependence of the reaction of OB-MU3 with hydrazine was also examined from pH 3 to 10. We observed that under biological conditions (pH 6–8), an increase in pH led to a faster reaction and higher signal in the presence of hydrazine (open circles), whereas nonspecific reactions (closed circles) are still negligible . These data imply that the pH dependence can be simply derived from the p K a of the 3-phenol domain (p K a ∼ 7.8), whereas hydrazine and hydrazone intermediates both have sufficient nucleophilicity when the pH is above 5, according to the p K a s of their conjugated acids. Above pH 8, the fluorogenic reaction by non-specific alkaline hydrolysis competes with the intended reaction with hydrazine. Further examination of the selectivity of OB-MU3 toward different test substances showed that the fluorescence response to various amines, amino acids, reducing substances, and metal ions was weak . As anticipated, the better selectivity of OB-MU3 for hydrazine compared to other nucleophiles including sulfite (#12) is thought to be due to the formation of a five-membered ring with two adjacent nucleophilic amines. To confirm the fluorogenic reaction mechanism of OB-MU3 with hydrazine, we evaluated the product formed from the reaction when hydrazine was added to OB-MU3 on a 1 mmol scale. 
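(A brief computational aside on the rate constants quoted above, before the product study that follows: the sketch below shows one way an apparent second-order rate constant of this kind can be extracted by least-squares fitting of time-course absorbance data, in the spirit of the fitting procedure given later in the Experimental section. It assumes the standard integrated second-order rate law for U + V → P with [V]0 = 20[U]0; the extinction coefficients, the noise level and the "measured" absorbances are synthetic placeholders rather than the authors' data, and all function names are ours.)

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical sketch: estimate an apparent k2 from A(365-375 nm) vs. time.
U0 = 50e-6            # [U]0, probe concentration (M); matches the 50 uM UV-vis assay
V0 = 20 * U0          # [V]0, hydrazine (20 eq.)
d0 = V0 - U0          # Delta_0 = [V]0 - [U]0

def U_t(k2, t):
    # Assumed standard integrated rate law for a mixed second-order reaction U + V -> P
    return d0 / ((V0 / U0) * np.exp(k2 * d0 * t) - 1.0)

def A_est(k2, t, eps_U=1.5e3, eps_P=9.0e3):   # placeholder extinction coefficients (M^-1 cm^-1)
    u = U_t(k2, t)
    p = U0 - u                                # [P]t = [U]0 - [U]t
    return eps_U * u + eps_P * p

t = np.array([0, 1, 3, 5, 10, 15, 30, 45, 60.0])                 # minutes
rng = np.random.default_rng(0)
A_meas = A_est(147.0, t) + rng.normal(0.0, 2e-4, t.size)         # synthetic "measurement"

fit = minimize_scalar(lambda k2: np.sum((A_est(k2, t) - A_meas) ** 2),
                      bounds=(1.0, 1e4), method="bounded")
print(f"apparent k2 ~ {fit.x:.0f} M^-1 min^-1")
```

For the synthetic input the fit returns a value close to 147 M−1 min−1, i.e. the magnitude reported for OB-MU3; with real spectra the same scan over k2 reproduces the least-squares comparison of measured and estimated absorbance described in the Experimental section.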
The addition of 10 equivalents of hydrazine dihydrochloride to OB-MU3 in an aqueous acetonitrile solution yielded 3 in quantities corresponding to OB-MU3 consumption, as well as formation of what is presumably the cyclized product 5 which likely reacts with HCl to form 6 ( Scheme 3 ). These results suggest that the reaction of OB-MU3 with hydrazine proceeded as we expected in Scheme 1 . The plausible reaction mechanism of fluorogenic OB-MU3 with hydrazine is shown in Scheme 4 . First, the ketone moiety of OB-MU3 reacts with hydrazine to form a hydrazone (7). 7 exists as either the E -7 or the Z -7 isomer. Subsequently, the amine moiety of Z -7 reacts with the ester in an intramolecular nucleophilic attack to the ester, releasing 5, while the fluorophore 3 is released as the phenolate and exhibits a fluorescent response. We also state here that 6 with reactive alkyl halide may appear toxic similar to the HaloTag due to its possible modification of intracellular nucleophiles such as glutathione. However, generation of potentially toxic 6 under physiological conditions is unlikely because the chloride ion concentration is low (5 to 60 mM) 30 compared to the concentrations found in our experiment (2.0 M) which comes from the use of the stable dihydrochloride form of hydrazine. Finally, we evaluated the ability of OB-MU3 to detect hydrazine in live-cell imaging using fluorescence microscopy. After HeLa cells were treated with 20 μM OB-MU3 and washed with Hanks' balanced salt solution (HBSS)(+), they were exposed to HBSS(+) containing 600 μM hydrazine . Fluorescence imaging results showed that after treatment with hydrazine, an OB-MU3-derived signal was observed throughout the cell with a fluorescence increase of ca. 7-fold . Therefore, OB-MU3 is a hydrazine probe based on a novel cyclization reaction that can visualize exogenous hydrazine in live cells. Finally, we confirmed that up to 50 μM of OB-MU3 exhibited minimal acute cell toxicity and it is thought that OB-MU3 exerts a negligible influence on cells at low concentrations (20 μM). We plan to exchange 3 with a long-wavelength fluorophore like TokyoGreen for further biological applications and/or imaging within specific organelles using the same synthesis scheme as that used for OB-MU3. In summary, we have developed novel fluorescent probes bearing the β-ketoester structure, OB-MU2 and OB-MU3, for the detection of hydrazine. Based on UV-vis and fluorescence spectroscopic measurements, the cyclopropyl moiety of OB-MU3 accelerates the response to hydrazine compared to the dimethyl structure of OB-MU2. OB-MU3 also exhibited a fluorogenic response under aqueous conditions containing 1% organic solvent (acetonitrile) at physiological pH, with up to a 54-fold increase in fluorescence. In addition, OB-MU3 demonstrated a very strong hydrazine-selective response, showing little reaction to many nucleophiles and reducing agents, with hydroxylamine and hydrogen sulfide as notable examples. Moreover, OB-MU3 can visualize intracellular hydrazine. Coumarin-based OB-MU3, which has short excitation and emission wavelengths, has drawbacks for bioimaging applications such as limited imaging depth and interference from the autofluorescence from biological substances. 31 The moderate signal-to-noise ratio of OB-MU3 was caused by low intracellular retention of released 3. Therefore, the signal-to-noise ratio can be improved by replacing 3 with a hydrophilic fluorophore with better intracellular retention. 
In the future, β-ketoester structures will be combined with long-wavelength fluorescent dyes that are more suited than 7-hydroxycoumarin for bioimaging applications to develop probes that can similarly detect intracellular hydrazine with high sensitivity. Reagents: all reactions were carried out under an inert atmosphere in a round bottom flask containing a stir-bar with a rubber septum except as noted otherwise. Anhydrous dichloromethane (CH 2 Cl 2 ) was purchased from FUJIFILM Wako Pure Chemical Co. and used without further purification. All other reagents were purchased from Tokyo Chemical Industry Co., Nacalai Tesque Inc., or FUJIFILM Wako Pure Chemical Co. and used without further purification. SiliaFlash ® F60, 40–63 μm, #R10030B (Silicycle Inc., Quebec, Canada) or Chromatorex PSQ60B (Fuji Silysia Chemical Ltd., Kasugai, Japan) was used for silica gel flash chromatography. All reactions were monitored by thin-layer chromatography with E. Merck silica gel 60 F 254 pre-coated plates (0.25 mm) and were visualized by UV (254 nm). IR spectra were obtained on a PerkinElmer Spectrum One. 1 H NMR and 13 C NMR spectra were recorded on a JEOL ECZ400S spectrometer ( 1 H: 400 MHz, 13 C: 100 MHz) instrument. Chemical shifts are reported in ppm relative to the carbons of deuterated solvents (CDCl 3 : 77.0 ppm, DMSO- d 6 : 39.5 for 13 C) or the internal standard tetramethylsilane (CDCl 3 and DMSO- d 6 : 0.00 ppm for 1 H). The mass spectra were measured on a Thermo Fisher Scientific LTQ Orbitrap Discovery. Melting points were determined with a Yanaco micro melting point apparatus MP-J3. Yields refer to isolated yields of compounds greater than 95% purity as determined by 1 H NMR analysis. All new products were characterized by 1 H NMR, 13 C NMR, IR, and HRMS. UV-vis spectroscopy was recorded by Cary 8454 (Agilent). Fluorescence spectroscopy was recorded by Duetta (HORIBA) and SpectraMax iD5 multiplate reader (Molecular Devices). To a solution of 2,2-dimethyl-3-oxobutanoic acid 32 (4b, 260 mg, 2.00 mmol) in CH 2 Cl 2 (4.0 mL), 4-methylumbelliferone (3, 177 mg, 1.00 mmol), 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDCI·HCl, 560 mg, 2.92 mmol), and 4-dimethylaminopyridine (DMAP, 13.0 mg, 0.106 mmol) were added and stirred for 14 hours at room temperature. Reaction mixture was quenched by adding H 2 O and extracted with CHCl 3 . The aqueous layer was extracted with CHCl 3 twice. The combined organic layers were dried over anhydrous Na 2 SO 4 and filtered and solvents were removed under reduced pressure. The residue was purified by column chromatography (SiO 2 , CHCl 3 ) to obtain OB-MU2 (162 mg, 56%) as a colorless solid. Rf: (CHCl 3 /MeOH = 30 : 1): 0.63. Mp: 90–91 °C. 1 H NMR (400 MHz, CDCl 3 ) δ : 7.62 (d, J = 8.0 Hz, 1H), 7.11 (d, J = 2.5 Hz, 1H), 7.06 (dd, J = 8.0, 2.5 Hz, 1H), 6.29 (d, J = 1.5 Hz, 1H), 2.44 (d, J = 1.5 Hz, 3H), 2.31 (s, 3H), 1.55 (s, 6H). 13 C NMR (100 MHz, CDCl 3 ) δ : 205.1, 171.5, 160.2, 154.0, 152.8, 151.8, 125.5, 117.9, 117.6, 114.5, 110.0, 56.1, 25.5, 21.7, 18.6. IR (KBr) 3073, 2976, 1761, 1726, 1709, 1616 cm −1 . HRMS (ESI) m / z calcd for [C 16 H 16 O 5 + Na + ] 311.0890, found 311.0884. To a solution of 1-acetylcyclopropane-1-carboxylic acid 33 (4c, 609 mg, 4.75 mmol) in CH 2 Cl 2 (20 mL), 3 (351 mg, 1.99 mmol), EDCI·HCl (933 mg, 4.87 mmol), and DMAP (16.8 mg, 0.138 mmol) were added and stirred for 19 hours at room temperature. Reaction mixture was quenched by adding H 2 O and extracted with CHCl 3 . The aqueous layer was extracted with CHCl 3 twice. 
The combined organic layers were dried over anhydrous Na 2 SO 4 and filtered and solvents were removed under reduced pressure. The residue was purified by column chromatography (SiO 2 , CHCl 3 ) to obtain OB-MU3 (364 mg, 64%) as a colorless solid. Rf: (CHCl 3 /MeOH = 30 : 1): 0.59. Mp: 129–130 °C. 1 H NMR (400 MHz, CDCl 3 ) δ : 7.63 (d, J = 8.5 Hz, 1H), 7.12 (d, J = 2.5 Hz, 1H), 7.07 (dd, J = 8.5, 2.5 Hz, 1H), 6.30 (d, J = 1.5 Hz, 1H), 2.58 (s, 3H), 2.45 (d, J = 1.5 Hz, 3H), 1.78–1.70 (m, 4H). 13 C NMR (100 MHz, CDCl 3 ) δ : 201.8, 169.2, 160.3, 154.2, 152.5, 151.8, 125.5, 118.1, 117.8, 114.7, 110.3, 34.9, 29.9, 20.4, 18.7. IR (KBr) 3077, 1752, 1730, 1700, 1612 cm −1 . HRMS (ESI) m / z calcd for [C 16 H 14 O 5 + H + ] 287.0914, found 287.0911. To a solution of OB-MU3 (282 mg, 0.985 mmol) in acetonitrile (5 mL) and sodium phosphate buffer (100 mM, pH = 7.5, 5 mL), hydrazine dihydrochloride (1.04 g, 9.91 mmol) was added and stirred for 24 hours at room temperature. The reaction mixture was diluted with EtOAc and washed with aqueous NaHCO 3 . The aqueous layer was extracted with EtOAc twice. The combined organic layers were dried over Na 2 SO 4 and filtered and solvents were removed under reduced pressure. The residue was purified by column chromatography (SiO 2 , CHCl 3 /MeOH = 10 : 1 to 4 : 1) to give the unreacted OB-MU3 (78.4 mg, 28%), concomitant with fluorescent 3 (125.5 mg, 72%) as a colorless solid, 5 (19.1 mg, 15%) as a colorless solid, and 6 (22.1 mg, 14%) as a colorless solid. Rf: (CHCl 3 /MeOH = 10 : 1): 0.58. Mp: 136–138 °C (lit. 140–141 °C). 34 1 H NMR (400 MHz, DMSO- d 6 ) δ : 11.1 (s, 1H), 1.80 (s, 3H), 1.70 (q, J = 4.0 Hz, 2H), 1.35 (q, J = 4.0 Hz, 2H). 13 C NMR (100 MHz, DMSO- d 6 ) δ : 176.1, 159.3, 31.4, 17.0 (2C), 12.3. HRMS (ESI) m / z calcd for [C 6 H 8 ON 2 + Na + ] 147.0523, found 147.0523. Rf: (CHCl 3 /MeOH = 10 : 1): 0.23. Mp: 167–168 °C (lit. 170–171 °C). 34 1 H NMR (400 MHz, DMSO- d 6 ) δ : 10.44 (br s, 1H), 3.57 (t, J = 7.5 Hz, 2H), 2.63 (t, J = 7.5 Hz, 2H), 2.06 (s, 3H). 13 C NMR (100 MHz, DMSO- d 6 ) δ : 159.7, 137.6, 97.2, 44.5, 25.6, 9.9. HRMS (ESI) m / z calcd for [C 6 H 9 ON 2 35 Cl + H + ] 161.0476, found 161.0477, calcd for [C 6 H 9 ON 2 37 Cl + H + ] 163.0447, found 163.0447. To a quartz cuvette, 2.94 mL of 50 mM HEPES buffer (pH 7.4), 30 μL of OB-MU2 or OB-MU3 (5 mM solution in acetonitrile, final conc. 50 μM), and 30 μL hydrazine (100 mM solution in H 2 O, final conc. 1 mM) were added and incubated for each time (0, 1, 3, 5, 10, 15, 30, 45, and 60 min) at 25 °C. After incubation, UV-vis spectra were measured using a Cary 8454 spectrophotometer. All kinetics analyses were conducted with eqn (1) and (2) according to the literature, 28,29 where eqn (1) gives [U] t as a function of time from the integrated second-order rate law and eqn (2) is [P] t = [U] 0 − [U] t , with Δ 0 = [V] 0 − [U] 0 , [V] 0 = 20[U] 0 , k 2 : second-order rate constant [M −1 min −1 ], U: hydrazine probe (OB-MU2 or OB-MU3), V: hydrazine, P: product (4-methylumbelliferone). The estimated absorbance ( A 365–375 : average of absorbance at 365–375 nm) was calculated with [U] t and [P] t given by eqn (1) and (2) , respectively, and the absorption constants (average of extinction coefficients at 365–375 nm). The apparent k 2 was obtained by least-squares curve fitting between the measured and estimated absorbance with scanning of the k 2 value in eqn (1) . To a quartz cuvette, 2.94 mL of 50 mM HEPES buffer (pH 7.4), 30 μL of OB-MU2 or OB-MU3 (1 mM solution in acetonitrile, final conc. 10 μM), and 30 μL hydrazine (20 mM solution in H 2 O, final conc.
200 μM) were added and incubated for each time (0, 1, 3, 5, 10, 15, 30, 45, and 60 min) at 25 °C. After incubation, fluorescence spectra were measured using a Duetta fluorescence spectrometer ( λ ex : 323 nm, λ em : 370–570 nm, 25 °C). To a quartz cuvette, 2.94 mL of 50 mM HEPES buffer (pH 7.4), 30 μL of OB-MU3 (1 mM solution in acetonitrile, final conc. 10 μM), and 30 μL hydrazine (0, 1, 2, 3, 5, 7, 10, or 15 mM solution in H 2 O, final conc. 0, 10, 20, 30, 50, 70, 100, 150 μM) were added and incubated for 30 min at 25 °C. After incubation, fluorescence spectra were measured using a Duetta fluorescence spectrometer ( λ ex : 323 nm, λ em : 370–570 nm, 25 °C). Additionally, using the following equation DL = K × Sb 1 / S ; the detection limit (DL) of OB-MU3 (10 μM) by fluorescence ( λ ex : 323 nm, λ em : 447 nm, 25 °C) for hydrazine (serial dilution from 10 μM) in 50 mM HEPES buffer (pH 7.4, 1% acetonitrile) was calculated, where K = 3; Sb 1 is the standard deviation of the blank solution; and S is the slope of the calibration curve. Sb 1 = 0.01292, S = 0.4067, ∴DL = 95.3 nM. To each well of a flat bottom black 96-well plate , 196 μL of buffer (Mcllvaine buffer for pH 3.0, 4.0, 5.0, 6.0, 7.0, and 8.0; 100 mM sodium borate buffer for pH 9.0 and 10.0), 2 μL of OB-MU3 (1 mM solution in acetonitrile, final conc. 10 μM), and 2 μL of hydrazine (0 or 20 mM solution in H 2 O, final conc. 0 or 200 μM) were added, followed by incubation for 30 min at 25 °C. After incubation, fluorescence intensity was measured by SpectraMax iD5 multiplate reader ( λ ex : 323 nm, λ em : 447 nm, 25 °C). To each well of a flat bottom black 96-well plate , 196 μL of 50 mM HEPES buffer (pH 7.4), 2 μL of OB-MU3 (1 mM in acetonitrile, final conc. 10 μM), and 2 μL of each analyte (for 1: H 2 O, 2: ammonia, 3: NH 2 OH·HCl, 4: ethylenediamine, 5: Na 2 S, 6: aniline, 7: methylamine, 8: piperidine, 9: p -tolylhydrazine, 10: lysine, 11: glycine, 12: Na 2 SO 3 , 13: Na 2 S 2 O 3 , 14: CuBr, 15: CuBr 2 , 16: ZnSO 4 , 17: FeSO 4 , 18: FeCl 3 , 19: MnCl 2 , 20: NiCl 2 , 21: CoCl 2 , 22: N 2 H 4 ·2HCl, 20 mM in H 2 O, final conc. 200 μM) were added and incubated for 20 and 30 min at 25 °C. After incubation, fluorescence intensity was measured using a SpectraMax iD5 multiplate reader ( λ ex : 323 nm, λ em : 447 nm, 25 °C). HeLa cells were cultured in Dulbecco's modified Eagle's medium supplemented with 5% fetal bovine serum (FBS; Sigma Lot No. S.15N348), 50 μg per mL kanamycin sulfate (Meiji Seika Pharma Co.), 50 U per mL penicillin G potassium (Meiji Seika Pharma Co.), and 50 μg per mL streptomycin sulfate (Meiji Seika Pharma Co.) at 37 °C under a humidified atmosphere of 5% CO 2 in air. Cell passages from subconfluent cultures were performed once a week using a trypsin–ethylenediaminetetraacetic acid (EDTA) solution . For fluorescence bioimaging, cells (5.0 × 10 4 cells per mL) were cultured in 500 μL DMEM for 24 hours in each compartment with a 35 mm glass-bottomed dish . After washing twice with 500 μL HBSS(+), the cells were incubated with phenylmethylsulfonyl fluoride (PMSF, 2 mM) in 250 μL HBSS(+) and OB-MU3 (40 μM) in 250 μL HBSS(+) for 30 min. After washing once with 500 μL HBSS(+), the cells were treated with 0 or 600 μM hydrazine dihydrochloride in 500 μL HBSS(+) for 30 min, followed by observation of the cells using an AxioObserver 7 inverted microscope (Carl Zeiss AG) equipped with 20× (N.A. 
0.8) objective lens, Colibri7 LED illumination system, and Prime BSI sCMOS camera (Teledyne Photometrics) under differential interference contrast (DIC) and fluorescent mode (fluorescence channel: λ ex = 370–400 nm, λ em = 410–440 nm). We also confirmed that no fluorescence was observed without the OB-MU3 probe for this channel with or without hydrazine dihydrochloride (data not shown). All treatments were conducted in a CO 2 incubator (37 °C, 5% CO 2 , humidified atmosphere). PMSF was purchased from FUJIFILM Wako Pure Chemical Co. HBSS(+) was purchased from FUJIFILM Wako Pure Chemical Co. or Nacalai Tesque Inc. Cells (1.0 × 10 5 cells per mL) were cultured in 100 μL DMEM with 5% FBS for 24 hours in each well. After removal of the medium via aspiration and washing with PBS(−) (100 μL), the cells were incubated in HBSS(+) containing different concentrations of OB-MU3 ([OB-MU3] = 0, 20, and 50 μM) for 1 hour. After removal of the solution, the cells were incubated in 100 μL DMEM with 5% FBS containing 10% WST-8 cell counting solution. After 4 hours of treatment, absorbance was measured at 450 nm (Abs 450 ) to quantify the water-soluble formazan metabolite and at 650 nm (Abs 650 ) to measure background absorbance using a SpectraMax iD5 multiplate reader. Cell viability was calculated from the mean values of four wells using the background-corrected absorbance Abs = Abs 450 − Abs 650 . The data underlying this study are available in the published article and its ESI. † KO conceived the project. All authors designed the experiments. AT and KO performed the experiments. All authors analyzed the data and contributed to manuscript preparation and revision. There are no conflicts to declare. | Study | biomedical | en | 0.999999 |
PMC11697296 | Parkinson's disease (PD), a common neurodegenerative disease in the elderly, is mainly caused by the lack of dopamine related to midbrain neurons. This deficiency leads to the impairment of motor functions, significantly affecting the quality of life for patients. 1 Levodopa ( l -3,4-dihydroxyphenylalanine, l -Dopa), a precursor drug for dopamine, can enter the brain through the blood–brain barrier and be converted into dopamine by dopamine-decarboxylase to supplement dopamine in the brain, thus alleviating the symptoms. l -Dopa is a commonly used clinical drug for the treatment of PD. 2 However, long-term use of l -Dopa can increase the concentration of l -Dopa in the body, and excessive l -Dopa will lead to many side effects, such as bradykinesia, muscle stiffness, and tremors. 3 Therefore, it is critical to monitor the concentration of l -Dopa in patients taking this drug to improve the curative effect. To date, some traditional analytical methods for l -Dopa detection have been reported, such as high-performance liquid chromatography (HPLC), capillary electrophoresis (CE), spectrophotometric and electrochemical methods. 4,5 These methods achieve effective detection of l -Dopa, but they have various disadvantages, including high costs, complex operation procedures, long analysis times, and easy interference. In recent years, fluorescence sensing has developed rapidly in detection of various types of targets due to its superiority of enhanced selectivity, high sensitivity, quick response, low cost, and independence from expensive instruments. 6 Therefore, the construction of a fluorescence sensor for l -Dopa detection is a reasonable and ideal solution to overcome the limitations of traditional methods and meet clinical requirements. Carbon dots (CDs) are a novel type of fluorescent carbon nanoparticles and have attracted much attention because of low toxicity, superior biocompatibility, high chemical stability, easy surface functionalization and minimal photo bleaching. 7 CDs have been successfully applied in various applications, including fluorescence sensing, bioimaging, drug delivery and photocatalysis, of which fluorescence sensing is a key part. 8,9 CDs-based fluorescence sensors mainly rely on the enhancement or quenching of fluorescence after their reaction with analytes. 10 Surface functional groups play a major role in the reaction between carbon dots and detection objects. Hence, various CDs have been prepared in many studies by changing the carbon source or surface modification. 11,12 These CDs have been applied in constructing fluorescence sensors for diverse analytes, including metal ions, pesticides, antibiotics and so on. 13–15 Among them, there are few reports published on the development of CDs-based l -Dopa sensors. Therefore, there is an urgency to explore new carbon sources or preparation methods for producing CDs with high quantum yield and fluorescence properties for l -Dopa detection. Pandanus amaryllifolius Roxb. is a tropical green plant common in Hainan province of China that is rich in vitamins, proteins, chlorophyll, nucleic acids, and other nutrients. Due to its low price and availability, it is an excellent precursor for preparing carbon dots. In this study, Pandanus amaryllifolius Roxb., was applied as the green carbon source for preparing CDs. A nitrogen-rich chemical (ethylenediamine, EDA) was deliberately introduced to the hydrothermal reaction system to regulate the properties of CDs. 
CDs prepared with (NPCDs) and without (PCDs) EDA were comparatively analyzed. It was found that a higher quantum yield and a more sensitive fluorescence response to l -Dopa were achieved when using NPCDs. After optimizing the preparation and detection conditions, a fluorescence sensor for l -Dopa was developed using NPCDs with the limit of detection of 0.05 μM. This sensor was successfully used to detect l -Dopa in fetal bovine serum samples with excellent precisions (RSD ≤ 2.99%) and recoveries of 88.50–99.71%. In summary, this work enriches the types of CDs and provides an innovative idea for the regulation of the properties of CDs derived from biomass carbon sources. An effective method for monitoring l -Dopa was presented, demonstrating substantial potential for clinical applications. Glycine (Gly), l -alanine ( l -Ala), l -cysteine ( l -Cys), l -serine ( l -Ser), l -threonine ( l -Thr), l -leucine ( l -Leu), l -glutamic ( l -Glu), l -phenylalanine ( l -Phe), l -tyrosine ( l -Tyr) and K 2 SO 4 were purchased from Shanghai Yien Chemical Technology Co., Ltd (Shanghai, China). Ethylenediamine (EDA), H 2 SO 4 , NaCl, NaOH, MgSO 4 and HCl were obtained from Xilong Scientific Co., Ltd (Guangdong, China). Urea was provided by Aladdin Chemical Reagent Co. Ltd (Shanghai, China). Quinine sulfate, l -ascorbic acid (VC), β-cyclodextrin (β-CD) and dopamine (DA) were acquired from Macklin Biotech Co. Ltd (Shanghai, China). Levodopa ( l -Dopa) and sodium acetate anhydrous were obtained from Shanghai Yuanye Bio-Technology Co., Ltd (Shanghai, China). Fetal bovine serum (FBS) was purchased from Zhejiang Tianhang Biotechnology Co., Ltd (Zhejiang, China). The leaves of Pandanus amaryllifolius Roxb. were purchased online which produced from Wanning City (Hainan, China). All solutions were prepared using ultrapure water obtained from Nova EU10 water purification system (Qingdao, China). Transmission electron microscopy (TEM) images were observed on a JEM 2100F microscope (JEOL, Japan). Raman spectra were obtained by HR evolution (Horiba, France). X-ray photoelectron spectroscopy (XPS) was performed using a ESCALAB 250Xi spectrometer (Thermo Scientific, USA). Fourier Transform Infrared Spectroscopy (FTIR) spectra were recorded by a Thermo Field IS5 spectrometer (Thermo, USA). X-ray diffraction (XRD) spectra were obtained by a SmartLab-9 kW X-ray diffractometer (Rigaku, Japan). Fluorescence lifetimes were evaluated with the help of a FLS1000 fluorescence spectrometer (UK). UV-vis absorption spectra were recorded by a UV-2600 spectrophotometer (Shimadzu, Japan). Fluorescence intensities and spectra were recorded by a FL-4700 fluorescence spectrophotometer (Shimadzu, Japan). Zeta potential was measured by a ZSU310 nano-particle potential analyzer (Malvern, UK). Elemental analysis of carbon source was performed using a UNICUBE CHNSO element analyzer (Elementar, Germany). Quantum yield (QY) was measured and calculated according to previous work. 16 Pandanus amaryllifolius Roxb. were washed, dried in the oven and ground into powder. 0.5 g of the powder, 10 mL of ultrapure water and 50 μL of ethylenediamine were mixed evenly in a 25 mL Teflon lined high-pressure reactor, and then heated at 200 °C for 8 h. After naturally cooled to room temperature, a 0.22 μm membrane was used to filter the products and thus obtain the filtrate, known as NPCDs. The filtrate was further dialyzed against ultrapure water through a dialysis bag for 24 h with renewing the water every 4 h to remove the small molecules. 
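(A short note on the quantum-yield determination cited above: the referenced relative method (ref. 16) is not spelled out in the text, but single-point relative QY measurements of this kind are commonly made against quinine sulfate in dilute H 2 SO 4 (Φ ≈ 0.54), both of which appear in the reagent list; the relation below is the standard relative formula and is given here only as an illustrative assumption, not as the authors' exact protocol.

Φ_sample = Φ_ref × (I_sample / I_ref) × (A_ref / A_sample) × (n_sample² / n_ref²)

where I is the integrated fluorescence intensity, A the absorbance at the excitation wavelength (typically kept below about 0.05 to limit inner-filter effects), and n the refractive index of the solvent, for the sample and the reference respectively.)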
PCDs were prepared using the same method, but without the addition of EDA in the reaction system. The stock solution was prepared by dissolving l -Dopa in ultrapure water. Standard working solutions of l -Dopa with different concentrations (0.1–100 μM) were prepared by diluting the reserve solution with ultrapure water. 4 mL of working solution or samples was added to a 5 mL centrifuge tube and adjusted to pH 11 using NaOH solution. 200 μL of NPCDs solution was added. After standing for 30 minutes, the fluorescence intensity (FL intensity) of the mixture was measured at excitation and emission wavelengths of 365 nm and 440 nm, respectively. The fluorescence quenching efficiency ( F 0 / F ) was calculated, where F and F 0 represent the FL intensity of NPCDs/PCDs solution with and without l -Dopa, respectively. The limit of detection (LOD) was calculated by 3 σ / k , where σ is the standard deviation of the intercept and k is the slope of the regression equation. In order to investigate the selectivity, NPCDs solution was added to the solution containing a series of ions (10 μM, Mg 2+ , K + , SO 4 2− , Na + , CH 3 COO − , Cl − ), small molecules (10 μM, Gly, l -Ala, VC, l -Cys, l -Ser, l -Thr, l -Leu, l -Glu, l -Phe, l -Tyr, urea, β-CD) or DA (1 μM), and FL intensity was measured. The above chemicals were separately added to l -Dopa solution (10 μM) to investigate the anti-interference ability for l -Dopa detection of the sensor. The precision and accuracy experiments were conducted with three concentration levels (5 μM, 20 μM, and 60 μM). Three analysis batches over a three-day period were conducted. Precision was assessed by calculating the variation coefficient of l -Dopa samples at each concentration level. The deviation between the measured concentration and the actual concentration of l -Dopa samples was calculated for accuracy evaluation. Standard working solutions of l -Dopa in FBS at different concentrations (5–100 μM) were prepared by diluting the stock solution with FBS. Samples were measured following the same procedure described in section l -Dopa detection. FBS samples spiked with l -Dopa at low (10 μM), medium (20 μM) and high (60 μM) concentrations were prepared and analyzed with the sensor. The recovery was calculated according to eqn (1) , 1 Recovery (%) = ( C measured / C added ) × 100% where C measured represents the l -Dopa concentration calculated through the linear regression equation of the method, and C added is the actual l -Dopa concentration. In this work, NPCDs were prepared by a simple one-step hydrothermal method using Pandanus amaryllifolius Roxb. as the carbon source for the first time. In contrast to other reported CDs with only biomass as the precursor, EDA was introduced to the reaction system for nitrogen doping to regulate the properties of CDs. 17 The preparation conditions were optimized, including the addition amount of EDA, reaction temperature and time to increase the QY of NPCDs (QY NPCDs ). As shown in Fig. S1a, † QY NPCDs first increased and then decreased slightly with the increase of the amount of EDA. Similar to the preparation of CDs with chemical sources, N doping also resulted in improved quantum yields of CDs with biomass as carbon precursors. 18 With the optimal doping amount of EDA (50 μL), QY NPCDs was 8.19%, which was 2.66 times higher than that without EDA doping (3.08%). To optimize the reaction temperature, the preparation was carried out at 140 °C, 160 °C, 180 °C and 200 °C. As can be seen in Fig. 
S1b, † QY NPCDs gradually increased with increasing temperature. According to previous reports, the possible fluorescence mechanisms of CDs include the quantum effect, edge structure effect, surface defect states and crosslink-enhanced emission, etc. 19,20 It is possible that as the temperature increases, the degree of carbonization increases, resulting in smaller particle sizes of CDs or more surface light-emitting functional groups. This may contribute to the increase in QY. Considering convenience and safety of operation, performance at higher temperatures was not investigated, and subsequent experiments were conducted at 200 °C. According to Fig. S1c, † the formation of CDs should be basically complete after 8 hours of reaction. In summary, the optimal reaction conditions for the preparation of NPCDs are as follows: 50 μL EDA, 200 °C, and a reaction time of 8 h. Under the optimal conditions, QY NPCDs can reach 8.19%, which is higher than that of many reported CDs synthesized from biomass carbon sources. 18,21,22 Three batches of Pandanus amaryllifolius Roxb. leaves were purchased and used to prepare NPCDs following the same method. The reproducibility was assessed by comparing the quantum yield, UV-vis spectra, fluorescence spectra, and response to l -Dopa of the NPCDs across the batches. As shown in Table S1 and Fig. S2, † the QYs of the three batches of NPCDs exhibited minimal differences, with their spectra showing a high level of consistency and all displaying a fluorescence response to l -Dopa. These findings indicated a high degree of reproducibility in NPCDs. The synthesis yields of the CDs were calculated. The yield of NPCDs (1.27 ± 0.02%) was approximately 5 times greater than that of PCDs (0.27 ± 0.03%). This result indicated that nitrogen doping may also enhance the yield of carbon dots. NPCDs and PCDs were compared to assess the impact of nitrogen doping on CD properties. TEM images of NPCDs and PCDs revealed their predominantly spherical shapes with good dispersion. The diameters ranged from 1.15 nm to 3.55 nm for NPCDs and 1.45 nm to 3.55 nm for PCDs, with average sizes of 2.41 ± 0.03 nm and 2.23 ± 0.04 nm, respectively. According to these results, nitrogen doping appeared to increase the particle size of CDs and lead to a less uniform size distribution. HRTEM images showed lattice spacings of 0.21 nm for NPCDs and PCDs. XRD patterns of both NPCDs and PCDs displayed diffraction peaks at approximately 29°, indicative of the graphite lattice spacing (002). 18 Both HRTEM and XRD results confirmed the graphite-like structures of NPCDs and PCDs. Raman spectra in Fig. S3 † revealed D and G bands for NPCDs and PCDs, associated with the sp 2 graphitic carbon structure (ordered arrangement) and the sp 3 hybrid carbon structure (disordered arrangement), respectively. 23 The calculated intensity ratios ( I D / I G ) of NPCDs and PCDs were 1.51 and 1.32, suggesting a higher density of defects in NPCDs compared to PCDs. 23 FTIR and XPS were applied to explore the elemental composition and surface functional groups of NPCDs and PCDs. As observed in Fig.
1d , the FTIR spectra of NPCDs and PCDs exhibited similar absorption peaks at 3455 cm −1 , 2920 cm −1 , 1639 cm −1 and 1056 cm −1 , corresponding to N–H/O–H, C–H, C=O, and C–O, respectively. 24,25 A notable difference was the stronger C=O absorption in NPCDs compared to PCDs, indicating a higher concentration of oxygen-containing groups on NPCDs. XPS survey spectra of NPCDs and PCDs revealed their predominant elements of carbon (285.0 eV), oxygen (532.3 eV), and nitrogen (400.2 eV). The proportion of each element was calculated (Table S2 † ). The O/N content ratio in NPCDs was higher than in PCDs, while the C content ratio showed the opposite trend. Each element was further analyzed by high-resolution XPS. As illustrated in Fig. 1f , the high-resolution C 1s spectra of NPCDs/PCDs can be deconvoluted into three peaks at around 284.8 eV, 286.2 eV and 288.5 eV, which were attributed to C–C, C–O/C–N, and C=O, respectively. 24 Compared to PCDs, there was more C=O and less C–O/C–N on NPCDs. In the high-resolution O 1s spectra of NPCDs/PCDs, the fitted peaks at 531.5 eV and 532.5 eV were related to the C=O and C–O–H/C–O–C groups, respectively. 25 The content of C=O on the surface of NPCDs was obviously higher than that of PCDs. This was consistent with the FTIR analysis. The high-resolution N 1s spectra of NPCDs/PCDs showed two peaks at around 400.1 eV and 402.3 eV, confirming the existence of nitrogen in the forms of pyrrole N and N–H. 25 Overall, the FTIR and XPS analyses confirmed the presence of many hydrophilic groups such as hydroxy, carboxyl and amino groups on the surface of the prepared NPCDs and PCDs, ensuring their water solubility and modifiability. Nitrogen doping was identified as a factor influencing the elemental and functional group composition, potentially explaining the observed differences in QY. The UV-vis spectra of NPCDs and PCDs showed characteristic absorption peaks at 335 nm and 280 nm, attributed to n–π* transitions of C=O and π–π* transitions of C=C. 26 There is a large difference between the UV-vis spectra of the two CDs due to the different composition of functional groups on each type of CD. From the photos, both NPCDs and PCDs appeared pale yellow in aqueous solutions under daylight and exhibited blue fluorescence under UV light ( λ = 365 nm). PCDs and NPCDs demonstrated a wide range of wavelengths for excitation and emission. In addition, the fluorescence emission spectra of NPCDs and PCDs at different excitation wavelengths were examined. According to Fig. 2c and d , as the excitation wavelength increased from 300 nm to 420 nm, the fluorescence emission peaks of NPCDs and PCDs exhibited a redshift, with the fluorescence intensity initially increasing and then gradually decreasing. Both NPCDs and PCDs demonstrated excitation-dependent luminescence properties, likely due to variations in particle sizes and surface functional group compositions. 27 The maximum excitation and emission wavelengths were used for subsequent fluorescence intensity measurements. The impact of pH on the fluorescence of NPCDs and PCDs was investigated.
As shown in Fig. S4a, † the FL intensity of NPCDs and PCDs was found to be high within a pH range of 4–11, with weak fluorescence observed under extremely acidic or alkaline conditions. In strongly acidic environments, there may be more free carboxyl groups on the surface of the CDs, forming additional hydrogen bonds. 28 A certain degree of aggregation of NPCDs and PCDs may result in reduced fluorescence. 28 A strongly alkaline environment will result in carboxyl group deprotonation and may destroy the functional groups of NPCDs and PCDs, leading to decreased fluorescence. 29 NaCl was used to examine the effect of ionic strength on the FL intensity of NPCDs and PCDs. As can be seen in Fig. S4b, † the FL intensity of NPCDs and PCDs exhibited remarkable stability in the presence of 1.0 M NaCl, indicating excellent ionic strength stability. Moreover, prolonged exposure to 365 nm UV irradiation for 100 minutes did not significantly alter the FL intensity of NPCDs and PCDs, highlighting their exceptional photo-stability. These findings suggest that NPCDs are a highly stable and promising material for constructing fluorescence sensors. NPCDs and PCDs were tested simultaneously to compare their sensing performance. The key sensing conditions, including pH and reaction time, were optimized to maximize detection sensitivity. It can be seen in Fig. 3a that the acidity and alkalinity of the solution had a significant influence on quenching efficiency. Notably, the fluorescence of NPCDs/PCDs was hardly quenched by l-Dopa at pH 1–8 or 13–14, with the highest quenching efficiency observed at pH = 11, which was subsequently used in the following sensing experiments. Furthermore, as depicted in Fig. 3b, the fluorescence of NPCDs and PCDs exhibited rapid quenching, reaching equilibrium after 20 minutes and 5 minutes, respectively. To streamline operations, both sensing reactions were incubated for 20 minutes. NPCDs and PCDs were added to samples with different concentrations of l-Dopa, and their fluorescence spectra were recorded. The FL intensity of NPCDs/PCDs gradually decreased as the concentration of l-Dopa increased, while the maximum emission wavelengths remained constant, as depicted in Fig. 3c and d. A higher quenching efficiency was noted for NPCDs at equivalent l-Dopa concentrations, suggesting that NPCDs may exhibit greater sensitivity to l-Dopa. To assess their sensing capabilities, calibration curves were constructed by plotting the quenching efficiency ( F 0 / F ) against the l-Dopa concentration. As shown in Fig. 3e, linear relationships were observed in the l-Dopa concentration range of 0.1–100 μM for NPCDs and 10–100 μM for PCDs. The limits of detection (LOD) of the sensors based on NPCDs and PCDs were calculated to be 0.05 μM and 1.54 μM, respectively. The NPCDs-based sensor demonstrated a lower LOD and a broader calibration range. These findings suggest that N doping can enhance the sensor's sensitivity for l-Dopa detection, in addition to enhancing the QY. When compared to previously reported CDs-based sensors for l-Dopa detection ( Table 1 ), this approach exhibits superior sensitivity and a wider linear range. The NPCDs-based fluorescence sensor was systematically validated. A total of three standard curves were measured on separate days. All standard curves demonstrated a correlation coefficient ( R 2 ) exceeding 0.99, indicating exceptional linearity. The parameters of the weighted linear regression equations are given in Table S3. †
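The calibration and detection-limit figures quoted above can be illustrated with a simple least-squares fit of F 0 / F against concentration followed by the common LOD = 3σ/slope criterion. A minimal sketch follows; the data points and blank deviation are invented (chosen only to be consistent with the reported K SV and LOD), and an ordinary rather than weighted regression is used for simplicity, so this is not the authors' exact procedure.

```python
import numpy as np

# Hypothetical calibration data: l-Dopa concentration (uM) vs quenching F0/F.
conc = np.array([0.1, 1, 5, 10, 25, 50, 75, 100])
f0_over_f = np.array([1.002, 1.017, 1.086, 1.17, 1.43, 1.86, 2.29, 2.72])

slope, intercept = np.polyfit(conc, f0_over_f, 1)     # F0/F = intercept + slope*[l-Dopa]
r2 = np.corrcoef(conc, f0_over_f)[0, 1] ** 2

# LOD = 3*sigma/slope, with sigma the standard deviation of repeated blank
# measurements; the value below is invented for illustration.
sigma_blank = 3e-4
lod = 3 * sigma_blank / slope
print(f"slope = {slope:.4f} per uM, R2 = {r2:.4f}, LOD = {lod:.3f} uM")
```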
The RSD values for the slope and intercept were 1.92% and 1.03%, respectively, indicating excellent repeatability of the sensor. The accuracy and precision tests were conducted at three concentrations of l-Dopa (5 μM, 20 μM and 60 μM). For each concentration, three replicates were performed over three days. Table S4 † displays the results, showing that the intra-day and inter-day accuracy fell within the range of 92.11% to 104.27%. Intra-day and inter-day precision were both below 8.89%. These results indicate that the method is accurate, reliable, and reproducible. There are a variety of chemicals such as ions (Mg 2+ , K + , SO 4 2− , Na + , CH 3 COO − , Cl − ), amino acids (Gly, l-Ala, l-Cys, l-Ser, l-Thr, l-Leu, l-Glu, l-Phe, l-Tyr), and other organic compounds (urea, β-CD, DA, VC) in serum that can be present alongside the target compound, l-Dopa. 30–33 Consequently, it is essential to assess the possible interference caused by these substances. For selectivity validation, the fluorescence of NPCDs was measured in the presence of each of the various organic compounds or ions. The quenching efficiency ( F 0 / F ) was calculated based on the FL intensity without ( F 0 ) and with ( F ) each substance. According to Fig. 4 (blue bars), only l-Dopa significantly quenched the fluorescence of NPCDs, demonstrating the good selectivity of the sensor. When DA was introduced into the sensing system at a concentration of 1 μM, it could partially quench the fluorescence of NPCDs, which may occur through a similar quenching mechanism owing to its catechol structure, similar to that of l-Dopa. The DA concentration in the blood of healthy individuals is below 130 pM. 34 In contrast, the concentration of l-Dopa in the blood of patients following treatment may reach levels of 2.54–8.11 μM, 35 which is more than 100 times higher than that of DA. DA therefore has a negligible effect on l-Dopa sensing in serum. The interference of coexisting substances was further investigated by adding different substances to the l-Dopa standard solution. The samples were then analyzed via the same procedure. As seen in Fig. 4 (pink bars), other coexisting chemicals have little effect on the quenching efficiency of NPCDs by l-Dopa, suggesting a good anti-interference ability of the sensor. The composition of FBS and human serum is similar, as both contain a variety of plasma proteins, polypeptides, fats, carbohydrates, growth factors, hormones, and inorganic substances. This similarity makes FBS a suitable substitute for blood plasma in method validation, a practice commonly utilized in previous studies and also implemented in this study. 36–38 Standard samples were prepared by spiking l-Dopa into the FBS matrix. By following the same detection process, the relationship between quenching efficiency and the concentration of l-Dopa was explored. As illustrated in Fig. 3f, good linearity was observed within the concentration range of 5 to 100 μM, with a correlation coefficient of 0.9970, indicating that the influence of the matrix on the detection was negligible. A spike recovery test was carried out at three concentrations. The calculated recoveries (ranging from 88.50% to 99.71%) and RSDs (ranging from 2.08% to 2.99%) further verified the accuracy of the method in real sample detection (Table S5 † ). To explore the sensing mechanism, the Stern–Volmer equation, UV absorption spectra, fluorescence spectra, fluorescence lifetime decay curves and zeta potential were exploited. According to Fig.
3e, the fluorescence quenching of NPCDs by l-Dopa was well fitted by the Stern–Volmer equation ( F 0 / F = 1 + K SV [l-Dopa]), and K SV (the Stern–Volmer constant) was calculated to be 1.72 × 10 4 M −1 . The fluorescence quenching may proceed through static or dynamic quenching mechanisms. 39,40 UV absorption spectra of NPCDs with and without l-Dopa were examined at pH 11. In Fig. 5a, after adding l-Dopa, the absorption peaks of NPCDs did not shift and no new peaks appeared, suggesting that static quenching might not occur. 41 Fluorescence decay curves of NPCDs with and without l-Dopa (20 μM) were measured. After fitting with a double exponential function, the average lifetimes without and with l-Dopa were calculated to be 5.51 ns and 5.50 ns, respectively. The essentially unchanged fluorescence lifetime of NPCDs demonstrated that neither a dynamic quenching process nor photo-induced electron transfer (PET) was involved. 42 The absorption spectra of l-Dopa before and after adjusting the pH to 11 and the fluorescence spectra of NPCDs were compared for further investigation. When the pH was adjusted to 11, the absorption of l-Dopa increased obviously in the wavelength range of 280–600 nm. This stems from its oxidation to dopaquinone under alkaline conditions. 30 In this situation, there is a large overlap between the excitation/emission spectra of NPCDs and the absorption of l-Dopa. Thus, the quenching may occur through Förster resonance energy transfer (FRET) and inner filter effect (IFE) mechanisms. Since no change in the lifetime of NPCDs was observed, the FRET mechanism was ruled out. 42 In addition, the zeta potential of NPCDs at pH = 11 was measured to be −1.54 mV, and it changed to −36.92 mV when l-Dopa was added, further proving that l-Dopa was oxidized. Overall, the quenching effect of l-Dopa on the fluorescence of NPCDs may be mediated primarily by IFE after its oxidation into dopaquinone. In this research, a new biomass material ( Pandanus amaryllifolius Roxb.) was utilized as a sustainable carbon source for the preparation of carbon dots (CDs). Through comparative analysis, it was found that when CDs were prepared with EDA doping (NPCDs), a higher quantum yield and a more sensitive fluorescence response to l-Dopa were achieved. By optimizing the preparation and detection conditions, a fluorescence sensor for l-Dopa detection based on NPCDs was developed, achieving a limit of detection (LOD) of 0.05 μM. The sensor was successfully applied for detecting l-Dopa in fetal bovine serum samples with promising outcomes. Compared to existing CDs-based l-Dopa sensors, the sensor in this study offers advantages such as simple preparation, high selectivity, strong anti-interference capability, and enhanced sensitivity. The findings suggest that the NPCDs-based sensor has potential for monitoring l-Dopa in clinical settings. The data supporting this article have been included as part of the ESI. † Zongmei Huang: conceptualization, methodology, investigation, writing – original draft, funding acquisition. Jing Li: writing – review & editing. Lu-Shuang Li: conceptualization, supervision, data curation, funding acquisition. There are no conflicts to declare. | Review | biomedical | en | 0.999995
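Relating to the fluorescence-lifetime analysis in the article above, the average lifetime from a double-exponential fit can be obtained as sketched below. The synthetic decay trace, the fit starting values and the intensity-weighted averaging convention are all assumptions for illustration; the article reports only the fitted averages (5.51 and 5.50 ns) and does not state which weighting was used.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    # Double-exponential decay model (time in ns).
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def average_lifetime(a1, tau1, a2, tau2):
    # Intensity-weighted average lifetime; one common convention, assumed here.
    return (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)

# Synthetic decay trace standing in for measured data (all values invented).
t = np.linspace(0, 50, 500)
decay = biexp(t, 0.6, 2.0, 0.4, 8.0) + np.random.normal(0, 0.002, t.size)

popt, _ = curve_fit(biexp, t, decay, p0=(0.5, 1.0, 0.5, 5.0))
print(f"average lifetime = {average_lifetime(*popt):.2f} ns")
```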
PMC11697332 | Chirality, a distinct characteristic of objects that cannot be perfectly aligned with their mirror image, is present in various aspects of nature. For example, the standard form of the DNA double helix always twists in a right-handed manner, while snails exhibit left–right asymmetry both internally and externally. 1 , 2 A large number of naturally occurring molecules, such as proteins, enzymes, amino acids, carbohydrates, etc., are chiral and contain at least one stereogenic center in the structure, typically tetrahedral (sp 3 -hybridized) carbons with four different substituents, 3 and the two nonsuperimposable mirror-image forms of chiral molecules are called enantiomers. 4 A review from 2003 states that approximately 50% of the pharmaceuticals marketed and used in medical treatment are chiral compounds, and 88% among them are administered as racemates. 5 Different enantiomers of a chiral compound generally possess identical physical and chemical properties in an achiral environment, but they may exhibit significant variations in biological activities. For example, the ( S , S )-(+)-enantiomer of ethambutol is utilized for treating tuberculosis, while the ( R , R )-(−)-enantiomermay lead to blindness. 6 Nowadays, regulatory authorities require independent pharmacological tests for each enantiomer as well as their combined effects, and only the therapeutically active isomer can be used in a marketed drug product, 7 consequently, stereochemistry and chiral resolution are of paramount importance in the pharmaceutical industry. The 2001 Nobel Prize in Chemistry was awarded to three scientists for their work in the development of asymmetric synthesis using chiral catalysts in the production of single enantiomer drugs or chemicals. 8 In spite of the rapid development of asymmetric synthesis in recent years, there are still numerous chiral compounds synthesized as racemates, and then separated by a suitable physical separation approach. 9 In industry, two main categories of techniques are often applied for chiral resolution. Diastereomeric salt formation and enzymatic or kinetic resolution are two classical technologies, and the modern approach is the use of preparative high-performance liquid chromatography. 10 − 12 The main restrictions of the above methods is that sometimes they are impractical and uneconomical. Cocrystallization, the process of producing cocrystals, i.e., crystals with two or more molecular species in a specific stoichiometric ratio within a crystal lattice, has gained increasing attention recently as a feasible strategy to achieve chiral separation. 13 − 15 This process enables the formation of new crystalline materials involving two chiral molecules, leading to changes in its physical and physicochemical properties. 16 This approach involves two possible scenarios when both cocrystallizing components are chiral: (i) the chiral coformer only forms an enantiospecific crystal with one enantiomer of the target compound or (ii) the chiral coformer can form a diastereomeric cocrystal pair with each enantiomer of the target compound. Structural modifications in the supramolecular assembly in enantiospecific cocrystals or diastereomeric cocrystal pairs lead to changes in the crystal lattice energy and related physical and physicochemical properties, enabling separation . Therefore, both possible outcomes can be used to develop a chiral resolution process. 
The application of achieving chiral resolution through enantiospecific cocrystal formation in solution was first introduced by Leyssens’s group in 2012, 14 and developed to include a dual-drug chiral resolution process 17 and the use of ionic cocrystals. 16 , 18 They initially demonstrated that only the S -enantiomer of 2-(2-oxopyrrolidin-1-yl) butanamide, which exhibits nootropic activity and is marketed under the name levetiracetam, can cocrystallize with S -mandelic acid, while the R -enantiomer cannot form a cocrystal with S -mandelic acid, leading to 70% of the S -enantiomer separated from the racemic mixture in a single cocrystallization step. Diastereomeric cocrystal systems have been less extensively studied in comparison to enantiospecific systems. Höpfl and colleagues reported a diastereomeric cocrystal pair of R / S -praziquantel with l -malic acid, and the chiral separation was enabled by phase-decomposition of the R -praziquantel- l -malic acid cocrystal due to the different aqueous solubilities of the diastereomeric cocrystals. 19 l -Proline was proven to form diastereomeric cocrystals with both R - and S -enantiomers of mandelic acid in different stoichiometric ratios, hence, the chiral separation can be attained by simply altering the stoichiometry of the two constituents. 20 Mandelic acid is a widely used compound for forming enantiospecific or diastereomeric cocrystals. The literature and Cambridge Structural Database (CSD) search indicate that approximately 40 cocrystals/salts incorporating mandelic acid with another chiral compound have been documented ( Table S11 ). Somewhat surprisingly, no cocrystals involving mandelamide, the amide derivative of mandelic acid, have been reported or deposited in the CSD, 21 even though it is an important drug precursor. 22 In this work, the crystal structure of racemic [(±) - MDM], enantiopure mandelamide ( S -MDM) and enantioenriched MDM (94 S : 6 R ) were identified, and the potential of S -MDM as a chiral resolution agent via cocrystallization was considered. Two diastereomeric cocrystal pairs of S -MDM with both R - and S -enantiomers of mandelic acid (MDA) and proline (Pro) were obtained by both liquid-assisted grinding and slow evaporation, and fully characterized by thermal analysis, X-ray techniques, and FT-IR spectroscopy. To further investigate the diastereomeric behavior of S -MDM with the chiral coformers, detailed analyses of crystal structures, motifs and Hirshfeld surfaces were performed. S -MDA, R -MDA, and l -Pro was obtained from Fluorochem and d -Pro from TCI chemicals. (±) - MDM was synthesized from (±) - MDA using a literature procedure 23 and was recrystallized from hot ethanol to yield white plates. S -MDM was synthesized from S -MDA using a similar procedure to that used for (±)-MDM, see the SI . (±) - MDA and commercial S -MDM were obtained from Sigma-Aldrich. Solvents were purchased from commercial sources and all materials were used as received. LAG experiments were performed by placing a physical mixture of S -MDM with each coformer in a 5 mL stainless steel grinding jar along with a 2.5 mm stainless steel grinding ball. After the addition of 30 μL of ethyl acetate the mixture was ground using a Retsch MM400 Mixer mill at a rate of 30 Hz for 30 min. The products obtained were analyzed by powder X-ray diffraction (PXRD). A 1:1 molar ratio of S -MDM: coformer was used in all cases. After single crystal analysis, a 1:2 molar ratio of S -MDM with l -Pro was used. 
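As an aside to the grinding experiments described above, the coformer amount for a chosen molar ratio follows directly from the molar masses. The sketch below illustrates that arithmetic with approximate molar masses (mandelamide ~151.2, mandelic acid ~152.1, proline ~115.1 g/mol) and an arbitrary S-MDM amount; it is only an illustration, not the authors' documented weighing protocol.

```python
# Approximate molar masses in g/mol (illustrative values).
MOLAR_MASS = {"S-MDM": 151.16, "MDA": 152.15, "Pro": 115.13}

def coformer_mass(mdm_mass_mg, coformer, mdm_to_coformer=(1, 1)):
    """Coformer mass (mg) needed for a given S-MDM mass and molar ratio."""
    n_mdm = mdm_mass_mg / MOLAR_MASS["S-MDM"]                 # mmol of S-MDM
    n_cof = n_mdm * mdm_to_coformer[1] / mdm_to_coformer[0]   # mmol of coformer
    return n_cof * MOLAR_MASS[coformer]

print(f"{coformer_mass(10.0, 'MDA', (1, 1)):.1f} mg MDA for a 1:1 mixture")
print(f"{coformer_mass(10.0, 'Pro', (1, 2)):.1f} mg Pro for a 1:2 mixture")
```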
20.5 mg of synthesized (±)-MDM was dissolved in 10 mL of THF by heating. Colorless plate-like crystals were obtained by slowly evaporating the filtered solution at room temperature for 3–5 d. 20.2 mg of synthesized S -MDM was dissolved in 5 mL of MeOH by heating. Colorless plate-like crystals were obtained by slowly evaporating the filtered solution at room temperature for 3–5 d. The bulk commercial sample is identical by PXRD. 20.4 mg of the commercial S -MDM was dissolved in 10 mL of a solvent mixture of THF and toluene (1:1, v/v) by heating. Colorless plate-like crystals of MDM were obtained by slowly evaporating the filtered solution at room temperature for 3–5 d, and one crystal was identified by single crystal diffraction as containing 94% S -MDM and 6% R -MDM. Bulk quantities of MDM (94 S : 6 R ) were obtained by dissolving 100 mg of the commercial S -MDM in EtOH at room temperature, and removing the solvent quickly using a rotary evaporator (Büchi, Germany) under a vacuum achieved by a diaphragm pump (Vacuubrand, Germany), with the rotary flask rotating at a speed of 40 rpm while being immersed in a water bath at 50 °C. 24 The resulting white powdered product was isolated and allowed to dry in the fume hood overnight. The products from the LAG experiments were dissolved in 10 mL of solvent and the filtrate was allowed to crystallize by slow evaporation. 22.7 mg of powdered S -MDM- S -MDA was used in MeOH. Colorless plate-like crystals were harvested after 3–5 d. 20.8 mg of powdered S -MDM- R -MDA was used in a solvent mixture of MeOH and Et 2 O (1:1, v/v). Colorless needle-like crystals were obtained after 5–7 d. 31.4 mg of S -MDM- l -Pro was used with a solvent mixture of EtOH and CH 2 Cl 2 (1:1, v/v). Colorless needle-like crystals were obtained after 3–5 d. 19.8 mg of S -MDM- d -Pro was used in a mixed solvent of MeOH and THF (1:1, v/v). Colorless needle-like crystals were obtained after 3–5 d. Powder X-ray Diffraction (PXRD): The PXRD patterns were collected on a STOE STADI MP diffractometer with Cu Kα radiation (1.540 Å) using a linear position-sensitive detector. The tube voltage and amperage were set at 40 kV and 40 mA, respectively. Each sample was scanned between 3.5 and 45.5° 2θ with an increment of 0.05° at a rate of 2° min –1 . The samples were prepared as transmission foils and the data were viewed via STOE WinXPOW POWDAT software. 25 Differential Scanning Calorimetry (DSC): DSC was conducted on a TA Instruments Q1000. Samples (1–5 mg) were placed in nonhermetic aluminum pans and scanned in the range of 25 to 200 °C at a heating rate of 10 °C min –1 under a continuously purged dry nitrogen atmosphere (flow rate 80 mL min –1 ). The data were viewed and analyzed by TA Universal Analysis software. FT-IR Spectroscopy (IR): FT-IR spectra were recorded on a PerkinElmer UATR Two spectrophotometer using a diamond attenuated total reflectance accessory over the range of 400–4000 cm –1 . Four scans at 4 cm –1 resolution were taken for each sample. Single crystal X-ray diffraction (SCXRD): An optical microscope was used to choose a suitable crystal for diffraction. SCXRD data were collected using a Bruker APEX II DUO with monochromated Cu Kα radiation. The structure was solved and refined by the SHELX suite of programs in the Bruker APEX software.
26 , 27 All non-hydrogen atoms were refined by using anisotropic displacement parameters while hydrogen atoms were fixed in geometrically calculated positions using the riding model, with C–H = 0.93–0.98 Å, O–H = 0.82 Å and N–H = 0.86–0.89 Å, and Uiso (H) (in the range 1.2–1.5 times Ueq of the parent atom). For MDM (94 S : 6 R ), there is disorder in two of the four crystallographically independent MDM molecules due to the R -MDM impurity, which was modeled in two conformations in 88:12 ratio. For S -MDM- l -Pro and S -MDM- d -Pro cocrystals, there was disorder in the proline carbon that is beta to both the nitrogen and the carbon bonded to the carboxylic acid, which was modeled in two conformations in 50:50 and 85:15 ratios, respectively. PLATON was used for the analysis of potential hydrogen bonds and short ring interactions. 28 , 29 Mercury 2022.2.0 and DIAMOND 4.6 were used for viewing structures and creating diagrams. 30 Crystallographic parameters are listed in Table 1 . Hirshfeld surface analyses and two-dimensional (2D) fingerprint plots were carried out using the CrystalExplorer 21.5 program. 31 Searches of the CSD were conducted using ConQuest version 2022.2.0. 32 The following restrictions were applied: 3D coordinates; single crystal structures only; and organics only. NMR spectra were recorded on either a Bruker Avance 300 MHz NMR spectrometer 1 H (300 MHz) or on a Bruker Avance 400 MHz NMR spectrometer 1 H (400 MHz) and 13 C (100.6 MHz). All spectra were recorded at room temperature (20 °C) in deuterated methanol ( d 4 -CD 3 OD), using tetramethylsilane (TMS) as an internal standard. Chemical shifts are reported in parts per million (ppm) relative to TMS, and coupling constants are expressed in Hertz (Hz). The enantiopurity of the commercial S- MDM from Sigma-Aldrich, synthesized S- MDM and the single crystal of MDM (94 S : 6 R ) were determined by chiral high-performance liquid chromatography (HPLC) analysis on a Lux Amylose-1 column, purchased from Phenomenex. The HPLC parameters employed included a mobile phase of hexane/isopropanol = 90:10, a flow rate of 1 mL min –1 , a temperature of 25 °C and a detection wavelength of 210 nm. HPLC analysis was performed on a Waters Arc with a Waters 2998 PDA Wavelength UV Detector. All solvents employed were of HPLC grade. Based on the molecular structures of mandelamide and both coformers, it was anticipated that the well-known amide–amide homosynthons and amide-acid heterosynthons would be observed in their crystal structures . A search of the CSD was undertaken to identify common supramolecular synthons for compounds containing a hydroxyl group in the α position to a primary or secondary amide functional group . The R 2 2 (8) homosynthon between two amides is commonly observed in 82 structures, 58 of which are single component crystals. Only one structure containing the amide-acid R 2 2 (8) heterosynthon has been reported (Refcode NUGFAX 33 ). Single crystals of racemic and enantiopure S -MDM were grown from THF and MeOH, respectively, and the structures determined as shown in Figure 4 and 5 , respectively. The ellipsoid plots are shown in Figure S14 . Hydrogen bonds and π–π interaction geometries are displayed in Tables S2 and S3 , separately. (±)-MDM crystallizes in the monoclinic P 2 1 / c space group with Z ′ = 1. As shown in Figure 4 a, two (±)-MDM molecules formed a R 2 2 (11) motif through the N–H···O and C–H···O hydrogen bonding. 
The hydrogen-bonded network is further extended by O–H···O hydrogen bonds between two (±)-MDM molecules. Along the b axis, an R 4 2 (8) motif is created among four (±)-MDM molecules via N–H···O hydrogen bonding. S -MDM crystallizes in the P2 1 2 1 2 1 space group with Z ′ = 1. As shown in Figure 5, pairs of S -MDM molecules form an R 2 2 (9) dimer between the hydroxyl group and the amide group in a tail-to-tail manner through N–H···O hydrogen bonding. The 3D hydrogen-bonded network is further stabilized by O–H···O hydrogen bonds between two S -MDM molecules. Interestingly, during this study a third crystalline form of MDM was isolated from a solvent mixture of THF and toluene. Analysis of the SCXRD data showed that this contained enantioenriched MDM (94 S : 6 R ), which results in a very different structure relative to either the enantiopure or racemic forms. The chiral HPLC results on another crystal from the same batch are consistent with the structural analysis. As shown in Figures S21 and S22, the crystal arrangement along the b axis in both the major and minor components of MDM (94 S : 6 R ) exhibits similarity to the crystal packing observed in (±)-MDM, rather than the expected resemblance to S -MDM, despite the fact that S -MDM constitutes 94% of MDM (94 S : 6 R ). The single crystals of S -MDM were obtained from the synthesized S -MDM, which contains 100% S -MDM, while the formation of MDM (94 S : 6 R ) could be attributed to the commercial starting material being <100% S . According to the chiral HPLC analysis, the commercial S -MDM contained 96% S -MDM and 4% R -MDM. PXRD analysis of the bulk material for (±)-MDM, S -MDM, and MDM (94 S : 6 R ) matches the theoretical PXRD patterns based on the single crystal analysis, Figure S11. The formation of MDM (94 S : 6 R ) may be rationalized either on the basis of solvent effects, since it was observed by crystallization from a THF/toluene mixture, or fast crystallization using the rotary evaporator, which is a method that can produce new crystalline forms. 24 To investigate whether MDM forms a solid solution, a 50:50 mixture of (±)-MDM and R -MDM was crystallized from methanol and analyzed by PXRD. The peak at approximately 2θ = 19–20° matches all forms of MDM. There is low-intensity broadening at lower 2θ (18–19°), which is the region where a peak is only observed in MDM (94 S : 6 R ). The structural analysis results revealed that the expected R 2 2 (8) motif between two MDM molecules is not present in any of the crystal structures of MDM. Instead, motifs 1–4 are present in these three crystal structures. Motifs 1 and 3 are not found in reported structures, while motif 2 was observed in four reported structures (Refcodes: VAFVIL, 34 DEZKUR, 35 NOLCOG, 36 YENDEC 37 ) based on the CSD search. In addition, motif 4, consisting of four MDM molecules in (±)-MDM and MDM (94 S : 6 R ), can also be found in two reported structures (Refcodes: DEZLEC 35 and YENDEC 37 ). The two main hydrogen-bonding functional groups in MDM are the amide and hydroxyl groups. As shown in Table S1 and Figure S3, the characteristic IR bands of the N–H and O–H stretches in (±)-MDM and MDM (94 S : 6 R ) both increase compared with those in S -MDM. In contrast, the C=O stretching vibrations in these two solids decrease compared to S -MDM. As shown in Figure S8, the melting point of (±)-MDM is 133–135 °C, which is in line with the reported data.
38 DSC analysis of the MDM (94 S : 6 R ) reveals its melting point is slight lower than that of S -MDM. In the book “Introduction to Stereochemistry”, Mislow examined the most common diastereomeric phase relationships that occur between two stereoisomers of similar substances. 39 One out of the four scenarios could explain the thermal behavior of MDM (94 S : 6 R ). In this case, introducing a small amount of impurity (i.e., R -MDM) can result in a decreased melting point compared to the pure component ( S -MDM). S -MDM- S -MDA and S -MDM- R -MDA cocrystals crystallized in the same space group (P 2 1 2 1 2 1 ) of the orthorhombic system and have similar unit cell parameters ( Table 1 ). Hydrogen bonds and π–π interaction geometries are displayed in Tables S5 and S6 , separately. S -MDM- S -MDA crystallizes with one S -MDM molecule and one S -MDA molecule in the asymmetric unit, Figure 7 a. These two molecules are connected via C11–H11···O23 and O23–H23···O3 discrete hydrogen bonds, forming a R 2 2 (8) motif. Two asymmetric units link through N1–H1A···O23 and C14–H14···O3 discrete hydrogen bonds, generating a four-molecule motif . In the other four-molecule motif , one S -MDM molecule and one S -MDA molecule interact through N1–H1B···O25 and C4–H4···O25 discrete hydrogen bonds, forming a similar four-molecule motif via O5–H5···O25 and O5–H5···O21 discrete hydrogen bonds. These two motifs are further assembled by an O25–H25···O5 hydrogen bond. The asymmetric unit of S -MDM- R -MDA contains one S -MDM molecule and one R -MDA molecule, which are connected via N1–H1A···O21 and O25–H25···O3 discrete hydrogen bonds, forming an R 2 2 (9) motif. Along the c axis, the asymmetric unit links two adjacent units to extend the 3D structure of the cocrystal through O–H···O hydrogen bonds (forming an R 1 2 (5) motif), and N–H···O hydrogen bond, respectively . Additional hydrogen bonding between S -MDM and R -MDA molecules is observed in a tail-to-tail manner along the b axis, where an R 1 2 (5) motif is created via O–H···O hydrogen bonds . The DSC data for the S -MDM- S -MDA and S -MDM- R -MDA cocrystals show single endothermic peaks at 85 and 81 °C, respectively, with the melting point of the cocrystals lying lower than those of the corresponding starting materials . As shown in Table S1 and Figures S4 and S5 , the −NH 2 , −OH, and C=O bands of S -MDM exhibit shifts in both cocrystals. All the observed differences indicated that those three moieties are involved in the formation of hydrogen bonds in the different cocrystals. As shown in Figure S12 the PXRD patterns for both S -MDM- S -MDA and S -MDM- R -MDA cocrystals match with the simulated patterns extracted from the SCXRD analysis, indicating these cocrystals can be reproduced in bulk quantities by the LAG method. The products were the same irrespective of the source of S -MDM (synthesized or commercial) used in the experiments. A stoichiometrically diverse diastereomeric cocrystal system between S -MDM and l / d -Pro was obtained. Hydrogen bonds and π–π interaction geometries are displayed in Tables S7 and S8 , respectively. S -MDM- l -Pro cocrystallized in the orthorhombic P 2 1 2 1 2 1 space group with one S -MDM and two l -Pro molecules in the asymmetric unit. As shown in Figure 9 a, S -MDM links l -Pro 1 through O5–H5···O27 hydrogen bond and connects l -Pro 2 via N–H···O and C–H···O hydrogen bonds, forming an R 2 2 (8) motif. 
R 1 2 (4), R 2 1 (5), and R 3 3 (8) motifs between l -Pro molecules interlink the chain, stabilizing the 3D hydrogen-bonded network of the S -MDM- l -Pro cocrystal along the a axis. The S -MDM- d -Pro cocrystal crystallizes in the monoclinic space group P 2 1 and the asymmetric unit consists of two S -MDM molecules and two d -Pro molecules (Z′ = 2). As shown in Figure 10, two S -MDM molecules and two d -Pro molecules can be regarded as the crystal packing building block, in which R 4 4 (16) and R 4 3 (11) motifs are created among four S -MDM molecules and two d -Pro molecules via N–H···O and O–H···O hydrogen bonds. An R 4 4 (13) motif between four d -Pro molecules is also observed in this building block through N–H···O hydrogen bonding. The 3D hydrogen-bonded network is extended by connecting different building blocks through O5–H5···O58 and C28–H28···O58 hydrogen bonds. Meanwhile, N–H···O hydrogen bonds between four S -MDM molecules also contribute to the stabilization of the crystal structure, forming two R 3 3 (11) motifs. A difference in melting point between the S -MDM- l -Pro and S -MDM- d -Pro cocrystals can be observed from the DSC plots. The S -MDM- l -Pro cocrystal shows a single endothermic peak at 208 °C and the melting point of the S -MDM- d -Pro cocrystal is 166 °C. Both of these lie between those of the individual components. The IR data show differences in ν N–H , ν O–H , and ν C=O , indicating reconstruction of the hydrogen bond networks in these solids and the formation of new crystalline solids. The experimental PXRD patterns of the S -MDM- l -Pro and S -MDM- d -Pro cocrystals were found to compare well with the simulated PXRD patterns obtained from the SCXRD data. The different sources of S -MDM used in the cocrystallization experiments did not influence the products obtained. A 2014 CSD search of the existing enantiospecific and diastereomeric cocrystals demonstrated that among 44 multicomponent structures containing two optically active compounds, 38 (86%) systems behave enantiospecifically. 40 This reveals that even a small change in the structure of the cocrystallizing component, such as a change in absolute and/or relative stereochemistry, can lead to changes in secondary interactions and steric effects, ultimately changing the outcome of cocrystal formation. 40 − 44 Flood et al. explored the formation of enantiospecific and diastereomeric cocrystals by employing crystal structure prediction and molecular simulations, indicating that despite the similarity in the predicted hypothetical crystal structures and hydrogen-bonding geometries, variations in aromatic interactions and lattice energy were instrumental in favoring the formation of an enantiospecific cocrystal instead of a diastereomeric cocrystal pair. 45 Therefore, for the formation of a diastereomeric cocrystal pair, more changes in the hydrogen bonding network and molecular arrangement are required in order to reduce the influence of the secondary interactions and steric effects on the total cocrystal stabilization energy. 40 As mentioned earlier, the diastereomeric cocrystals of S -MDM with S / R -MDA have similar crystallographic data, and the stoichiometric ratio between S -MDM and the coformers is the same. However, the hydrogen bonding between the two components in these cocrystals differs significantly. As shown in Figure 11, binary level hydrogen-bonding motifs are present in the S -MDM- S -MDA (motif 5) and S -MDM- R -MDA cocrystals (motif 6), respectively.
For the S -MDM- S -MDA cocrystal, only the hydroxyl group from the carboxyl group of S -MDA, serving as both hydrogen-bonding donor and acceptor, is engaged in hydrogen bond formation, while both the oxygen atom of the carbonyl group and a hydrogen atom (H11) from the benzene ring of S -MDM are involved in the hydrogen bond construction. In contrast, for the S -MDM- R -MDA cocrystal, hydrogen bonding occurs between the carbonyl oxygen atom and the hydroxyl group of R -MDA and the amide group of S -MDM. Motif 5 is not found in any structures in the CSD search, whereas motif 6 is present in two reported structures (Refcodes: VASWOC 46 and ZZZRJG01 47 ). These orientationally restrictive interaction motifs determine the formation of diastereomeric cocrystal pairs between S -MDM and S / R -MDA. 48 Moreover, the different contacts in these two cocrystals can be visualized by their 2D fingerprint plots. Hydrogen-bonding contacts in the S -MDM- S -MDA cocrystal constitute a larger proportion than in the S -MDM- R -MDA cocrystal, while in contrast, van der Waals interactions account for a larger percentage in the S -MDM- R -MDA cocrystal. These significant differences lead to remarkable changes in the crystal packing for this diastereomeric pair. Compared to the S -MDM- S / R -MDA diastereomeric cocrystal pair, the differences between the S -MDM- l -Pro and S -MDM- d -Pro cocrystals are more significant. Apart from the dissimilar motifs (motif 7 from S -MDM- l -Pro, motifs 8 and 9 from S -MDM- d -Pro) resulting from the different functional groups in the two cocrystals and their distinct 2D fingerprint plots and corresponding contact contributions, the primary factor that overcomes the stabilization free-energy barrier to cocrystal formation is the different stoichiometric ratio of S -MDM and l / d -Pro. This is similar to the recent report by Leyssens and co-workers for l -Pro with mandelic acid. 20 Given the different outcomes in terms of stoichiometry when using the diastereomeric pairs of S -MDM with either MDA or proline, a series of screening experiments was conducted with S -MDM and S / R -MDA and l / d -Pro in 1:1, 1:2, and 2:1 ratios. Based on the PXRD analysis, the product from the 1:1 ratio is of high purity, without diffraction peaks from either S -MDM or S / R -MDA. The PXRD pattern of the new phase of S -MDM with l -Pro in a 1:2 ratio was obtained, while for the 1:1 and 2:1 ratios, excess S -MDM was present as well as the 1:2 product. For the d -Pro system, new diffraction peaks of S -MDM- d -Pro were found using a 1:1 ratio. Excess S -MDM was detected when a 2:1 ratio was used and excess d -Pro was found using a 1:2 ratio. These grinding experiment results are in line with the solution crystallization results. To demonstrate the potential of MDM as a cocrystal system for chiral resolution, a series of slurry experiments involving (±)-MDM and l -Pro in molar ratios ranging from 1:1 to 1:5 was undertaken ( Table S10 ). The PXRD results revealed that at high proportions of l -Pro, particularly the 1:4 and 1:5 ratios, the R -MDM- l -Pro (or S -MDM- d -Pro) cocrystals were not detected. Due to challenges in the determination of the enantiopurity of proline, the resolution experiment was undertaken using (±)-MDM and l -Pro as a proof of concept. Thus, a sample of (±)-MDM and l -Pro (in 1:3–1:5 molar ratios) was slurried in MeOH for 3 d.
Separation of the solid from the liquid phase and analysis of each component revealed that the solid consisted predominantly of S -MDM- l -Pro by PXRD. Notably, the enantiopurity of S -MDM in the solid phase with 1:5 ratio is 96.1%ee, confirming the chiral resolution is possible through this cocrystal system . Further investigations are underway to explore the potential of MDM for chiral resolution through cocrystallization. In summary, the crystal structures of (±)-MDM, S -MDM and MDM (94 S : 6 R ) were identified and fully characterized in this work. Additionally, this study reports the synthesis and characterization of two novel diastereomeric cocrystal pairs of S -MDM with both enantiomers of mandelic acid ( S -MDM- S -MDA and S -MDM- R -MDA) and proline ( S -MDM- l -Pro and S -MDM- d -Pro). The S -MDM- S -MDA and S -MDM- R -MDA cocrystals have similar unit cell parameters and the same stoichiometric ratio (1:1), yet a significantly different hydrogen bonding between the two coformers plays a critical structure determining role. The formation of S -MDM- l -Pro and S -MDM- d -Pro diastereomeric cocrystals proceeds with different stoichiometries, similar to a recent report of proline with mandelic acid, 20 although the structure determining features are very different. The feasibility of utilizing MDM and l -Pro as a cocrystal system for chiral resolution was explored. This work revealed that S -MDM can be effectively resolved by cocrystallization with l -proline. | Review | biomedical | en | 0.999996 |
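Relating to the enantiopurity figures in the article above (96% S for the commercial material; 96.1% ee after the resolution experiment), the standard peak-area arithmetic for chiral HPLC is sketched below. The peak areas are invented, and equal detector response for both enantiomers is assumed.

```python
def enantiomeric_excess(area_s, area_r):
    """%ee of the S enantiomer from chiral HPLC peak areas
    (equal response factors assumed for both enantiomers)."""
    return 100.0 * (area_s - area_r) / (area_s + area_r)

def percent_s(area_s, area_r):
    """Fraction of the S enantiomer as a percentage."""
    return 100.0 * area_s / (area_s + area_r)

# Invented peak areas, chosen to mirror a roughly 96:4 S:R composition.
a_s, a_r = 9600.0, 400.0
print(f"{percent_s(a_s, a_r):.1f}% S, ee = {enantiomeric_excess(a_s, a_r):.1f}%")
```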
PMC11697380 | Teaching computational thinking requires insights into how learners understand computational concepts and engage in computational practices. Moreover, to involve all learners and optimize inclusive teaching, it is essential to also know how such understanding and engagement can differ for specific groups of learners, such as those with impairments . Specifically for the group of learners with visual impairments, attention has risen the past few years to improve their participation in early programming lessons. Most research has focused on usability and accessibility of programming tools and materials. Learners with visual impairments form a challenging group here, because of the wide variety in their vision (with the majority having low vision in diverse forms, and a smaller group being blind) which results in different possibilities and preferences, especially when it comes to the use of technology . General technology accessibility issues are known in for instance screenreader compatibility. Furthermore, materials designed to introduce computational thinking to younger learners are often very visual in nature. Consequently, studies have been striving to identify which specific issues in these materials hinder the accessibility for low vision and blind learners. This has resulted in several proposed and implemented adaptations of materials. Examples are the improvement of accessibility of block-based environments and the addition of audio feedback to navigate through textual code . Further, new materials have been specifically designed for the target group, including a tangible block-based tool and inquiries are being made into suited instructions and support . Importantly however, the area of specificities in cognitive processing in visually impaired individuals has been rather unexplored. Especially in the subgroup of blind learners it is likely that such specificities exist, due to known particularities in visual-spatial mental modeling and spatial navigation of this group . The resulting complexity of conveying abstract cognitive concepts to learners with visual impairments has been documented in other educational fields such as science and music . In this study, we focus on the field of programming education, and explore how blind and low vision learners approach the computational concept of abstraction. By observing these learners during programming assignments with the educational robots the Bee-bot and Blue-bot, we assess their approach to and experience of this concept through concrete behaviors. This will provide insight into how the process of abstraction emerges in this group of learners. Ultimately, these insights can contribute to understand specificities within their cognitive processing in the context of computational concepts, as well as to provide tailored educational support. Concerning our language use, we are aware of the discussions on appropriate terms when referring to people with impairments . Through this paper, we use the terms currently in place in our educational practice as well as in the academic literature we build on: learners with visual impairments, and blind or low vision learners. As a result, we use the term “braille learner”, following this indication by the schools of blind learners (with possibly some residual vision) who are being taught braille. Computational thinking is a widely applied but complex term in the context of programming education to young learners . 
Several definitions of computational thinking are in use (both in academia and in practice), originating from early papers. In line with these earlier understandings, at its core computational thinking can be understood as a set of problem-solving processes. Three aspects can be distinguished: computational concepts, computational practices, and computational perspectives. Computational concepts refer to the content of the processes engaged in while programming, for instance iteration or parallelism. Further, computational practices refer to the activities (which can be cognitive) employed to engage with the concepts, for instance debugging. Finally, less relevant here, computational perspectives involve the perspectives designers have of themselves and the world. A core computational process is abstraction, which involves viewing a situation at various levels of detail and deciding which details we need and which we can ignore. It can be seen as a form of problem solving, as in the model by Perrenet where four layers of abstraction are described to understand how novice learners approach programming tasks. In the original model, these layers were identified in Computer Science education students’ thinking. The layers included the problem layer (the highest layer, where a verbal description of the problem is provided), the design layer (where a detailed depiction of the solution is provided without a reference to the specific programming language), the code layer (referring to the code in the specific programming language) and the execution layer (which involves running the code or referring to the output, the lowest level). The model has been applied in the context of elementary school-level learners, where concrete observable behaviors of young learners while working with an educational robot have been operationalized for each layer. This enables identifying which layer a learner is engaged in during an assignment. Behaviors include tactile expressions (for instance, pointing towards the robot), verbal expressions (describing the route of the robot) and observing the environment, route, or robot carrying out the task. In addition to observing the behaviors within layers, the model can also be used to assess how learners switch between the layers through pattern analyses. Previous research with the educational robot revealed that young learners spend little time on the problem layer but do switch between layers in a manner that suggests debugging (switching back to the code layer after the execution) and redesigning (switching back to the design layer). The level of complexity or abstraction of the problem or task itself can also be taken into account by looking at the dimensions of control and representation. Control can range from direct manipulation (moving a physical robot or dragging a character in a programming environment) to computational control (where a sequence of instructions is constructed that is executed later on). Representation refers to the manner in which such a sequence is presented. This can range from the programmer having no representation to the programmer having an external plan outside of the unit that is being programmed. More abstract and complex tasks consequently imply moving away from more direct forms of control. Understanding and operationalizing abstraction at this observable level is an essential step in optimizing programming education, since it provides direct starting points for teachers to recognize and support the development of this process.
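A minimal sketch of how the switching patterns mentioned above (execution-to-code switches read as debugging, execution-to-design switches read as redesigning) can be counted from a coded sequence of layer observations is given below. The sequence and layer labels are invented examples, not data from the cited studies.

```python
from collections import Counter

LAYERS = ("problem", "design", "code", "execution")

# Invented example of a coded sequence of layer observations for one session.
sequence = ["problem", "design", "code", "execution", "code",
            "execution", "design", "code", "execution"]

frequencies = Counter(sequence)                       # time spent per layer
transitions = Counter(zip(sequence, sequence[1:]))    # switches between layers

# Switches that previous work interpreted as debugging / redesigning.
debugging = transitions[("execution", "code")]
redesigning = transitions[("execution", "design")]

print("layer frequencies:", dict(frequencies))
print("debugging switches:", debugging, "| redesign switches:", redesigning)
```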
It is consequently also key to identify how abstraction emerges in learners with diverse needs. Learners with visual impairments have received quite some attention in the topic of programming education (though young learners more recently) mostly at the level of identifying practical usability and accessibility in programming materials and environments . Various barriers have been described . These include inaccessibility in programming languages and environments and code navigation . Specifically for the younger learners, there are relevant issues with block-based languages (including the reliance upon the mouse and the drag and drop interface) as well as the visual properties of tangible materials such as robots and robotic kits . At the cognitive level (of comprehending and representing computational concepts and employing particular computational practices) however, specificities in the group of learners with visual impairments have been rather unexplored. We know from other educational fields such as science and music that teaching abstract cognitive concepts to learners with visual impairments can be challenging. This challenge in conveying abstract knowledge to learners with visual impairments can be grounded in specificities in visio-spatial mental modeling and spatial navigation in this group. This is a highly complex area. Generally, qualitative differences can exist for specifically blind compared to sighted individuals in how spatial information is encoded, as a result of the absence of visual experience and the quantitative advantage of vision over other senses . However, performance is influenced by several factors, including the onset of blindness, other experiences, taught or developed compensatory mechanisms as well as the specific spatial processing aspects involved or task used. Ultimately, visio-spatial mental images of blind individuals can in practice be functionally equivalent to those of sighted individuals, but differences on specific mental imagery tasks can also be identified. To further understand the implication of this complex picture within the context of young learners with visual impairments’ education, it is recommended to focus on distinct spatial contexts and representations within a particular discipline . Within programming education, the core concept of abstraction provides a suitable starting point. The approach of the layers of abstraction and the previously identified behaviors emphasize how more complex tasks and approaches entail less direct manipulation and consequently require more mental modeling. In order to explore this, we use the educational robots Bee-bot and Blue-bot, since these are widely applied in early programming education and have been proven to be at the concrete level relatively accessible for learners with visual impairments . Further, the two types of this bot provide different options for the dimension of control within a programming task, with the Bee-bot being directly programmed with buttons on the bot and the Blue-bot having the option to be programmed through an external device. Our research question is: which patterns and specific behaviors, approached from the layers of abstraction model, do children with visual impairments engage in when working on a programming task with the Bee-bot or Blue-bot? Specifically, we will assess how the children move through the abstraction layers during a task, and which behaviors they show within the different layers. 
In our understanding of the four layers, we follow previous work on this model. Consequently, the problem layer refers to the most abstract level where the problem is discussed, the design layer involves a depiction of the solution, the code layer involves being directly concerned with the code as applicable in the specific tool or language, and the execution layer is the least abstract layer where the code is run or where there is preoccupation with the output. In our study, first, the frequency of the different layers and the switching between the layers can reveal whether learners with visual impairments engage in these higher levels of abstraction, which require mental representation of the problems they work on. Second, how exactly these learners engage in these layers, that is, which types of concrete behaviors and practices are being employed, can indicate what information is used and needed to build these mental representations. This also illuminates the extent to which the original model of the layers of abstraction, based on sighted learners, is applicable to other learners with certain specificities. Together these insights explore how the cognitive concept of abstraction in the context of programming is experienced by learners with visual impairments. Our primary interest lies in blind children; however, given the low prevalence of this group as well as the largely unexplored nature of this topic in learners with visual impairments overall, we include in our study pairs of learners with visual impairments, with each pair containing at least one blind child. Nine children from three special education schools for learners with visual impairments in the Netherlands participated in pairs in sessions with the Bee-bot (three sessions) and/or the Blue-bot (four sessions). Table 1 summarizes the pairs and characteristics of the children. There were two pairs who participated in both a Bee-bot and a Blue-bot session (the pair of sessions 2 and 5 and the pair of sessions 3 and 6). In sessions 1 and 4, Child 1 was the same, but Child 2 was different.
Table 1. Overview of the seven sessions and the participants. */**/*** indicates same pair or same child within pair.
No. | Child 1 | Child 2 | Level | Bot
1 | Braille (f)* | Braille (m) | Lower | Bee-bot
2** | Low vision (m) | Braille (m) | Lower | Bee-bot
3*** | Braille (m) | Braille (m) | Higher | Bee-bot
4 | Braille (f)* | Low vision (m) | Lower | Blue-bot
5** | Low vision (m) | Braille (m) | Lower | Blue-bot
6*** | Braille (m) | Braille (m) | Higher | Blue-bot
7 | Braille (f) | Low vision (f) | Middle | Blue-bot
In total, the nine children comprised three girls and six boys. Three children had low vision in various forms, including for all of them blurred vision and additional effects such as distorted view or images. The other six children were blind, with two of them being completely blind, and four having some residual vision or light perception. In the results, we refer to the blind learners as “braille learners”, which is how they are indicated at school. This indication reflects that these learners could have some residual vision but were all in any case taught braille, in addition to working with various other tools and assistive technologies such as screenreaders. Because the policy in the Netherlands is that learners with visual impairments enroll in regular education unless this is not possible, all participants had additional learning issues or other specificities. These included specific behavioral issues or specific learning needs.
The participants were enrolled at the lower, middle or higher level of special education elementary schools. Although there is flexibility in age ranges in this school context, these ranges generally include learners of, respectively, 6–8, 9–10, and 10–12 years old. The schools were located in different parts of the Netherlands, with the school from pairs 1/4 and 2/5 being in a more rural part and the schools from pair 3/6 and pair 7 in a more urban part. The classes of these schools were similar in size and the schools overall had a similar educational approach and support. The primary focus of this qualitative study was to explore visually impaired learners' experience of the concept of abstraction in a programming assignment, in order to gain insight into the reality of this topic as experienced by our subjects. Fitting with such a design and focus, we aimed to establish the trustworthiness of our study through the four criteria of Guba: truth value, applicability, consistency, and neutrality. In our collection of the data, we followed a tailored approach fitting the subjects' specific setting and needs, providing space to find and express their experience. Further, we documented this approach, our sample, and the findings in detailed descriptions (see the relevant parts of the methods and results sections). As such, we established truth value by staying close and true to the direct experience of subjects and documenting this experience in detail. Further, applicability refers to the extent to which (a type of) generalization is aimed for. In this study, this was limited to enabling transferability to similar participants and contexts by providing details on these participants and contexts. Third, consistency is established in the results section by working with a detailed coding scheme that allows for both pre-defined and newly observed behaviors, and in addition through providing full descriptive pictures for each pair of learners. Finally, neutrality is established again through staying close to our participants' experience and documenting these experiences in detail. The Bee-bot and Blue-bot sessions were conducted in the context of a larger project on usability and accessibility of programming materials for learners with visual impairments. Classes participating in this project were all part of special education schools belonging to the two Dutch expertise centres for visual impairments, and received three programming lessons, each focusing on specific materials. The three classes participating in the Bee-bot and Blue-bot lessons had each received the first lesson on an unplugged material, after which they used the Bee-bot in the second lesson and the Blue-bot in the third lesson. Informed consent was obtained from parents, who were approached through the teachers with a letter explaining the lessons and research. Parents were asked to give permission for the participation of their child in the research and for the video recording that took place in the classroom. If a parent did not give consent, their child still took part in the programming lessons, but no data were collected on this specific child, who was also not part of the video recording. The latter was ensured by having the children for whom no permission was obtained sit in a separate classroom during the assignment. Between half and all of the parents gave consent in the three classes. The programming lesson consisted of a short introduction, after which the children were divided into pairs to work on an assignment.
The introduction was given by the researcher. During the assignment, each pair of children was guided and supported by a tester; this was either the researcher or a research assistant. The research assistants were students in social sciences or computer sciences who had received training on working with children with visual impairments as well as on facilitating the set-up of the assignment as explained below. The educational floor-robot Bee-bot and the more advanced version, the Blue-bot, were both used. These robots have the same basic look and functions, shaped as a bee with a clear front (distinguished by the protruding eyes and nose) and seven buttons on top, which can be used to move the bot forward or backwards, turn it right or left, pause, and run or erase the program. The types of functions are distinguished by the different shapes and colors of the buttons. The bot makes sounds when it is being programmed (with different sounds for a step, for erase, and for run) and when it executes the program (making a sound for each step and a different sound at the end). Whereas the Bee-bot can only be programmed with the buttons on top, the Blue-bot has the option to be programmed externally using the accompanying materials (the tactile reader and tactile reader cards), or the Blue-bot app on a PC or tablet device (the latter was not used in the current study). The tactile reader is an external card holder, which connects to the Blue-bot through Bluetooth. A total of nine cards, which hold the same five functions as the buttons on top of the robot, can be placed in this holder. Compared to programming the bot with the buttons on top, this external device makes it possible to lay out the program. The original cards indicate their function (step forward, pause, etc.) with a small picture. Since this is unsuited for blind learners, adapted tactile versions of these cards were used (previously created by one of the expertise centres and explored in some of the schools), which contained small tactile shapes attached to the cards below the original pictures. One consideration in the design of the tactile shapes was to find an appropriate alternative for the arrow shape, which had proven in these previous explorations to be a difficult concept to convey to braille learners. Throughout this study, the tactile versions of the Blue-bot cards were used, as well as (in order to compare) the original versions. Finally, for the environment in which the bots were programmed (see Assignments below), either the wooden board, which distinguishes different plates indicating the steps of the Blue-bot and can be built into a maze, was used, or loose Kapla blocks were used to create the environment or a maze. The sessions with the Bee-bot and Blue-bot started with a plenary introduction to the whole class by the researcher. Since the children had already been introduced to programming during the previous (unplugged) lesson given in the context of the project, this introduction focused on explaining the bot. During this introduction, the children and researcher all sat together, and the bot was passed around all the children for a visual and/or tactile exploration while the researcher explained the buttons (emphasizing the visual, tactile, and auditory elements). In the following Blue-bot sessions, the reader and cards were introduced and passed around all the children.
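As an illustration of the command set described above (forward, backward, turn left or right), the sketch below maps a programmed sequence onto grid positions, which can be handy when preparing worked-out routes. The coordinate convention and command names are our own simplification and are not part of the Bee-bot or Blue-bot software.

```python
# Minimal model of a Bee-bot/Blue-bot command sequence on a square grid.
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # north, east, south, west

def run_program(commands, start=(0, 0), heading=0):
    """Return the final grid cell and heading after executing the commands."""
    x, y = start
    for cmd in commands:
        if cmd == "FORWARD":
            dx, dy = HEADINGS[heading]; x, y = x + dx, y + dy
        elif cmd == "BACKWARD":
            dx, dy = HEADINGS[heading]; x, y = x - dx, y - dy
        elif cmd == "RIGHT":
            heading = (heading + 1) % 4
        elif cmd == "LEFT":
            heading = (heading - 1) % 4
    return (x, y), heading

# e.g. two steps forward, turn right, one step forward
print(run_program(["FORWARD", "FORWARD", "RIGHT", "FORWARD"]))
```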
In the lower-level class (Sessions 1, 2, 4, and 5), the teacher preferred to conduct most of the explanation individually, after the children had been divided up into pairs. Once the children had been divided up into pairs and matched to a tester, the tester explained the constructive interaction protocol (described in the next section), after which the video recording was started. Next, the tester checked whether the children had understood the explanation on the bot and, if needed, provided additional instructions. The assignment always started with the children programming a few steps in order to move the bot from one child to the other. The tester then chose an appropriate assignment to continue with, out of several available worked-out assignments at different levels (including having the bot move from point A to point B within an open environment or within a maze, or having the bot perform a dance with a repeated pattern). The children were also allowed to (co-)design the environment and/or think of the end goal for the bot themselves. The children always worked with the bot in a structured manner towards a specific goal. This set-up was designed and carried out following the recommendation for an individually guided, tailored, and flexible approach in research with children with impairments. In addition to the specific content of the assignment being adapted to the children’s level, tailored extra instruction was provided when required, and the teacher was present to intervene, for instance when a child got too distracted. Further, in order to gain insights into children’s experience while working with the bots, the think-aloud method of constructive interaction was used. Constructive interaction uses the set-up of a collaboration between children in order to create a natural situation for them to verbalize their experiences. We explicitly stimulated this by providing the children with an elaborate instruction on verbalizing their thoughts at the start, including a concrete example. The children were instructed to work together and try to verbalize what they were thinking of the material and what they were doing. The tester reminded them throughout the assignment, using neutral prompts (“don’t forget to think aloud”, “what are you doing now”). The sessions were all recorded individually on video. These recordings were processed by coding and transcribing verbal and non-verbal behavior, using a detailed pre-defined coding scheme in line with a theory-driven thematic analysis approach. Continuing on establishing trustworthiness for our study as described in the research design section above, our data analyses were also aimed at capturing the experience of reality of our subjects. This was obtained first of all by staying close to this experience in coding our data, and second by taking into account the learners’ overall approach to the assignment and providing a full picture for each pair of participants on how they proceeded through the assignment. The coding scheme consequently included the primary focus of the study, distinguishing the four layers of abstraction, but also information on additional aspects of the learners’ experience. In addition, the overall impression of the sessions is included in the descriptions in the results as well. The scheme included 17 categories of behaviors referring to specific features of usability and accessibility (for instance independent use, needed assistance, and positive or negative experience) and the computational practices.
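Purely as an illustration of the kind of tallying such a coding scheme feeds into, the sketch below counts, for a handful of invented coded observations, how often each abstraction layer occurs per session. The event tuples, layer labels, and the use of Python are all hypothetical and not part of the study's own workflow (which relied on spreadsheet and statistics software, as described below).

```python
# Hypothetical illustration only: tallying coded observations into per-session
# layer frequencies. The events below are invented; "other" marks behavior
# unrelated to the four abstraction layers.
from collections import Counter

coded_events = [
    (1, "code"), (1, "design"), (1, "code"), (1, "execution"),
    (2, "design"), (2, "problem"), (2, "design"), (2, "other"),
]

def layer_frequencies(events):
    """Count how often each layer label occurs within each session."""
    per_session = {}
    for session, layer in events:
        per_session.setdefault(session, Counter())[layer] += 1
    return per_session

for session, counts in sorted(layer_frequencies(coded_events).items()):
    total = sum(counts.values())
    shares = {layer: round(100 * n / total, 1) for layer, n in counts.items()}
    print(f"Session {session}: counts={dict(counts)}, percentages={shares}")
```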
The behaviors were based on previous insights on the use of programming materials with sighted and visually impaired learners. Concerning the coding of the computational practices, for each layer, plausible pre-defined behaviors were hypothesized based on previous work with sighted learners and fitting the Bee-bot and Blue-bot (pre-specified behaviors for each layer can be found in Tables 3–6). In addition, for each layer it was possible to indicate non-anticipated behaviors. This enabled the important intention of the research set-up to explore the current subjects’ behaviors in an open way. For all seven sessions included in this study, a coding scheme was completed, including verbatim transcriptions of verbal behaviors. Behavior occurring during one time or stimulus could be coded within multiple layers, for instance, when children were simultaneously discussing the end point and the steps towards it, which would be coded as pre-defined behavior in the problem as well as the design layer. Further, multiple behaviors could also be coded within one layer, when, for example, children were following the output while discussing whether the outcome was anticipated, which would be coded as two pre-defined behaviors within the output layer. With the detailed and elaborate coding scheme, we aimed to establish consistency in our study, allowing for full descriptive pictures where observed behaviors are embedded in context and connected to the overall experience. Further processing was conducted by first taking an inventory of the frequency of and switching between the layers, by creating frequency tables and a graph for each session representing the switching per layer. We followed the previous paper by Faber et al. with this representation of young learners’ switching through the abstraction layers in graphs. Second, the behaviors within the layers were inventoried by creating frequency tables for the pre-specified behaviors within each layer and structuring the open answers for non-anticipated behaviors into patterns. Finally, information from other (not computational practice-related) categories was scanned to obtain an overall picture of the course of the assignment as well as any specificities for each session. Microsoft Excel was used for the coding schemes; further processing of the data and creation of the graphs was done in the Statistical Package for the Social Sciences (SPSS), version 27.

Table 2. Occurrence of layers within and across sessions. Percentages are relative to the specific session.
Layer              Session 1     Session 2     Session 3     Session 4    Session 5    Session 6    Session 7    Total
Problem layer      24 (8.9%)     19 (7.3%)     17 (6.3%)     5 (3.7%)     8 (6.5%)     4 (3.6%)     5 (7.2%)     82 (7.0%)
Design layer       83 (30.6%)    106 (40.5%)   54 (19.9%)    35 (26.1%)   21 (17.1%)   26 (23.4%)   18 (26.1%)   343 (27.6%)
Code layer         112 (41.3%)   104 (39.7%)   73 (26.9%)    61 (45.5%)   35 (28.5%)   18 (16.2%)   28 (40.6%)   431 (34.7%)
Execution layer    52 (19.2%)    33 (12.6%)    127 (46.9%)   33 (24.6%)   59 (48.0%)   63 (56.8%)   18 (26.1%)   385 (31.0%)
Total              271           262           271           134          123          111          69           1241
Other              94            43            124           73           30           58           43

Table 3. Behaviors within problem layer.
Behavior                       Low vision    Braille       Total
Anticipated
  Point to starting point      8 (28.6%)     11 (15.1%)    19 (18.8%)
  Point to end point           6 (21.4%)     13 (17.8%)    19 (18.8%)
  Discuss starting point       2 (7.1%)      8 (11.0%)     10 (9.9%)
  Discuss end point            9 (32.1%)     14 (19.2%)    23 (22.8%)
Non-anticipated                3 (10.7%)     27 (37.0%)    30 (29.7%)
Total                          28            73            101

Table 4. Behaviors within design layer.
Behavior                       Low vision    Braille        Total
Anticipated
  Point to route               3 (2.1%)      19 (7.9%)      22 (5.7%)
  Following route              13 (8.9%)     22 (9.1%)      35 (9.0%)
  Describing route             89 (60.1%)    89 (36.8%)     178 (45.9%)
  Counting steps route         12 (8.2%)     7 (2.9%)       19 (4.9%)
Non-anticipated                29 (19.9%)    105 (43.4%)    134 (34.5%)
Total                          146           242            388

Table 5. Behaviors within code layer.
Bee-bot
Behavior                       Low vision    Braille        Total
Anticipated
  Make program                 0             8 (2.9%)       8 (2.6%)
  Follow program               1 (3.2%)      15 (5.5%)      16 (5.2%)
  Press one button             22 (71.0%)    149 (54.2%)    171 (55.9%)
  Press multiple buttons       0             64 (23.3%)     64 (20.9%)
  Erase steps                  7 (22.6%)     38 (13.8%)     45 (14.7%)
Non-anticipated                1 (3.2%)      1 (.4%)        2 (.7%)
Total                          31            275            306
Blue-bot
Behavior                       Low vision    Braille        Total
Anticipated
  Make program                 1 (.8%)       4 (3.0%)       5 (1.9%)
  Follow program               2 (1.6%)      4 (3.0%)       6 (2.3%)
  Take card                    49 (39.8%)    51 (38.1%)     100 (38.9%)
  Put card in reader           44 (35.8%)    47 (35.1%)     91 (35.4%)
  Take card out of reader      12 (9.8%)     8 (6.1%)       20 (7.8%)
  Change order of cards        2 (1.6%)      2 (1.5%)       4 (1.6%)
Non-anticipated                13 (10.6%)    18 (13.4%)     31 (12.1%)
Total                          123           134            257

Table 6. Behaviors within output layer.
Behavior                       Low vision    Braille        Total
Anticipated
  Execute program              14 (18.4%)    109 (31.2%)    123 (28.9%)
  Follow bot                   45 (59.2%)    165 (47.3%)    210 (49.4%)
  Relate outcome               7 (9.2%)      40 (11.5%)     47 (11.1%)
  Predict outcome              6 (7.9%)      27 (7.7%)      33 (7.8%)
Non-anticipated                4 (5.3%)      8 (2.3%)       12 (2.8%)
Total                          76            349            425

The results consist of two main parts. The first part (3.1) focuses on the abstraction layers: their occurrence and the patterns of switching between the layers. This section starts with a display of the occurrence of the layers for each pair within an overall table, which enables a descriptive overview of the frequency of occurrence within and across the pairs. Each pair’s experience of switching through the layers while working on the assignment is captured through a description of the pair and an accompanying graph. The graphs should be viewed as qualitative illustrations of the patterns of switching through the layers. The second part of the results (3.2) focuses on the behaviors within the layers. This section contains a detailed assessment of the behaviors as they occur in the layers, by reporting and describing the anticipated and non-anticipated behaviors. The focus is more across than within the pairs here, but specific behaviors continue to be ascribed to specific pairs. The overview of the occurrence of the four layers (Table 2) as well as the pattern analysis indicates how the children within the seven sessions move and switch through the layers of abstraction while they work on their assignments with the Bee-bot (Sessions 1, 2, 3) and Blue-bot (Sessions 4, 5, 6, 7). Most sessions lasted around 30 minutes; session 7 lasted only 15 minutes. Behavior unrelated to the layers is coded with 0, displayed in the graphs, and included in Table 2 as well. Overall, it can be seen that all layers occur within all sessions. The problem layer occurs least frequently (on average about 7% of the time); as the graphs show, the point during the session at which this layer typically occurs differs per session. In sessions 1, 2, 3, and 5 the problem layer arises all through the session, whereas in sessions 4, 6, and 7 there are one or two occasions where this layer occurs. The design, code, and execution layers usually each take up between 20% and 40% of the behaviors, with some exceptions (for instance the execution layer occurring less often in session 2, and the code layer occurring quite frequently in session 4).
In some sessions, the design layer stands out by being less frequently engaged in compared to the code and execution layers (sessions 3 and 5 most clearly), while in session 2 the design layer occurs most often. “Other” behaviors are seen throughout all sessions in between processes related to abstraction. Consistently across sessions, these other behaviors mostly involve the children listening to instructions, generally discussing the material or their collaboration with it, or distractions and actions outside of the material and assignment. Figure 1. Session 1 pattern of layers. Line graph for session 1 describing the pattern of moving through the layers of abstraction with the Bee-bot. The x-axis represents time during the assignment and the y-axis represents the four abstraction layers. After a start where the children spent some time at the code and execution layer, the graph shows the children most frequently switch between the design and code layer. Six dense stretches going back and forth between these layers can be seen. In between these stretches the children visit the execution layer, and mostly during the second half they also go to the problem layer. Figure 2. Session 2 pattern of layers. Line graph for session 2 describing the pattern of moving through the layers of abstraction with the Bee-bot. The x-axis represents time during the assignment and the y-axis represents the four abstraction layers. The graph shows six stretches of switching back and forth between the design and code layer, spread through the assignment. At the start of or during each of these stretches, the problem and execution layers are also visited once or twice. Figure 3. Session 3 pattern of layers. Line graph for session 3 describing the pattern of moving through the layers of abstraction with the Bee-bot. The x-axis represents time during the assignment and the y-axis represents the four abstraction layers. The graph shows a not very dense pattern of mostly switching between the code and execution layers and, less frequently, the design layer. The problem layer is switched to on occasion all through the assignment. Looking in more detail at the individual graphs, complemented by the accompanying behavior and atmosphere during the different sessions, several observations can be made. A general trend is that sessions 1, 2, and 3 (the Bee-bot sessions) have a denser pattern compared to sessions 4, 5, 6, and 7 (the Blue-bot sessions). Further, especially in sessions 1 and 2, the dense pattern involved several stretches of quickly switching back and forth between coding and designing. Taking a closer look at these sessions, session 1 concerned two braille learners who often relied on their residual sight by bringing themselves very close to the material. The children preferred to work by themselves, and the coding-designing stretches always involved one child programming step by step by pressing the button (code layer) and moving the bot along through the environment to plan the next step (designing). Session 2 consisted of one low vision and one braille learner. The latter did not have any residual sight and relied upon the audio function of the bot and tactile exploration, both while programming and while following the bot on its route, as well as upon considerable verbal and tactile assistance from the tester and the other child. The two boys worked enthusiastically and well together.
Whereas Figure 2 shows similar coding-designing stretches as in session 1, in session 2 this always involved both children working together while dividing the tasks, with one child coding and the other child designing. In most of the stretches it was Child1 (low vision) who took on the design and Child2 who coded; only in the third stretch was this the other way around. As the graph indicates, this stretch, which takes place between 1300 and 1500 seconds within the assignment, is a bit slower paced compared to the other stretches. The tester intensely guided the braille child here in tactilely exploring the maze to think of the next step. In session 3 both children were braille learners (Child1 had some very limited residual sight) primarily using tactile and auditory access, receiving some support from the tester or each other, for instance in confirming which button they were to press or in getting oriented. The graph in Figure 3 shows a calmer pattern, which includes the execution layer more frequently in between coding and designing. This reflects the children working on smaller sub-parts of the program, which were tested in between. Generally, both boys worked together through the different layers, though Child2 was a bit more active. Figure 4. Session 4 pattern of layers. Line graph for session 4, describing the pattern of moving through the layers of abstraction with the Blue-bot. The x-axis represents time during the assignment and the y-axis represents the four abstraction layers. The graph shows a spacious, not very dense pattern of switching between the code and execution layers and the code and design layers, moving to the problem layer twice at the end of the first half of the assignment. Figure 5. Session 5 pattern of layers. Line graph for session 5, describing the pattern of moving through the layers of abstraction with the Blue-bot. The x-axis represents time during the assignment and the y-axis represents the four abstraction layers. The graph shows a spacious, not very dense pattern of switching between the four layers. During the second half the graph becomes somewhat more dense, and there is a stretch going back and forth between the design and code layer, and a stretch going back and forth between the code and execution layer. Figure 6. Session 6 pattern of layers. Line graph for session 6, describing the pattern of moving through the layers of abstraction with the Blue-bot. The x-axis represents time during the assignment and the y-axis represents the four abstraction layers. The graph shows a spacious, not very dense pattern of switching mostly between the code, execution, and, less often, design layer. During the second half of the assignment the children switch three times to the problem layer. The remaining sessions involved the Blue-bot. Session 4 included a braille girl (same as in session 1) and a low vision boy, with the graph in Figure 4 indicating a much less dense pattern. The code layer was most frequent here as well, and the problem layer clearly less frequent. There was some, but less frequent and quick, switching back and forth between designing and coding. Because the children did not work well together, similar to session 1, they worked in turn on designing, coding, and running their own program. Session 5 included the same two boys (one braille and one low vision) as session 2. The execution layer was most prevalent here, and the graph in Figure 5 is much less dense. This seems to reflect slower steps of coding, where every time the next step was identified the correct card first had to be found.
The tasks were not as clearly defined as in their Bee-bot session; coding and designing were both more frequently engaged in by Child 1, who had low vision. In session 6 as well, a similar pair (in this case, as in session 3, two braille learners) worked with the Blue-bot. Their overall experience was much less positive than in their Bee-bot session; they were not very interested in working with the bot anymore and thought it was very limited. The execution layer is again most prevalent; there is, however, somewhat slower switching between the four layers, with the problem layer only being included a couple of times later on. Finally, session 7 included a braille girl and a low vision girl. This was a relatively short session during which the code layer was most prevalent. The children worked well together and both children were involved throughout all layers. The braille learner was aware of, and asked for, what worked for her. Figure 7. Session 7 pattern of layers. Line graph for session 7, describing the pattern of moving through the layers of abstraction with the Blue-bot. The x-axis represents time during the assignment and the y-axis represents the four abstraction layers. The graph is shorter than the other graphs in duration, and shows that the children switch between the code and execution layers at first, after which they start to involve the design layer. During the second half the problem layer is involved in a stretch with the design and code layers; after that, the execution layer is visited. The behaviors the learners displayed within the layers are categorized into anticipated (observed in or inferred from previous research) and non-anticipated (first observed in our study). These anticipated and non-anticipated behaviors are indicated in Tables 3 to 6, specified per session as well as by vision type (braille or low vision learner). First, Table 3 provides the behaviors occurring within the problem layer. All anticipated behaviors are engaged in by both low vision and braille learners, though no single behavior is consistently present across all sessions. Discussing the end point is overall the most frequent of the behaviors occurring within this layer. Further, it can be noticed that relatively more non-anticipated behaviors are engaged in by the braille compared to the low vision learners, taking up almost 40% of all behaviors amongst the braille learners. The inventory of these non-anticipated behaviors showed that they most commonly involved an alternative way of being occupied with the start or ending. This included placing the Bee-bot at the start (multiple times by both children in session 1), discussing the start position/stance of the Bee-bot (once in session 3), coding the final step for the Blue-bot in advance (session 5, by a low vision child), or discussing different routes towards the end. Some additional non-anticipated behaviors that occurred were being generally occupied with the plan or environment, making a drawing of the environment (in session 1), or depicting the goal of the assignment in a physical way. The latter occurred in session 6, and involved Child 1 standing up and physically taking several turns to explain to the other child what kind of turn the bot took (“Look, look, I take a step, turn, step, step, turn, step, turn”). Second, Table 4 provides the behaviors within the design layer. All anticipated behaviors occurred, and describing the route was by far the most common anticipated behavior, occurring in all sessions.
Whereas with the low vision learners this took up over half of their behaviors, with the braille learners this was somewhat less. For the braille learners, similar to the problem layer, non-anticipated behaviors were most common and took up almost half of their behaviors within this layer. The inventory of the non-anticipated behaviors showed that several of these behaviors involved manually moving the bot forward, either step by step while programming (Sessions 1, 2, 4) or in one go to plan or check an entire route (Sessions 4, 6, and 7). Though most of this frequently occurring behavior occurred in sessions 1 and 4, it was also observed in other sessions. In session 2 one of the learners (the low vision learner) placed his hand in the route to indicate the position of the bot. Comparable, but involving the cards for the Blue-bot, was the behavior of holding a card in or next to the route to check its direction. Other behaviors that occurred were drawing a route on paper or working on the environment (which occurred in most sessions). Third, in Table 5 behaviors within the code layer were inventoried separately for the Bee-bot and Blue-bot, since most behaviors within this layer are specific to either bot (involving pressing the buttons on the Bee-bot or picking and placing the cards of the Blue-bot). For the Bee-bot, it can be seen that pressing one button was responsible for around 56% of all behaviors, taking up the majority of behaviors of both low vision and braille learners (though somewhat more in the low vision children). Other anticipated behaviors were all observed, though writing code/making the program and pressing multiple buttons were not observed in the low vision children. Non-anticipated behaviors rarely occurred in this layer for the Bee-bot sessions. For the Blue-bot sessions, most frequent were the anticipated behaviors of taking a card and putting the card in the reader, both taking up about 35%–40% of the behaviors and occurring equally frequently in the low vision and braille learners. Other anticipated behaviors occurred, but only incidentally. Non-anticipated behaviors did take up about 10% of the behaviors, or somewhat more in the braille learners, occurring in all Blue-bot sessions. This mostly involved searching for a specific card or searching through the cards, correcting the order of the cards in the reader, or interacting with the other child about a card (checking, giving, or proposing a card to the other child). Incidentally, visual or tactile exploration of a card could be seen, as well as (in the Bee-bot sessions) stopping the program or adjusting the bot’s position. Fourth, concerning the output layer (Table 6), the most frequently occurring behaviors are following the behavior of the bot (which takes up about 50% of the behaviors within this layer) and executing the program (about 30%). It appears that the low vision children engage in following the bot somewhat more often, and the braille children engage in executing the program somewhat more often. Non-anticipated behaviors do not occur regularly within this layer and are observed only 12 times, somewhat more often with the braille learners. Most often this involves stopping a program or moving/adjusting the Blue-bot during the output. Two specific behaviors seen in Session 6 are feeling where the Blue-bot ended up after the program ended, and feeling along with the steps of the Blue-bot while the program is being executed.
In this study, we answered the research question: which patterns and specific behaviors, approached from the layers of abstraction model, do children with visual impairments engage in when working on a programming task with the Bee-bot or Blue-bot? We assessed how nine children (six of whom were blind, three of whom had low vision) move through the four layers of abstraction during programming assignments. Furthermore, we specifically observed which concrete behaviors they employed within each layer. The four layers include the problem layer (the most abstract level, where the problem is discussed), the design layer (where a solution is sought), the code layer (where there is direct involvement with the code), and the execution layer (the least abstract level, where the code is run or the output is involved). Overall, across the different sessions the children move primarily through the design, code, and output layers, and (to a lesser extent, though present in all sessions) visit the problem layer. The patterns they engage in, including stretches in which they switch back and forth between coding and designing, occasionally visiting the problem layer or the output layer in between, suggest deliberate proceedings involving iterative processes of redesigning and debugging. Whereas in some sessions children focused on coding and thereafter testing the entire program, one pair especially (in sessions 3 and 6) showed an approach of coding and testing sub-parts of the program. Comparing the patterns to the previous study of young sighted learners with the educational bot Cubetto, it appears our learners show similar overall processes while working on their programming assignment. It can be seen as a valuable addition to the known concrete (relative) usability of the Bee-bot and Blue-bot for visually impaired learners that working with these bots also enables them to engage in more formal computational processes. The fast connection between input and output has been identified as inviting these processes, for learners overall and especially for learners with visual impairments. Further, compared to the Bee-bot sessions, the Blue-bot sessions were less dense, showing fewer behaviors and slower processes. Coding with the Blue-bot is obviously a slower process, because the action of picking and placing a card takes longer. In addition, the external representation can add to a more conscious and slower process. However, it did not appear that coding with the Blue-bot invoked more behaviors in the more abstract layers. It can be considered that for some of the learners the Blue-bot was actually quite difficult, especially after only one Bee-bot session. Consequently, the more sophisticated options of the Blue-bot might not necessarily invite more abstract thinking, because of cognitive overload. Cognitive overload is known to occur while learning programming. Taking into account as well that the tactile versions of the cards of the Blue-bot are in an exploratory phase, this might be especially relevant for learners with visual impairments, who generally already need to engage in or learn extra steps. A very simple tool such as the Bee-bot, where all the actions are clear, might free up space to work deliberately. This observation stresses the importance of continuing to improve accessibility also at the level of concrete access to materials such as educational robots.
It has been noted before that although tangible materials can be largely accessible for learners with visual impairments, full accessibility is hindered by the inclusion of small visual features . From the current context, it can be added that the additional effort or lack of access to part of a tangible material can also actually impact the potential to engage in higher cognitive thinking with the material. The exact behaviors low vision and blind learners engage in within the different layers can show how they concretely approach the process of abstraction with these bots. Our learners show a mix of anticipated actions, known from sighted learners , and spontaneous alternative actions. Furthermore, the extent to which alternative behaviors were used was clearly higher in the problem and design layer (compared to the code and output layer), where they were also more frequently employed by the blind learners (compared to the low vision learners). Taken together, in more abstract layers, more alternative approaches are taken in by blind learners. In interpreting this, it should be taken into account that this observation entails both that alternative actions are required, and that they are possible. The background of the model of the layers of abstraction stipulates that more abstract layers involve moving away from the concrete material, the specific programming language and environment, and move towards an approach to the problem in abstract terms . In the code and output layer, the blind learners engage regularly in anticipated behaviors, they are actively involved in the assignment by pressing the bot, using the cards and reader, and following the bot. Alternative access for them is inherent in these behaviors themselves: they can see, feel, or hear when they press the buttons as well as when they follow the moving bot. This indicates that at these levels it is mostly about the concrete accessibility of the bot and confirming again the fitting “hybrid” nature of the Bee-bot and Blue-bot. At the more abstract design and problem level however, it becomes more about how the materials facilitate abstract thinking, and the presence of alternative behaviors especially in the blind learners suggests exactly both the need for such behaviors and their possibility while working with the bots. A further look at the content of the alternative behaviors used in the problem and design layer indicates two ways in which our learners approach abstract concepts or help themselves form a mental model of the assignment. The first approach involves additional direct involvements with the bot. This can be seen both in the problem layer (putting the bot at the start, talking about the positioning of the bot) and, most often and clearly, in the design layer, where the bot is being moved by hand while programming. This way, instead of having the need to mentally represent the steps of the program coded so far as well as the place and orientation of the bot according to these steps within the environment, the bot is used to keep track. Consequently it seems blind learners attempt to keep the connection with the bot while working in the design layer. 
Previously in the interpretation of sighted learners behaviors, a split was considered within the design layer by making a distinction between “abstract design” (design unrelated to the specific robot or programming language, such as globally describing the route) and “concrete design” (“describe the solution in human language, while also containing elements of the specific robot or programming language”, such as counting squares ). It could be that the concrete design layer approach is both generally helpful for design-related practices, and supportive for blind learners for whom the direct connection with the bot is beneficial. A second approach towards abstract concepts shown by our visually impaired learners was physical enactment. This could be seen in the placement of a hand in the route to indicate the position of the bot (in the design layer) and in the interesting case of a blind student physically depicting a turn in order to understand what the type of turn made by the bot entails within the currently planned route (in the problem layer). For a sighted novice learner, it can already be difficult to grasp how the bot makes a “turn in place” without moving a step. Comprehending the size and type of the turn and placing it in the mental image of the route can be even more challenging for a blind learner. Acting out the spatial concept of a turn can be seen as an embodied cognition approach towards spatial thinking and abstract concepts . This approach, which entails that mental processes are mediated by body-based systems which include body shape, movement, and the interaction of the body with the environment, has been explored as an educational strategy within STEM fields and specifically within mathematics . In the latter field, it has also been explored how embodied cognition can especially help blind learners . Within the field of computer science education however this approach to facilitate the understanding of abstraction notions has only recently been proposed and has not yet been considered for our specific group of learners. Embodied cognition can take several forms, including hand gestures and acting out concepts or representations spatially which could be, as our case example suggests, highly beneficial for learners with visual impairments to conceptualize computing concepts by representing, interpreting, reasoning and communicating about these concepts . Finally, interestingly this connects to unplugged programming. Unplugged programming activities are often of a physical nature and are moreover known to be activating, engaging and particularly inclusive . Consequently, it can be all the more valuable to further consider how the physical nature of unplugged activities can not only be motivating but also facilitate the learning of abstract computing concepts especially for blind learners. The approach of this qualitative study was to capture our subjects’ experience, within this highly specific and under-explored area of abstract computational concepts in learners with visual impairments. Within our data collection, analysis, and documentation of results, we focused on staying close to the experience of the subjects, in a tailored and detailed manner . Consequently, our small and specific sample was fitting within this aim and approach, and generalization was not the primary intent. However, there are some limitations connected to our approach and set-up. First, we included solely learners with visual impairments, as opposed to sighted individuals as well. 
The direct comparison of the latter set-up would enable a fuller interpretation of young learners’ computational practices and underlying mental modeling and abstract thinking using the Bee-bot and Blue-bot in programming assignments. Given the complexities and diversity within the group of learners with visual impairments, a focused investigation into low vision and blind children can be seen as a suited first step. Relatedly, we do consider transferability of our findings to similar participants and contexts possible. The diversity of our sample should be carefully taken into account here, in terms of both vision and other specificities. Currently, our findings might be most directly applicable to special education settings, where this diversity is always present and adapted to. In other settings, the insights and ideas generated in this study can be further explored. Second, in the interpretation of our findings, the concrete accessibility of the set of materials of specifically the Blue-bot for blind and low vision learners should be taken into account. Although adapted tactile versions of the Blue-bots cards, designed for learners with visual impairments, were used in our study, these cards are still in a development phase. Consequently, the set of Blue-bot materials is currently not completely (validated) accessible. This could impact how well learners with visual impairments can engage in computational practices with the material, and it emphasises the need to continue to improve accessibility at the concrete level of materials. At the same time, however, how well individuals with visual impairments can work or learn with a not entirely accessible material is also in line with their daily context, where specifically in the case of computers and technology often it is necessary to “work around” accessibility issues . Third, in our data processing we relied upon one coder using a detailed coding scheme that was partially fixed and partially allowed for new observations within fixed categories. This approach fitted our qualitative study set-up, yet there is the risk that personal interpretation of the coder could have impacted the interpretation of the behaviors. Future studies could focus on continuing to understand learners with visual impairments’ approach to the process of abstraction, also by involving and directly comparing their processes and behaviors with those of sighted learners. Further, in order to support the development of computational concepts such as abstraction in visually impaired learners, supportive teaching strategies and instructions should be developed in connection to insights on approaches by these learners. Specific directions suggested in our findings, such as an embodied cognition approach, could be further explored as potentially helpful within early programming education in general, and especially for learners with visual impairments. A valuable option would be to make the connection with unplugged lessons as well. Our findings show that learners with visual impairments, using the Bee-bot and Blue-bot, engage in a formal computational way of working within the process of abstraction, including iterative actions of redesigning and debugging. Further, they engage in these computational practices using a mix of behaviors known from sighted learners as well as, especially the blind learners in the more abstract layers, alternative behaviors. The content of the latter indicates the preference to be physically involved and keep track of the bot and the plan. 
Moreover, it suggests how embodied cognition in the form of physical enactment can be helpful to grasp an abstract concept and mental representation. Overall, the previous operationalization of the model of the layers of abstraction in sighted learners can be meaningfully applied to low vision and blind learners, when elaborated with specific tactile and physical behaviors. Furthermore, such behaviors can be further established, possibly as part of an embodied cognition approach within inclusive computer science education, in order to encourage teachers to support learners with visual impairments in their conceptualization of abstract notions and mental representations within programming education.
PMC11697393
A total of 20 young, healthy participants aged between 19 and 29 years (M = 24.15 ± 3.08) took part in the study, including 11 female and 9 male participants. All participants were native German speakers, non-smokers, and reported not to suffer from any neurological or psychiatric conditions. They did not take any medications (except oral contraceptives), iron, or vitamin supplements. Individuals reporting difficulties falling or staying asleep, nightmares, or other sleep disorders were also excluded. All participants reported following a regular sleep/wake schedule with >6 hours of sleep per night and no shift work, night duties, or long-distance flights with jet lag in the 6 weeks prior to the experiment. Only participants who assessed themselves as individuals “dreaming on occasion” and normally having at least “fairly good recall” of their dreams were included. Participants spent one adaptation night in the sleep lab before the experimental night, in order to familiarize them with sleeping with EEG electrodes. Data from one participant were excluded from the analysis because of poor sleep quality, reducing the sample to a total of N = 19 participants. Before the start of each experimental night, the current well-being and health of the participants were assessed, including whether they had experienced any unusual kind of stress and had refrained from napping, alcohol, and caffeine on that day before the experimental night. Participants gave written informed consent and were paid for participation. The study was approved by the local ethics committee of the University of Tübingen. The study followed a within-subject design, examining the correspondence of dream reports obtained in a single experimental night to each of three different task plans with different execution status (completed, uncompleted, and interrupted). The task plans were entitled “Setting the table,” “Tidying the desk,” and “Getting ready to leave.” The first two scripts were derived from another study, while the third one was newly developed for the purpose of this study. Each task plan was assigned to one of the three execution statuses. Participants learned the scripts for all task plans in the evening and performed them either afterward (for the completed and interrupted conditions) or in the next morning (uncompleted). The assignment of task plans to the execution status conditions was balanced across conditions (completed – C, interrupted – I, uncompleted – U), distributed as follows: “Setting the table” (C = 7; I = 7; U = 6), “Tidying the desk” (C = 6; I = 6; U = 8), “Getting ready to leave” (I = 7; C = 7; U = 6). Also, the order of the execution status conditions for the tasks to be performed in the evening was balanced across participants, such that the “completed-interrupted” order occurred nine times and the “interrupted-completed” order occurred 11 times. During the nocturnal sleep period, dream reports were gathered during awakenings from either rapid eye movement (REM) sleep or non-REM (NREM) stage 2. The participants were informed that the experiment investigated the effect of sleep on memory for specific task plans and that awakenings for dream reporting would occur. The experimental procedure is illustrated in Figure 1A. All subjects reported to the laboratory at 8:00 pm. First, a questionnaire regarding participant data was completed to ensure all experimental inclusion criteria were met.
Then, EEG electrodes were attached, and the participants completed the Stanford Sleepiness Scale (SSS) and a mood questionnaire, followed by the Regensburg Word Fluency Test (RWT) and a Vigilance Task (VT), both assessing executive cognitive functions. After performance on these tests, the participants learned the action sequences for the three task plans as described in . The plans were entitled “Desk tidying,” “Getting ready to leave,” and “Setting the table.” Each task plan comprised an action sequence of five subtasks . For Desk tidying the subtasks were (1) opening a file, (2) filing documents, (3) sharpening a pencil, (4) sorting index cards, and (5) stacking articles. For Getting ready to leave, the subtasks were (1) shutting down the computer, (2) closing the window, (3) putting on a coat, (4) putting on a backpack, and (5) switching off the light; and for Setting the table, the subtasks were (1) spreading out the tablecloth, (2) distributing tableware, (3) polishing glasses, (4) folding napkins, and (5) lighting candles. The subtasks were presented sequentially on a computer screen, with the title of the task plan displayed above each subtask. Every subtask was shown for 6 seconds in a fixed order. Following this sequential presentation, the task title and all five subtasks were presented together for 30 seconds to allow participants to consolidate the information. This process was repeated three times for each task plan, with one complete cycle through the three plans constituting a learning trial. After the initial learning phase, an immediate recall test was performed where each task title was presented, and the participant was asked to recall (verbally) the five subtasks of the plan in the correct order. It was ensured that the participants memorized the exact wording and order of the subtasks. If they did not achieve 100% correct recall on all three task plans during two consecutive recall tests, additional learning trials were conducted. The second (and following) learning trials followed the same procedure as the first, but with each task plan being presented only once. On average, participants required 2.9 ± 0.9 learning trials (range 2–5) to meet the recall criterion. To enhance task involvement, the participants were informed that they would receive an additional 15 € for correctly recalling all three task plans (5 € per plan). After learning, the participants were informed which two task plans were to be performed in the evening and which one in the morning. Plans were executed under observation and evaluation by the experimenter, with the participant not being allowed to ask any questions while performing the task plans. The materials used to execute the plans were prepared beforehand. Once the participant had performed on the first plan in the evening he/she left the room so that the materials could be reset for the next task plan. For the interrupted condition, the experimenter interrupted the execution of the task plan after the participant had completed the first subtask. Under the pretext that an error had occurred, the participant was instructed to perform this task the next morning. After performing the two task plans, the participants went to bed. For the experimental sleep interval, lights were turned off at the participant’s habitual bedtime. After about 3 hours of sleep or latest at 2:00 am, the first awakening occurred. 
It was ensured, by visual inspection of the ongoing polysomnographic recordings, that the sleep stage was stable for at least 10 minutes before an awakening. Awakenings were performed up to six times per night, from NREM stage 2 and REM sleep, to cover both NREM and REM sleep-associated dreams [42–45]. Subsequent awakenings were always carried out after the participant had regained sleep for at least 30 minutes. For awakening, lights were turned on and the participant was addressed by their name and asked to sit up and put on a headset for voice recording. A standard set of questions followed: (1) Tell me everything that was going through your mind before you were woken up. (2) Can you remember any details? (3) And further? (4) Was it a dream or a thought? and (5) Was it pleasant, unpleasant, or neutral? Question 3 was repeated until the participant explicitly reported not having any further memory. Lights were turned off for the participant to return to sleep once no further details came to mind. The participant was awakened the next morning at about his/her usual wake-up time. The SSS, mood questionnaire, RWT, and VT were performed a second time about 30 minutes after awakening. Throughout the night, the EEG was recorded from six channels (F3, F4, C3, C4, P3, and P4) referenced to two electrodes attached to the mastoids (M1 and M2), using a BrainAmp DC amplifier (BrainProducts, Munich, Germany). The ground electrode was placed on the forehead. Impedances were always kept below 5 kOhm. Additionally, vertical and horizontal eye movements were measured (VEOG and HEOG), as well as an electromyogram (from two electrodes placed on the chin). Signals were band-pass filtered between 0.3 and 30 Hz (EEG and EOG signals) and between 5 and 150 Hz (EMG signal), sampled at 500 Hz, and stored for offline analyses. Visual scoring of 30-second polysomnographic records followed the criteria outlined by Rechtschaffen and Kales. Audio-recorded dreams were transcribed into written texts before analysis. To quantify the extent to which the task plans at the different execution states were incorporated into the dream reports, we analyzed the semantic similarity between dreams and the respective task plans. These similarity analyses were performed in two different ways, first using a traditional approach based on subjective ratings and, second, based on an AI large language model. For the rating-based approach, two colleagues, sleep experts with no special experience in the assessment of dreams, were asked to rate the dream reports according to a standardized scale derived from a previous report by Schredl. The first part of the scale required rating the general extent of alignment between the reported dream and the specific task plans on a scale from 0 to 8 (indicating no vs. high correspondence). The second part of the scale aimed at a similarity rating based on the occurrence of certain core elements characterizing the task plans. For this, for each task plan, 11 core elements, i.e. objects and activities that characterized the plan, were selected. The rating required assigning, for each of these core elements, a score between 0 and 2, with 0 indicating that the element did not occur in a dream report, 1 indicating that the element occurred metaphorically or indirectly, and 2 indicating that the element was directly named in the dream report, with the sum of the 11 scores defining the similarity between the dream report and the respective task plan.
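As a minimal sketch of this core-element scoring (not the raters' actual worksheet), the following function sums 11 per-element codes of 0, 1, or 2 into one similarity score; the example codes are invented.

```python
# Minimal sketch of the core-element similarity score described above; only the
# 0/1/2 coding and the sum over 11 core elements follow the scale, the example
# codes are invented.
VALID_CODES = {0, 1, 2}  # 0 = absent, 1 = metaphorical/indirect, 2 = directly named

def core_element_score(codes):
    """Sum the 11 per-element codes into one dream-plan similarity score (0-22)."""
    if len(codes) != 11:
        raise ValueError("expected one code per core element (11 in total)")
    if any(code not in VALID_CODES for code in codes):
        raise ValueError("each code must be 0, 1, or 2")
    return sum(codes)

# Hypothetical rating of one dream report against one task plan:
print(core_element_score([2, 1, 0, 0, 2, 0, 1, 0, 0, 0, 0]))  # -> 6
```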
Raters were blinded as to the execution state conditions and as to whether a dream was reported after an NREM or REM sleep awakening. For the second analysis, we used a large language model to objectively quantify the extent of dream incorporation. Dream incorporation was measured by the degree of semantic similarity between the task plans and the dream reports. Semantic representations of the task plans and dream reports were extracted using the transformer-based representational model Bidirectional Encoder Representations from Transformers (BERT), a natural text embedding model capable of quantifying semantic textual similarity. Specifically, we used the German version (GermanBERT), which is pretrained on written German texts, i.e. German Wikipedia articles, German OpenLegalData, and German news articles. The parameters of the model were trained by (1) splitting the input texts, i.e. the dream reports and task plans, into tokens representing semantic units (words, subwords) of the input texts, (2) masking some tokens and feeding back the corrupted sentence (with masked tokens) as input into the model, and (3) asking the model to reconstruct the original tokens. Since BERT was pretrained on written language, the transcripts of the dream reports were additionally edited to remove filler words and to correct grammatical errors. Similarly, the task plans were prepared for these analyses by transforming the bullet-point descriptions of the subtasks into full sentences. Then, each dream report was summarized automatically using a BERT model fine-tuned for text summarization before it was given as input to GermanBERT. The GermanBERT encoder embeddings (vectors that take into account single text units and their semantic relationships to other units) were used as a representation of the dream. While these embeddings are influenced by sentence length and parts of speech, this information is not explicitly encoded in the embeddings. We then encoded each of the five subtasks in each of the three task plans into embeddings following the same procedure (without prior summarization). Cosine similarity between the embeddings was calculated, resulting in a similarity score for each dream report with each action of the task plans, from which the maximum similarity score for each task plan was then chosen. Cosine similarity measures how closely aligned two vectors are, regardless of their magnitude, by calculating the cosine of the angle between them. In our study, we used cosine similarity to quantify the semantic relatedness between dream reports and task descriptions, with scores ranging from −1 (opposite meaning) to 1 (identical meaning). Because cosine similarity values for the dream reports fell into a more limited range at the higher end of the scale, we thresholded the cosine values, using only values greater than or equal to 0.85, to focus the analysis on dreams with high similarity scores. We then compared the number of dream reports with the above criterion similarity scores for a given task plan between the execution status conditions completed, interrupted, and uncompleted. In a further sentence-wise analysis, we split the written dream reports into single sentences. Based on GermanBERT, we then calculated the cosine similarity score for each sentence of a dream report and each subtask of a task plan. The resulting maximum similarity value for each dream report was then used to allocate the dream report to one of the three execution status conditions according to a forced choice procedure.
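A minimal sketch of this embedding-and-similarity step is given below. It assumes the Hugging Face model identifier "bert-base-german-cased" for GermanBERT and uses mean pooling over the token embeddings; the exact checkpoint and pooling strategy used in the study are not specified in the text, the prior automatic summarization step is omitted, and the example texts are invented. Only the maximum-over-subtasks logic and the 0.85 criterion follow the description above.

```python
# Sketch of report-to-plan similarity via GermanBERT embeddings and cosine
# similarity. Assumptions: model id "bert-base-german-cased", mean pooling,
# no prior summarization; example texts are invented.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModel.from_pretrained("bert-base-german-cased")
model.eval()

def embed(text):
    """Return one vector per text: mean of the encoder's token embeddings."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, n_tokens, 768)
    return hidden.mean(dim=1).squeeze(0)

def max_similarity(dream_report, subtasks):
    """Cosine similarity of a dream report to each subtask; keep the maximum."""
    report_vec = embed(dream_report)
    scores = [
        torch.nn.functional.cosine_similarity(report_vec, embed(s), dim=0).item()
        for s in subtasks
    ]
    return max(scores)

# Hypothetical usage with invented texts:
subtasks = ["Die Tischdecke wird ausgebreitet.", "Das Geschirr wird verteilt."]
score = max_similarity("Ich habe von einem gedeckten Tisch geträumt.", subtasks)
print(score, score >= 0.85)  # 0.85 = criterion used for the report-level analysis
```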
Similarity scores for dream reports were generally analyzed using analyses of variance (ANOVA) including repeated measures factors representing the executive status conditions (completed, interrupted, and uncompleted) and the sleep stage (NREM stage 2, REM) prior to the obtained report, with subsequent post-hoc t -tests used to specify the significance of pairwise comparisons. Pearson’s correlation coefficients were used to assess interrater reliability in the analyses based on subjective ratings. Given that similarity scores for the dream reports with the above criterion similarity in the AI language model-based analyses were not uniformly distributed among execution status conditions and task plans, we focused on the statistical analysis of these scores on nonparametric testing using χ 2 tests. Prior to testing, the number of dream reports above the criterion in each execution status condition and for each task plan was additionally divided by the total number of dream reports obtained for each execution condition and task plan. The χ 2 test was also used to detect deviations from equal distributions of dream reports collected in different sleep stages, conditions, and tasks. The level of significance was set to p = .05 for all statistical tests, and p = .01, in case of directed one-tailed testing of hypotheses. A total of 117 awakenings were performed, equally distributed across REM sleep and NREM stage 2, i.e. 59 (50.4%) awakenings were performed in REM sleep and 58 (49.6%) awakenings in NREM stage 2 sleep ( p = .926, χ 2 test). A dream was recalled in 86 (73.5%) of the awakenings. Of these 86 dream reports, 51 (59.3%) were collected after REM sleep awakenings and 35 (40.7%) after NREM stage 2 sleep awakenings, resulting in a trend toward more dream reports collected after REM sleep than NREM stage 2 awakenings ( p = 0.084, χ 2 test). Since participants’ ability to recall dreams at each awakening differed, the number of dream reports was not evenly distributed among task plans and execution status conditions ( Table 1 ). SSS sleepiness scores averaged (mean ± standard deviation) 3.1 ± 0.85 in the evening before sleep, and 2.5 ± 0.76 in the next morning. Performance scores on the RWT averaged 16.4 ± 4.81, in the evening, and 16.6 ± 4.32 in the next morning. The two raters only moderately agreed on their ratings of the dream reports. Although significant, both correlations between their ratings of the general similarity between task plans and dream reports as well as correlations of their judgments based on the occurrence of core elements of the task plans (objects, activities) in the individual dreams were only of medium size (0.52 < r < 0.65, p < .01, Pearson’s correlation). ANOVA performed on ratings collapsed across both raters did not reveal any significant difference in similarity ratings between any of the execution status conditions or awakenings from REM or NREM stage 2 (all p > .67 for respective ANOVA factors). Values collapsed across general similarity ratings and core element-based ratings indicated for one of the rater's highest dream incorporation for the completed task plans (1.02 ± 0.20), medium for uncompleted plans (0.98 ± 0.20), and lowest for interrupted task plans (0.80 ± 0.17) whereas for the other rater, values were highest for the interrupted task plans (1.20 ± 0.22), medium for completed plans (0.78 ± 0.14) and lowest for the uncompleted tasks (0.70 ± 0.15). 
The overall insufficient agreement between our raters is consistent with a great body of findings in this field of dream content analysis and led us to switch to an AI-based approach. Here, we determined the semantic similarity of dream reports by applying a large language model (GermanBERT) to transcripts of the reports . Cosine similarity scores were calculated for the whole texts, indicating the similarity between a dream report and one of the three task plans. For all task plans, similarity scores ranged between 0.54 and 0.90, with maximum frequencies in the 0.84–0.90 range , with this distribution indicating that a minimum similarity (of around 0.54) to each of the three task plans is basically reached by any of the dream reports. The direct comparison between task plans, on the other side, indicated that the task plan “Setting the table” yielded distinctly higher similarity scores than the two other task plans ( F (1.040,88.43) = 116.9, p < .0001, for ANOVA main effect of task plan), with this effect being independent of the execution status condition the task plan was assigned to . This finding points to an a priori difference in our task materials, with a higher likelihood for the “Setting the table” plan to be similar to a dream report than for the other two plans. Given that the frequency distribution of similarity scores indicated that each dream report shows at least a minimum similarity to any of the three task plans (of around 0.54), we focused our analyses on only the dream reports exhibiting substantial similarity to one of the tasks, adopting a criterion similarity score of ≥0.85. Indeed, we assumed that higher similarity scores are associated with a higher probability that this similarity was related to specific features of one of the three task plans. Note, although the ≥0.85 criterion is arbitrary, virtually the same results were obtained with lower criteria, up to ≥0.75. Comparing each dream report with each of the three task plans, we revealed that out of all 86 dream reports, 29 reached a semantic similarity score ≥0.85 to the task plan assigned to the uncompleted execution status condition, 24 dream reports reached the ≥0.85 criterion for the interrupted tasks, and 20 for the completed task plans. Although descriptively this pattern concurred with our hypothesis of an increased incorporation into dream reports of contents from uncompleted and interrupted task plans in comparison with completed task plans, it did not reach significance (χ 2 (2,73) = 1.67, p = .4439, for the comparison between execution state conditions), which we hypothetically attributed to the fact that our task plans showed a priori differences in the likelihood of being highly similar to a dream report with the highest likelihood for the “Setting the table” task plan . Indeed, analyzing separately similarity scores for the different task plans, we found that significantly more dream reports were semantically similar to the “Desk tidying” and “Getting ready to leave” task plans when they were uncompleted (Desk tidying—24, and Getting ready to leave—46) or interrupted (Desk tidying—31, and Getting ready to leave—27) compared to being completed (Desk tidying—5, Getting ready to leave—25; Desk tidying: χ 2 (2,60) = 18.10, p < .001; and Getting ready to leave: χ 2 (2,98) = 8.23, p < .05; for the comparison across all three execution state conditions, see Figure 2C for pairwise comparisons). 
On the other hand, for the “Setting the table” task plan no such pattern was obtained (χ 2 (2,99) = 2.24, p > .30). We found no comparable differences between the execution status conditions in separate analyses of dream reports obtained after REM sleep awakenings (all p > .36) or after NREM stage 2 sleep awakenings (all p > .33, χ 2 test). Moreover, an exploratory control analysis of word counts for dream reports with the highest similarity to task plans revealed no significant differences between the execution status conditions ( p = .5052, F (2,28) = 0.7053). To further validate our large language model-based approach, in a second analysis, we made use of a forced choice method where dream reports were allocated to one of the three execution status conditions after a sentence-wise comparison of dream reports and task plans . This forced choice approach appeared to be also favorable against the backdrop that the similarity scores of an individual dream report for the three different task plans were generally rather close to each other, i.e. showed relatively low variability in comparisons with the high variability of similarity scores among the different dream reports (averaged across task plans; F (1,83) = 0.0014, p < .002, for a direct comparison between respective variances). This sentence-wise analysis revealed similarity scores ranging between 0.78 and 0.90 for each of the three task plans, with maximum frequencies around 0.86 . Similarity scores again significantly differed for the three task plans ( F (2, 249) = 8.7, p < .0005, for ANOVA main effect of task plan), with this effect being independent of the execution status condition the task plan was assigned to ( F (4, 249) = 0.64, p = .63, for ANOVA task plan × execution status interaction). When we assigned each dream report to the execution status condition with the highest similarity score for this dream report, we found that the lowest number of dream reports, i.e. 20 reports, were assigned to the completed condition, the number of assigned reports was intermediate (28 reports) for Uncompleted task plans, and highest for interrupted task plans (38 dream reports ( χ 2 (2,86) = 5.678, p < .01, one-tailed χ 2 test, for the comparison across all three conditions, Figure 3B and C ). In this study, we explored whether intentions for future actions influence dream content. Employing an AI-based large language model analysis, we show that task plans that have not been completed before sleep and, hence, remain active during sleep, influence the content of a dream to a greater extent than tasks that are completed before sleep. Specifically, tasks whose execution was interrupted before sleep or whose execution was anticipated for the morning after sleep produced dream reports of greater semantic similarity to these tasks than task plans that were completed before sleep. Whereas firm evidence has been accumulated that dreams incorporate past experiences , especially if they are emotional [ 54–56 ], our findings provide first-time experimental evidence that dreams also incorporate anticipated experiences, i.e. future plans. In psychological terms, our findings relate to the well-known Zeigarnik effect or the intention–superiority effect which describes the phenomenon that a planned action is better retained in memory or in a heightened state of activation as long as the plan is not executed [ 57–59 ]. 
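For the forced-choice allocation described above (sentence-wise comparison followed by assignment to the task plan with the highest similarity score), a minimal sketch could look as follows; the `embed` helper and `task_plans` dictionary from the previous snippet are reused, and the simple sentence splitter and mean aggregation over sentences are assumptions.

```python
import re
import torch

def sentence_scores(report: str, plan: str) -> float:
    """Average sentence-wise cosine similarity between a dream report and a plan."""
    sentences = [s for s in re.split(r"[.!?]+", report) if s.strip()]
    plan_vec = embed(plan)                # embed() as defined in the previous sketch
    sims = [torch.nn.functional.cosine_similarity(embed(s), plan_vec).item()
            for s in sentences]
    return sum(sims) / len(sims)

def forced_choice(report: str, plans: dict) -> str:
    """Assign the report to the task plan (and hence condition) with the highest score."""
    scores = {name: sentence_scores(report, plan) for name, plan in plans.items()}
    return max(scores, key=scores.get)

# The resulting per-condition counts can then be compared with a chi-square test:
# assignments = [forced_choice(r, task_plans) for r in dream_reports]
```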
It is assumed that a “tension” sometimes carrying also an emotional tone , drives ongoing processing of memory representations connected to the plan, as long as it is not executed. This tension and associated processing of plan-related representations is not necessarily conscious, and we here provide evidence that it extends into sleep biasing the content of dream reports. This conceptual view is in line with multiple studies showing that sleep promotes problem solving on tasks that remained unsolved before sleep, probably due to a subliminal ongoing processing of the problem . Indeed, the incorporation of experienced content into dreams has likewise been linked to an ongoing reprocessing of respective memory representations that not only supports the consolidation of respective memory but simultaneously, expresses itself in dream reports that are semantically biased toward the reprocessed memory contents . Studies of prospective memories for plans and intentions have indicated a greater benefit for such memories of uncompleted plans from slow wave sleep (SWS) than REM sleep . Hence, assuming a direct link between processes of memory consolidation and dreaming, one might expect that dream reports after awakening from SWS show a greater similarity to the uncompleted task plans than reports after REM sleep awakenings. The present data remain inconclusive in this regard for two reasons. First, rather than in SWS, we awakened the participants in NREM stage 2 sleep during which the reprocessing of the to-be-consolidated memory representations might be less intense than in SWS, though evidence for such difference is mixed [ 62–64 ]. Second, our analyses relied on a rather small number of dream reports. The data set, hence, did not provide sufficient statistical power for reliable analyses on subsets separating dream reports between awakenings from NREM stage 2 and REM sleep, considering the size of f = 0.48 (G*Power version 3.1.9.7) for the effect of the plan execution status on dream report similarity for our analysis across all (NREM stage 2 and REM sleep) dream reports. Our finding of an incorporation of uncompleted task plans into dream reports appears to be especially noteworthy in that it derives from an objective large language model-based machine-learning approach, i.e. an approach which, except for a most recent study , has so far not been used for dream content analyses. This large language model-based approach overcomes the weakness of traditional analyses of dream reports based on subjective ratings which are notoriously unreliable suffering from modest inter-rater agreements . Here, using two independent raters to classify dream reports according to their similarity to the different task plans, we also found only rather low interrater reliability and, consequently no distinct differences in similarity between dream reports and task plans. Conceivably, we could have strengthened inter-rater reliability by a more intense prior training of our raters on “sham reports” including the discussion of discrepant scores between raters . Perhaps, we could have also enhanced our ratings if—like in other studies —we had additionally asked our participants themselves to rate the similarity of their dream reports with the task plans. Exclusively relying on ratings by other persons, our analyses did not yield any conclusive results. 
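To make the power consideration above concrete, a sensitivity/power calculation for a comparison across three conditions can be sketched with statsmodels; only the effect size f = 0.48 is taken from the text, while the alpha and power values below are assumed for illustration.

```python
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
# Total sample size needed to detect Cohen's f = 0.48 across k = 3 conditions
# at alpha = .05 with 80% power (alpha and power are assumed values).
n_total = analysis.solve_power(effect_size=0.48, k_groups=3, alpha=0.05, power=0.80)
print(f"required total N ≈ {n_total:.0f}")
```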
Nevertheless, although objective, our large language-model-based machine-learning approach bears several limitations, altogether calling for further confirmation of our main findings. First of all, we applied the large language model-based approach post-hoc, and only after the subjective ratings turned out to be unreliable. Related to this, our task plans were not particularly tailored for a large language model-based analysis of their semantic similarities. Basically, the task plans turned out to be too similar to each other, resulting in a large overlap between the task plans with respect to their similarity to the dream reports. While we adopted our task plans from a foregoing study targeting the persistent activation of intentions in memory, task plans with greater semantic differences in the activities, objects, and contexts might have increased the differences in similarity between plans and dream reports. Interestingly, the “Setting the table” plan yielded significantly higher similarity scores than the two other task plans, suggesting that certain activities may be more prone to dream incorporation than others. Obviously, future studies adopting new task plans should rule out such a priori differences in task plans before experimental use. Such future studies should also overcome other limitations of our study arising, e.g. from a rather crude assessment of sleep lacking occipital recordings which may be particularly important for a precise sleep scoring assessment and therefore relevant in analyses of (visual) dreams. Our large language model-based approach, using GermanBERT as an embedding extractor, revealed significant differences in dream report similarity that confirmed our a priori hypotheses, supporting the validity of this approach. However, statistical significance per se does not necessarily imply that this approach is also the most valid and optimal. It is to emphasize, however, that we could in principle (internally) replicate our findings here, using two different approaches, i.e. a text-based and a sentence-based, strategy of detecting similarity between dream reports and plans, in combination with a statistical assessment of different target parameters (number of above-criterion similarity reports vs. forced choice allocation of the maximum similarity report). This mutual confirmation further corroborates the validity of our approach, although there may be other more optimal strategies for detecting semantic similarity. An example is BERTScore which is a language generation evaluation metric based on BERT contextual embeddings. In contrast to our approach, in which we compute cosine similarities between embeddings of task plans and dream reports, it computes the similarity on a token level, taking into account their context via contextual embeddings, i.e. a strategy potentially allowing for more fine-grained comparisons between dream reports and task plans. Generally, the use of large language models for analyzing dream content is in its beginnings but, eventually, may turn out a promising tool also for other topics of dream research such as the differentiation of reports of dreams versus more or less emotional wake experiences as well as the differentiation of dream reports among individuals, with potentially important therapeutical implications. Whatever the case, GermanBERT administered to the present data set revealed results confirming our a priori hypotheses. 
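To illustrate the token-level alternative mentioned above, the following sketch computes a BERTScore-like greedy matching of contextual token embeddings between a dream report and a task plan, reusing the tokenizer and model from the earlier snippet; this is a simplified illustration, not the official bert-score implementation (which adds refinements such as optional IDF weighting).

```python
import torch

def token_embeddings(text: str) -> torch.Tensor:
    """Normalized contextual embeddings for every token (tokenizer/model from above)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]    # (tokens, dim)
    return torch.nn.functional.normalize(hidden, dim=-1)

def greedy_token_similarity(report: str, plan: str) -> float:
    """BERTScore-style F1: greedy cosine matching between token embeddings."""
    r, p = token_embeddings(report), token_embeddings(plan)
    sim = r @ p.T                                        # pairwise cosine similarities
    recall = sim.max(dim=1).values.mean()                # each report token vs best plan token
    precision = sim.max(dim=0).values.mean()             # each plan token vs best report token
    return (2 * precision * recall / (precision + recall)).item()
```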
Nevertheless, our approach and findings require further confirmation and external validation, ideally through application to other similar data sets, although other validation strategies are conceivable. | Study | biomedical | en | 0.999998 |
PMC11697404 | Health-related cultural norms serve as reference points that influence how individuals assess their health. These norms, however, are shaped by personal expectations and narratives, which play a key role in how people experience, interpret, and maintain their health . For instance, some evidence reveals that culturally specific practices can influence ‘objective health status’ by limiting the spread of diseases in epidemics or offering protection against communicable and non-communicable health threads . Although health assessments are well known to correlate with objective measures of health status, to date, there is limited empirical evidence of how culturally persistent such health self-assessments are, which limits their cross-country comparability. An opportunity to study cultural influences is by examining samples of individuals who have migrated to countries from a specific sending country where we can measure self-assessed health too. This is the case because the effect of culture partially varies with some slow-moving features such as language and traditions . Migrant samples allow studying the effect of cultural persistence once we control for citizenship regulations, welfare institutions, or the duration of an individual's residence in a country. The intuition behind the methodology is that health assessment priors can be conceived as portable reference points of what is regarded as ‘good’ or ‘bad’ health. Hence, a measure of the cultural persistence of health assessments can be extracted by examining the systematic association between migrants’ health assessments and the health assessments of individuals from their home countries. This is possible in surveys that contain large samples of immigrants from multiple sending and host countries to mitigate potential selection biases. ‘Cultural persistence’ refers to the paucity of the culture individuals are being brought up in, namely the extent to which health assessments of migrants are influenced by the culture of the sending countries. Accordingly, we primarily focus on evidence from second-generation migrants, who have grown up in their host country, but might still hold cultural priors aligned with the health assessments of their parents’ sending country’. This is true, insofar as the persistance of health assessments is not biased by the effect of the host institutions, and especially when additional controls are included for citizenship. However, migrants do not qualify as being part of a random sample of their population of residence, so in examining immigrant data it is especially important to control for any characteristics that make immigrants different from the rest of the population, including the fact that first-generation migrants have not been brought up in the host country while second generation migrants might have. In this paper, we investigate the relationship between the health assessments of migrant individuals—whose parents or themselves were not born in the host country—and the average health assessments of their (or their parents') sending (or home) country. We draw upon seven waves of the European Social Survey (ESS) 2004 - 2016 containing self-reported health records from 30 different European member states. The ESS is unique in that it contains a consistent measure of self-assessed health and allows us to include several controls for important alternative explanations that could drive the association between migrants and their home countries' health assessments. 
Such controls can help identify some of the potential sources of migrant selection (for example, time in the host country or citizenship), as migrants may differ from population averages in key observable dimensions. Nonetheless, an important methodological concern when using migrant records is that the health status of immigrants at the time of migration might be better than that of natives. Given that migration is not a random process, but a rather costly one, only those individuals who are healthy enough to bear the associated costs might undertake the move (also known as the ‘healthy migrant’ effect). However, evidence from European migrants calls the 'healthy migrant effect' into question. We make several contributions to the literature. First, we advance the discussion on the cultural determinants of health assessments, an area that has been underexplored thus far. Our findings also extend beyond the health assessments of first-generation immigrants and measures of happiness. Specifically, we provide evidence that culture exerts a long-term influence by shaping the reference points individuals use when assessing their own health. If we were to compare two individuals in the same health state but who assess their health differently, then this difference could be interpreted as stemming from different cultural reference points in the assessment of their own health status. Second, previous research has used individuals' health in their country of origin as an instrument to exploit exogenous variation in health assessments, allowing for the examination of its impact on labor market decisions, but it does not examine the cultural transmission mechanisms. This paper considers a number of potential threats and biases, as well as potential genetic effects, by adding objective measures of health. Finally, this research contributes to the so-called ‘epidemiological approach’ literature that compares immigrants' preferences to the average preferences of people in their countries of birth, which has been used to explain the use of traditional medicines and differences in savings. We study the cultural persistence of such assessments and how robust such persistence is to the inclusion of country-of-residence fixed effects and different subsamples. Next, we examine a number of mechanisms to understand different explanations for the cultural effect. It is worth mentioning that the paper most closely related to ours is Roudijk et al., which explores how country of origin influences health and well-being assessments. However, this literature does not address the cultural persistence of such health assessments or the mechanisms that underpin them. We find evidence consistent with the presence of strong cultural persistence in health assessments. Our estimates are robust across a series of sensitivity checks, empirical strategies, and the addition of an important set of controls that account for different forms of selection. Finally, our estimates reveal heterogeneous effects by gender, age, and region. The structure of the paper is as follows. The next section discusses previous research, and sections two and three report the data and the empirical strategy. Section four contains the results, followed by robustness checks, and the final section concludes. Culture and health. Culture refers to a system of shared understandings and values that can influence the reference points individuals use in making health assessments. 
Such shared values can act as triggers (or barriers) for certain behaviours, such as seeking health care, or spending time in or near natural landscapes . It is conceivable that cultural reference points exert a direct impact on how people perceive health, for instance influencing how illness and pain are perceived. More specifically, health professionals in some European countries such as Belgium, Switzerland, and Germany, employ the term “Mediterranean syndrome” to refer to individuals who “are known for their tendency to present with diffuse complaints and exaggerate pain” . Healthy migrant effect (HME) . Examining the health assessment of individuals transitioning from one culture to another can help identify the role of cultural reference points. Previous studies using migrants’ records have provided rich evidence of how migrants adapt to a new culture, and more specifically, how health outcomes are influenced by time spent in a country. Indeed, migrants are argued to exhibit ‘protective cultural factors’ such as a healthier lifestyles. Nonetheless, the health advantage of migrants declines with time spent in the host country. For instance, the health of Latin American migrants to the United States appears to deteriorate as they stay in the country longer, indicating an unhealthy adaptation . However, other evidence suggests that the longer an immigrant stays in the country, the better their health . Indeed, although some evidence suggests that health benefits are lost in childhood , and many health conditions worsen across generations, exposure to a new environment can trigger the adoption of native behaviors . Yet, such healthy migrant advantage disappears in European countries which might be explained by the fact that migrants come from a larger set of sending countries compared to the United States, and there is a large variation in host cultures . Constant et al. did not find evidence of a healthy migrant effect in Europe. Hence, Europe is an ideal setting to study the cultural persistence of health assessments, given its large variation in cultures and lesser exposure to migrant selection. We draw upon data from the European Social Survey (ESS), and more specifically waves 2 to 8, measuring the health self-assessment of Europeans every two years between 2004 and 2016 inclusive . 1 All cross-sections were first merged, and then variables made consistent across waves. The data includes 30 host countries, and the survey contains information about the respondents’ sending country or that of his/her father and mother. Individual-level data from the ESS was matched with health assessment measures constructed at the country level from the World Values Survey (WVS) for over 90 countries . The World Values Survey contains data for many countries, but the survey is conducted every five years , and samples of countries frequently exhibit significant attrition. As a result, the sample is used to compute average health assessments in the home country from 2000 to 2014, though lag averages from 1981 to 1998 are also used in robustness checks. We also account for per capita health expenditure in the country of residence, as retrived from the World Bank database. For all waves, we use self-reported health assessments, which allows us to take advantage of variations in health assessments over time in host countries. However, health measures may be more dependent on changes in individual specific circumstances rather than changes in context (e.g., migration). 
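The matching of individual-level ESS records to country-of-origin averages constructed from the WVS, described above, can be sketched as a simple merge; the column names and example values below are hypothetical.

```python
import pandas as pd

# Individual ESS records (hypothetical column names and values).
ess = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "self_rated_health": [2, 3, 1],            # 1 = very good ... 5 = very bad
    "country_of_origin": ["TR", "PL", "IT"],
    "country_of_residence": ["DE", "DE", "FR"],
})

# Country-level mean self-assessed health computed from WVS waves 2000-2014
# (hypothetical values).
wvs_means = pd.DataFrame({
    "country_of_origin": ["TR", "PL", "IT"],
    "sah_origin_mean": [2.4, 2.6, 2.2],
})

merged = ess.merge(wvs_means, on="country_of_origin", how="left")
print(merged)
```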
We draw on two samples from our master dataset: one for first-generation migrants (people born in one country who moved to another) and one for second-generation migrants (defined as children of first-generation immigrants, e.g., those with one or both parents not born in the same country as the child). There were 24,880 and 22,319 observations in these analytic samples, respectively. Our primary variable of interest is self-reported health, which is assessed subjectively on a five-point scale ranging from very good to very bad. The question posed is: “How is your health in general?” Respondents can choose from the following options: very good, good, fair, bad, or very bad (Table A1 in the Appendix for details). It is important to acknowledge that while self-reported health is the most used measure of health, it is not without its biases and can show inflated responses and significant cross-country variation . Given that health assessments are a proxy for latent health, cultural biases in self-assessments are a proxy for cultural effects on health. To analyze some of these effects, we carry out subsample analysis where the composition of the countries differs, alongside analysis of measures of health that are not directly self-reported. Our key explanatory variable refers to the average health assessments in the sending country, specifically distinguishing both the father and mother's country of birth for second generation migrants. Given that the correlation of health assessments can be explained by other potential pathways, we include several controls. Such controls capture individual-specific conditions that can independently influence the way health is individually assessed. Furthermore, given that health declines with age and exhibits gender and household-specific differences, we control for several socioeconomic and demographic characteristics (gender, age, and household size). Institutional explanations for an association in migrant's health assessments such as citizenship status are also considered. These are important measures, as in some countries migrant's citizenship is not automatic after birth. Our data also contains records on how long individuals have lived in the country of residence, and whether they belong to a minority ethnic group. Alongside educational attainment, we include main occupational activity and household net income quintile, which measure socio-economic determinants of health. The baseline specification includes wave controls. In Europe, free mobility between member states ensures limited barriers to the access health care across countries of the European Union. Rights are more restricted to undocumented migrants, although they have a right to health care under legal conventions of the European Union as established in article 35 of the EU Charter of Fundamental Rights. However, countries can differ in whether they provide care beyond emergency care in the first instance. Hence, in our analysis we will perform a specific heterogeneity analysis distinguishing the origin of migrants to account for differences in their rights. Summary statistics are reported in Table A1 in the Appendix. Consistent with studies using the same data, we find that immigrants compare to the general population on many observable variables, with some differences in religion and education, which we control for along with several other controls. In our analysis we specifically distinguish first-generation (migrants themselves) and second-generation migrants (children of migrants). 
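A minimal sketch of how the two analytic samples just described might be constructed from the survey variables; the boolean column names are assumptions.

```python
import pandas as pd

def split_generations(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Return (first_generation, second_generation) analytic samples.

    Assumed columns: 'born_in_country' (respondent born in country of residence),
    'father_born_in_country' and 'mother_born_in_country' (booleans).
    """
    first_gen = df[~df["born_in_country"]]
    second_gen = df[
        df["born_in_country"]
        & (~df["father_born_in_country"] | ~df["mother_born_in_country"])
    ]
    return first_gen, second_gen
```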
We assume that first-generation migrants have been affected by the institutions of both home and host countries and might even have been affected by transition costs. Hence, the results for first-generation migrants do not reflect cultural effects alone but are influenced by other effects that we capture when we examine the effect of time in the host country. Similarly, given that first-generation migrants chose to migrate themselves, one can expect first-generation migrants to have more incentives to adopt the health-related norms of the destination country, and hence there might be some selection into migration to certain countries based on, for instance, attitudes of the host population. In contrast, second-generation migrants have been raised in the same country as natives and did not choose their country of birth. Hence, if controls for alternative mechanisms are included, evidence of correlation in health assessments suggests cultural persistence in health assessments. Based on the above considerations, we examine the association between migrants' health assessments and that of their sending country using a reduced-form estimate that draws on the following specification: (1) $H_{ijt} = \rho \bar{H}_{j} + \varphi X_{it} + \mu_{t} + \varepsilon_{ijt}$, where $H_{ijt}$ is the self-reported health of first- (second-) generation migrant $i$ from sending country $j$ at time $t$, $\bar{H}_{j}$ refers to the sending-country health assessment for either first- or second-generation migrants retrieved from the World Values Survey, $X_{it}$ refers to individual-specific controls that could bias our estimates of cultural persistence, and $\mu_{t}$ are wave fixed effects. Our coefficient of interest is $\rho$, measuring the association between the migrant's health assessment and the average health assessment in the sending country. $\varepsilon_{ijt}$ is the error term, which in some specifications also absorbs country-of-residence fixed effects. Country-of-origin fixed effects are not included in this literature, as they absorb the entire effect of the cultural norms and values influencing country health assessments. To account for arbitrary correlation of error terms among individuals from the same country of origin, standard errors are clustered at the individual's country of origin. For robustness purposes, we estimate both linear probability models and ordered probit models. The results are presented as standardised coefficients to allow comparison between the first- and second-generation estimates; marginal effects for nonlinear models are also included. We run several specifications in addition to our baseline models to investigate heterogeneous effects and address potential biases. We focus on specifications that distinguish between paternal and maternal lineage for second-generation migrants. This is important when a second-generation migrant's parents come from different countries, since it allows us to distinguish the influence of the maternal and paternal country of origin. We consider cohort differences, as early-life health assessments may reflect differences in reference points for what constitutes "good health" when compared to other categories, whereas later-life health assessments may reflect true differences in health status. Heterogeneous effects by gender and region are also analysed. Other estimates include regional and country-of-residence fixed effects to account for any unobserved time-invariant characteristics, lags in average health assessments of the home country (as migrants might not observe contemporaneous values when making their judgements), and other measures of wellbeing. 
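Equation (1) with standard errors clustered at the country of origin can be estimated, for example, with statsmodels; the formula below is a simplified sketch that assumes the `merged` data frame from the earlier snippet, uses hypothetical variable names, and includes only a subset of the controls.

```python
import statsmodels.formula.api as smf

# 'merged' is assumed to hold the matched ESS-WVS data, with the individual
# outcome, the country-of-origin mean, basic controls, and wave identifiers.
model = smf.ols(
    "self_rated_health ~ sah_origin_mean + age + C(gender) + C(wave)",
    data=merged,
)
result = model.fit(
    cov_type="cluster",
    cov_kwds={"groups": merged["country_of_origin"]},
)
print(result.summary())
```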
In addition, we define cohorts based on gender and year of birth and restrict our analysis to migrants from European countries who have similar rights in both host and sending country. Given that mobility restrictions within Europe are less stringent for European citizens, the analysis of this subsample of migrants allows examining potential sources of unobserved heterogeneity that could not be entirely controlled for with destination country fixed effects. Fig. 1 shows the association between the self-assessed health of first- and second-generation migrants and the average health capital in their country of origin. The size of the circles depicts the number of migrants from each country. Estimates show the fitted values of the association between the two measures. Indeed, for both first- and second-generation migrants, the fitted values indicate a steep and positive association consistent with the presence of some cultural persistence in health assessments. Fig. 1 Cultural persistence of health capital. Correlation of self-assessed health between country of origin and first- and second-generation migrants. Note: The size of the circles represents the number of migrants from each country. Fig 1 Table A2 in the Appendix displays more detail on self-reported health patterns among first- and second-generation migrants and natives. In general, second-generation migrants have better average self-reported health. We also observe the typical gradients in self-reported health by age, education, employment status, and income groups for all first- and second-generation migrants and natives, although in some cases the differences between extreme categories are greater for first-generation migrants (e.g., for age groups) or natives (e.g., for education and income groups). Cultural Persistence: In Panel A of Table 1 , we report the regression estimates for first-generation migrants only. We examine estimates both without and with controls (columns 1–2 and 3–4, respectively), using inear and nonlinear models. Given that migrants’ behaviors might change with exposure to the host country, we then include citizenship status and time in the country since arrival (columns 5–6). More specifically, we specify five dummy variables: whether individuals have spent less than 1 year in the country of residence (reference), between 1 and 5 years, between 6 and 10 years, between 11 and 20 years, and more than 20 years. In all cases, the estimates suggest a large and significant coefficient of health assessments of the migrants’ home country consistent with the hypothesis of cultural persistence. As expected, the size of the cultural persistence coefficient declines with the inclusion of socio-economic and demographic controls. We find that spending up to ten years in the host country increases cultural attachment to the country of origin, and the coefficient is even larger when migrants have been in the country of residence for more than 20 years. Table 1 Cultural persistence of health status. Baseline models. Table 1 OLS Oprobit OLS Oprobit OLS Oprobit (1) (2) (3) (4) (5) (6) Panel A. 
First-generation migrants Self-assessed health at country of origin 0.880*** 0.974*** 0.600*** 0.774*** 0.584*** 0.757*** [0.248] [0.048] [0.169] [0.033] [0.165] [0.031] (0.150) (0.169) (0.060) (0.084) (0.058) (0.081) Citizen of country of residence 0.029 0.038 (0.025) (0.032) Time in country of residence Within last year (reference) 1 to 5 years 0.036 0.083 (0.052) (0.083) 6 to 10 years 0.092* 0.167* (0.055) (0.086) 11 to 20 years 0.153** 0.253*** (0.063) (0.096) More than 20 years 0.280*** 0.417*** (0.067) (0.102) Observations 24,880 24,880 24,880 24,880 24,457 24,457 R 2 / Pseudo R 2 0.06 0.02 0.29 0.12 0.29 0.12 Panel B. Second-generation migrants Self-assessed health at country of origin 0.758*** 0.892*** 0.573*** 0.766*** [0.225] [0.031] [0.170] [0.024] (0.083) (0.106) (0.055) (0.079) Observations 22,319 22,319 22,319 22,319 R 2 / Pseudo R 2 0.05 0.02 0.25 0.11 Wave fixed effects Yes Yes Yes Yes Yes Yes Controls No No Yes Yes Yes Yes Notes: The dependent variable is self-assessed health of first- and second-generation migrants who live in European countries (SAH=1 very good,…, SAH=5 very bad). Standardised coefficients (OLS models) and average marginal effects on the probability of the worst self-assessed health (Oprobit models) are in brackets. Standard errors (in parenthesis) are clustered at the country-of-origin level. Specifications with controls (columns 3–6) include gender, age, education, marital status, household size, religion, whether belongs to minority ethnic group, employment status, and household income (quantiles). * p < 0.1; ** p < 0.05; *** p < 0.01. Table A4 in the Appendix includes additional controls, namely regional fixed effects, country-of-residence fixed effects, and per capita health expenditure in the host country (Panel A). The specification with fixed effects for five regions of Europe (North, South, Centre, East, and West) yields similar results (columns 1–2), although the more specific country-of-residence fixed effects, which capture time-invariant differences between countries, do reduce the size of the coefficient of interest (columns 3–4). Something similar is observed when the per capita health expenditure in the country of residence is considered (columns 5–6). Next, we include lagged values of average health in the country of origin to rule out the possibility that some unobserved variables are simultaneously affecting the health status of immigrants and natives (e.g., international epidemics; columns 1–2 of Table A5 in the Appendix). Estimates are comparable to those in panel A of Table 1 . Finally, panel A of Table A6 in the Appendix adds survey weights to the estimates, revealing no differences from those in Table 1 . Cultural Effects: Second Generation. Results for the first generation cannot be interpreted as evidence of cultural effects alone as migrants might not be subject to the same regulations as natives. Hence, panel B of Table 1 reports the same estimates but for second-generation migrants (e.g., children of migrants) who have been raised in the same institutional environment as natives. Consistent with the results for first-generation migrants in panel A of Table 1 , we report estimates without and with controls (columns 1–2 and 3–4, respectively) as before. Cultural persistence for second-generation migrants is practically the same, as descriptive evidence already suggests. 
More specifically, we estimate that a one standard deviation change in country-of-origin self-reported health is associated with an increase in migrants' self-reported health of about 0.15–0.17 standard deviations. Only in the specification with country-of-residence fixed effects (columns 3 and 4 in Panel B of Table A4 in the Appendix) is the coefficient of interest less sensitive to adopting an ordered probit specification rather than linear probability estimates. Table A7 in the Appendix shows the results using an alternative definition of second-generation migrants that distinguishes whether the father (columns 1–3) or mother (columns 4–6) was born abroad. Importantly, we find that cultural persistence is only slightly higher for second-generation migrants when measured along paternal lineage, but the difference between the two coefficients is not statistically significant. Gender Effects. In Table 2 we report the results for both first- and second-generation migrants (Panels A and B, respectively), splitting the sample by gender. Consistently, we find significant and large coefficients that do not differ considerably by gender. A change of one standard deviation in the country-of-origin's self-assessed health increases migrants' self-assessed health by nearly 0.60 scale units (16%) irrespective of gender (columns 1 and 2). Table A8 in the Appendix again distinguishes paternal and maternal lineage (Panels A and B, respectively). The effect decreases to 0.50 scale units (15 % compared to the mean) for maternal lineage among men. However, among women, the effect is virtually the same for second-generation migrants of both maternal and paternal lineage. Table 2 Cultural persistence of health status. Heterogeneous effects by gender. Table 2 OLS Oprobit Female Male Female Male (1) (2) (3) (4) Panel A. First-generation migrants Self-assessed health at country of origin 0.608*** 0.578*** 0.774*** 0.763*** [0.172] [0.162] [0.037] [0.027] (0.066) (0.059) (0.092) (0.084) Wave fixed effects Yes Yes Yes Yes Controls Yes Yes Yes Yes R 2 / Pseudo R 2 0.30 0.27 0.12 0.11 Observations 13,822 11,058 13,822 11,058 Panel B. Second-generation migrants Self-assessed health at country of origin 0.587*** 0.556*** 0.773*** 0.760*** [0.174] [0.166] [0.025] [0.023] (0.064) (0.053) (0.095) (0.074) Wave fixed effects Yes Yes Yes Yes Controls Yes Yes Yes Yes R 2 / Pseudo R 2 0.26 0.23 0.11 0.10 Observations 12,062 10,257 12,062 10,257 Notes: The dependent variable is self-assessed health of first- and second-generation migrants who live in European countries (SAH=1 very good,…, SAH=5 very bad). Standardised coefficients (OLS models) and average marginal effects on the probability of the worst self-assessed health (Oprobit models) are in brackets. Standard errors (in parentheses) are clustered at the country-of-origin level. Controls include gender, age, education, marital status, household size, religion, whether belongs to minority ethnic group, employment status, and household income (quantiles). * p < 0.1; ** p < 0.05; *** p < 0.01. Age and Geographical Effects. Next, we explore other specifications to try to disentangle whether our estimates could be partly attributed to genetic transmission rather than cultural transmission ( Table 3 ). Specifically, we split the sample by age group (Panels A and B for first- and second-generation migrants, respectively) and region of Europe (Panels C and D for first- and second-generation migrants, respectively). 
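The split-sample (heterogeneity) estimates just described amount to re-running the baseline specification on subsets of the data, which can be sketched as a simple loop; the group variables and regressor names below are assumptions consistent with the earlier snippets.

```python
import statsmodels.formula.api as smf

def fit_subgroup(df, label):
    """Fit the baseline model on a subsample and report the coefficient of interest."""
    res = smf.ols(
        "self_rated_health ~ sah_origin_mean + age + C(wave)", data=df
    ).fit(cov_type="cluster", cov_kwds={"groups": df["country_of_origin"]})
    print(f"{label}: coef = {res.params['sah_origin_mean']:.3f} "
          f"(se = {res.bse['sah_origin_mean']:.3f})")

# Heterogeneity by gender; the same pattern applies to age groups or regions.
for gender, sub in merged.groupby("gender"):
    fit_subgroup(sub, f"gender = {gender}")
```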
In the first case, we find statistically significant effects for all age groups that roughly correspond to age quartiles (35 years or less, 36 to 50 years, 51 to 65 years, 66 years, and more), although we find a very clear positive gradient. These results suggests that, even among younger age groups where individuals typically exhibit very good self-assessed health, we still find consistent evidence of cultural transmission, implying that that there are relevant differences in cultural reference points when making health self-assessments across individuals. Significant results, on the other hand, are found for five regions based on country of residence (North, South, Center, East, and West), though with significant variations. The coefficients for first-generation migrants, for example, are estimated to range from 0.169 in the South to 0.673 in the North. When we look at second-generation migrants, however, we find no evidence of cultural transmission in the Southern and Eastern countries. That is, cultural persistence is primarily driven by cultural persistence in Northern and Central European countries. 2 Table 3 Cultural persistence of health status. Heterogeneous effects by age group and regions of Europe. Table 3 (1) (2) (3) (4) (5) AGE GROUP 35 years or less 36–50 years 51–65 years 66+ years Panel A. First-generation migrants Self-assessed health at country of origin 0.239*** 0.613*** 0.707*** 0.927*** [0.081] [0.192] [0.217] [0.270] (0.051) (0.072) (0.060) (0.118) Wave fixed effects and controls Yes Yes Yes Yes R 2 0.04 0.11 0.19 0.20 Observations 6677 6965 5914 5324 Panel B. Second-generation migrants Self-assessed health at country of origin 0.444*** 0.544*** 0.649*** 0.710*** [0.156] [0.172] [0.196] [0.206] (0.060) (0.067) (0.078) (0.095) Wave fixed effects and controls Yes Yes Yes Yes R 2 0.09 0.18 0.18 0.14 Observations 7666 6136 5346 3171 Regions of Europe North South Center East West Panel C. First-generation migrants Self-assessed health at country of origin 0.673*** 0.169** 0.508*** 0.376*** 0.248*** [0.221] [0.045] [0.104] [0.107] [0.049] (0.101) (0.075) (0.091) (0.107) (0.086) Wave fixed effects and controls Yes Yes Yes Yes Yes R 2 0.35 0.17 0.22 0.37 0.16 Observations 8093 2187 6517 5174 2843 Panel D. Second-generation migrants Self-assessed health at country of origin 0.625*** −0.062 0.369*** 0.147 0.266** [0.253] [−0.012] [0.072] [0.043] [0.042] (0.081) (0.112) (0.084) (0.090) (0.108) Wave fixed effects and controls Yes Yes Yes Yes Yes R 2 0.24 0.30 0.24 0.38 0.15 Observations 6459 760 6342 5559 3156 Notes: The dependent variable is self-assessed health of first and second generation migrants who live in European countries (SAH=1 very good,…, SAH=5 very bad). OLS estimates, standardised coefficients are in brackets; standard errors (in parenthesis) are clustered at the country of origin level. Controls include gender, age, education, marital status, household size, religion, whether belongs to minority ethnic group, employment status, and household income (quantiles). * p < 0.1; ** p < 0.05; *** p < 0.01. Cohort Analysis . To enable a more accurate comparison between migrants' self-reported data and information from their country of origin, we categorized cohorts according to year of birth and gender. Specifically, we define seven groups according to year of birth: 1988–2002, 1978–1987, 1968–1977, 1958–1967, 1948–1957, 1938–1947, and before 1938. 
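The birth-year cohorts listed above can be constructed with a straightforward binning step, for example (the year-of-birth column name is assumed):

```python
import pandas as pd

# Bin year of birth into the seven cohorts used for matching migrants to
# gender- and cohort-specific averages in the country of origin.
bins = [-float("inf"), 1937, 1947, 1957, 1967, 1977, 1987, 2002]
labels = ["before 1938", "1938-1947", "1948-1957", "1958-1967",
          "1968-1977", "1978-1987", "1988-2002"]
merged["birth_cohort"] = pd.cut(merged["year_of_birth"], bins=bins, labels=labels)
```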
For example, the self-reported health of first (second) generation female migrants born in 1985 is compared to the average self-reported health of women in their country of origin born between 1978 and 1987 . Table A9 in the Appendix shows that the results are very similar to those in Table 1 , both with and without controls (columns 1–4). Controlling for the average self-assessed health of the country of residence for second-generation migrants reduces the size of the coefficients of interest, although it remains significant (Panel B, columns 5–6). Migrant Selection . To test for potential selection into migration, we limit our analysis to migrants from EU countries with comparable rights and institutional development in both their country of origin and destination. Table 4 differentiates between samples of individuals born in EU countries and those who reside in EU countries but might be born elsewhere. This enables us to determine whether the effects are driven by migration from some of the non-EU countries represented in our sample. Again, we find large and significant coefficients across all regressions. When we examine the effect among migrants born in the EU, we still find evidence of cultural persistence across all generations. Table 4 Cultural persistence of health status. Subsample of migrants within the European Union (EU). Table 4 EU residents (Parents) Born in the EU EU residents and (parents) born in the EU (1) (2) (3) Panel A. First generation migrants Self-assessed health at country of origin 0.593*** 0.515*** 0.316*** [0.173] [0.102] [0.066] (0.061) (0.076) (0.082) European regions fixed effects Yes Yes Yes Wave fixed effects Yes Yes Yes Controls Yes Yes Yes R 2 0.28 0.22 0.21 Observations 16,419 9693 6683 Panel B. Second generation migrants Self-assessed health at country of origin 0.575*** 0.425*** 0.401*** [0.180] [0.087] [0.084] (0.064) (0.086) (0.085) European regions fixed effects Yes Yes Yes Wave fixed effects Yes Yes Yes Controls Yes Yes Yes R 2 0.21 0.20 0.19 Observations 13,973 10,382 6791 Notes: The dependent variable is self-assessed health of first and second generation migrants who were born (or whose parents were born) and/or live in European Union countries (SAH=1 very good,…, SAH=5 very bad). OLS estimates, standardised coefficients are in brackets; standard errors (in parenthesis) are clustered at the country of origin level. Controls include age, education, marital status, household size, religion, whether belongs to minority ethnic group, employment status, and household income (quantiles). * p < 0.1; ** p < 0.05; *** p < 0.01. We also consider migration selection using a two-step procedure. First, we use a probit model to estimate the likelihood of migration (Table A10 in the Appendix); the estimated parameters are then used to calculate the inverse Mills ratio, which is then added to the estimates that consider cohorts to link individuals' self-reported information and that of the country of origin (Table A9 in the Appendix). Our estimates suggest that the difference in the coefficient of country-of-origin self-reported health after including the Mills ratio from the coefficient calculated before is not statistically significant (95 %CI: −0.087, 0.028 for first-generation estimates, and 95 %CI: −0.102, 0.076 for second-generation estimates). Binarisanising self-assessed health . Additionally, we investigate whether binarizing our variable—transforming self-reported health into a binary measure—affects our conclusions. Fig. 
A1 and Table A11 in the Appendix demonstrate that the results remain virtually unchanged. Genetic effects . Arguably, common genetic factors could explain the similarities in health assessments of individuals from the same sending country. To assess cultural persistence in health assessments, we examine the cultural persistence of study self-assessed health while controlling for objective health measures. Estimates are consistent with baseline estimates. Our estimates are available across several specifications (details can be provided upon request). Other measures . We also estimate the baseline specification for life satisfaction rather than self-reported health ( Table 5 ). This provides additional evidence of the effect of other measures of self-assessed well-being. Table 5 suggests robust evidence of cultural transmission when such measures are employed . Table 5 Cultural persistence of life satisfaction. Table 5 First generation migrants Second generation migrants (1) (2) (3) (4) (5) Life satisfaction at country of origin 0.440*** 0.307*** 0.290*** 0.288** 0.281*** [0.173] [0.122] [0.115] [0.115] [0.112] (0.103) (0.064) (0.061) (0.118) (0.056) Citizen of country of residence −0.093** (0.040) Time in country of residence Within last year (reference) 1 to 5 years −0.033 (0.136) 6 to 10 years −0.166 (0.129) 11 to 20 years −0.266** (0.124) More than 20 years −0.298** (0.141) Wave fixed effects No Yes Yes No Yes Controls No Yes Yes No Yes R 2 0.03 0.11 0.12 0.01 0.13 Observations 27,431 24,829 24,410 24,295 22,239 Notes: The dependent variable is life satisfaction of first- and second-generation migrants who live in European countries. OLS estimates, standardised coefficients are in brackets; standard errors (in parenthesis) are clustered at the country of origin level. Specifications with controls (columns 2, 3, 5) include gender, age, education, marital status, household size, religion, whether belongs to minority ethnic group, employment status, and household income (quantiles). * p < 0.1; ** p < 0.05; *** p < 0.01. In addition, we employ height and weight information collected in the seventh round of the ESS to define body-mass index (BMI), a more objective health measure, to run the baseline models. Age-standardized BMI averages per country for males and females were drawn from the NCD Risk Factor Collaboration. Table A12 in the Appendix first shows the results for self-reported health for round 7 of the ESS. The standardized coefficients of the variable of interest in the specifications with controls (columns 2–3) are only slightly lower than for the full sample (0.14 instead of 0.17). The results for BMI are also significant (columns 4–6), but the standardised coefficients are only half those for self-reported health, suggesting that the positive association for the subjective health measure does reflect, at least in part, the cultural persistence in how health is assessed, rather than the underlying health status. This is important given that the evidence suggests that the correlation between BMI and self-reported health is negligible. Finally, we consider potential differences in social norms. Specifically, add as a control the opinion on the statement “ men should have more right to a job than women when jobs are scarce ”, with response options: agree, neither agree not disagree, disagree , since it may also bear cultural information that may affect the health report in the destination country. 
We chose this variable because it is one of the few attitudinal measures available in both the ESS and the WVS. The results (not shown but available on request) are practically identical to those reported in Table 1, namely the (standardised) coefficients of the variable of interest in the models with controls remain significant and around 0.17. This paper studies the hypothesis of cultural persistence in health self-assessments in a large and heterogeneous sample of Europeans. We have documented evidence of an association between migrants' health assessments and those of their home countries (or those of their parents), which we argue captures what can be regarded as evidence of ‘cultural persistence’ in health assessments. This has been a question traditionally ignored in the evaluation of health programs across countries. Specifically, we document a clear association between the subjective health assessments of first- and second-generation immigrants (residing in 30 different European host countries and originating from over 90 sending countries) and those of their home country. Our findings indicate that migrants' health assessments are associated with the average health status in their sending country, net of socio-demographic characteristics and other relevant controls. We report evidence that the correlation is stronger among older individuals and those residing in Northern Europe. We leverage large cross-country variation, which we believe attenuates the likelihood of selection bias. We estimate that a one standard deviation change in self-reported health in the sending country is associated with an increase in migrants' self-reported health of about 0.17 standard deviations. Our interpretation of the results is that cultural reference points matter in making health assessments and are persistent across generations. Other explanations include some potential negative assimilation when health behaviors and cultural beliefs of the host country are perceived as advantageous, or the presence of selection bias in return migration, which we cannot examine in our data as we cannot identify returning migrants. Finally, estimates are limited by any potentially unaccounted selection and the presence of genetic and epigenetic effects, alongside common migration wave-specific effects. Joan Costa-Font: Conceptualization, Data curation, Formal analysis, Writing – original draft, Writing – review & editing. Azusa Sato: Conceptualization, Formal analysis. Belen Saenz-de-Miera: Conceptualization, Formal analysis, Validation, Visualization. This research has received no funding and we have no conflict of interest to disclose. | Study | biomedical | en | 0.999997
PMC11697409 | Animal venoms harbor a complex blend of salts, amino acids, biogenic amines, neurotransmitters, peptides, and proteins, strategically targeting various receptors crucial for the survival of venomous creatures . These venoms, along with their toxins, exhibit diverse pharmacological properties, serving as valuable resources for investigating cellular and molecular functions. Certain venom components are pivotal in human ailment and have inspired the development of novel therapeutic interventions . Toxins derived from aquatic venomous organisms represent a valuable reservoir of natural compounds for both academic inquiry and practical applications. However, challenges persist in the acquisition and preservation of venom extracts, leading to the underutilization of aquatic animal venoms, especially those from fish species, as a largely untapped wellspring of novel medicines and pharmacological compounds . Peptides, for example, are molecules found in living organisms and play a crucial role in many biological processes [ , , , ]. Their widespread occurrence and functional versatility enhance their therapeutic promise [ , , ]. Peptides are increasingly dubbed the "Goldilocks" chemical modality, characterized by their intermediate size, which combines the advantageous features of small molecules and biologics. This includes high target specificity, minimal off-target effects, and distinctive pharmacokinetic profiles . Nevertheless, while a select few peptides have secured FDA approvals and others are in various stages of clinical trials , it's noteworthy that peptides have a longstanding legacy of contributing to human health spanning over a century. From insulin to vasopressin, and more recently, tirzepatide, peptides have played pivotal roles in healthcare . In the last years, sales of peptide drugs exceeded $70 billion, with 10 non-insulin peptide drugs among the top 200 best-selling drugs, representing a substantial portion of the pharmaceutical market . Peptides sourced from venom have been investigated for their potential in biotechnological applications . While the majority of these peptides stem from a restricted range of venomous terrestrial animal groups, bioactive compounds from fish venoms have also been successfully identified and studied . The therapeutic potetinal of venom/toxin-derived peptides is apparent, attributed to their heightened specificity, stability, and comprehensive evaluation of pharmacokinetic characteristics . These peptides also demonstrate therapeutic attributes, notably antimicrobial properties (AMPs) . The efficacy showcased by these substances is associated with their physicochemical characteristics, such as net charge, hydrophobicity, and solvent accessibility. These properties, in turn, govern their mechanisms of action, selectivity, and specificity towards their targets . AMPs are peptides that exert their main effects on membranes, primarily by disrupting the integrity of the plasma membrane of their cellular targets . The ability of AMPs to penetrate cells appears to bolster their antimicrobial effectiveness by engaging and disrupting intracellular components such as macromolecules and organelles . However, this cellular uptake and permeability of AMPs may be linked to varying degrees of cytotoxicity. Through the design and synthesis of AMPs, it is possible to create peptides with finely tuned membrane translocation and cellular uptake abilities, coupled with reduced or minimal adverse impacts on membrane stability and cellular health. 
This is demonstrated by examples like buforin II, derived from the stomach tissue of the Asian toad Bufo bufo garagrizans , and its derivatives , as well as other AMPs . While numerous venoms harbor AMPs, it's worth noting that other families of biologically active peptides can also be present, including cell-penetrating peptides (CPPs) . CPPs comprise short sequences typically consisting of a few amino acids up to <40 residues. These peptides possess physicochemical and biological characteristics enabling them to traverse cell lipid membranes and facilitate intracellular transportation of various molecular cargoes. This transport can occur in the form of covalent conjugates or noncovalent complexes . Remarkably, many of the structural and physicochemical traits found in AMPs are also present in CPPs . Moreover, both types of peptides predominantly act on the cell membrane, inducing pore formation through diverse mechanisms that ultimately result in cellular apoptosis. These attributes not only enable their application as independent treatments but also facilitate the investigation of synergistic effects with various approved medications. These peptides can serve as adjuvants for compounds targeting the intracellular milieu, aiding in their effective delivery to the active site and bolstering treatment efficacy by combating infection or disease through diverse mechanisms of action . Undoubtedly, the advancement of novel antimicrobial peptides (AMPs) and cell-penetrating peptides (CPPs) represents a highly promising frontier in biotechnology and therapeutic pharmacology, particularly amidst the rise of strains resistant to conventional antibiotics . Nevertheless, there are notable constraints concerning the commercialization of AMPs and CPPs. These include elevated production expenses and time-intensive procedures, especially in the context of recombinant techniques, limited efficacy in animal models, heightened vulnerability to protease degradation, and diminished activity in specific physiological environments . All these shortcomings can be addressed through the application of in silico methods to assist the design of AMPs and CPPs, termed in silico study . In silico studies represent a logical progression adjuvant to in vitro methods, whereby biological and physiological processes are simulated using computer models. This approach enables researchers to explore a virtually limitless array of parameters, providing more insights or predictions regarding potential outcomes . There is a growing body of literature documenting the utility of in silico studies in predicting, designing, and modifying AMPs and CPPs, underscoring their promise as valuable approaches . Recent progress in analytical methodologies, such as the integration of genomics, mass spectrometry, and proteomics, has greatly facilitated scientists' exploration of venom compositions . Coupled with contemporary high-throughput screening techniques for venom compounds, the ability to predict novel molecules encoded within toxins marks a significant advancement toward harnessing the complete therapeutic capacity of animal venoms. Leveraging high-performance technologies, it is now feasible to anticipate new molecules derived from toxins, thereby tapping into the therapeutic potential inherent in these molecules. Moreover, biologically active peptides sourced from venomous animals indigenous to South America demonstrate significant and varied activities, presenting promising prospects as clinical candidates . 
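As an illustration of the kind of sequence-based physicochemical screening that underpins such in silico AMP/CPP design, a minimal sketch is shown below; the descriptors, the arbitrary example sequence, and the use of Biopython are generic assumptions and do not reproduce the specific pipeline used in this study.

```python
# Simple sequence-based descriptors often used to triage AMP/CPP candidates:
# net charge at neutral pH (counting K/R as +1 and D/E as -1), hydrophobic
# residue fraction, and Kyte-Doolittle GRAVY via Biopython's ProtParam module.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

HYDROPHOBIC = set("AILMFWVY")

def describe(seq: str) -> dict:
    pa = ProteinAnalysis(seq)
    net_charge = seq.count("K") + seq.count("R") - seq.count("D") - seq.count("E")
    return {
        "length": len(seq),
        "molecular_weight": round(pa.molecular_weight(), 1),
        "net_charge": net_charge,
        "hydrophobic_fraction": round(sum(aa in HYDROPHOBIC for aa in seq) / len(seq), 2),
        "gravy": round(pa.gravy(), 2),
    }

print(describe("GLFKKLLKWAARIG"))   # arbitrary example sequence
```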
One such example is the TnP family of synthetic cyclic peptides discovered in the venom of Thalassophryne nattereri , a venomous fish inhabiting the northern and northeastern coastlines of Brazil [ , , ]. In a previous study, our group employed in silico techniques to design 57 peptides derived from the T. nattereri family of toxins, known as Natterins . Natterins have been identified as the primary agents responsible for the major toxic effects induced by T. nattereri venom, including local edema, and intense pain progressing to necrosis [ , , ]. The predicted peptides exhibit a molecular mass ranging from 965.08 Da to 2704.06 Da, a net charge spanning from -2 to +7, a hydrophobic moment (μH) varying from 0.044 to 0.627, and a hydrophobic ratio ranging from 9 to 50 %. These characteristics facilitate their interaction with the microorganism membrane through adoption of an α-helix formation, ultimately leading to membrane disruption. The present study explores the antimicrobial and antiviral properties of two selected peptides identified as promising. Upon analyzing the findings outlined in De Cena et al. , peptides NATT2_06 and NATT4_01 emerged as particularly noteworthy candidates with potential antimicrobial and antiviral attributes. Among the 57 peptides delineated in the investigation , these specific peptides showcased physicochemical characteristics deemed vital in antimicrobial and antiviral peptides as documented in existing literature. These characteristics include optimal membrane-binding potential, cellular localization both inside and outside the membrane, minimal toxicity and allergenicity, alongside ADMET parameters falling within the expected range for such molecules. Our study demonstrates that these peptides exhibited mild inhibitory effects on the growth of both Gram-positive and Gram-negative bacteria, as well as fungi, over a brief period. They demonstrated comparable inhibitory actions concerning viral replication both intra and extracellularly, without manifesting any toxic effects in vitro or in vivo . Lastly, stability and membrane interaction assessments were conducted to pave the way for these peptides to emerge as potential prototype compounds. The peptides were designed by using a template and physicochemical base method as described in Conceição et al., and de Cena et al., . The NATT peptides amino acid sequence was derived from Natterins toxins from Thalassophryne nattereri and was used as a template protein. The peptide physicochemical properties were calculated through various tools, including ProtParam ( http://web.expasy.org/protparam ) , PepCalc ( https://pepcalc.com/ ), Heliquest version 2 ( https://heliquest.ipmc.cnrs.fr/cgi-bin/ComputParams.py ) , and APD3 for complementary properties ( https://aps.unmc.edu/prediction/predict ). All mentioned software was used with its default parameter configuration. The synthesis of NATT peptides was performed manually on solid phase following a 9 fluorenylmethoxycarbonyl (Fmoc)/ tert-butyl ( t -Bu) protocol , in polypropylene syringes fitted with a polyethylene porous disk. Solvents and soluble reagents were removed in vacuum. Commercially available reagents were used throughout without purification. Fmoc-Rink-MBHA resin (0.71 mmol/g) was used as solid support since it provides C-terminal peptides amides. Fmoc group removal was achieved with piperidine-DMF (3:7, 2 + 10 min). 
Coupling of commercial Fmoc-amino acids (4 or 3 equiv) was performed using DIC (4 or 3 equiv) and Oxyma (4 or 3 equiv) in DMF under stirring at room temperature for 4 or 8 h. The completion of the reactions was monitored by the Kaiser test for amino acids bearing a primary amine and by the chloranil test for the proline residue bearing a secondary amine. For each coupling and deprotection step, the resin was washed with DMF (6 × 1 min) and CH₂Cl₂ (3 × 1 min) and air-dried. After coupling of the ninth amino acid residue, NMP was used instead of DMF. Peptide elongation was performed by repeated cycles of Fmoc removal, coupling, and washings. Once the synthesis was completed, peptidyl resins were subjected to N-terminal Fmoc removal. Then, the peptides were cleaved by treatment with TFA-H₂O-TIS (95:2.5:2.5) for 2 h. Following TFA evaporation and diethyl ether extraction, the crude peptides were purified by reverse-phase column chromatography, lyophilized, analyzed by HPLC, and characterized by high-resolution mass spectrometry (HRMS) and proton nuclear magnetic resonance (¹H-NMR) (Supplementary material). Peptide stability was evaluated under various conditions. To investigate the impact of temperature on peptide structure, samples were incubated at 37 °C and 60 °C for 24 h. A control sample was kept at -4 °C. To evaluate stability under acidic conditions, the peptides were dissolved in a 0.074 M HCl solution, resulting in a pH of 3. For evaluation under basic conditions, the peptides were dissolved in a 0.18 M NaOH solution, resulting in a pH of 11. A control sample was maintained at pH 7. Additionally, samples were exposed to trypsin solutions at a concentration of 20 µg/mL. All samples underwent analysis using high-performance liquid chromatography (HPLC) coupled with mass spectrometry. The HPLC system comprised LC-10AD mobile phase pumps, an Ultrasphere C-18 column (5 µm; 4.6 × 250 mm), a UV-vis SPD-10AV detector set at a wavelength of 220 nm, and a mass spectrometer operating in electrospray ionization (ESI) mode with a quadrupole separator. Reference strains of Pseudomonas aeruginosa (ATCC 15442), Staphylococcus aureus and the fungus Candida auris were evaluated in this study. All strains were stored in 20 % (v/v) glycerol at -80 °C. Before the assays, cells were seeded on cetrimide agar for P. aeruginosa, CLED agar for S. aureus, and Sabouraud dextrose agar for C. auris, and grown at 37 °C for 48 h. The assessment of antimicrobial activity of the synthesized peptides was conducted following the analytical parameters outlined in the CLSI and NCCLS methods. Strains were cultured in appropriate media (Mueller Hinton Broth for bacteria and Brain Heart Infusion for fungi) for approximately 24 h at 37 °C. The inoculum was adjusted to a concentration of 10³ cells/mL at the appropriate absorbances (630 nm for P. aeruginosa and S. aureus, and 530 nm for C. auris). Different concentrations of peptides were tested in sterile 96-well plates, with each well containing 200 µL (10 µL of peptide and 190 µL of inoculum in culture medium). Plates were incubated at 37 °C in a bacteriological incubator. Absorbance readings were taken by spectrophotometry at the respective wavelengths using a microplate reader (Synergy H1, BIOTEK, USA) after 2, 4, 6, 12, and 24 h. Tetracycline at 1 mg/mL was used as a positive control for bacteria, and the respective inoculum of each microorganism was used as a negative control. 
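As a point of reference for the plate layout just described, the short sketch below (not part of the original protocol) works out the stock concentrations implied by adding 10 µL of peptide to 190 µL of inoculum; the 3.125–50 µM panel and the molecular weights are borrowed from the cytotoxicity section and Table 1 of this article, so they are assumptions rather than the exact antimicrobial test panel.

```python
# Minimal sketch of the working-stock arithmetic for the microdilution assay above:
# 10 uL of peptide added to 190 uL of inoculum is a 20-fold dilution, so stocks must
# be prepared at 20x the intended final concentration. Concentrations and molecular
# weights below are assumed from later sections of this article, not protocol values.

PEPTIDE_VOL_UL = 10.0
TOTAL_VOL_UL = 200.0
DILUTION = TOTAL_VOL_UL / PEPTIDE_VOL_UL  # 20-fold

MW_G_PER_MOL = {"NATT2_06": 1171.45, "NATT4_01": 1466.79}
final_conc_uM = [3.125, 6.25, 12.5, 25.0, 50.0]

for conc in final_conc_uM:
    stock = conc * DILUTION
    ug_per_ml = {name: conc * mw / 1000.0 for name, mw in MW_G_PER_MOL.items()}
    print(f"final {conc:>6.3f} uM -> stock {stock:>7.1f} uM "
          f"({ug_per_ml['NATT2_06']:.1f} ug/mL NATT2_06, "
          f"{ug_per_ml['NATT4_01']:.1f} ug/mL NATT4_01 in the well)")
```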
Following the time-kill assay and biomass evaluation at different time points, the samples were subjected to colony-forming unit (CFU) assessment to determine the number of viable cells. The CFU counting assays were performed following the method described by Herigstad. This method involved the removal of 20 µL aliquots from the samples in the wells, followed by serial dilution in 180 µL of 0.9 % saline solution and plating on agar plates, which were then incubated for 24 h. After this period, the number of colonies on the plates was counted, and the number of cells per mL (CFU/mL) in the original culture was calculated. The viability of murine fibroblast (L929) cells in the presence of the peptides was assessed using a cytotoxicity assay. L929 cells were prepared in RPMI medium for the assay. After 24 h, 10 µL of each peptide concentration (3.125 µM, 6.25 µM, 12.5 µM, 25 µM, and 50 µM) were added to a 96-well plate with cells (190 µL) and incubated for 24 h at 37 °C and 5 % CO₂. Next, 200 µL from each well was removed, and 100 µL of 0.5 mg/mL 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) was added to the wells. The plate was incubated for 3 h under the same conditions. Then, 100 µL from each well was removed and 100 µL of DMSO (dimethyl sulfoxide) was added; DMSO served as a positive control and PBS as a negative control. Absorbance of the multi-well plate was quantified at 540 nm using a spectrophotometer (Synergy H1, BIOTEK, USA). To perform the hemolytic test, human RBCs were obtained from a volunteer donor. After centrifugation at 1000 × g for 5 min, the RBC pellets were resuspended in 5 % (vol/vol) sterile saline at different concentrations of peptides NATT2_06 and NATT4_01, and subsequently incubated at 37 °C for 1 h. The supernatants were transferred to a 96-well plate, and the absorbance at 570 nm (A570) was measured. 0.12 % DMSO and 1 % Triton X-100 were used as negative and positive controls, respectively. The hemolysis rate was calculated according to Eq. (1): Hemolysis (%) = [A(sample) − A(DMSO)] / [A(Triton) − A(DMSO)] × 100. This study was approved by Plataforma Brasil through the number 71282023.5.0000.5505. A low-passage stock of the Chikungunya virus was also employed for the antiviral activity assays. First, viral propagation and titration were performed in adherent African green monkey kidney epithelial (Vero) cells. The virus was grown in Vero cells cultured in MEM medium (Gibco, Waltham, MA, USA) supplemented with 10 % fetal bovine serum (FBS, Gibco), 100 U/mL penicillin, and 100 µg/mL streptomycin (Gibco, Waltham, MA, USA) (M10) for 48 h. Then, the supernatant was collected and titrated as previously described. The titer obtained for CHIKV was 9.3 × 10⁶ PFU/mL. Immortalized human hepatocytes (Huh-7), derived from the hepatocarcinoma of a 57-year-old Japanese individual, were also cultured. Huh-7 cells were grown in Minimum Essential Medium (Advanced MEM, Gibco®, USA), supplemented with 10 % (v/v) heat-inactivated fetal bovine serum (FBS) (Gibco®, USA), 100 U/mL penicillin, and 100 µg/mL streptomycin (PenStrep – Gibco®, USA), along with 2 mM L-glutamine (Gibco®, USA), and maintained in a humidified atmosphere with 5 % CO₂ at 37 °C. To analyze the antiviral potential of the peptides, Huh-7 cells were seeded in 48-well flat-bottom plates (Sarstedt®, Germany) at a density of 6.5 × 10⁴ cells/well in MEM supplemented with 10 % FBS, 100 U/mL penicillin, 100 µg/mL streptomycin, and 2 mM L-glutamine. 
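Before moving on to the antiviral assays, the hemolysis readout defined in Eq. (1) above can be sketched as follows; the absorbance values are illustrative placeholders, not measured data.

```python
# Minimal sketch of Eq. (1): sample absorbance is normalized between the DMSO
# (negative) and Triton X-100 (positive) controls. A570 values are placeholders.

def hemolysis_percent(a_sample: float, a_dmso: float, a_triton: float) -> float:
    """Percent hemolysis relative to the Triton X-100 lysis control (Eq. 1)."""
    return (a_sample - a_dmso) / (a_triton - a_dmso) * 100.0

print(f"{hemolysis_percent(a_sample=0.08, a_dmso=0.05, a_triton=1.55):.1f} %")  # ~2 %
```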
Cells were incubated for 12 h at 37 °C to allow cell adhesion to the wells. After this period, two types of assays were conducted, considering the peptides' ability to cross the plasma membrane, as described in de Cena et al., and based on a previous study . The first assay, named "post-treatment" , involved removing the culture medium from the wells, washing with phosphate-buffered saline (PBS, pH 7.0), followed by CHIKV inoculation in MEM supplemented with 2 % FBS at a multiplicity of infection (MOI) of 1 for 2 h to allow homogeneous viral adsorption. The inoculum was then removed, and the peptides were added in triplicate at different concentrations , serially diluted at a 1:2 ratio in culture medium with the same supplementation mentioned. Cells were monitored for 12 h. The second assay, named "co-treatment" , involved removing the culture medium from the wells, washing with PBS, followed by simultaneous inoculation of CHIKV and peptides, diluted in the same manner in culture medium with the same supplementation, in triplicate. The same multiplicity of infection (MOI) of 1 for 2 h allows homogeneous viral adsorption. Cells were also monitored for 12 h. In all situations, positive controls were included with only CHIKV inoculation without peptide addition, and negative controls were included without CHIKV or peptide addition. At the end of each period, supernatants from each well were collected in a viral lysis buffer (AVL) and stored at -80 °C until viral RNA extraction and molecular characterization ( Fig. 1 (C)). The flowchart below outlines the step-by-step methodology applied in the two assays. All viral manipulation procedures were conducted in accordance with WHO and PAHO regulations in a suitable Biosafety Level 2 (BSL-2) laboratory, following the biosafety guidelines of ANVISA. Fig. 1 CFU/mL (Log10) of different peptides. (a) NATT2_06 against P. aeruginosa , (b) NATT4_01 against P. aeruginosa , (c) NATT2_06 against S. aureus , (d) NATT4_01 against S. aureus , (e) NATT2_06 against C. auris , (f) NATT4_01 against C. auris tested at 0, 2, 4, 6, 12 and 24 h. Statistical significance was calculated using two-way analysis of variance (ANOVA) in Prism version 8.0 (GraphPad, USA) and represented as ***** p < 0.0001 for all tested concentrations, **** p < 0.0001 for four tested concentrations, *** p < 0.0001 for three tested concentrations, and * p < 0.0001 for only one tested concentration. Fig. 1 After conducting the assays, each viral RNA in the samples' supernatant was extracted using the QIAamp® Viral RNA Mini Kit (QIAGEN®, Germany). In this process, each sample underwent the extraction and purification of viral genetic material (+ssRNA) using extraction columns, with resuspension in nuclease-free ultrapure water using materials and reagents provided by the manufacturer. For the molecular characterization of NATT peptide treatment, RNA samples extracted from the plaque challenge assays' supernatants and viral stock dilutions underwent quantitative reverse transcription PCR (RT-qPCR) assays. CHIKV Primers and specific probe were synthesized by Sigma Life Science®, with 5-carboxyfluorescein (5-FAM) as the fluorophore and Minor Groove Binder (MGB-NFQ) as the fluorescence quencher . For the RT-qPCR reaction, the AgPath-ID® One Step RT-qPCR kit (Applied Biosystems®, USA) was used for each extracted RNA in duplicate. 
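As an aside on the infection step described above, the following sketch works out the inoculum arithmetic implied by an MOI of 1; the seeding density and stock titer are taken from the text, and the undiluted-stock volume is only illustrative, since in practice the inoculum is diluted into the adsorption medium.

```python
# Minimal sketch of the MOI arithmetic: at MOI 1 each well receives as many PFU as it
# contains cells. Seeding density and stock titer are the values quoted in the text.

cells_per_well = 6.5e4          # Huh-7 cells seeded per well
moi = 1.0                       # multiplicity of infection used in both assays
stock_titer_pfu_per_ml = 9.3e6  # CHIKV stock titer

pfu_needed = moi * cells_per_well
stock_volume_ul = pfu_needed / stock_titer_pfu_per_ml * 1000.0
print(f"{pfu_needed:.0f} PFU per well -> {stock_volume_ul:.1f} uL of undiluted stock")
```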
The reverse transcription reaction was carried out at 45 °C for 10 min, followed by 40 amplification cycles at 95 °C for 15 s and 60 °C for 45 s on the PCR StepOne Plus® thermocycler (Applied Biosystems®, USA). Data analysis was performed using StepOne® software, version 2.3 (Applied Biosystems®, USA). C T (Cycle Threshold) values were established for each sample based on the threshold automatically set by the software. All detection and quantification of viral RNA was done by real-time PCR of each sample . All the results of cycle threshold (Ct) were compared to a standard curve, which was obtained by carrying out serial dilutions from the pure stock (PFU), as previously described . For this study, the methodologies described by Mylonakis et al. and Jorjão et al. , with some modifications were employed. G. mellonella larvae in their final larval stage were used for the experiment. Ten randomly selected G. mellonella larvae of similar weight and size (250 to 350 mg) were used per group in all assays. Syringes (Hamilton Inc., USA) used for injections were sterilized with peracetic acid (Henkel - Ecolab GmbH, Düsseldorf, Germany) according to the manufacturer's instructions prior to inoculation. Each larva was injected with 10 µL of each peptide (NATT2_06 and NATT4_01) at a concentration of 50 µM into the last left proleg. A control group was injected with PBS to assess overall viability. The number of deceased G. mellonella was recorded every 24 h after peptide injection, with monitoring continuing for 7 days across three independent experiments. Larvae were considered dead if they showed no movement upon touch. The experiment concluded either when all larvae in the experimental group had died or transitioned into the pupal form . Large unilamellar vesicles (LUVs) of POPC:POPG (1:1), POPC:POPG (2:1) and POPC:POPG: Chol (2:1:1) with 100 nm of diameter were used as membrane model systems. The lipids were solubilized in chloroform in a round-bottom flask, and the organic solvent was evaporated under a gentle nitrogen flow to form a thin lipidic film. This film was then placed under vacuum overnight. The lipidic film was rehydrated with 10 mM phosphate buffer, pH 7.4, followed by ten freeze/thaw cycles to produce a suspension of multilamellar vesicles. Large unilamellar vesicles (LUVs) were obtained by extruding the multilamellar vesicles through 100 nm pore-size polycarbonate filters . Circular dichroism spectra of 50 µM of the peptides in 10 mM HEPES buffer, 50 mM NaF, pH 7.4, in the absence or presence of large unilamellar vesicles (LUVs), were acquired at 25 °C in the 190–260 nm wavelength range using 0.1 cm quartz cells in a JASCO model J-815 spectropolarimeter (Tokyo, Japan). Each final spectrum corresponded to an average of five scans, which were subsequently corrected for buffer or LUV baseline. Zeta potential was measured to evaluate changes in the surface charge of the LUVs (POPC:POPG (1:1), POPC:POPG (2:1) and POPC:POPG: Chol (2:1:1)) in the presence of NATT2_06 and NATT4_01. Assays were performed in a Zetasizer Nano ZS (Malvern Instruments, Malvern, UK) equipped with a 633 nm HeNe laser and disposable ζ cells with gold electrodes. LUVs suspensions were fixed at 200 mM prepared in Mueller-Hinton broth (MHB) to a final concentration of 1 × 10 8 CFU/mL. Each peptide solution from 0 to 30 uM, was added to the LUVs solution. Peptide-treated bacterial suspensions were dispensed into ζ cells and allowed to equilibrate for 15 min at 25 °C. 
The suspensions were mixed with peptides for 30 min at 37 °C. Values of viscosity and refractive index were set at 0.8872 cP and 1.330, respectively. The electrophoretic mobility of each sample was calculated, and the ζ potential was measured using the Smoluchowski equation, as previously described. Various studies have evaluated the prediction of antimicrobial and cell-penetrating peptides. Together, these studies contribute to the realm of peptide prediction and classification, offering promising applications in drug development and delivery. A prior study utilized in silico analysis to predict and characterize novel AMPs and CPPs derived from natterins of T. nattereri. Subsequently, the current work evaluated the activity of two selected peptides through in vitro and in vivo assays. The purity of the chemically synthesized peptides was confirmed through high-performance liquid chromatography (HPLC) and high-resolution mass spectrometry (HRMS). Peptides synthesized to a purity of over 99 %, appearing as white powder, were utilized for in vitro and in vivo activity assessment.
Table 1. Properties of the peptides.
Peptide    Sequence         MW (g/mol)   Charge (at pH 7)   Theoretical pI   Boman index
NATT2_06   TTLRPKLKSK       1171.45      +4                 11.26            3.02
NATT4_01   LYVAKNKYGLGKL    1466.79      +3                 9.83             0.08
Following exposure to various temperature and pH conditions, as well as treatment with trypsin protease, peptides at a concentration of 100 µg/mL underwent analysis using high-performance liquid chromatography coupled with mass spectrometry. Upon subjecting the peptides to a thermal treatment of 60 °C for 24 h, no degradation peaks indicative of peptide breakdown were detected. Consequently, it can be inferred that the peptides remained stable at 37 °C, given their resilience to degradation at 60 °C. The [M + H]⁺ values observed were 1170, 1197, and 1465, respectively, compared to the control at -4 °C, as shown in Figures S8 and S9. Based on the HPLC chromatograms and mass spectra analysis, all three tested peptides displayed resistance to temperatures up to 60 °C. Previous research has consistently demonstrated the robust stability of antimicrobial peptides (AMPs), even at extreme temperatures such as 121 °C. For instance, Baindara et al. noted that the antimicrobial activity of the Penisin peptide remained intact when incubated at temperatures up to 100 °C for 30 min, albeit showing a notable decline at 121 °C. Similarly, peptides including HKPLP, Cap18, Cap11, Cap11–1-18m², Cecropin B, Cecropin P1, Melittin, Indolicidin, and Sub5 have exhibited thermal resilience, withstanding temperatures of up to 100 °C in various assays. Additionally, Georgalaki et al. reported the remarkable thermal stability of the food-grade antibiotic macedocin, which retained its activity even after short-term heating, long-term incubation for up to four weeks at 30 °C, and autoclaving at 121 °C for 20 min. The peptides underwent exposure to diverse pH environments, spanning pH 3, 7, and 11. Under pH fluctuations between 3 and 7, none of the three tested peptides exhibited notable structural changes, suggesting their stability at pH 3. However, at pH 11, all three peptides experienced structural degradation, evidenced by distinct mass peaks observed in Figure S9e, differing from those in Figure S10c and d, corresponding to pH 3 and 7, respectively. In mass spectrometry assays, an alkaline pH can lead to diverse outcomes. 
Specifically, in the analysis of proteins and peptides, an alkaline environment may induce unintended chemical modifications in certain samples, impacting ionization, fragmentation, and the detection of reference molecules. Additionally, the peptides underwent treatment with a trypsin solution at a concentration of 20 µg/mL. Among them, the peptide NATT2_06 displayed notable changes in mass, as evidenced by both HPLC and MS, as depicted in Figure S12. Regarding the assessment of the NATT4_01 peptide, there were no notable shifts observed in the chromatograms, as illustrated in Figure S13. Nonetheless, upon examination of the mass spectra, fragments of the peptide were discernible. This observation suggests potential cleavage at the Lys (K) residues within the peptide sequence (LYVAKNKYGLGKL), consistent with the specificity of trypsin for Lys (K) and Arg (R). The overall stability in plasma of NATT2_06 and NATT4_01 can be estimated using the Cavaco et al. equation that relates half-life (t1/2) in serum with sequence-related physicochemical properties. Of note, the estimated t1/2 is 9.3 min for NATT2_06 and 90.2 min for NATT4_01, a roughly 10-fold increase, which is in line with our findings. Prior research indicates that when AMPs such as Cap18, Cecropin P1, Cecropin B, Melittin, and Indolicidin are incubated with trypsin, their antimicrobial activity is entirely lost within just 30 s. In the case of the peptide Cap11, this loss of activity occurs after 15 min of incubation. Conversely, a brief exposure of up to 5 min to trypsin enhances the antimicrobial activity of peptides such as Cap11–1-18m² by a factor of 2. Furthermore, investigations involving the peptide Penisin revealed no decline in its inhibitory activity following a 6-hour incubation with trypsin. The specificity of trypsin in cleaving peptide bonds stems from its active site, which selectively recognizes particular amino acid residues (Lys and Arg) within the polypeptide chain. This specificity grants trypsin the capability to identify and cleave peptide bonds at precise locations within the protein molecule. It is noteworthy that the interplay between AMPs and trypsin can be intricate and variable, suggesting the necessity for further analyses concerning the behavior of AMPs with other proteolytic enzymes. The antimicrobial efficacy of the tested peptides (NATT2_06 and NATT4_01) against microorganisms was assessed by determining the colony-forming units per milliliter (CFU/mL) using the micro-drop technique for cell counting at time intervals of 2, 4, 6, 12, and 24 h. When evaluated against P. aeruginosa, peptide NATT2_06 displayed remarkably similar activities across concentrations, exhibiting significant inhibition at all tested concentrations at 2, 4, 6, and 12 h. However, at 24 h, only the 3.1 µM concentration ceased to exhibit statistically significant inhibition, as shown in Fig. 1 (A). The peptide NATT4_01 similarly demonstrated inhibition at all concentrations at 2, 4, and 6 h. At 12 h, it showed activity only at 50 µM, and at 24 h, no antimicrobial action was observed. A general analysis of the antimicrobial activity of the peptides reveals that they exhibit action between 2 and 12 h. The 6-h incubation period showed the highest activity across the peptides and concentrations tested. Possibly, after this period, the peptide loses its efficacy, allowing the microorganisms to resume growth. Liu et al. 
successfully employed an integrated in silico-in vitro approach to discover bioactive peptides, marking a milestone in leveraging these methods to design molecules surpassing the potency of native peptides . Given the pronounced cationicity of antimicrobial peptides (AMPs), electrostatic interactions play a pivotal role in their binding to the negatively charged cell membrane . Computational analyzes have underscored that net charge and amphipathic characteristics stand out as the most statistically significant physicochemical attributes distinguishing anti-Gram-negative AMPs from others . Notably, the two peptides under evaluation exhibit a positive charge of +3 and +4, respectively NATT4_01 and NATT2_06 facilitating their interaction with the negatively charged bacterial membrane. Research indicates that high cationicity in synthetically designed AMPs correlates with heightened in vitro antibacterial efficacy and minimal cytotoxicity, up to a threshold of +8, beyond which there's an escalation in hemolytic activity . However, it's noteworthy that lower cationicity has also been associated with peptides demonstrating activity in vivo . While the interaction between peptides and membranes is indeed a crucial aspect of AMP function, with many AMPs acting directly on microorganisms' cell membranes through affinity for certain lipid components, thereby disrupting membrane integrity and creating pores or channels, it's essential not to solely attribute their mechanism of action to electrostatic attraction or hydrophobic interactions . This perspective is supported by extensive research, including studies involving the cell-penetrating peptide penetratin at 100 μM, also known as the protein transduction domain. Penetratin, a 16-residue cationic peptide (RQIKIWFQNRRMKWKK-NH 2 ), derived from the third helix of the Antennapedia protein homeodomain, has been patented as a carrier peptide (or cargo transporter) for drug delivery into cells . Notably, a single change of tryptophan 6 to phenylalanine in the AMP abolished its membrane transfer properties, indicating that lipid binding alone may not be sufficient for AMP activity . Similarly, a W2G mutation in cecropin, an AMP predominant in insect cell-free immunity, nearly eradicated antibacterial activity . There are numerous parallels between CPPs and AMPs . Both types demonstrate antimicrobial effects and possess the capability to transport cargo molecules into cells. For instance, the renowned peptide LL-37 can translocate into eukaryotic cells at concentrations lower than those required for bacterial lethality, when adjusted for equivalent concentrations of divalent cations . However, it's pertinent to highlight a key distinction: AMPs are perceived to possess the ability to traverse bacterial membranes autonomously, without necessitating a transport mechanism, whereas CPPs are primarily internalized via active endocytosis . This discrepancy might indicate a fundamental difference in how peptides gain entry into prokaryotic versus eukaryotic cells. Among the peptides under examination, only NATT4_01 is categorized as non-CPP. However, it's noteworthy that this characteristic didn't compromise its effectiveness against P. aeruginosa , as it still exhibited significant activity, albeit at a lower level, following statistical analyzes. Recently, peptides having the ability to traverse reversibly the blood-brain barrier (BBB), referred to as BBB peptide shuttles (BBBpS), were found to be a class of peptides distinct from CPP . 
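As an aside on the physicochemical descriptors invoked in this discussion (net charge and amphipathicity), a minimal, self-contained sketch of how such values can be estimated is given below; it counts side-chain charges only (the peptides are C-terminally amidated), which reproduces the +4/+3 figures quoted above, and it uses the Eisenberg scale with an ideal-helix periodicity, so its hydrophobic-moment values are close to, but not identical with, those produced by Heliquest or APD3.

```python
import math

# Minimal sketch: net charge at pH 7 (Henderson-Hasselbalch, side chains only, textbook
# pKa values) and the Eisenberg hydrophobic moment for an ideal alpha-helix (100 deg/res).
# These are approximations, not the exact algorithms of the web tools cited in the methods.

PKA_BASIC = {"K": 10.5, "R": 12.5, "H": 6.0}
PKA_ACIDIC = {"D": 3.65, "E": 4.25, "C": 8.3, "Y": 10.1}
EISENBERG = {"A": 0.62, "R": -2.53, "N": -0.78, "D": -0.90, "C": 0.29, "Q": -0.85,
             "E": -0.74, "G": 0.48, "H": -0.40, "I": 1.38, "L": 1.06, "K": -1.50,
             "M": 0.64, "F": 1.19, "P": 0.12, "S": -0.18, "T": -0.05, "W": 0.81,
             "Y": 0.26, "V": 1.08}

def net_charge(seq: str, ph: float = 7.0) -> float:
    charge = sum(1.0 / (1.0 + 10 ** (ph - PKA_BASIC[aa])) for aa in seq if aa in PKA_BASIC)
    charge -= sum(1.0 / (1.0 + 10 ** (PKA_ACIDIC[aa] - ph)) for aa in seq if aa in PKA_ACIDIC)
    return charge

def hydrophobic_moment(seq: str, delta_deg: float = 100.0) -> float:
    sin_sum = sum(EISENBERG[aa] * math.sin(math.radians(i * delta_deg)) for i, aa in enumerate(seq))
    cos_sum = sum(EISENBERG[aa] * math.cos(math.radians(i * delta_deg)) for i, aa in enumerate(seq))
    return math.sqrt(sin_sum ** 2 + cos_sum ** 2) / len(seq)

for name, seq in [("NATT2_06", "TTLRPKLKSK"), ("NATT4_01", "LYVAKNKYGLGKL")]:
    print(f"{name}: charge(pH 7) = {net_charge(seq):+.2f}, muH = {hydrophobic_moment(seq):.3f}")
```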
BBBpS are thus membrane-active peptides related but distinct of CPP, as CPP are related but distinct from AMP. According to the quantitative methodology adopted by Cavaco et al. , both NATT2_06 and NATT4_01 have moderate to high propensity (Cavaco's score function, S = 0.7) to traverse the BBB. This result opens an avenue for both peptides to be antimicrobial and target organs protected by physiological barriers. This is particularly relevant for NATT4_01 given its long t 1/2 . When assessed against a strain of S. aureus , peptide NATT2_06 demonstrated significant inhibition for all tested concentrations and time intervals ranging from 2 to 12 h. On the other hand, peptide NATT4_01 exhibited significant antimicrobial activity at all tested concentrations starting from 4 h. The peptides demonstrate antimicrobial efficacy against S. aureus within a time frame spanning from 4 to 12 h. Beyond this period, inhibition notably declines and ceases to maintain statistical significance, suggesting that after 12 h, the efficacy of peptide NATT2_06 diminishes, allowing bacterial growth to resume. Furthermore, it's noteworthy that the observed action does not conform to a dose-response pattern, as inhibitory activity fluctuates among the tested concentrations, and the highest concentration does not consistently correspond to the most pronounced antimicrobial effect. S. aureus stands as a prime example of a Gram-positive bacterium that poses a significant global threat to human and animal health . Correspondingly, findings from the current study align with those of Zhang et al. , who reported inhibitory effects of porcine beta defensin 2 (pBD2) against S. aureus within a timeframe of 1 to 8 h, demonstrating up to 80 % microbial survival inhibition within 4 h at a concentration of 150 µg/mL . In comparison, the peptides examined in this study exhibited microbial survival inhibition rates ranging between 32 % and 48.8 % within the same period. Despite the peptides synthesized in this study displaying lower inhibition rates, it's noteworthy that the tested concentrations were lower, ranging from 3.16 µg/mL (NATT2_6 at 3.125 µM) to 73.3 µg/mL (NATT4_01 at 50 µM). Consequently, these findings indicate promising progress in the pursuit of potential antimicrobial agents. In the study of Mohamed et al. , antimicrobial assays against various clinical and drug-resistant strains of S. aureus were conducted using synthetic peptides RRIKA, RR, KAF, and FAK. They found that the RRIKA peptide exhibited antimicrobial activity at concentrations ranging from 2 to 4 μM, while RR showed activity ranging from 8 to 32 μM. Conversely, the KAF and FAK peptides showed no activity against all tested strains up to 64 μM . Our data reveals a significant observation: the highest concentration doesn't always result in the most favorable outcome for both AMPs and PPCs. In both studies, the concentration range showing substantial inhibitory activity was between 2 and 32 μM, despite higher concentrations being evaluated. In addition to being assessed against strains of both gram-positive and gram-negative bacteria, the peptides underwent testing against a strain of the fungus C. auris , recognized as the most pathogenic species within its genus . Upon evaluating cell viability results, as depicted in Fig. 1 e and f, it becomes evident that all peptides tested exhibit a range of inhibitory effects against C. auris between 6 and 12 h. Conversely, peptide NATT4_01 effectively inhibited the growth of C. 
auris at all tested concentrations at both 6 and 12 h. Fig. 2 (a) Cell viability of mouse embryonic fibroblasts L929 (MEFs) treated with peptides. (b) Hemolytic activity of NATT2_06 and NATT4_01 peptides in erythrocytes. The mean standard deviation of three independent experiments is presented. Fig. 2 AMPs known to target fungi can bind to chitin, disrupting the integrity of the fungal cell wall by increasing its permeability or forming pores . Both peptides tested in our study displayed significant inhibition at least at one of the five concentrations assessed against C. auris , indicating potential affinity for chitin similar to the 36-amino acid peptide described by Pushpanathan et al. . Many AMPs exhibit a broad spectrum of antifungal activity, proving effective against various fungal species, including drug-resistant pathogens like Candida albicans, Aspergillus fumigatus , and Cryptococcus neoformans . Candidiasis represents an opportunistic infection impacting immunosuppressed and hospitalized individuals, leading to global concerns. The escalating pharmacological resistance among Candida species and the emergence of multidrug-resistant C. auris pose significant public health challenges . AMPs have undergone extensive investigation to assess their efficacy against C. auris . Studies have shown that numerous AMPs exhibit noteworthy antifungal activity against C. auris in in vitro experiments. This activity encompasses the capability to impede biofilm formation and fungal growth, induce damage to the cell membrane, and facilitate fungal cell death . Among the peptides evaluated, it is evident that NATT4_01 displayed superior results in inhibiting the growth of C. auris , spanning concentrations from 3.125 to 50 μM (equivalent to 4.58 μg/mL to 73.3 μg/mL NATT4_01). Other peptides documented in literature have also exhibited inhibitory effects against C. auris , including histatin-5 at 7.5 μM . Histatin-5, the predominant 24-amino acid product resulting from histatin-3 cleavage, demonstrates the most potent antifungal activity among all histatins . Moreover, recent findings indicate that the peptide LL-37 is capable of inhibiting and eradicating C. auris at concentrations ranging from 25 to 200 µg/mL . Comparing these results with our peptides, it is evident that all fall within a similar concentration range, affirming our progress in the pursuit of effective antifungal agents. Despite numerous efforts to develop antimicrobial peptides (AMPs) as antibiotics, one obstacle hindering the progress of many synthetic AMPs is their unknown toxicological profile upon systemic administration . Recent studies have explored the toxicity of antimicrobial peptides across various cell types and organisms . The results of cytotoxicity testing against L929 cells revealed that the peptides exhibited viability exceeding 75 % across all tested concentrations (ranging from 3.125 to 50 μM), as illustrated in Fig. 2 a. This suggests that the peptides did not induce significant toxicity in these cells. Hoskin and Ramamoorthy investigated the toxicity of various antimicrobial peptides (AMPs) on both normal and cancer cells, underscoring the significance of determining the therapeutic index for these molecules. Moreover, the variability in the toxicity of AMPs has been documented, with certain peptides exhibiting selectivity for bacterial cells, while others may impact eukaryotic cells as well. 
Consequently, further research is imperative to elucidate the interplay between the structure, antimicrobial activity, and toxicity of AMPs across different cell types . In this study, the structure of the tested peptides, along with their antimicrobial activity and in vitro and in vivo toxicity, were assessed. However, it's crucial to acknowledge that the observed in vitro toxicity effects may not necessarily mirror in vivo toxicity or the specific actions on target cells. To access the hemolytic activity, the peptides NATT2_06 and NATT4_01 were incubated with the erythrocytes for 1 h at 37 °C. The highest percentage of observed hemolysis was 2 %, for NATT2_06 at 50 μM . For antimicrobial peptides to be viable for systemic applications, it's crucial for them to demonstrate low toxicity against erythrocytes . The absence of hemolytic activity in the tested peptides is advantageous, as many AMPs are restricted in their use due to their significant hemolytic properties . The outcomes of in vitro assays align with the findings proposed by De Cena et al. , which classified peptides NATT2_06 and NATT4_01 as unlikely to induce hemolysis based on in silico analyzes , further corroborating results obtained with L929 fibroblastic cells, indicating no adverse effects on cell growth. In their research, Ebbensgaard et al. emphasize the correlation between hydrophobicity and hemolytic activity, illustrating how the substitution of specific amino acid residues can augment peptide hemolytic activity when paired with particular amino acid residues crucial for antimicrobial efficacy (such as Leu, Ile, and Thr). Notably, both NATT2_06 and NATT4_01 peptides examined here contain Leu and Thr residues. The study indicates that merely reducing hydrophobicity or achieving a low hydrophobic moment value isn't adequate to annihilate a peptide's antimicrobial activity; instead, the amino acid composition holds significant importance . The reality is that numerous AMPs possess hemolytic properties and can disrupt mammalian cells. Balancing the minimization of cellular toxicity with the maximization of antimicrobial effectiveness poses a significant challenge in the clinical application development of AMPs. The potential for cytotoxicity is an important consideration when it comes to antimicrobial peptides. A common characteristic of positively charged AMPs is nonspecific toxicity. Most known antimicrobial peptides are cationic and cytotoxic . The peptides NATT2_06 and NATT4_01 underwent antiviral assays to validate the findings from the in silico studies outlined by de Cena et al. . In vitro assessments were conducted to evaluate the antiviral efficacy of each peptide in blocking Chikungunya virus (CHIKV) infection or progression. Two distinct assays were performed to gauge the antiviral activity at various stages of CHIKV replication. The outcome was determined based on the peptides' capacity to reduce plaque-forming units (PFU) in the supernatants of Huh-7 cell culture infected 12 h post-infection (h.p.i). The inhibitory effects of peptides NATT2_06 and NATT4_01 on CHIKV replication were dose-dependent, with peptide concentrations ranging from 1.5625 to 50 μM. However, minimal antiviral activity against CHIKV was noted in cells treated with either NATT2_06 or NATT4_01 . Fig. 3 The effect on viral load reduction in post-infection treatment with peptides (a) NATT2_06 and (b) NATT4_01 during the adsorption and replication stages of CHIKV in Huh-7 cell. 
The p-value results (* p < 0.05, ** p < 0.01, and *** p < 0.001) were calculated using Student's t-test with and without Welch's correction for samples with a normal distribution or the Mann-Whitney test (6.25 µM) for samples with a non-normal distribution, using the CHIKV group as a control sample. (c) Percentage of viral inhibition at different concentrations of the NATT2_06 peptide: 19.12 % (50 µM), 11.09 % (25 µM), 16.25 % (6.25 µM) and 16.63 % (3.125 µM). The green bars represent the levels of viral load inhibition in the presence of NATT2_06 in post-infection treatment of CHIKV. Inhibition was assessed in a dose-dependent manner. Fig. 4 The effect on the reduction of viral load in co-treatment during CHIKV infection with the peptides (A) NATT2_06 and (B) NATT4_01. The p-value results were calculated using the Student's t-test with and without Welch correction at all concentrations, using the CHIKV group as the control sample. (C) Percentage of viral inhibition under different concentrations of the peptide NATT4_01: 22.62 % (50 µM), 25.31 % (25 µM), 22.08 % (6.25 µM) and 22.44 %. The figure shows the results of three independent experiments. For the first assay, the treatment with peptides commenced 2 h post-viral infection, following the adsorption and entry stage of the virus into the cell. The final viral load recovered by the cells after the 12-hour assay is illustrated in Fig. 3 a and b. Notably, the inhibitory effect on CHIKV replication was solely evident with the NATT2_06 peptide. At a concentration of 12.5 µM, NATT2_06 exhibited an approximately ten-fold reduction in the quantity of plaque-forming units (PFU) (4.11 ± 0.45 Log10(PFU/mL), p < 0.0001), corresponding to a 21.41 % inhibition of viral load compared to the untreated control (5.23 ± 0.73 Log10(PFU/mL)). The inhibitory effect of NATT2_06 treatment persisted up to a concentration of 3.125 µM (4.36 ± 0.65 Log10(PFU/mL)). Conversely, no significant reduction in viral load was observed with NATT4_01 treatment in this assay, indicating no inhibition of CHIKV replication. The studies outlined by de Cena et al. categorize the NATT4_01 peptide as "Non-CPP," unlike NATT2_06, which is classified as "CPP." These classifications offer insight into the assay results. If a peptide fails to penetrate the cell, it cannot interact with the internalized virus, thereby lacking inhibitory activity, as observed with NATT4_01. In a second assay, peptide treatment was administered concurrently with viral infection. The final viral load recovered by the Huh-7 cells after the 12-hour assay is illustrated in Fig. 4 a and b. Notably, antiviral activity against CHIKV was solely observed in the treatment with the NATT4_01 peptide. An approximately ten-fold reduction in plaque-forming units (PFU) at a concentration of 12.5 µM (3.94 ± 0.18 Log10(PFU/mL), p < 0.0001) was evident, with a corresponding decrease in viral load of 29.26 %, compared to the untreated control (5.57 ± 0.69 Log10(PFU/mL)). This effect of NATT4_01 persisted up to the concentration of 3.125 µM (4.32 ± 0.74 Log10(PFU/mL)). In this assay, no significant reduction in viral load was observed with treatments using NATT2_06, indicating no activity of the peptides on CHIKV. 
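For clarity, the arithmetic behind the fold-reduction and percent-inhibition figures quoted above can be sketched as follows; the percentages reported in the text are consistent with a reduction computed directly on the Log10-transformed titers, which is the convention assumed here (the linear-scale equivalent is also shown for comparison).

```python
# Minimal sketch of the viral-load arithmetic, using the NATT2_06 post-treatment
# values quoted above. The choice of the Log10-scale convention is an assumption
# inferred from the reported percentages, not a statement of the authors' script.

control_log10 = 5.23   # untreated CHIKV control, Log10(PFU/mL)
treated_log10 = 4.11   # NATT2_06 at 12.5 uM, Log10(PFU/mL)

fold_reduction = 10 ** (control_log10 - treated_log10)
inhibition_log_scale = (1 - treated_log10 / control_log10) * 100
inhibition_linear_scale = (1 - 10 ** (treated_log10 - control_log10)) * 100

print(f"fold reduction:           {fold_reduction:.1f}x")            # ~13x
print(f"inhibition (Log10 scale): {inhibition_log_scale:.2f} %")      # ~21.4 %
print(f"inhibition (linear):      {inhibition_linear_scale:.1f} %")   # ~92 %
```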
Arthropod-borne viruses (arboviruses), such as the chikungunya virus (CHIKV), are the primary pathogens of interest for global public health [ , , ]. Therefore, there is a growing need to develop new drugs to treat these viral infections. In this context, AMPs obtained from animal venoms stand out as promising compounds for exhibiting strong antiviral activity against emerging arboviral pathogens . These peptides may have direct antiviral effects on viral particles or replication cycles or exert indirect antiviral effects by modulating the host's immune response . Numerous peptides are undergoing clinical trials as potential antimicrobial agents, owing to their promising antiviral activity against specific viral pathogens and distinctive mechanisms of action. Examples include Myrcludex B, Hepalatide (L47), Adaptavir, and Aviptadil . Additionally, T20© (enfuvirtide) is a peptide currently utilized in the combined therapy of HIV-1 infections, inhibiting the entry of HIV into human cells by preventing viral fusion with the cell membrane. Despite being the sole commercially available peptide for this purpose, T20© faces limitations such as a low genetic barrier to drug resistance and a short in vivo half-life [ , , , ]. Lima et al. investigated the efficacy of Latarcin 1 peptide against CHIKV at varying concentrations ranging from 0.5 to 50 μM, alongside assays involving the NATT peptides. As outlined by Rothan et al. , Latarcin 1 peptide exhibits multifaceted action throughout the viral replication cycle, including entry, assembly/release, fusion, and replication stages. Interestingly, Latarcin 1 demonstrated diminished inhibitory potential against CHIKV during the pre-treatment phase compared to simultaneous addition with the virus inoculum and post-infection treatment. This observation suggests that the peptide exhibits greater potential for inhibiting viral activity when administered alongside the virus or after viral infection, akin to the actions observed with NATT4_01 and NATT2_06 peptides, respectively. Hence, it is plausible that the tested peptides may exert their effects at distinct stages of viral replication. Antiviral peptides are characterized by their cationic nature and amphipathic properties, making them promising candidates for therapeutic applications . These attributes are particularly advantageous in combating enveloped arboviruses like flaviviruses and alphaviruses. The viral envelope originates from host cell membranes, comprising lipid rafts, sphingolipids, and cholesterol, thereby exhibiting an amphipathic nature and negative charge. Consequently, cationic peptides can electrostatically interact with this viral structure, leading to direct virucidal effects or interference with virus binding and fusion during the viral life cycle within host cells . Moreover, they have the capability to disrupt endoplasmic reticulum membranes, thus impeding exponential virus replication . The main advantages of peptides over small chemical compounds are specificity, tolerability, potency, rarer side effects (since the decomposition products are amino acids), and commercial scalability. Moreover, peptides have the potential to interact at the active site of large proteins where protein-protein interaction is essential. Identifying compounds has become much easier now with advances in structural and genomic technologies. However, short half-life, solubility, bioavailability, stability, and natural peptide delivery are the main challenges faced by these peptides . 
The in vivo toxicity of peptides (NATT2_06 and NATT4_01) was assessed using G. mellonella larvae . The highest concentration tested in all assays was 50 µM (equivalent to 58.6 µg/larvae for NATT2_06 and 73.3 µg/larvae for NATT4_01). No significant toxicity was observed in any of the peptide samples tested. Over the course of the 7-day experiment, only three deaths occurred in the group treated with peptide NATT2_06 (on days 5, 6, and 7), while no deaths were observed in the group treated with NATT4_01. Fig. 5 Toxicity of NATT2_06 and NATT4_01 in G. mellonella. Fig. 5 G. mellonella larvae have emerged as a valuable model for assessing both the in vivo toxicity and efficacy of antimicrobial agents . Notably, there exists a robust correlation between the toxicity of food preservatives observed in Galleria larvae and that in rats, underscoring the model's potential for evaluating in vivo toxicity of various compounds . Insects possess a highly sensitive immune response, and the introduction of foreign material, such as pathogens or pathogen-associated material, can trigger a potent antimicrobial immune response within the insect, rendering it resistant to subsequent infections—a phenomenon known as priming . Moreover, the simplicity and precision of inoculation and control procedures have established Galleria as the predominant model organism in larval studies [ , , ]. Circular dichroism (CD) spectroscopy indicated that the secondary structure of these peptides remained unaffected upon interaction with POPC:POPG LUVs. However, the incorporation of cholesterol into the LUVs (POPC:POPG:Chol) slightly altered only the secondary structure of NATT4_01 . The zeta potential, derived from the mobility of cells in an electric field under defined pH and salt conditions, offers insights into cell surface charge . Assessments of zeta potential using model membrane systems revealed variations across different peptides, indicating that NATT4_01 more efficiently achieves charge neutralization in both POPG:POPC (2:1) and POPC:POPG:Cholesterol (2:1:1) setups . Fig. 6 Zeta-potential for membrane model systems in the presence of (A) NATT4_01 and (B) NATT2_06. Bars represent the zeta-potential range. POPC:POPG (1:1) (orange squares); POPC:POPG (2:1) (pink triangles) and POPC:POPG:Chol (2:1:1) (blue circles). The lipid concentration was kept constant at 200 mM, while peptide concentration ranged from 0 to 30 μM. Fig. 6 Moreira Brito investigated the synthetic antimicrobial peptide LyeTx Ib Cys, derived from LyeTx I found in the venom of the spider Lycosa erythrognata . Their study revealed that when subjected to zeta potential tests on POPC:POPG LUV membranes, it elicited an increase in the membrane's surface charge, even at relatively low concentrations ranging from 20 μM to 40 μM . These concentrations mirror those used in assays with peptides NATT2_06 and NATT4_01, indicating that synthetic antimicrobial peptides indeed interact with membranes. Another investigation into zeta potential analysis in POPC:POPG LUVs, this time utilizing the lipopeptide polymyxin B, indicates that in the presence of the peptide, the zeta potential data display a trend towards less negative values. This outcome suggests that initial electrostatic interactions play a significant role in peptide binding . 
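For reference, the Smoluchowski conversion that underlies these zeta-potential readings can be sketched as below; the solvent viscosity is the value quoted in the methods, the relative permittivity of water at 25 °C is an assumed textbook value, and the example mobility is illustrative rather than a measured datum.

```python
# Minimal sketch of the Smoluchowski limit used by the Zetasizer:
# zeta = eta * mu / (eps_r * eps_0), with mobility entered in um.cm/(V.s).

EPSILON_0 = 8.854e-12        # vacuum permittivity, F/m
EPSILON_R_WATER = 78.5       # relative permittivity of water at 25 C (assumed)
VISCOSITY_PA_S = 0.8872e-3   # 0.8872 cP, from the methods section

def zeta_mV(mobility_um_cm_per_Vs: float) -> float:
    """Convert electrophoretic mobility to zeta potential (Smoluchowski approximation)."""
    mobility_si = mobility_um_cm_per_Vs * 1e-8  # convert to m^2/(V.s)
    return VISCOSITY_PA_S * mobility_si / (EPSILON_R_WATER * EPSILON_0) * 1e3

print(f"{zeta_mV(-2.0):.1f} mV")  # an illustrative anionic-LUV mobility gives ~ -25.5 mV
```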
Despite interacting with LPS, the membrane is not completely neutralized, as seen in assays with peptides derived from natterins, hinting that polymyxin B may not fully access the negative charges of LPS aggregates, similar to the behavior expected from the peptides studied by our group. Moreover, Domingues and colleagues propose that most cationic peptides can prompt aggregation of negatively charged lipid vesicles at concentrations considered high. Their study also highlights that many hydrophobic peptides can interact with neutrally charged lipids and induce their aggregation, suggesting these properties hold promise in the design of new peptides with antibiotic activity. This study investigates the intricate dynamics of antimicrobial and cell-penetrating peptides (AMPs and CPPs) derived from Natterin toxin, exploring their stability, antimicrobial efficacy, cytotoxicity, and antiviral activity. The findings underscore the peptides' robust stability under varying temperatures and pH conditions, alongside notable resistance to proteolytic degradation. The antimicrobial assays reveal significant efficacy against P. aeruginosa, S. aureus and C. auris, with varying degrees of inhibition observed across different time intervals and concentrations. Moreover, the minimal cytotoxicity and hemolytic activity demonstrated by the peptides enhance their potential as viable therapeutic agents. The antiviral assays, although revealing limited efficacy against the Chikungunya virus, highlight distinct stages of viral replication where the peptides may exert their effects. Additionally, the in vivo toxicity assessment using G. mellonella larvae provides promising indications of the peptides' safety profiles. Finally, the zeta potential measurements offer insights into the peptides' interactions with model membranes, further elucidating their potential mechanisms of action. As the landscape of antimicrobial resistance continues to evolve, the continuous exploration and refinement of AMPs and CPPs are imperative. This study not only contributes valuable data to the existing body of knowledge but also paves the way for future research endeavors aimed at harnessing the full therapeutic potential of these peptides. The journey towards effective antimicrobial and antiviral agents is arduous, yet the insights gained from this research offer a beacon of hope in the field of drug development and delivery. The study and protocols were approved by the ethics committee of the centre. The study was conducted according to good clinical practice and the Declaration of Helsinki. This work was supported by the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) [2021/04316-9]. Gabrielle L. de Cena: Writing – review & editing, Writing – original draft, Investigation, Formal analysis, Data curation. Dayane B. Tada: Writing – original draft, Methodology, Investigation, Formal analysis. Danilo B.M. Lucchi: Methodology, Investigation, Formal analysis. Tiago A.A. Santos: Writing – original draft, Methodology, Formal analysis, Data curation. Montserrat Heras: Writing – review & editing, Writing – original draft, Methodology, Investigation, Formal analysis, Conceptualization. Maria Juliano: Methodology, Investigation, Formal analysis. Carla Torres Braconi: Writing – review & editing, Writing – original draft, Methodology, Investigation, Formal analysis. Miguel A.R.B. Castanho: Writing – review & editing, Writing – original draft, Validation, Investigation, Formal analysis, Data curation. 
Mônica Lopes-Ferreira: Writing – review & editing, Writing – original draft, Supervision, Investigation, Formal analysis, Conceptualization. Katia Conceição: Writing – review & editing, Writing – original draft, Supervision, Resources, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Review | biomedical | en | 0.999997 |
PMC11697424 | Recent years have seen great advances in the treatment of oncology patients. Still, toxicity related to radiotherapy and chemotherapy treatment, or the combination of both, remains high. Cancer patients who undergo this type of therapy often present with symptoms that severely impair their clinical, functional, and nutritional outcomes. Specifically, radiotherapy to the pelvic region has been found to be a main cause of nutritional deterioration, mainly due to radiation enteritis, which causes diarrhea, mucositis, abdominal pain, and, to a lesser extent, constipation ( 1 ). Diarrhea related to cancer treatment (DRTO) is a side effect that causes deterioration of the patient's nutritional status, treatment interruptions, frequent hospitalizations, and impairment of quality of life ( 2 , 3 ). The prevalence of DRTO can reach up to 74% of cancer patients, depending on radiation doses, cancer treatment, female sex, low BMI, advanced age, and having undergone abdominal surgery ( 3 ). It is essential to treat DRTO early and perform the most appropriate intervention to minimize its progression to more severe states that could compromise the continuity of cancer treatment and patient survival ( 4 ). In clinical practice, early and precise nutritional intervention can favor the control of diarrhea, cover nutritional needs, and promote good nutritional status ( 5 ). Cancer patients frequently present with a high risk of malnutrition per se due to the tumor itself, its location and extent, the oncologic treatment received (surgery, radiotherapy, chemotherapy), the toxicity related to it, the metabolic changes that develop, and their social environment ( 6 ). Previous studies have shown that malnutrition leads to a higher rate of hospital admissions, longer hospital stays, a lower quality of life, and higher mortality related to a decrease in the tolerance of oncologic treatments ( 7 ). Considering the negative effects of malnutrition in cancer patients, it is essential to detect it early and provide optimal nutritional support to minimize its progression. Given the high prevalence of DRTO and malnutrition in the cancer patient, it is striking that clinical practice guidelines focus their recommendations on the pharmacological treatment of diarrhea but do not specifically address the nutritional support needed by patients ( 3 , 8 – 10 ). The nutritional support plan will range from dietary advice (DA) to the use of commercial formulations, including oral nutritional supplements, enteral tube feeding, or even parenteral nutrition, depending on the severity and persistence of symptoms ( 11 ). Oral nutritional supplements may prove the most common and effective tool to address both problems, as long as adequate adherence to treatment is achieved ( 12 ). A peptide diet (PD) may be a nutritional therapy option for patients with DRTO due to its ease of absorption, suppression of pro-inflammatory cytokine production, and maintenance of mucosal integrity ( 13 – 15 ). There are few published studies, however, on the efficacy of peptide-based enteral nutrition in patients with diarrhea associated specifically with colorectal cancer therapy, although there are studies with enteral supplementation with glutamine that show positive results in reducing the severity and symptomatology of patients with radiation enteritis ( 16 ). 
The main studies on PD published to date have been conducted in cancer patients undergoing chemo-radiotherapy, but for tumors of the oral mucosa, esophagus, stomach, or pancreas, showing heterogeneous results ( 17 – 24 ). For all these reasons, Sanz-Paris et al. ( 25 ) published an algorithm on the nutritional management of DRTO based on an oligomeric formula. Based on this algorithm, these authors reported on the clinical and nutritional efficacy of implementing this protocol in clinical practice, with very promising results ( 26 ). In 2023, Peña Vivas et al. ( 27 ) published a clinical study demonstrating that supplementation with PD reduces DRTO compared with a polymeric diet, contributing to the functional and nutritional improvement of patients with rectal cancer in the neoadjuvant setting. The aim of the study was to evaluate the efficacy of nutritional supplementation with a glutamine-enriched peptide diet (PD) compared to exclusive dietary advice (DA) on gastrointestinal toxicity, interruption of radiotherapy treatment, and nutritional status in patients with rectal cancer undergoing neoadjuvant chemo-radiotherapy. This was a two-group cohort study performed in patients with rectal adenocarcinoma undergoing neoadjuvant treatment, from May 2021 to July 2023. Adult patients with a diagnosis of adenocarcinoma of the rectum (confirmed by biopsy) undergoing treatment with neoadjuvant chemo-radiotherapy were recruited. Patients with severe renal, cardiac, respiratory, or hepatic disease, pregnant or lactating women, or patients with an allergy or intolerance to any of the ingredients of the formula under study were excluded. The randomization procedure was performed by the person responsible for the study's statistical analysis, using a random number table. Each patient received a participant number that assigned him/her to a specific group (PD or DA). Distribution between groups followed a 1:1 ratio. Patient follow-up was conducted through a series of scheduled visits to assess patient status at different stages of treatment and post-treatment. The evaluations were conducted in three key visits: Visit 1 (V1, 15–20 days before starting radiotherapy), Visit 2 (V2, during radiotherapy) and Visit 3 (V3, at the end of radiotherapy). In addition, for patients undergoing surgery, a further evaluation was conducted 30 days post-surgery. In V1, baseline demographic data (sex and age) were collected, along with clinical data related to the oncologic diagnosis and treatment. To determine the effect of nutritional supplementation, the following evaluations were performed in the subsequent visits: Intestinal toxicity: Using the Common Toxicity Criteria version 5.0 of the National Cancer Institute (CTCAE v5.0), the degree of gastrointestinal toxicity associated with cancer treatment was evaluated: nausea, vomiting, abdominal pain, intestinal mucositis, diarrhea, and constipation. In addition, the following were collected: total radiation volume dose (cc), the minimum, average, and maximum bowel irradiation (percentage and Gy), and the volume of irradiated bowel (V40 < 150 cc) in the short and long cycles. Functionality: The scale designed by the Eastern Cooperative Oncology Group (ECOG) was used. The ECOG scale assesses the evolution of the patient's capabilities in daily life while maintaining maximum autonomy. These data are critical when considering treatment since the therapeutic protocol and the prognosis of the disease depend on this scale. The ECOG scale is scored from 0 to 5. 
Radiotherapy treatment interruptions: The percentage of patients who required interruption of treatment during follow-up was collected. Nutritional status: Anthropometric data were collected (weight, height, calculation of the percentage of weight lost, and calculation of the body mass index), together with body composition (percentage of fat mass and percentage of fat-free mass) and analytical data (total protein, albumin, prealbumin, C-reactive protein, cholesterol, and triglycerides), and a diagnosis of malnutrition was made following the GLIM criteria. Surgical complications: The percentage of patients who underwent surgery and presented with infectious complications, fistulas, re-interventions, re-admissions, or death 30 days after surgery was recorded. Hospital stay was also recorded. Sensory evaluation of the nutritional supplement: A sensory evaluation was carried out in which odor, color, flavor, and perceived texture were evaluated on a semi-quantitative Likert scale (0–5). The responses were qualitatively classified as: very bad = 0, bad = 1, fair = 2, good = 3, very good = 4, or excellent = 5. Following the ESPEN recommendations for cancer patients, all patients received dietary recommendations to increase energy and nutrient intake through regular dietary intake. Moreover, patients in the intervention group that received the peptide diet were instructed to take 1–2 containers of the nutritional supplement daily (according to the nutritional needs to be covered) from day 1 of radiotherapy until the time of surgery, continuously, for a total of 12 weeks. The formula studied ( Table 1 ) was PD (Bi1 PEPTIDIC ® , Adventia Pharma), an oligomeric, hypercaloric, high-protein oral nutritional supplement (ONS) without fiber. The study was carried out in accordance with the Helsinki declaration. The study protocol, the patient information sheet, and the informed consent form were approved by the Ethics Committee for Research with Medicines of the Hospital Universitario de Gran Canaria Doctor Negrín in May 2021. All patients were informed of the conditions of participation in the study and agreed to participate after signing the informed consent form. The statistical analysis was carried out using the SPSS 22.0 program (IBM). Quantitative variables were evaluated for normal distribution with the Kolmogorov–Smirnov test and expressed as mean and standard deviation. Comparisons of quantitative variables between groups were performed with Student’s t-test. Qualitative variables are expressed as absolute frequencies and percentages. For comparisons of qualitative variables between groups, the chi-square test and the calculation of the relative risk (RR) with its 95% confidence interval were used. Subanalyses were performed by tumor stage, oncologic treatment (short or long course), and diagnosis of malnutrition. A p-value of less than 0.05 was considered significant. Fifty-four patients diagnosed with rectal adenocarcinoma under neoadjuvant treatment were initially selected. Fifty-one patients were randomized to the peptide-diet group (25 subjects) or the dietary-counseling group (26 subjects). No enrolled patients were excluded, and all completed the intervention and follow-up period. Table 2 presents the demographic and clinical parameters, with no differences found between intervention groups.
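For readers who wish to reproduce the type of between-group comparison described in the statistical analysis above (chi-square test plus relative risk with a 95% confidence interval), a minimal sketch in base R is given below. The authors report using SPSS 22.0, so this is only an illustration of the same calculation; the 2 × 2 counts are hypothetical, not study data.

# Illustrative only: comparison of a binary toxicity outcome (e.g., diarrhea
# grade >= 2) between the PD and DA groups; counts below are hypothetical.
tox <- matrix(c(2, 23,    # PD group: events, non-events
                10, 16),  # DA group: events, non-events
              nrow = 2, byrow = TRUE,
              dimnames = list(group = c("PD", "DA"),
                              outcome = c("event", "no_event")))
chisq.test(tox)                                   # chi-square test of independence
risk   <- tox[, "event"] / rowSums(tox)           # risk of the event in each group
rr     <- risk["PD"] / risk["DA"]                 # relative risk, PD vs. DA
se_log <- sqrt(1 / tox["PD", "event"] - 1 / sum(tox["PD", ]) +
               1 / tox["DA", "event"] - 1 / sum(tox["DA", ]))
ci <- exp(log(rr) + c(-1.96, 1.96) * se_log)      # Wald 95% CI on the log scale
c(RR = unname(rr), lower = ci[1], upper = ci[2])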
Overall, 52.9% of patients received chemotherapy with capecitabine, 33.3% with FOLFOX (leucovorin calcium, fluorouracil, and oxaliplatin), and 11.8% with XELOX (capecitabine and oxaliplatin), while 2.0% did not receive chemotherapy, with no differences among the intervention groups. Regarding associated metabolic pathologies, 19.6% had diabetes mellitus, 39.2% had dyslipidemia, and 5.9% had heart disease, with no differences between intervention groups. The prescribed supplementation pattern was 1 carton/day in 52% of the patients and 2 cartons/day in 48% throughout the intervention period. Regarding actual supplement intake, 29.2% stopped taking the supplement, 50% took 1 carton/day, and 20.8% took 2 cartons/day. There were no differences between groups in the prevalence of nausea, vomiting, and abdominal pain at the visits performed, but there were differences in the presence of intestinal mucositis and diarrhea at the final visit, with more cases in the group that received DA ( Table 3 ). When grouping the toxicity grades at ≥ 2, toxicity related to the development of diarrhea was confirmed as more frequent in the DA group at the intermediate visit, with an RR of 0.218 (95% CI = 0.052–0.923), and at the final visit, with an RR of 0.103 (95% CI = 0.020–0.537). This situation was also confirmed for the development of mucositis at the final visit, with an RR of 0.405 (95% CI = 0.280–0.584). In the subanalysis performed by radiotherapy treatment (long or short), in both cases it was again confirmed that mucositis at the final visit was more prevalent in the DA group (long: 33.3 vs. 0%, p = 0.023; short: 30.8 vs. 0%, p = 0.036). With respect to diarrhea, it was more frequent in the DA group at the final visit (long: 54.6 vs. 7.7%, p = 0.012; short: 38.5 vs. 8.3%, p = 0.047). In the sub-analysis performed by stage, in stage III, mucositis at the final visit was more prevalent in the DA group (38.9 vs. 0%, p = 0.002), as was diarrhea (38.9 vs. 5%, p = 0.011). In the sub-analysis stratified by nutritional status, among patients with malnutrition, the prevalence of mucositis at the final visit was significantly higher in the DA group (30 vs. 0%, p = 0.024). Additionally, the incidence of diarrhea was greater in the DA group at both the intermediate visit (45.5 vs. 6.7%, p = 0.020) and the final visit (50 vs. 6.7%, p = 0.013). Among patients with adequate nutritional status, no significant differences were observed in the incidence of diarrhea. However, mucositis at the final visit remained more prevalent in the DA group (35.7 vs. 0%, p = 0.034). A lower rate of interruptions was observed in the group treated with PD (0%) than in the DA group (11.5%), although the difference did not reach statistical significance ( p = 0.070). In the subanalysis by stage, stage III patients receiving DA were observed to have a higher frequency of interruptions of radiotherapy treatment (15.8 vs. 0%, p = 0.049). In the sub-analysis performed by malnutrition, patients with malnutrition who received DA were observed to have a higher frequency of interruptions in radiotherapy treatment (18.2 vs. 0%, p = 0.040). No differences were observed between groups, or over the course of the visits, in functional capacity measured by ECOG ( Table 4 ). In both groups a deterioration of nutritional status was observed, especially in the DA group. Regarding anthropometric and body composition parameters, no differences were detected throughout the evolution ( Table 5 ).
Regarding the analytical analysis, differences between groups were detected in the values of prealbumin in the final determination, but they were not consistent compared to the initial parameters ( Table 5 ). Of the total patients, 41 underwent surgery (20 in the DA group and 21 in the PD group). No differences were observed between groups in the surgical complications evaluated ( Table 6 ), nor in hospital stay [8.84 (10.64) days in the PD group vs. 8.60 (12.11) days in the DA group, p = 0.944]. Adequate acceptance of the peptide diet under study was observed . The mean score for odor was 3.04 (1.55), color 3.71 (1.46), flavor 3.25 (1.36), and texture 3.21 (1.35). Gastrointestinal toxicity, especially diarrhea and mucositis, are frequently present in patients with colorectal cancer. In our study, the comprehensive treatment of both clinical situations with a peptide enteral nutrition formula enriched with glutamine reduced the digestive toxicity associated with oncologic treatment much more than the usual clinical practice consisting of dietary advice. The population recruited in both groups of nutritional intervention was completely homogeneous, with no difference detected between groups. In general, older patients were recruited, mainly males, with a diagnosis of rectal adenocarcinoma, mostly stage III, susceptible to receiving neoadjuvant chemo-radiotherapy treatment. Malnutrition was present in one out of two patients at the beginning of the study. The PD diet achieved an improvement in DRTO with respect to the group that received DA exclusively. Specifically, stage III patients and patients with malnutrition presented a lower incidence of diarrhea when receiving PD compared to those who followed standard clinical practice with DA. Focusing exclusively on the peptide diet, the study by Sanz-Paris et al. ( 26 ) determined the number of stools and their consistency with the Bristol scale but did not measure the intestinal toxicity of diarrhea with the CTCAE 5.0 scale, making the results of our studies difficult to compare. The study by Peña Vivas et al. ( 27 ) did measure the presence of diarrhea with the CTCAE 5.0 scale but did not determine the degrees of toxicity, which were recorded in our study. In this case, at the final visit the prevalence of toxicity was 8% in PD and 45% in DA, values very similar to those detected by this group (5% in the PD group and 85% with a polymeric diet), also achieving in both cases a statistically significant reduction in RR in favor of the PD group ( 27 ) [RR of 0.103 (95% CI = 0.020–0.537) vs. RR of 0.059 (95% CI 0.015–0.229)]. In addition to an improvement in DRTO, an improvement in intestinal mucositis was observed in the group that received PD, an aspect of great clinical effect for the patient. In the literature reviewed, only in the study by Peña Vivas et al. ( 27 ) was this variable evaluated, and in both cases a decrease in RR in favor of PD was observed [RR of 0.405 (95% CI = 0.280–0.584) vs. RR of 0.202 (95% CI 0.102–0.399)]. Certain metabolic alterations and potential improvements may arise from the effects of the test diet (PD), likely attributed to specific bioactive components such as extra virgin olive oil (EVOO), glutamine, and the omega-3 fatty acids EPA and DHA. EVOO is rich in the phenolic compound oleocanthal, which exerts potent anti-inflammatory effects by inhibiting cyclooxygenase (COX) enzymes, specifically COX-1 and COX-2, key mediators in the biosynthesis of pro-inflammatory molecules. 
Attenuation of chronic inflammation, both at the intestinal and systemic levels, may significantly optimize metabolic function by reducing oxidative stress and downregulating the production of pro-inflammatory cytokines, such as IL-6 and TNF-α. This systemic anti-inflammatory effect may enhance nutrient utilization efficiency and facilitate the restoration of energy metabolism compromised by oncologic treatments ( 28 ). Glutamine, a conditionally essential amino acid, plays a pivotal role in the energy metabolism of enterocytes (intestinal epithelial cells). Under conditions of metabolic stress, such as those induced by cancer therapies, glutamine demand escalates due to its critical involvement in cellular repair and regenerative processes. Exogenous glutamine supplementation via the test diet may promote intestinal homeostasis by upregulating protein synthesis, reducing intestinal permeability, and preserving epithelial barrier integrity. These effects could enhance nutrient absorption and attenuate the protein catabolism linked to systemic inflammation and treatment-induced toxicity, thereby supporting improved nutritional status ( 29 ). EPA and DHA are implicated in mitigating metabolic dysfunctions triggered by cancer therapies and in enhancing patient immune function through modulation of inflammatory pathways and cell membrane fluidity ( 30 ). Finally, hydrolyzed proteins are characterized by an accelerated absorption profile within the gastrointestinal tract, leading to a more rapid increase in plasma amino acid levels. This faster digestion rate enables a quicker entry of amino acids into circulation, thereby augmenting the anabolic response in skeletal muscle. Additionally, hydrolyzed proteins reduce splanchnic amino acid extraction, thereby increasing peripheral availability to tissues such as muscle, and enhancing postprandial protein synthesis ( 31 ). One of the most noteworthy results of the study is the reduction in the number of interruptions of radiotherapy treatment in the group that received PD when the oncologic stage was III and patients were malnourished. This variable was not evaluated in any of the studies reviewed, so it could shed light on the clinical effect of specific nutritional treatment with peptide formulas in patients with rectal cancer and malnutrition. Regarding the effect on nutritional status as measured by GLIM criteria, both groups recovered during follow-up, although recovery was greater in the PD group. In the case of anthropometric, body-composition, and analytical variables, there were no statistically significant differences between intervention groups. Other studies carried out with elemental and peptide diets ( 22 – 24 ) and peptide diets ( 26 , 27 ) also showed an improvement in nutritional status at the anthropometric and analytical levels, but with greater robustness. This situation could be justified by a major methodological difference, since our study did not solely include patients at risk of malnutrition, which could have influenced the results obtained in the improvement of nutritional status. No differences were detected in the frequency of surgical complications or hospital stay between groups. This situation could be explained by the fact that the peptide formula under study was not enriched with immunonutrients (arginine and nucleotides), nor did it contain the doses of omega-3 (EPA and DHA) that have been shown to be effective in the clinical improvement of the surgical patient (2–3 g/day) ( 6 ).
As limitations of the study, it should be noted that a dietary record was not collected, which limited our ability to assess how the patient’s overall intake may have influenced their nutritional evolution. In addition, the nutritional supplementation pattern in the PD group was not homogeneous, since it was adapted to the specific nutritional needs of each patient. This situation could also be regarded as reflecting standard clinical practice, since the nutritional-support regimen should always be individualized to the nutritional needs of the patient. As strengths of the study, it should be noted that this is the first study in which the efficacy of a peptide enteral nutrition formula was evaluated with respect to interruptions of radiotherapy treatment. This shows that specific nutritional support with a peptide formula goes beyond the simple recovery of the oncologic patient’s nutritional status and also has an effect on their clinical improvement, reducing digestive symptoms that compromise their overall evolution and tolerance of oncologic treatment. This may be due to, among other possible factors, the use of partially hydrolyzed protein, the fact that the fat intake is mainly from medium-chain triglycerides (MCT), or the fact that glutamine, the main amino acid of the enterocyte, has been supplemented. In conclusion, the glutamine-enriched peptide diet had a protective effect on the development of gastrointestinal toxicity associated with antineoplastic treatment, specifically on the development of DRTO and intestinal mucositis, and reduced the interruptions of oncologic treatment in patients with colorectal cancer undergoing radiotherapy and chemotherapy.
PMC11697425 | Substance use is a major public health concern globally ( 1 ). Disability-adjusted life years (DALYs) are often used as a measure of impact of disease states or health behaviors on health-related quality of life. One DALY represents the loss of the equivalent of one year of full health. In 2016, 4.2% of DALYs globally were attributable to alcohol use, while 1.3% were attributable to other substance use ( 1 ). Annually, about 11.8 million deaths are linked to substance use ( 2 ), with alcohol alone causing three million deaths worldwide ( 3 , 4 ). In North America, in 2019, substance use disorders (SUDs) ranked 5 th for years lived with disability (YLDs) and 15 th for years of life lost (YLLs) ( 5 ). Among countries in South and North America, Canada ranks second in terms of DALYs ( 5 ). Furthermore, Canada experiences approximately 67,000 deaths each year as a result of substance use ( 6 ). Based on data from the 2012 Canadian Community Health Survey – Mental Health, about 6 million Canadians (21.6%) met the criteria for SUD in their lifetime ( 7 ). In Nova Scotia, the lifetime prevalence of SUD was 30.2%, the second highest in the Canadian provinces ( 8 ). SUD and mental illnesses often co-occur ( 9 ). Substance use can exacerbate symptoms of mental illnesses, while conversely, mental illnesses can drive individuals towards substance use as a form of coping or self-medication ( 10 ). A study conducted in Ontario showed that the prevalence of SUD varies from 17.1% among individuals with anxiety disorder to 34% among individuals with personality disorder ( 11 ). Moreover, polysubstance use is common among those with mental health disorders. For example, a study conducted in Nova Scotia showed that the prevalence of comorbid alcohol and cannabis use disorders among patients with a psychotic disorder was 50.0%, while the prevalences of alcohol use disorder alone and cannabis use disorder use alone were 12.5% and 20.8%, respectively ( 12 ). SUD among individuals with mental illnesses can lead to misdiagnosis, delayed intervention, relapse, poor prognosis, and poorer overall health ( 13 ). Thus, understanding the substance use profile among individuals with mental health needs is essential for tailoring effective interventions, addressing their specific needs, managing comorbidities, and improving treatment outcomes ( 14 ). Most of the current studies about substance use are predominantly centered on clients already engaged in mental health and addiction services or on the general population. Less attention has been paid to individuals in the early stages of help-seeking. To the best of our knowledge, no study has investigated substance use profiles among the ‘pre-clinical’ population of those seeking mental health and addiction (MHA) services but who have yet to see a clinician. Also, no study has examined substance use disparities based on gender, race, or ethnicity, and socio-economic status among this population. Understanding how gender, ethnicity, and income sources intersect to influence substance use patterns among individuals seeking MHA services is essential for designing prevention and treatment strategies tailored to an individual’s unique needs. Additionally, understanding the epidemiology of substance use and its intersectionality with socio-demographic features in this population has significant implications for planning MHA services. 
Furthermore, exploring substance use profiles among individuals with mental health needs is pivotal for developing early interventions, personalized treatment plans, and targeted resource allocation. Examining the various routes of substance administration among individuals seeking MHA services can provide insight into potential health risks, helping to inform harm reduction strategies. Therefore, the objectives of this study were to investigate, among MHA intake clients in Nova Scotia: 1) the prevalence of substance use by gender, ethnicity, and income source; 2) the routes of substance administration; and ( 3 ) factors associated with substance use. All clients aged 19-64 years who were assessed by MHA Intake between January 2020 and December 2021 were included. The MHA Central Intake was established in 2019 by the Department of Health and Wellness of Nova Scotia as the entry point of MHA specialty services in the province and is the primary single entry-point of MHA services in Nova Scotia for this age range (individuals 18 years and younger are service through the child and adolescent system, and those 65 years and older are referred directly to geriatric specialty services). Individuals with symptoms of mental health and/or addiction problems living in any region of Nova Scotia (Northern, Eastern, Western, and Central zones) can directly contact MHA central intake using a toll-free telephone number. This central intake screens individuals with mental health and/or addiction problems and promptly links them to the appropriate level of mental health and addiction care. The intake process involves a semi-structured interview with the client by a clinician (e.g., clinical therapist, social worker, or registered nurse) over the telephone or in person. The information gathered during the interview was recorded on the electronic Intake Assessment form, which, once finalized, becomes an integral part of the individual’s permanent health record ( 15 ). This study was a secondary data analysis using existing de-identified data. Given the large number of clients in the database and vast area where they lived, obtaining informed consent from each client was not feasible. This study was approved by the Research Ethics Board of the Nova Scotia Health Authority. Individuals at higher risk of suicide were referred to psychiatrist for further evaluation, while those at mild and moderate risk received support from health professionals and psychologists working in the MHA Central Intake. The current substance use screening by MHA Intake included current use of alcohol, cannabis, hallucinogens, inhalants, opioids, sedatives/hypnotics, stimulants, and tobacco. The frequency of using these substances was evaluated via a questionnaire and included options of 2-4 times a month, 2-3 times a week, four or more times a week, and daily. The method(s) of administering each substance used was also queried, including oral, intravenous, inhaling, intramuscular, subcutaneous, smoking, snorting, transdermal patch, and/or rectal administration routes. The frequency of substance use was recoded into three categories of use: occasional use (2-4 times a month), frequent use (2-3 times a week, and four or more times a week), and daily use. 
The following mental health problems were assessed and current/provisional diagnoses were made based on the client’s report: depression, anxiety disorder, bipolar disorder, attention-deficit/hyperactivity disorder, adjustment disorder, autism, eating disorder, neurocognitive disorder, obsessive-compulsive disorder, personality disorder, psychotic disorder, posttraumatic stress disorder, and substance use disorder. We aggregated the presence of current or past provisional diagnoses of mental health disorders into a single variable with three levels: no mental health disorder (coded as 0); provisional diagnosis of one current/past mental illness (coded as 1); or two or more provisional diagnoses of current/past mental illnesses (coded as 2). Clients were interviewed for the presence current/past medical illnesses and we aggregated the presence of current or past provisional diagnoses of medical illnesses into a single variable with three levels: no current/past medical illness (coded as 0); a provisional diagnosis of one current/past medical illness (coded as 1); or two or more provisional diagnoses of current/past medical illnesses (coded as 2). Clients were assessed for past suicide attempts, suicidal ideation in the two weeks before the interview, and current suicidal ideation (at the time of the interview). The clinician who conducted the interview classified clients into low, moderate, or high suicide risk levels based on a suicide risk assessment and intervention tool ( 15 ). Clients were assessed to determine if they had experienced current/past psychosocial stressors in the following areas: childhood adversity, abuse or other trauma, economic/financial, educational/school, ethnic/cultural, spiritual/religious beliefs, family and/or significant relationship, social relationships, housing or legal issues, leisure/recreational, military, parent/guardian–child conflict, or physical health/disability, and how these stressors affected their functioning ( 15 ). In this analysis, we classified psychosocial stressors into three categories: the absence of any such stressors (coded as 0); the experience of one such stressor (coded as 1); or the experience of two or more psychosocial stressors (coded as 2). Clients were queried on gender identity, age, marital status, income source(s), ethnicity, living conditions, access to employee assistance programs (EAP) or private insurance, and Nova Scotia health zone (Northern, Eastern, Western, or Central). We first examined the frequency of each variable, its distribution, and rates of missing values. We selected 128 variables with missing values for imputation based on our objectives. Multiple Imputation by Chained Equations (MICE) was used to impute variables with missing values (missing at random). We opted for MICE as our method of choice because of its flexibility in generating multiple predictions for each missing value. This approach relies on the variable’s distribution, the observed values for a given participant, and the correlations observed in the dataset for other participants ( 16 , 17 ). In this study, the imputed variables with missing values ranged from 0.004% (for bromazepam [a sedative/hypnotic] route of administration) to 20.9% (impact of mood symptoms on functioning). Traditionally, a small number of imputations (five to ten) are commonly used ( 18 , 19 ). 
However, to achieve a better estimate of standard error, a higher number of imputations are recommended, which is at least equal to the average percentage rate of missing values, as a rule of thumb ( 18 , 19 ). Considering the average percentage rate of missing values in our study (i.e., 0.76%), we used five imputations with a maximum iteration of 20. The imputed datasets were used to complete variables with missing values and Rubin’s rules were used to pool estimates in our analysis ( 20 ). Descriptive statistics were used to report on socio-demographic characteristics of the sample and rates of use of each substance. To reduce the complexity of the analysis and increase the interpretability of the results, for objective one, the frequency of using substances such as alcohol, opioids, stimulants, cannabis, hallucinogens, sedatives/hypnotics, tobacco, and other substances (nitrous oxide, cough syrup, caffeine pills) was aggregated to yield one composite variable labelled “frequency of substance use.” To derive this composite frequency of substance use variable, we retained the highest frequency score from among the individual frequency of alcohol, opioid, amphetamine/methamphetamine, cocaine, cannabis, hallucinogens, sedatives, and/or other substance use items. For example, if the client’s responses were ‘not using’ for alcohol, ‘2-4 times a month’ for opioids, ‘2-3 times a week’ for cocaine and amphetamine/methamphetamine, and ‘daily’ for cannabis, their overall frequency of substance use was coded as ‘daily’. Then, the client’s overall frequency of substance use was re-coded into three categories: occasional use (2-4 times a month), frequent use (2-3 times a week and four or more times a week), or daily use. We then calculated the proportion of the sample who were using substances and the proportions using at each frequency category. We also computed these two substance indices as a function of gender, ethnicity/race, and income source. For objective two, we calculated the proportion each route of administration for users of each substance. Multinomial logistic regression was employed to investigate factors associated with occasional, frequent, and daily substance use compared to abstaining from substance use. First, we included demographic and socio-economic variables as predictors in the multinomial logistic regression model without introducing any interactions between variables. Then, two-way interactions between gender and other predictor variables were included in the multinomial logistic regression model, along with demographic and socio-economic variables, history of mental and physical illnesses, suicide risk, and psychosocial stressors. Pooled adjusted odds ratios and corresponding 95% confidence intervals were used to estimate the strength of association. The analysis was conducted utilizing R software (version 4.2.3). A total of 22,500 clients who contacted MHA intake from 2020 to 2021 were included in this study . The most frequently reported substances used were alcohol (47.3%), tobacco (44.4%), cannabis (38.4%), and cocaine (8.8%) . The prevalence of polysubstance use was 44.4%. Among the participants, 36.1% used a substance daily, while 10.0% and 12.4% used it frequently and occasionally, respectively. A significantly higher prevalence of daily substance use was observed among men (44.7%, p < 0.001) than among women (29.3%), non-binary individuals (32.3%), and those who did not specify their gender (36.7%). 
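A rough sketch of how the imputation-and-modelling workflow described above could be assembled in R (the analysis software reported by the authors) is shown below. The data frame intake, its variable names, and the model terms are hypothetical placeholders; the sketch illustrates the general approach (MICE with five imputations and 20 iterations, multinomial logistic regression, and pooling by Rubin’s rules) rather than reproducing the study code.

# Multiple imputation by chained equations: m = 5 imputations, 20 iterations.
library(mice)
library(nnet)   # multinom() for multinomial logistic regression
imp <- mice(intake, m = 5, maxit = 20, seed = 2021)

# Fit the multinomial model of substance-use frequency ("none" as the reference
# level versus "occasional", "frequent", "daily") in each imputed data set ...
fits <- with(imp, multinom(use_freq ~ gender + ethnicity + income_source +
                             living_condition + n_mental_illness +
                             n_medical_illness + n_stressors, trace = FALSE))

# ... and pool the estimates across imputations with Rubin's rules.
pooled <- pool(fits)
res <- summary(pooled, conf.int = TRUE)
res   # exponentiating the estimates and CI limits yields adjusted odds ratios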
Among homeless participants, 69.7% reported daily substance use, which was about two times higher than the prevalence observed among individuals living in private homes, apartments, or rented rooms (35.3%) (see Table 1 ). High prevalences of daily substance use were observed among non-White men whose income source was social assistance or disability support (60.9%) or employment insurance/pension (56.4%) (see Table 2 ). Among clients who used amphetamine/methamphetamine, cannabis, and opioids, 52.4%, 60.1%, and 69.0% reported daily use, respectively (see Table 3 ). Smoking was a common route of administration among participants using cannabis (80.0%), cocaine (38.3%), and amphetamine/methamphetamine (28.3%), whereas injection was a common route of administration among participants using opioids (52.2%) ( Supplementary Table 1 ). Multinomial logistic regression modelling revealed that men were more likely to engage in occasional (aOR =1.48, 95% CI: 1.24, 1.76), frequent (aOR =2.12, 95% CI: 1.77, 2.54), and daily substance use (aOR = 2.60, 95% CI: 2.27, 2.97) than women. Also, non-binary individuals or those not specifying their gender had higher odds of occasional (aOR = 1.19, 95% CI: 1.00, 1.41), frequent (aOR =1.23, 95% CI: 1.02, 1.48), and daily (aOR =1.39, 95% CI: 1.21, 1.58) substance use compared to women. In comparison to individuals residing in a private home, apartment, or rented home, individuals experiencing homelessness or residing in other living conditions had increased odds of daily substance use (aOR = 1.93, 95%CI = 1.57, 2.37). Non-White individuals, as compared to those of White ethnicity/race, had higher odds of daily substance use when their income source was from social assistance or disability (aOR = 2.82, 95% CI: 2.08, 3.82), or employment insurance or pension (aOR = 1.68, 95% CI: 1.16, 2.42). The presence of two or more mental illnesses currently or in the past was associated with increased odds of occasional, frequent, and daily substance use compared to no mental health conditions. In comparison to the absence of psychosocial stressors, experiencing two or more psychosocial stressors was associated with higher odds of engaging in occasional, frequent, and daily substance use (see Table 4 ). Our study revealed large proportions of the MHA Intake clients reported daily substance use. A particularly high prevalence of daily substance use was observed among non-White men with income sources from social assistance or disability support. Being non-White with income sources from social assistance or disability and employment insurance or pension, homelessness/others, and the presence of two or more mental or medical illnesses were associated with higher odds of daily substance use. In this study, about one-third (36.1%) of our sample of individuals seeking mental health and addictions services reported daily substance use. More specifically, the observed prevalence of daily opioid (69.0%) and cannabis (60.1%) use in our study was higher than the prevalence of daily opioid (40%) and cannabis (36%) use reported in a study conducted in Vancouver ( 21 ). The observed differences may be due to variations in the study populations. Our study population consisted of individuals with mental illnesses and addiction, while the Vancouver study focused on individuals who use drugs and experienced chronic pain. Additionally, the Vancouver study had a smaller sample size (1,476 participants) compared to our study, which may contribute to the observed differences. 
In our study, a large proportion of daily amphetamine/methamphetamine (52.4%), sedative/hypnotics (50.8%), and cocaine (42.6%) use was reported. The high prevalence of daily opioids use among clients of MHA Intake may lead to opioid use disorder and exposes these individuals to overdose risk ( 22 ). The high prevalence of daily substance use in our study can be attributed to the unique nature of our study population: individuals in the early stage of seeking mental health and addiction treatment services. These individuals may use substances daily as a form of self-medication for symptoms of mental health problems ( 23 ). Additionally, the high prevalence of substance use, particularly daily substance use, observed in our study has important clinical implications since using substances can either exacerbate the existing mental health problems or lead to the development of new conditions (e.g., addiction, physical health problems) and drop out once they are engaged in services ( 24 ). Furthermore, the high prevalence of daily substance use in our study implies the importance of an integrated care model that addresses both mental health problems and substance use simultaneously, as well as targeted prevention and intervention strategies aimed at reducing substance use among vulnerable individuals. Also, this finding indicates the need for a broader and more nuanced approach to understanding how substance use interacts with mental health problems and psychiatric medications. The high prevalence of polysubstance use observed in our study (44.4%) has significant clinical implications. Polysubstance use not only exacerbates symptoms of mental health problems but can also interfere with the efficacy of psychiatric medications ( 25 , 26 ). Additionally, the concurrent use of various substances can mask underlying mental health problems and complicate their treatment ( 27 ). Moreover, polysubstance use can increase the risk of overdose, cognitive dysfunctions, and aggressiveness including violent criminal behavior ( 25 ). Using various substances, particularly when novel psychoactive substances are used for adulteration, can lead to in unpredictable health consequences and complicated treatment and harm reduction efforts ( 25 , 28 ). Our study found a significant variation in substance use across socio-demographic characteristics. In line with previous studies ( 29 , 30 ), we found a high prevalence of daily substance use among men (44.7%) compared to women (29.3%) and non-binary/gender non-specified individuals (36.7%). This gender difference can be at least partially attributed to sociocultural factors, including societal norms, expectations, and culturally-sanctioned gender roles ( 30 ). Though the prevalence of daily substance use among women was lower than among men, women are at higher risk of experiencing acute and long term consequences of substance use than men ( 30 ), making the relatively high rates of daily use seen among women in our sample (29.3%) of clinical concern. Among homeless individuals, about two-thirds (69.7%) were engaged in daily substance use. This could be due to the fact that substance use disorder can lead to job loss, disruption of social ties, and loss of housing, which results in homelessness ( 31 ). In Canada, for instance, about 25% of Canadians reported that substance use was responsible for their most recent housing loss ( 32 ). On the other hand, homelessness-related stress may also lead to substance use to cope ( 33 ). 
An individual’s socio-economic condition significantly influences their substance use and the development SUD. Poverty not only increases substance use but also exacerbates the risks associated with SUD ( 34 ). In line with studies conducted in the USA ( 35 – 37 ), we found a very high prevalence of daily substance use among individuals with income sources from social assistance or disability support (41.3%) and employment insurance or pension (41.1%). This could be due to individuals with economic problems resorting to substance use to cope with difficult life situations and stress related to financial hardships ( 38 ). Additionally, individuals with insecure sources of income may face challenges in accessing mental and addiction treatment services, and as a result, substances may be used as self-medication ( 36 ). We also found that the majority of non-White men with income sources from social assistance/disability (60.9%) and employment insurance/pension (56.4%) engaged in daily substance use. Since non-White races/ethnicities were disproportionately using substances, developing targeted interventions and promoting equitable access to treatment and support services are crucial. In this study, the presence of two or more mental health problems was associated with increased odds of daily substance use. This could be due to the fact that individuals with mental health problems may turn to substance use as a self-medication to temporarily alleviate symptoms of mental illnesses ( 39 ). Additionally, individuals with mental health problems may use substances to cope with stress, as a source of pleasure, and for socialization purposes ( 39 , 40 ). Conversely, in the longer term, substance use affects the brain’s neurobiology and leads to changes in mood, cognition, and behavior, which contribute to the development of mental illnesses or exacerbation of symptoms ( 39 ). We also found that having two or more psychosocial stressors was associated with all levels of substance use: occasional, frequent, and daily. This may stem from the tendency of individuals facing psychosocial stressors to utilize substances as a coping mechanism ( 41 ). Over time, these stressors can increase the risk of initiating substance use and developing addiction ( 42 ). This study is the first provincial-level analysis providing evidence regarding substance use disparities, considering the intersection of gender, ethnicity, and income sources among clients seeking MHA services. This type of study is instrumental in identifying and developing plans to address health equity concerns and instituting intervention strategies that consider the unique needs of various subgroups in society. Also, what makes our study the first in Canada is the unique nature of our study population: individuals seeking MHA services with symptoms of unconfirmed mental health problems and addiction. However, our study has also some limitations. We used a cross-sectional study design that cannot establish a temporal relationship, making it difficult to know if, for example, social disability and mental illnesses precede and/or follow substance use. Moreover, due to social desirability bias, clients may not disclose detailed information about illegal drug use or even deny using it. Additionally, this study may not generalizable to all individuals with mental health problems and addiction across Canada. Moreover, we did not gather data regarding tobacco and other substance use frequency. 
Additionally, although the prevalence of opioid use was high, we did not gather data regarding opioid overdose and related emergency department visits or hospitalizations. Also, we did not use standard tools or DSM-5 criteria to assess mental health problems. The prevalence of daily substance use was high in our sample of individuals seeking mental health and addictions services and varied by the participant socio-demographic characteristics of gender identity, ethnicity/race, and/or income source. The highest prevalence of daily substance use was observed among non-White men whose income source was social assistance or disability support and employment insurance/pension, indicating that prevention and treatment approaches should address these individual- and structural-level factors contributing to daily substance use. Homelessness or other living conditions (group home, transition house, jail, halfway house, hostel, or hotel), having two or more medical or mental illnesses (current or past), and experiencing two or more psychosocial stressors were associated with daily substance use; further studies are needed to understand the temporal relationship between these variables and daily substance use.
PMC11697427 | Non-alcoholic fatty liver disease (NAFLD) has emerged as the most prevalent chronic liver disease and a significant cost to the global health system . It is anticipated that the prevalence of NAFLD will increase in tandem with the rise in disorders of glycolipid metabolism, as the progression of NAFLD is closely linked to obesity and insulin resistance. Nevertheless, not much is understood about the pathogenesis of NAFLD. Genetic susceptibility variation, environmental factors, insulin resistance, and alterations in the gut microbiome are believed to be involved in the complex interactions . The interaction between these factors results in the excessive accumulation of lipids in liver cells and changes in lipid metabolism, which ultimately contribute to the development of NAFLD. In addition, the microbiota is responsible for the regulation of the balance between pro-inflammatory and anti-inflammatory signals, which can result in inflammation and the development of non-alcoholic steatohepatitis (NASH). A progressive form of NAFLD, NASH has the potential to progress to cirrhosis and hepatocellular carcinoma (HCC) and is presently the most common reason for liver transplantation. While there has been consistent progress in the identification of therapeutic targets, pathogenesis, and epidemiology of the disease, the therapeutic area has experienced the most sluggish progress. There are currently no FDA-approved pharmaceuticals to treat this disease, and it is imperative that appropriate therapeutic targets be identified. Thus, it is imperative to gain a comprehensive understanding of the pathogenesis of NAFLD and the role of the microbiome in its occurrence and development. This knowledge may be beneficial for the diagnosis of the disease, the identification of new therapeutic targets, and the potential for the microbiome to be used as an early clinical warning system for NAFLD. Over the past decade, the gut microbiome has emerged as a significant regulator of substrate metabolism and energy homeostasis in the host. Abnormalities in the structure and, particularly, the function of the microbiota are anticipated to influence the metabolism of the brain, adipose tissue, muscle, and liver. There is a strong correlation between the development of intestinal host-microbial metabolic axes and metabolic diseases and microbial components or metabolites, including lipopolysaccharides, secondary bile acids, dimethylamine, and trimethylamine, as well as compounds produced by carbohydrate and protein fermentation . In recent years, there has been exploration of the potential mechanisms by which the intestinal microbiota regulates NAFLD. The transfer of harmful microbes and their derived metabolites to the liver through a disrupted intestinal barrier is one of the hypothesized mechanisms. This process results in an inflammatory response in the liver and the co-occurrence of steatosis with dietary factors or metabolite-induced interactions. The notion that gut bacteria affect liver homeostasis is derived from the near anatomical interaction between the gastrointestinal tract and the liver, which is frequently referred to as the “gut-liver axis.” The liver is the initial organ to drain the stomach through the portal vein, which is a critical component of the link between host-microbial interactions. Portal blood contains additional microorganisms that actively or passively migrate from the intestines to the bloodstream, in addition to nutrients. 
This results in the liver being one of the organs that is most susceptible to gut bacteria and bacteria-derived metabolites. Nevertheless, there are limited direct studies on the hepatic microflora, and the precise mechanism by which dysregulation of the hepatic microflora contributes to the development of NAFLD remains unclear. This dysregulation is expected to result in alterations in the pertinent terminal metabolites in the liver tissues of patients. At present, there is a lack of consensus regarding the specific microorganisms and metabolites present in patients with NAFLD. The identification of specific NAFLD metabolome phenotypes can assist in the development of additional diagnostic tools and therapeutic interventions. Our research directly investigates the microbes and their metabolites in the liver. The basic biological characteristics of microbial composition and metabolomics in the liver tissues of NAFLD patients may offer valuable insights into the disease mechanisms and physiological functions of the host. Consequently, the objective of this investigation was to examine the microbial composition and metabolite characteristics of liver tissue in two distinct cohorts: patients with NAFLD and normal controls. Additionally, the study sought to determine the impact on NAFLD of the regulatory interplay between the two. We enrolled 13 patients (≥18 years) who were newly diagnosed with NAFLD at Yan’an Hospital in Kunming, Yunnan Province, from July 2020 to March 2024. A control group of 12 non-NAFLD subjects, matched by age, sex, and ethnicity, was concurrently recruited. All participants in the control group were asymptomatic volunteers with a standard diet and no recent or chronic illnesses. Liver biopsies were conducted on participants with abnormal findings on pre-specified imaging criteria, and the biopsies were evaluated blindly, with outcomes determined by the consensus of two expert pathologists. The diagnosis of NAFLD was confirmed via biopsy. Table 1 presents the demographic information of the subjects enrolled in the study. All subjects provided informed consent to partake in the study. The Medical Ethics Committee of Yan’an Hospital Affiliated to Kunming Medical University approved this study. The FastDNA ® Spin Kit for Soil (MP Biomedicals, China) was employed to extract genomic DNA from the liver tissue samples. The integrity of the extracted genomic DNA was assessed using 1% agarose gel electrophoresis, and a NanoDrop 2000 was employed to ascertain its purity and concentration. The V3–V4 hypervariable region of the 16S rRNA gene was amplified for each sample using primer F (Illumina adapter sequence 1 + GTGCCAGCMGCCGCGGTAA) and primer R (Illumina adapter sequence 2 + GGACTACHVGGGTWTCTAAT). The Illumina MiSeq benchtop sequencer (Illumina, United States) was employed to sequence the amplified libraries, which were constructed from purified PCR products, using a paired-end sequencing strategy of 2 × 250 bp. In QIIME2, the raw sequencing data underwent quality filtering, denoising, read merging, and chimera removal. Feature tables and representative sequences were subsequently generated. Additional analysis was conducted in the DADA2 (v1.6.0) pipeline, and amplicon sequence variants (ASVs) were obtained. The RDP classifier algorithm was used to assign the taxonomy of ASV representative sequences against the Ribosomal Database Project (RDP, version 11.5) database, with a confidence threshold of 0.8.
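The denoising and taxonomy steps described above can be outlined with the dada2 R package. The sketch below is a condensed, illustrative version in which the fastq paths (fwd_reads, rev_reads), truncation lengths, and the RDP training-set file name are placeholders rather than the study’s actual settings; the 0.8 confidence threshold is expressed as minBoot = 80.

library(dada2)
filtF <- file.path("filtered", basename(fwd_reads))
filtR <- file.path("filtered", basename(rev_reads))
filterAndTrim(fwd_reads, filtF, rev_reads, filtR,
              truncLen = c(230, 200), maxEE = c(2, 2), compress = TRUE)
errF <- learnErrors(filtF); errR <- learnErrors(filtR)     # learn error models
drpF <- derepFastq(filtF);  drpR <- derepFastq(filtR)      # dereplicate reads
dadaF <- dada(drpF, err = errF); dadaR <- dada(drpR, err = errR)   # denoise
merged <- mergePairs(dadaF, drpF, dadaR, drpR)             # merge paired reads
seqtab <- makeSequenceTable(merged)                        # ASV feature table
seqtab <- removeBimeraDenovo(seqtab, method = "consensus") # remove chimeras
# RDP-style naive Bayes taxonomy; a bootstrap confidence of 0.8 maps to minBoot = 80
taxa <- assignTaxonomy(seqtab, "rdp_train_set.fa.gz", minBoot = 80)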
Alpha diversity analysis was implemented to evaluate the richness and diversity of each group. Richness was analyzed using the observed species, Chao1, and ACE indices, while diversity was analyzed using the Shannon, Simpson, and inverse Simpson indices, together with coverage. The vegan package from the R project (v2.5.6) was employed to execute the calculations, and the ggplot2 package (v3.3.0) was used to visualize the results. The difference in ASV composition between samples is measured by beta diversity, which was assessed using principal component analysis (PCA) and principal coordinate analysis (PCoA); both of these methods are appropriate for the unsupervised analysis of high-dimensional data. The analysis of similarities (ANOSIM) function in the vegan package (v2.5.6) was used to test the significance of beta diversity differences. Metastats software was employed to compare the microbiota characteristics of healthy controls and NAFLD patients. The non-targeted metabolites in liver samples were identified using liquid chromatography-mass spectrometry (LC-MS). An appropriate amount of each sample was weighed into a 2 mL centrifuge tube, and 1,000 μL of tissue extraction solution [75% (9:1 methanol:chloroform), 25% H2O; stored at −20°C] was added. A steel ball was added, and the sample was ground in a tissue grinder at 50 Hz for 60 s, sonicated at room temperature for 30 min, placed on ice for 30 min, centrifuged, concentrated, and dried. Twenty microliters of each sample were combined into pooled quality control (QC) samples, while the remaining samples were examined using LC-MS. The samples were separated on an ACQUITY UPLC ® HSS T3 1.8 μm (2.1 × 100 mm) column (Waters, Milford, MA, United States) at 40°C with a flow rate of 0.30 mL/min. Mass spectrometry data were collected using a Thermo quadrupole-Orbitrap high-resolution mass spectrometer (Thermo Fisher Scientific, United States) with an electrospray ionization (ESI) source operated in both positive and negative ion modes. The raw data were first converted to mzXML format using MSConvert in the ProteoWizard package, and the R package XCMS was then utilized for peak detection, filtering, and alignment. Metabolites were identified using exact mass (<30 ppm) and MS/MS fragmentation patterns, then matched against the HMDB, MassBank, LIPID MAPS, mzCloud, and KEGG databases. To detect differences between groups, we employed orthogonal projections to latent structures discriminant analysis (OPLS-DA) or partial least squares discriminant analysis (PLS-DA). To reduce the risk of overfitting, the model parameters R2 and Q2 were calculated to assess the model’s interpretability and predictive ability. The OPLS-DA model was used to calculate the variable importance in projection (VIP). In univariate statistical analysis, p-values were calculated using the paired t-test, with VIP > 1 and p < 0.05 serving as the screening criteria for meaningful differential metabolites. Metabolites were annotated using the Kyoto Encyclopedia of Genes and Genomes (KEGG), and pathway analysis was carried out using the MetaboAnalyst database. Normally distributed quantitative data were expressed as mean ± SD, and the t-test was used for comparisons between groups. Non-normally distributed quantitative data were expressed as median (Q1, Q3), and the Mann–Whitney U test was used for comparisons between groups. All statistical analyses were performed using the SPSS Statistics 27 software package, and p < 0.05 was considered statistically significant.
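As an illustration of the diversity calculations described above, the following sketch uses the vegan package on a hypothetical ASV count table asv (samples in rows, ASVs in columns) and a grouping factor grp (NAFLD vs. control). Note that the study reports PLS-DA and UniFrac-based ANOSIM, whereas this sketch substitutes Bray-Curtis distances for simplicity.

library(vegan)
# Alpha diversity: richness (observed ASVs, Chao1, ACE) and diversity indices
observed <- specnumber(asv)                  # observed ASVs per sample
richness <- t(estimateR(asv))                # S.obs, S.chao1, S.ACE per sample
shannon  <- diversity(asv, index = "shannon")
simpson  <- diversity(asv, index = "simpson")
wilcox.test(shannon ~ grp)                   # between-group comparison of diversity
# Beta diversity: distance matrix, PCoA ordination, and ANOSIM significance test
bc   <- vegdist(asv, method = "bray")
pcoa <- cmdscale(bc, k = 2, eig = TRUE)      # principal coordinate analysis
anosim(bc, grouping = grp, permutations = 999)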
Mean age, fasting blood glucose, ALT, and AST were not significantly different between the two groups. The proportion of patients with type 2 diabetes mellitus, as well as BMI, total cholesterol level, degree of steatosis, lobular inflammation, fibrosis stage, and NAFLD activity score (NAS), were significantly higher in the NAFLD group than in the control group. The demographic information of the participants included in the study is shown in Table 1 . A total of 2,693 high-quality reads were obtained from the 25 liver tissue specimens. Furthermore, a total of 823 ASVs were obtained. Of these ASVs, 143 were shared between the two groups, while 357 and 323 ASVs were specific to the control and NAFLD groups, respectively. We used the LEfSe approach to identify the taxa that best discriminate between the two sample groups. The LEfSe results confirmed that the bacteria most likely to explain the difference between the two groups are the order_Enterobacterales (genus_Escherichia-Shigella), order_Mycobacteriales (family_Nocardiaceae, genus_Rhodococcus), order_Pseudomonadales (family_Pseudomonadaceae, genus_Pseudomonas), and order_Flavobacteriales (family_Weeksellaceae, genus_Chryseobacterium). The NAFLD group was largely devoid of the order_Xanthomonadales, order_Sphingomonadales (genus_Sphingobium), and order_Lysobacterales (genus_Stenotrophomonas). Alpha diversity analysis was performed to determine the richness and diversity of species in each group. The primary metrics used in the alpha diversity analysis were observed species, Chao1, ACE, Shannon, and Simpson. The results indicated that there was no significant difference in flora richness between the NAFLD group and the control group ( p > 0.05). PLS-DA analysis showed that there was significant separation of liver microbial community structure between the NAFLD and control groups, as shown in Figure 3B . ANOSIM based on UniFrac distances was also calculated. At the order level, the control group had considerably larger proportions of Lactobacillales (16.34% vs. 7.66%), Xanthomonadales (3.70% vs. 1.25%), and Sphingomonadales (2.25% vs. 0.98%) than the NAFLD group. The NAFLD group had larger proportions of Enterobacterales (12.10% vs. 3.42%), Corynebacteriales (8.04% vs. 6.13%), and Bacteroidales (2.07% vs. 0.79%) than the control group. The abundance values of Xanthomonadales and Sphingomonadales in the NAFLD group were considerably lower than those in the control group ( p = 0.004, p = 0.008), whereas the abundance values of Flavobacteriales in the NAFLD group were significantly greater than those in the control group ( p = 0.019). At the genus level, Escherichia-Shigella (10.07% vs. 0.99%), Rhodococcus (6.76% vs. 3.53%), Enterococcus (8.89% vs. 1.03%), Helicobacter (2.72% vs. 0.07%), and Pseudomonas (15.83% vs. 10.05%) were significantly more prevalent in the NAFLD group than in the control group, whereas Stenotrophomonas (3.22% vs. 1.24%) and Lawsonella (1.06% vs. 0.21%) were more prevalent in the control group. The control group exhibited a substantially lower abundance of Rhodococcus, Escherichia-Shigella, and Sphingobacterium than the NAFLD group ( p = 0.007, p = 0.009). The abundance value of Pseudomonas ( p = 0.038), classified in the phylum Pseudomonadota, also increased considerably. Conversely, the abundance values of Stenotrophomonas, Sphingobium, and Lawsonella in the control group were significantly higher than those in the NAFLD group ( p = 0.014, p = 0.011, p = 0.031).
The abundance of Turicella, in the phylum Actinomycetota and the family Corynebacteriaceae, also increased significantly ( p = 0.017). The OPLS-DA results demonstrated clear differences in hepatic metabolite profiles between the NAFLD group and the control group (R2Y = 0.458, Q2Y = 0.994), suggesting that the bacterial metabolites in the liver were altered in NAFLD and that metabolic differences between the two groups could be clearly observed. Four hundred and two annotatable differential metabolites were screened between the NAFLD and control groups using VIP > 1 in the first principal component of the OPLS-DA model and p < 0.05. These included 78 up-regulated metabolites and 14 down-regulated metabolites. Five metabolites with significant differences in up-regulation and down-regulation between the NAFLD group and the control group were evaluated after further adjusting the screening conditions ( Table 2 ). The majority of them are carboxylic acids and their derivatives, steroids and steroid derivatives, azagands and complexes, fatty acyl groups, fatty acids and conjugates, linolenic acid metabolism, nucleotide metabolism, glycerophospholipids, and glutathione synthase. Significant differential metabolism was observed in the lipid metabolism-related pathways. The metabolic pathway enrichment results were derived from the KEGG pathway database. The findings indicated that the primary metabolite enrichment pathways were linoleic acid metabolism, ABC transporters, phagocytosis, necroptosis, and calcium signaling pathways. Linoleic acid metabolism was the most significant contributor to metabolic differences. The relationship between hepatic flora and metabolite groups was analyzed using Pearson correlation analysis. In order to investigate potential sources of metabolites in the liver, we conducted a genus-level analysis of the correlations between the liver flora and metabolites. It was determined that the bacteria Lawsonella, Stenotrophomonas, and Sphingobium, which are abundant in the liver of the control group, and the bacteria Rhodococcus, Chryseobacterium, and Escherichia-Shigella, which are significantly more abundant in the liver of NAFLD patients, have a strong correlation with differential metabolites, as illustrated in Figure 6 . Lawsonella was positively correlated with glutathione and benzaldehyde, and negatively correlated with carboxyspermidine and (2R)-2-hydroxy-3-(phosphonatooxy)propanoate. Pipecolic acid and myriocin were positively correlated with Stenotrophomonas. 13-oxoODE was negatively correlated with Sphingobium, while the lithocholic acid glycine conjugate was positively correlated with Escherichia-Shigella and Sphingobacterium and negatively correlated with Sphingobium and Stenotrophomonas. Rhodococcus was positively correlated with dehydroepiandrosterone. Chryseobacterium and Lawsonella were positively and negatively correlated with carboxyspermidine, respectively. There is increasing interest in elucidating the microbiome’s role in the pathophysiology of MAFLD, with numerous gut bacterial communities identified in various studies as components of microbial patterns in NAFLD. Intestinal flora significantly influences health, and its imbalance is associated with the expedited advancement of NAFLD. Intestinal bacteria and their metabolites access the liver via the portal vein and influence the onset and progression of NAFLD, either directly or through signaling pathways mediated by their constituents .
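The genus–metabolite correlation analysis summarized above can be sketched in base R as follows; genus_abund (genus relative abundances) and metab_intens (metabolite intensities) are hypothetical matrices with samples in rows, and the |r| > 0.5 cut-off is purely illustrative rather than the threshold used by the authors.

# Pairwise Pearson correlations between genera (columns of genus_abund)
# and metabolites (columns of metab_intens)
cor_mat <- cor(genus_abund, metab_intens, method = "pearson")
# Per-pair p-values via cor.test(), arranged in a matching matrix
p_mat <- outer(colnames(genus_abund), colnames(metab_intens),
               Vectorize(function(g, m)
                 cor.test(genus_abund[, g], metab_intens[, m],
                          method = "pearson")$p.value))
dimnames(p_mat) <- dimnames(cor_mat)
# Keep the moderately strong, nominally significant genus-metabolite pairs
hits <- which(abs(cor_mat) > 0.5 & p_mat < 0.05, arr.ind = TRUE)
data.frame(genus      = rownames(cor_mat)[hits[, 1]],
           metabolite = colnames(cor_mat)[hits[, 2]],
           r          = cor_mat[hits],
           p          = p_mat[hits])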
Our research directly examined the microbiota in human liver tissue, minimizing the confounding influence of the intestinal microflora, with the diagnosis of NAFLD relying on liver imaging and biopsy. We concentrated on the NAFLD and control groups, demonstrating that the bacterial DNA signature in the liver of NAFLD patients is greatly influenced by the host phenotype. The hepatic bacterial community composition in the two groups was broadly similar at many taxonomic levels; nevertheless, substantial variations were noted in the abundance analysis, diversity measurement, and the predictive utility of bacterial DNA. Distinct metabolites in the samples from the two groups influenced the onset and progression of NAFLD, and we additionally identified the associations between the differing bacterial communities and metabolites. Nonetheless, individuals with NAFLD (validated by liver needle biopsy) exhibited no significant differences in the Metastats analysis of alpha diversity in liver bacteria when compared to controls without NAFLD. In contrast, other researchers reported diminished bacterial diversity in NAFLD subjects, a discrepancy potentially attributable to the sample size of our study. In our study, the disparity between the two groups was substantial, although the variance within each group was minimal. Consequently, a notable disparity in the abundance of hepatic flora existed between the two groups at various taxonomic levels. Enterobacterales, Corynebacteriales, and Flavobacteriales, together with Escherichia-Shigella, Rhodococcus, and Chryseobacterium, were markedly more abundant in the NAFLD group than in the control group, whereas Lactobacillales, Xanthomonadales, Sphingomonadales, Stenotrophomonas, Lawsonella, and Sphingobium constituted a considerably smaller proportion in the NAFLD group than in the control group. This may facilitate the progression of the condition. Escherichia-Shigella has been shown to induce steatohepatitis and fibrosis in non-obese rats through the secretion of msRNA 23,487. Escherichia-Shigella is linked to steatosis and necrotic inflammatory activity, whereas Shigella is associated with fibrosis and necrotic inflammatory activity. Rhodococcus is intimately associated with the phenotype of NAFLD and can effectively differentiate between NAFLD patients and healthy non-NAFLD individuals. Chryseobacterium is a non-fermentative gram-negative bacterium. It is a conditional pathogen that remains non-infectious under typical conditions but may induce infection when the immune system is compromised. Flavobacteriaceae and Porphyromonadaceae have markedly proliferated in the intestines of animals subjected to a high-fat diet, although they have not been documented in human intestines. Conversely, Stenotrophomonas rectified ecological imbalances in individuals with NAFLD, stabilized inflammatory cytokine expression and mucosal immune function, and mitigated NAFLD and its associated hazards. Lawsonella participates in the metabolic pathways of fatty acids, nucleotides, and carbohydrates. Bacteroidota and Bacillota are believed to significantly contribute to intestinal homeostasis, comprising over 90% of the bacteria present in healthy human intestines, a finding corroborated by our research results.
NAFLD patients have a deficiency in the protective effects of beneficial bacteria, which are essential for combating inflammation and stabilizing hepatic immunity, hence exposing the liver to bacteria that readily induce immune suppression and inflammatory responses. Analysis revealed that bacterial metabolites in the liver of both groups were highly enriched, with notable differences between the two groups. N1-Acetylspermidine, an acetyl derivative of polyamines, is up-regulated in the liver of NAFLD patients and may serve as a valuable biomarker linked to the course of nonalcoholic fatty liver disease. Dysregulated steroids and steroid derivatives such as dehydroepiandrosterone may influence the progression of NAFLD and participate in lipid metabolism; however, the role of their signaling in the pathogenesis of NAFLD has not yet been elucidated. 13-oxoODE is an oxidized lipid derivative of linoleic acid (LA) and correlates with the histological severity of NAFLD. GDP participates in the metabolism of fructose and mannose. Conversely, glutathione, a down-regulated metabolite, has the potential to ameliorate NAFLD. The metabolic route exhibiting the most significant variation is linoleic acid metabolism, a process of fatty acid synthesis and degradation, which regulates blood glucose levels and facilitates the oxidation of saturated fatty acids while diminishing the synthesis of cholesterol and triacylglycerol. To elucidate the pathogenicity of metabolites, it is essential to comprehend the bacterial origins of these metabolites and their interrelationships. To further investigate the bacterial origins of metabolites in the liver, we performed a comprehensive examination of both the bacteria and the metabolites within the liver. Lawsonella is classified within the phylum Actinomycetota, class Actinomycetes, order Mycobacteriales, and family Lawsonellaceae. Benzaldehyde, which is positively correlated with Lawsonella, inhibits fat formation in normal human liver cells and reduces the onset of NAFLD, potentially linked to the metabolic product aldehyde oxidase 2. Furthermore, glutathione is favorably correlated with the production of glutathione synthetase, which plays a role in glutathione metabolism, and the synthesis of glutathione mitigates NAFLD. Lawsonella exhibits an inverse correlation with the metabolite (2R)-2-hydroxy-3-(phosphonatooxy)propanoate. Ethyl propionate is found in all eukaryotes, ranging from yeast to humans. Ethyl propionate is connected with various known ailments, including autism, irritable bowel syndrome, ulcerative colitis, and non-alcoholic fatty liver disease. Furthermore, it has been linked to congenital metabolic problems, such as celiac disease. Ethyl propionate, a volatile organic molecule, has been recognized as a fecal biomarker for C. difficile infection. Stenotrophomonas is classified within the phylum Pseudomonadota, class Gammaproteobacteria, order Lysobacterales, and family Lysobacteraceae. Ecological disturbances in NAFLD patients can be ameliorated by stabilizing the expression of inflammatory cytokines and enhancing mucosal immune function. Stenotrophomonas was positively correlated with pipecolic acid and myriocin. Metabolomic studies of serum and liver indicated that the former, pipecolic acid, is a non-proteinogenic amino acid. The study findings demonstrated that early consistent exercise may improve the anti-inflammatory immune response in middle-aged male mice via epigenetic modulation of immune metabolism.
The hepatic production of pipecolic acid is pivotal, being intricately linked to fatty acid synthase and fatty acid desaturase, and constitutes a significant component of the lipid metabolism pathway. Insufficient pipecolic acid can result in fatty acid oxidation disorder, bile acid synthesis defects, and long-chain fatty acid transport deficiency, culminating in lipid metabolism disorders. Myriocin suppressed ceramide and lipid buildup and ameliorated fibrosis in liver tissue samples from rats subjected to a high-fat diet (HFD), and myriocin also markedly ameliorated liver inflammation and apoptosis in HFD rats. Sphingobium is classified under the phylum Pseudomonadota, class Alphaproteobacteria, order Sphingomonadales, and family Sphingomonadaceae. The negatively correlated 13-oxoODE, an oxidized lipid derivative of linoleic acid, correlates with the histological severity of NAFLD and facilitates the evolution of NASH by elevating oxidized fatty acids. Rhodococcus is classified within the phylum Actinomycetota, class Actinomycetes, order Mycobacteriales, and family Nocardiaceae. Aberrant synthesis and metabolism of the positively correlated substances dehydroepiandrosterone and catecholamines may be linked to the onset of NAFLD. Levels of 16-hydroxydehydroepiandrosterone sulfate (16-OH-DHEA-S) increase with the advancement of fibrosis. Chryseobacterium is classified under the phylum Bacteroidota, class Flavobacteriia, order Flavobacteriales, and family Weeksellaceae. Carboxyspermidine, positively associated with Chryseobacterium, serves as a novel biomarker for NAFLD progression, with elevated levels correlating with the condition. 11-Dehydrocorticosterone, which is positively correlated with metabolic syndrome, exhibits a substantial association with NAFLD and a negative correlation with glutathione. The glycine conjugate of lithocholic acid is elevated in the intestines of patients with NAFLD, potentially linked to fatty acid oxidation dysfunction, and is positively correlated with Escherichia-Shigella and Sphingobacterium. It had a negative correlation with Sphingobium and Stenotrophomonas. Sphingobacterium is classified under the phylum Bacteroidota, class Sphingobacteriia, order Sphingobacteriales, and family Sphingobacteriaceae. Escherichia-Shigella is classified within the phylum Pseudomonadota, class Gammaproteobacteria, order Enterobacterales, and family Enterobacteriaceae. This work used multi-omics to connect hepatic microbiota and metabolites. Correlation analysis indicated that the liver microbiota not only modulates inflammation and immunity but also regulates lipid synthesis, metabolism, and transport via associated metabolites, influences hepatic fat accumulation, and significantly impacts the improvement or exacerbation of inflammation and fibrosis. This study highlights that metabolic disorders resulting from bacterial imbalance in the liver are significant contributors to the pathogenesis of NAFLD, and investigating the relationship between specific metabolites and bacterial flora may ultimately aid in regulating bacterial flora function in NAFLD treatment. This study has certain drawbacks. The limited sample size necessitates external validation via larger samples and multi-center experiments. However, confounding variables can be efficiently managed by enrolling healthy participants matched by age, gender, and ethnicity. This study was a case–control study.
Although our data indicate a functional relationship among the bacteriome, metabolome, and illness, causality remains undetermined, and the mechanism behind this functional correlation requires additional investigation. In conclusion, examining the correlation between the human hepatic microbiota and NAFLD reveals distinct bacterial communities and metabolic traits, hence presenting new opportunities for researchers to investigate the possibly advantageous effects of specific nutrient supplementation. This study establishes an experimental foundation for developing prospective diagnostic and therapeutic targets in the future. | Review | biomedical | en | 0.999997 |
PMC11697428 | Advancements in diffusion magnetic resonance imaging (dMRI) have facilitated our understanding of the brain's intricate architecture and organization. By measuring the diffusion of water molecules within brain tissue, dMRI provides valuable information to investigate the connectivity and assess the white matter (WM) microstructure of pathways in the brain. Voxels in the WM can contain different axonal fiber populations with complex configurations. Each one of these populations is called a fixel, which denotes the discrete component of a fiber element. Fixels and their properties, like orientation and tissue metrics, are fully determined by the voxel in which they reside. Local modeling allows for estimating these fixel properties at each voxel of the dMRI data. Tractography can use these locally estimated fixel orientations to reconstruct the trajectories of the WM, which are often called streamlines. Additionally, tractometry has emerged as a useful method for quantitative analysis of the WM pathways. It combines the streamlines obtained with tractography at the macroscopic level with the metrics obtained from a local modeling method at the microscopic level. This combination enables the analysis of microstructural changes by extracting quantitative metrics along specific WM anatomical tracts. Tractometry insights could potentially serve as a valuable tool for investigating WM characterization and degeneration associated with neurological disorders, such as multiple sclerosis (MS), Alzheimer's disease, and traumatic brain injury, among others. Diffusion Tensor Imaging (DTI) is a single-fiber method traditionally used to estimate properties of the fixels, averaging the diffusion properties of all the fixels within a voxel. Thus, DTI results in a loss of important information about the fixels, especially when different fiber populations with different properties or lesions are present within the same voxel. This presents an important problem in estimation because WM tissue contains between 66% and 90% of voxels with crossing fibers. These limitations of DTI motivated the development of more advanced acquisition and local modeling techniques. Multi-shell High Angular Resolution Diffusion Imaging (HARDI) was originally developed to provide anisotropy measures beyond DTI metrics that are more robust to crossing fibers and sensitive to WM alterations, also making tractography more robust. HARDI allowed the development of techniques that estimate multiple fixels within a voxel. Notable examples of these techniques are Q-ball Imaging (QBI), the Multi-Tensor Model (MTM), and Constrained Spherical Deconvolution (CSD). In particular, MTM is a straightforward extension of DTI that represents each one of the fiber populations in the voxel by a different diffusion tensor. However, the estimation of MTM parameters is an ill-posed, challenging problem that requires very high SNR data and large computational resources, restricting its routine clinical use. The dMRI signal arising from the WM is composed of several compartments. Thus, taking advantage of HARDI, several techniques were developed to decompose the dMRI data into contributions from various compartments. An example of these multi-compartment methods is the model in Novikov et al., which depicts the dMRI data as a combination of Intra-Cellular (IC), Extra-Cellular (EC), and ISOtropic (ISO) contributions.
Other hybrid methods are based on the MTM, like the Free-Water DTI (FW-DTI), which fits for each voxel a bi-tensor model including an anisotropic tensor for the tissue compartment and an isotropic tensor for a free-water compartment. The DIstribution of Anisotropic MicrO-structural eNvironments with Diffusion-weighted imaging (DIAMOND) and the Multi-Resolution Discrete-Search (MRDS) are more general MTM-based methods, which fit up to three anisotropic tensors for the restricted and hindered diffusion compartments and one isotropic tensor for the free diffusion compartment. DTI metrics are the most widely used metrics for tractometry. Although DTI metrics have the potential to be biomarkers, they have inconsistent sensitivity for characterizing the WM, as they are easily biased. For example, the common Fractional Anisotropy (FA) metric is informative about changes in WM microstructure caused by pathology, but crossing fibers bias it. FA decreases in fiber-crossing voxels because oblate tensors are obtained, which leads to an alteration in the resulting FA tract profile in the tractometry, as shown in Figure 1. These alterations can be confused with alterations derived from WM degeneration, which is also illustrated in Figure 1, leading to erroneous or ambiguous interpretations. Moreover, in the presence of crossing fibers together with pathology, FA increases, which could seem counterintuitive. However, this can happen when only one of the fiber populations in the crossing is affected by the pathology; the resulting single tensor may then become sharper, see Figure 2. Approaches that have studied other DTI metrics, like the radial diffusivity (RD) metric, have shown that RD is a promising biomarker for demyelination. However, they have reported that RD can be inconsistent, presenting challenges in its reliability and reproducibility and resulting in misleading results. Besides, co-existing inflammation, edema, and crossing fibers can significantly impact the DTI metrics at the same time. Multi-fixel methods have further expanded the scope of tractometry, resulting in tract-specific analyses less impacted by crossing fibers. Remarkable examples are Automated Fiber-tract Quantification, Connectivity-based Fixel Enhancement, the Fixel-Based Analysis framework, the Tractometry_flow pipeline and, recently, the UNRAVEL framework. Other tractometry frameworks have combined DTI metrics with other metrics, including fixel-based metrics like the Apparent Fiber Density (AFD). For example, the framework called Profilometry performs a simultaneous analysis of DTI metrics and other metrics, resulting in tract profiles as parameterized curves in a multi-dimensional space. Nonetheless, the crossing-fiber bias in DTI metrics still limits it. Besides, these types of multi-fixel methods face several challenges and limitations. As an example, frameworks informed with CSD metrics such as AFD, while sensitive, do not have a straightforward biological interpretation; moreover, they could be biased, as CSD employs a fixed response function across the entire WM. On the other hand, previous tractometry results using MTM fixel-based metrics are not free of limitations. For instance, they need more complex multi-shell dMRI acquisitions and are limited to a maximum of 2 fixels per voxel. This is insufficient in many brain regions, e.g.
the centrum semiovale, where 3 fiber populations from the corticospinal tract, the corpus callosum, and the superior longitudinal fasciculus intersect. Additionally, fixel-FA estimation has been shown to be affected by high levels of noise and to be inconsistent across scan-rescan experiments, as a consequence of the MTM fitting being numerically unstable. MTM-based methods generally struggle to accurately estimate the required number of tensors per voxel (N). These methods tend to overestimate the value of N as a direct consequence of the fact that a single diffusion tensor does not properly represent the dMRI signal (even when a single fixel is present) for b-values higher than 1 ms/μm², requiring more tensors to fit the per-voxel signal. Among the MTM-based methods, MRDS offers a balanced trade-off in terms of model complexity and accuracy when using short-acquisition-time clinical multi-shell dMRI data. MRDS has proven to be a noise-robust and accurate multi-fixel method for estimating the directions of the fixels and their metrics. In addition, MRDS has been histologically validated in a rat model of unilateral retinal ischemia in which only one of the optic nerves was damaged. This nerve lesion was correctly detected by MRDS in the region where the optic nerves cross (the optic chiasm). Moreover, MRDS has been shown to be capable of recognizing 3 fiber populations in regions of interest (ROIs) like the centrum semiovale when using clinical in vivo multi-shell dMRI data. A recent work has proposed using Track Orientation Density Imaging (TODI) as a useful spatial regularizer for a more accurate and robust estimation of N in MRDS. The Track Orientation Distribution (TOD) image estimated with TODI presents an increased amount of spatial consistency compared with the fiber orientation distribution (FOD) image obtained with constrained spherical deconvolution (CSD). In this paper, we propose a novel tractometry pipeline to address several current limitations of tractometry informed with multi-fixel methods. Our proposed pipeline combines multi-tensor fixel-based metrics estimated with MRDS and the Tractoflow and Tractometry_flow pipelines. The proposed pipeline provides fixel-based tensor metrics that are robust to crossing fibers and noise. The provided fixel-based metrics have the potential to be biomarkers for pathologies like demyelination and can be useful for the characterization and study of underlying WM anomalies in patients with pathologies such as MS. Most of the previous tractometry studies in pathology used DTI metrics; therefore, our multi-tensor pipeline results can be straightforwardly situated in their context and compared with them. Finally, the pipeline is tested on both synthetic phantom dMRI data and clinical in-vivo dMRI data from large healthy control and MS groups with a scan-rescan experiment, highlighting the robustness and potential of our approach when studying WM anomalies in patients with such neurological disorders. In this section, we describe the simulation of the synthetic phantom and the acquired in-vivo dMRI data. We also explain each step in the proposed pipeline. A synthetic phantom was generated based on the geometry of a previously published dMRI phantom, see Figure 3. The size of the phantom is 50 × 50 × 50 voxels with an isotropic dimension of 1.0 mm. Similar to Caruyer et al., our synthetic phantom has 20 distinct bundles showing a complex fiber-crossing configuration and volume contamination with cerebrospinal fluid (CSF).
Each bundle in the phantom exhibits unique diffusivities and axonal dispersion characteristics. The diffusivities of each bundle were tuned to mimic those found in healthy human brains. We simulated a phantom dMRI signal for each individual bundle and for the whole volume, without noise and without dispersion. Then, DTI was fitted to each individual bundle signal as well as to the whole dMRI signal, and the tensor metrics were extracted. This simulated dataset was employed as the Gold Standard (GS) against which the results of the experiments were compared. A multi-compartment model, also known as the Standard Model (SM), was adopted to simulate the phantom signal by including three types of microstructural environments: intracellular (IC), extracellular (EC), and isotropic (ISO). Each environment was simulated with a given volume fraction, denoted by f_ic, f_ec, and f_iso, respectively. The IC space was modeled with cylinders of zero radius (sticks), the EC space with a cylindrically symmetric tensor (zeppelin), and the ISO space as a free diffusion compartment (ball). Three datasets were generated with a known GS. The radial EC diffusivities were simulated based on Fieremans et al. Thus, the EC space tortuosity D_0/D_ec⊥, which quantifies how diffusion is affected by cellular and extracellular structures within tissue, was defined as the ratio of the free diffusivity D_0 = 2 μm²/ms over the EC radial diffusivity D_ec⊥. In this setting, the intracellular volume fraction f_ic was most sensitive to axonal loss, while the EC radial diffusivity (and hence the tortuosity) was most sensitive to demyelination. The first dataset incorporated D_ic∥ and D_ec∥ diffusivities within a healthy range, sampled from a Gaussian distribution with a mean of 2 μm²/ms and a variance of 0.01 μm²/ms, while D_ec⊥ = 0.48 μm²/ms, f_ic = 0.65, and f_ec = 1 − f_ic. The second dataset simulated, in some bundles, conditions associated with demyelination in MS. Specifically, in regions with demyelination, f_ic = 0.55 and D_ec⊥ = 0.71 μm²/ms, while in regions without damage the values remained the same as in the first dataset. Finally, our third dataset simulated conditions related to axonal loss. For this case, f_ic = 0.35 and D_ec⊥ = 0.59 μm²/ms in regions with lesions, while regions without lesions maintained the same control values as the first dataset. All datasets were generated with a high and realistic noise level (SNR = 12). The isotropic diffusivity D_iso and volume fraction f_iso were fixed to 3 μm²/ms and 0.05, respectively. Axonal dispersion was modeled with a Watson distribution. The κ value of each bundle, used as the concentration parameter of the Watson distribution, was sampled from a Gaussian distribution with mean 20 and variance 0.01. Lastly, we used the same acquisition protocol as for the in-vivo data described below. The 13th bundle of the phantom was selected to compare the three scenarios above because bundle 13 crosses 2 and 3 other bundles at different locations. In the datasets simulating damage, the lesion was simulated in a segment of the bundle represented by the red region, while the diffusivities outside the lesion remained the same as in the control case. Two groups of participants were recruited from the Université de Sherbrooke (UdS) and the Centre Hospitalier Universitaire de Sherbrooke (CHUS) community. The first group was a healthy control (HC) group with 26 adults, and the second group had 22 relapsing-remitting MS patients. Both groups had a gender proportion of 75% women and 25% men.
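Returning briefly to the synthetic phantom, the sketch below generates a noisy Standard Model signal (stick + zeppelin + ball) for a single fiber population using the healthy-bundle values stated above (f_ic = 0.65, D_ec⊥ = 0.48 μm²/ms, D_iso = 3 μm²/ms, f_iso = 0.05, SNR = 12). It is a simplified illustration rather than the phantom-generation code: axonal dispersion is omitted, the multi-shell scheme is a placeholder, and the scaling of the tissue fractions by (1 − f_iso) is our assumption.

```python
import numpy as np

def sm_signal(bvals, bvecs, n, f_ic, d_ic_par, d_ec_par, d_ec_perp,
              f_iso=0.05, d_iso=3.0):
    """Stick (IC) + zeppelin (EC) + ball (ISO) signal; b in ms/um^2, D in um^2/ms."""
    c2 = (bvecs @ n) ** 2                                              # squared cosine between gradient and fiber
    s_ic = np.exp(-bvals * d_ic_par * c2)                              # stick
    s_ec = np.exp(-bvals * (d_ec_perp + (d_ec_par - d_ec_perp) * c2))  # zeppelin
    s_iso = np.exp(-bvals * d_iso)                                     # ball (free water)
    f_ec = 1.0 - f_ic
    return (1.0 - f_iso) * (f_ic * s_ic + f_ec * s_ec) + f_iso * s_iso # assumed fraction scaling

def add_rician_noise(signal, snr=12.0, seed=0):
    rng = np.random.default_rng(seed)
    sigma = 1.0 / snr                                                  # S0 normalized to 1
    real = signal + rng.normal(0.0, sigma, signal.shape)
    imag = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt(real ** 2 + imag ** 2)

# Placeholder 3-shell scheme and one fiber direction
bvals = np.repeat([0.3, 1.0, 2.0], 20)
bvecs = np.random.default_rng(1).normal(size=(60, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
fiber = np.array([1.0, 0.0, 0.0])

clean = sm_signal(bvals, bvecs, fiber, f_ic=0.65,
                  d_ic_par=2.0, d_ec_par=2.0, d_ec_perp=0.48)
noisy = add_rician_noise(clean, snr=12.0)
```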
Diffusion MRI data were acquired using a clinical 3T MRI scanner (Ingenia, Philips Healthcare) with a 32-channel head coil. Each subject was scanned 5 times over 6 months, at 4-week intervals (±1 week), with a total acquisition time of 20 minutes for each session. MRI acquisitions were obtained for each subject at roughly the same time of day to mitigate potential diurnal effects, i.e., morning subjects underwent all sessions in the morning, with a permissible 2-3 hour variation. Finally, 6 of the 26 healthy control subjects were discarded for several reasons, including problems during the scan or processing. Thus, the HC group employed for the experiments had 20 subjects. All MRI images were aligned with respect to the anterior commissure-posterior commissure (AC-PC) plane, an anatomical reference defined by two small fiber bundles in the brain, one located in the anterior part of the brain and the other in the posterior part. This ensured consistency in the orientation and position of the images when analyzing them across scans and subjects. In addition, 3 types of data were included. Finally, all images were subjected to visual quality assessment. A detailed and more extensive data description can be found in Edde et al. The data processing pipeline consists of 6 key steps, described in sequential order in the following subsections (see Figure 4). The preprocessing of the dMRI data was performed using the Tractoflow pipeline. This includes brain and WM mask extraction, T1 registration, and tractography. The dMRI data were denoised using the MP-PCA method. Brain deformation induced by magnetic field susceptibility artifacts was corrected. Motion artifact correction and slice-wise outlier detection were performed. Image intensities were normalized to reduce the bias induced by the magnetic field. The brain mask was obtained with the bet command from FSL. Specifically, Tractoflow performed the extraction on the b = 0 image. Then, the obtained mask was applied to the whole DWI to remove the skull and prepare the DWI for the T1 registration. Tractoflow performed brain extraction after Eddy/Topup correction to obtain a distortion-free brain mask. Tractoflow processed the T1 image using eight different steps. First, Tractoflow preprocessed the T1 image, including denoising, correction, and resampling steps. Then, the T1 image was registered on the b = 0 and FA images using the nonlinear SyN ANTs (antsRegistration) multivariate option, where the T1 image is set as the moving image and the b = 0 and FA images are set as the target images. After registration, Tractoflow extracted gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) partial volume masks using fast from FSL. These maps were used to compute the exclusion and inclusion maps for tractography, which are anatomical constraints for the tracking. The fiber tracking was also done using the Tractoflow pipeline. The seeding mask employed in the tractography was the extracted WM mask. The tractogram was generated employing the anatomically constrained particle filter tracking (PFT) algorithm. This algorithm utilized the FOD image obtained with Multi-Shell Multi-Tissue Constrained Spherical Deconvolution (MSMT-CSD), along with the inclusion map, exclusion map, and the seeding mask, to guide the tractography process. A fully detailed explanation of the whole Tractoflow pipeline can be found in Theaud et al.
Additionally, Tractoflow includes strategies to avoid premature track termination when tracking MS patients. The seeding mask in MS patients was filled using a lesion-corrected WM mask. During the tracking process, if a peak in the FOD image is coherent and well-defined, the tracking continues even if the voxel is inside a WM lesion, increasing the anatomical accuracy and consistency of the obtained tractograms for MS patients. This step can be omitted, as the tractogram can be generated with any fiber tracking technique. The tractogram from fiber tracking was then processed with the TODI method to obtain a TOD image. Subsequently, the resulting TOD image was segmented to produce discrete fixels. Then, the fixel-based image was converted into a Number of Fiber Orientations (NuFO) scalar image, where the number of fixels was counted in each voxel. A threshold peak amplitude was utilized to prune spurious peaks, such that any lobe for which the maximal peak amplitude was smaller than 0.1 was omitted. Finally, this NuFO image was used as the input MOSEMAP in MRDS, which is better described in the next step (Section 2.3.4). The MTM represents the diffusion signal S_i at each voxel as S_i = S_0 Σ_{j=1..N} α_j exp(−b_i g_i^T D_j g_i), for i = 1, …, M, with Σ_j α_j = 1 (Equation 1), where M is the number of unitary gradient orientations g_i, N is the number of tensors, α_j is the fraction of the j-th diffusion tensor D_j, b_i is the corresponding b-value, and S_0 is the non-diffusion-weighted signal. Assuming axial symmetry, D_j is parameterized by the unitary principal diffusion direction (PDD) θ_j and the axial (λ_j∥) and radial (λ_j⊥) diffusivities, such that D_j = (λ_j∥ − λ_j⊥) θ_j θ_j^T + λ_j⊥ I, where I is the 3 × 3 identity matrix (Equation 2). The bundle-specific parameters of the MTM were non-linearly estimated using the MRDS method for N = 1, N = 2, and N = 3, resulting in 3 multi-tensor fields (MTFs), see Figure 4. More than three fixels can be estimated, albeit with increased computation time and reduced precision for the estimated parameters. Besides, N ≤ 3 has been reported to be a reasonable threshold. Initial diffusivities for the non-linear estimation of the parameters in Equation 2 were obtained from DTI at brain WM voxels with a high probability of containing only one fiber. High b-value diffusion signals are not fully represented by the diffusion tensor, which causes an overestimation of N. Thus, the original statistical model selection in MRDS, which provides a model selection map (MOSEMAP) with the value of N that best describes the diffusion signal at each voxel, is replaced by the NuFO scalar map obtained with TODI in step 2.3.3. The TODI NuFO scalar map merges the 3 MTFs into a reevaluated and refined MTF with the spatially smoothed information provided by the tractogram, see Figure 4. From this improved MTF, fixel-FA, fixel-MD, fixel-AD, and fixel-RD maps were generated. The fixel-FA map maintains the same spatial dimensions as the original DWI. Since each voxel may contain multiple tensors, an extra dimension was added to store the multiple fixel-FA values obtained at each voxel. The scalar fixel-FA value was obtained for every tensor within a voxel, computed as the standard FA. This resulted in a 4-dimensional fixel-FA map. The computation of the fixel-RD, fixel-AD, and fixel-MD maps was analogous to the computation of the fixel-FA map. Similarly, a map storing the PDDs of the MTF was computed. These maps were used as input for the tractometry step. The tractogram was segmented into major bundles employing the RecoBundlesX pipeline, see Figure 4. RecoBundlesX recognizes bundles by comparing the subject's tractogram with a template (or atlas) through a similarity metric based on their shapes.
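The following numpy sketch implements the multi-tensor forward model and the per-fixel tensor metrics (FA, MD, AD, and RD) defined above for an axially symmetric tensor. It is illustrative only (it is not the MRDS fitting code), and the diffusivities, fractions, and gradient scheme are toy values.

```python
import numpy as np

def axsym_tensor(theta, lam_par, lam_perp):
    """D = (lam_par - lam_perp) * theta theta^T + lam_perp * I (Equation 2)."""
    theta = theta / np.linalg.norm(theta)
    return (lam_par - lam_perp) * np.outer(theta, theta) + lam_perp * np.eye(3)

def mtm_signal(bvals, bvecs, alphas, tensors, s0=1.0):
    """Multi-tensor signal: S_i = S0 * sum_j alpha_j * exp(-b_i g_i^T D_j g_i) (Equation 1)."""
    s = np.zeros(len(bvals))
    for a, D in zip(alphas, tensors):
        s += a * np.exp(-bvals * np.einsum("ij,jk,ik->i", bvecs, D, bvecs))
    return s0 * s

def fixel_metrics(lam_par, lam_perp):
    """Standard tensor metrics for one fixel from its axial/radial diffusivities."""
    evals = np.array([lam_par, lam_perp, lam_perp])
    md = evals.mean()
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
    return {"FA": fa, "MD": md, "AD": lam_par, "RD": lam_perp}

# Two crossing fixels in one voxel (toy values; b in ms/um^2, D in um^2/ms)
bvals = np.full(30, 1.0)
bvecs = np.random.default_rng(2).normal(size=(30, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
D1 = axsym_tensor(np.array([1.0, 0.0, 0.0]), 1.7, 0.4)
D2 = axsym_tensor(np.array([0.0, 1.0, 0.0]), 1.7, 0.4)
signal = mtm_signal(bvals, bvecs, alphas=[0.5, 0.5], tensors=[D1, D2])
print(signal[:5].round(3))
print(fixel_metrics(1.7, 0.4))   # identical per-fixel FA, unaffected by the crossing
```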
This algorithm is re-evaluated multiple times with parameter variations and label fusion because RecoBundlesX is a multi-atlas and multi-parameter approach. We used the atlas in Rheault , which is designed specifically to be used with RecoBundlesX, and it was built from delineation informed with anatomical priors . After RecoBundlesX identified a large number of WM bundles, tracks were visually inspected to ensure their quality. The Superior Longitudinal Fasciculus (SLF), Arcuate Fasciculus (AF), Pyramidal Tract (PYT), Inferior Longitudinal Fasciculus (ILF), Inferior Fronto-Occipital Fasciculus (IFOF), Middle Cerebellar Peduncle (MCP) and Cingulum (CG) bundles were selected to showcase the pipeline's capabilities. Selected bundles comprise a large set covering most of the brain, showing complex crossing fiber configurations, which is why they are frequently studied in the literature . In the experiments with MS patients, we have chosen the AF, ILF, IFOF and PYT bundles, which have clinical implications in the context of MS studies . The AF bundle connects the frontal and temporal lobes, crucial in speech communication. On the other hand, the ILF bundle connects the occipital and temporal lobes. Its functionality includes visual processing, tracking and recognition of objects and obstacles. Like AF and ILF, the IFOF bundle is involved in speech communication and visual processing tasks, transporting signals from the frontal to occipital and temporal lobes. The PYT connects the spinal cord with the cerebral cortex. It is essential in voluntary control movements. Therefore, when MS lesions appear in the AF, ILF, IFOF, and PYT bundles, several symptoms are experienced by MS patients. These symptoms include difficulties in speech and comprehension, visual deterioration, visual memory problems, attention issues, and affected motor coordination. The proposed pipeline employed the Tractometry_flow pipeline, which delivers metric maps along each individual input bundle. Then, each metric map was projected through every bundle to obtain a tract profile. We adapted the Tractometry_flow pipeline to support multi-tensor fixel-based metrics. The closest-fixel-only strategy was used to map the contribution of the multi-fixels estimated by MRDS to a given streamline. We designed three experiments to study the behavior of the pipeline: In Figure 5 , we show violin plots comparing single-tensor (blue) and multi-tensor (green) metrics. Horizontal lines refer to the GS (red) and the mean of each distribution. Single-tensor metrics exhibit several discrepancies with respect to the GS, most DTI distributions are bimodal, such that one of the peaks is close to the GS, while the other is underestimated for FA and AD, and overestimated for RD. For each bundle, we accounted for the proportion of voxels containing 1, 2, and 3 fiber populations using the NuFO map obtained with TODI, i.e., we accounted for the proportions of N . These proportions are at the top of Figure 5 . By inspecting percentages of N shown in Figure 5 , it is reasonable to assume that DTI bimodality is caused by crossing fiber biases. In Figure 5 , bundles with a high proportion of N = 2 and N = 3 have a more pronounced bimodality; this is particularly evident for the 13th bundle. In contrast, it can be seen in Figure 5 that the mean of the estimated fixel-FA and fixel-AD are similar to the GS value in all bundles. 
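As an implementation note on the closest-fixel-only strategy used in the tractometry step, the sketch below shows how a streamline segment can be assigned the metric of its best-aligned fixel. The fixel directions and fixel-FA values are toy numbers, and the function name is ours.

```python
import numpy as np

def closest_fixel_value(segment_dir, fixel_dirs, fixel_values):
    """Return the metric of the fixel whose direction is closest (sign-invariant)
    to the local streamline segment direction."""
    seg = segment_dir / np.linalg.norm(segment_dir)
    cosines = np.abs(fixel_dirs @ seg)          # |cos(angle)| for each fixel in the voxel
    return fixel_values[np.argmax(cosines)]

# Toy voxel with two fixels and their fixel-FA values
fixel_dirs = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])
fixel_fa = np.array([0.72, 0.55])
segment = np.array([0.9, 0.1, 0.0])             # local streamline direction in this voxel
print(closest_fixel_value(segment, fixel_dirs, fixel_fa))   # -> 0.72
```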
The relative error of fixel-FA and fixel-AD is around 10% as it is reported in Table 1 , reaching a relative error as low as 2.7% in some bundles where the average relative error is 5.6%. It is important to note that, for fixel-based metrics, the relative error of bundles with a high count of 2 and 3 crossing fibers is similar to the relative error in bundles exposing mostly single fiber composition. As an example, percentages shown in Figure 5 exhibit that bundles 5 and 10 mainly have no crossing fibers, while bundle 11 has, for the most part, crossing fibers. However, the relative error of fixel-FA for bundles 5, 10, and 11 in Table 1 are around 3%. Even for the bundle 13, which is one of the most challenging bundle as it has a high proportion of crossing fibers, the relative error remains at 7%. Values in Table 1 exhibit a higher relative error for fixel-RD compared with fixel-AD and fixel-FA, but still less relative error than RD in general. Additionally, bundle 2 shows abnormal relative errors compared to the other bundles. Looking at the obtained tractogram, the streamline count for bundle number 2 after segmentation is 104, which is insufficient to cover the whole bundle's volume, resulting in an increased relative error. Thus, results in bundle 2 should be interpreted with caution because of the low number of streamlines. Bundle 10 has almost 100% single fiber composition. It is the only bundle where the relative error of RD is less than the one reported in fixel-RD. This suggests that, in the absence of crossing fibers, DTI's RD may be more accurate than fixel-RD. Besides, fixel-RD violin plots in Figure 5 indicate that, in general, fixel-RD tends to underestimate the GS value, which is congruent with the relative errors reported in Table 1 . In the MTM fitting with MRDS the isotropic volume fraction is overestimated, see Appendix A . Since the synthetic data was generated using a multi-compartment model and MTM does not fully represent the signal for the b-values in our protocol, then the ISO compartment may be partially explaining the contribution of the EC compartment (see Appendix A for more details). Therefore, this underestimation of the fixel-RD metric might be related to the overestimation of the isotropic volume fraction. Similar to Figure 5 , in Figure 6 violin plots on the 13th bundle are reported for the 3 simulated scenarios detailed in Section 2.1: healthy control case, demyelination, and axonal loss. Additionally, tractometry results on the same bundle for the 3 different scenarios can be found in Figure 6B . In the healthy control scenario, the limitations of DTI in capturing the overall WM microstructure configuration are evident. Tract profiles informed with standard DTI metrics are biased by crossing fibers as FA, RD, and AD tract profiles have variations along the bundle, while the GS does not. In particular, FA tract profile decreases and RD tract profile increases when the value of N increases, see Figure 6B . In contrast, the robustness of the multi-tensor fixel-based metrics estimated with MRDS is evident as they provide tract profiles independent of the underlying fiber configuration, see Figure 6B . In the demyelination scenario, results with DTI metrics in Figure 6 showed limited sensitivity to changes in the WM microstructure. In the region with a lesion, tract profiles exhibit variations, but they do not correspond with the GS. 
Contrarily, results with multi-tensor fixel-based metrics show enhanced sensitivity, detecting reductions in FA and increase in RD associated with simulated demyelination while maintaining robustness to noise and crossing fibers, see Figure 6 . Like the demyelination scenario, DTI metrics exhibit limitations in detecting axonal loss, particularly in regions with crossing fibers. Despite the differences in the three simulated scenarios, results in Figure 6 show no substantial differences in DTI metrics. This makes it impossible to distinguish between different scenarios. Results with multi-tensor fixel-based metrics are less contaminated by fiber crossing artifacts, which allows to detect variations in the tract profiles related to lesions. Obtained tract profiles informed with fixel-based metrics underestimate the GS RD, which is expected and congruent with the results investigated in Figure 5 . Although results with multi-tensor fixel-based metrics overestimate the GS FA and underestimate the GS RD, they are accurate in shape and sensitive to small variations. For experiments on in-vivo data, we focus only on FA and RD metrics and their fixel-based counterparts, as MS research and literature report that FA and RD are potential biomarkers closely related to microstructure anomalies and demyelination . Figure 7 illustrates tract profiles for different major bundles in the left hemisphere of the healthy participants. According to Table 2 , tract profiles obtained with MRDS fixel-FA and fixel-RD metrics show an overall reduction in the correlation with the value of N compared to FA and RD metrics. Tract FA profile decreases in locations where N is high and vice versa. In contrast, tract fixel-FA profiles exhibit more robustness to crossing fibers. Additionally, tract profiles informed with fixel-based multi-tensor metrics show FA similar to the ones reported in healthy WM of human brain. Based on the literature, FA values in the healthy human brain WM generally range between 0.60 and 0.85, depending on the specific tract or region. For example, FA in the corpus callosum was reported to be between 0.72 and 0.78 , and between 0.73 and 0.76. FA in the internal capsule was reported to be between 0.70 and 0.80 , and around 0.75. Finally, FA in frontal WM was found to be between 0.60 and 0.70 , and between 0.60 and 0.65. Results on the healthy control group dataset follow the patterns observed in the experiments on the control synthetic dataset. Like Figure 5 , tract profiles obtained with DTI-based metrics consistently show lower FA and higher RD values compared to the fixel-based metrics across every bundle. Figure 7 shows average tract profiles computed from the HC cohort, which includes different subjects scanned in different time stamps. Tract profiles in Figure 7 show visually low variability overall. In Table C1 , the standard deviation (SD) is presented for tract profiles informed with both fixel-based and DTI metrics. The SDs are computed within-subject and between-subject for each bundle and each section of the bundle. The SD from tract fixel-FA profiles is generally higher than the tract FA profiles, though they remain comparable overall. Additionally, Table C2 presents the results of the ANOVA test conducted to compare the mean of 5 tract fixel-FA and fixel-RD profiles resulting from the five scans of sub-015 (one of the subjects exhibiting the highest variability), see Appendix C . 
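The per-label test just mentioned can be reproduced in outline with SciPy's one-way ANOVA. The sketch below uses synthetic numbers in place of the real per-scan fixel-FA samples and is only meant to show the structure of the computation (20 labels, 5 scans per label).

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
n_labels, n_scans = 20, 5

# Placeholder data: for each label, one array of fixel-FA samples per scan
profiles = [[rng.normal(0.7, 0.05, size=100) for _ in range(n_scans)]
            for _ in range(n_labels)]

for label, groups in enumerate(profiles, start=1):
    f_stat, p_val = f_oneway(*groups)           # compare the 5 scans at this label
    flag = "significant" if p_val < 0.05 else "n.s."
    print(f"label {label:2d}: F = {f_stat:.3f}, p = {p_val:.3g}  ({flag})")
```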
The ANOVA test shows the F-statistic and p-value across the 20 locations (labels) of the selected bundles. The results revealed significant differences in the means for various bundles at specific labels. Notably, Label 2 exhibited a statistically significant effect in the PYT_L bundle, with an F-statistic of 3.9618 (p = 3.84E-04), and in the ILF_L bundle. Similarly, Label 8 demonstrated significant findings in the SLF_L and MCP bundles. Additionally, Label 12 showed highly significant results in the AF_L and PYT_L bundles. Our pipeline was applied to the MS dataset for a set of relevant bundles in the context of MS studies: AF, ILF, IFOF, and PYT. Differences between MS patients and the HC group-averaged tract profiles are studied in Figure 8. In locations adjacent to lesions, fixel-FA tract profiles show lower values than in the healthy control group. Moreover, in the ILF and IFOF bundles, fixel-FA values lie beyond two standard deviations, which may indicate degradation of the WM integrity. Besides, fixel-RD tract profiles are consistently elevated compared to healthy controls in regions with lesions, suggesting the expected widespread demyelination. The spatial extent of the lesions correlates with the extent of the changes in both metrics. Figure 9 displays FA and fixel-FA maps along the IFOF_L bundle in patient 004. Both single- and multi-tensors are visualized in the region of the bundle. Each tensor is colored according to its FA value. Additionally, Figure 9 compares the tensor renders in two different ROIs within the bundle. The ROI outlined in blue contains MS lesions, while the ROI outlined in orange is located in the normal-appearing white matter. Several crossing fibers are present in each ROI, as the IFOF bundle crosses other bundles, such as the PYT and ILF bundles. The FA map shows darker areas in both ROIs, corresponding to the shape and decreased FA shown by the single tensors. No significant differences in FA values are apparent between the two ROIs. On the other hand, the fixel-FA map is darker only in the ROI with the lesion. However, unlike the FA map, the fixel-FA map shows higher values and fewer dark spots in the crossing-fiber ROI. This indicates that fixel-FA is more robust to crossing fibers. In addition, the multi-tensors show FA values within the healthy control range in the crossing-fiber ROI, highlighting their potential to differentiate between crossing fibers and lesions. In this work, we address the crossing-fiber bias of DTI metrics used in tractometry by instead using multi-tensor fixel-based measures obtained from multi-shell HARDI acquisitions. Multiple b-value diffusion-weighted data are mandatory for reliable parameter estimation in MTM-based methods such as MRDS. Our multi-shell acquisitions remain clinically feasible (~30 minutes). Previous works in the literature have reported limitations when informing tractometry with multi-tensor fixel-based metrics. Multi-tensor fitting is computationally demanding, highly affected by noise, and requires extensive high-quality HARDI dMRI acquisitions, which are time-consuming and challenging to obtain in clinical settings. Because of this, previous tractometry methods informed with multi-tensor fixel-based metrics have been limited to a maximum of 2 fixels per voxel, which is insufficient in many regions of the brain. MTM methods generally struggle to accurately determine the number of fixels at each voxel, which is especially challenging in regions with complex fiber configurations.
Choosing MRDS as a framework to estimate the multi-tensor fixel-based measures and using TODI to inform MRDS's model selection with tractography regularization allowed us to address these limitations in the current state-of-the-art. MRDS accounts for the presence of up to 3 fixels within each voxel plus an isotropic compartment, allowing for more accurate characterization of the fixel-specific tract profiles and being robust to fiber-crossing. MRDS is relatively fast when estimating the diffusivities in the resampled WM at 1 mm isotropic resolution (~1 h of computing time per subject). Moreover, it has been shown to be accurate and robust to noise (SNR = 12) when using clinical-grade dMRI data and protocols. Finally, the new model selection applied in MRDS allows for improvement in the estimation of the required number of tensors per voxel, taking advantage of the spatial regularization provided by tractography. In the experiments with synthetic dMRI data, relative errors for fixel-based metrics indicate that multi-tensor fixel-based metrics estimated with MRDS are robust to crossing fibers and sensitive to WM anomalies . When comparing the tract profiles obtained with fixel-based multi-tensor metrics to traditional single-fixel tensor metrics, a difference in sensitivity was observed. Tract profiles informed with multi-tensor fixel-based metrics distinguish between crossing fibers and scenarios like axonal loss and demyelination by assessing the underlying fiber configuration and WM tissue metrics. We tested our proposed tractometry pipeline on several WM bundles of the in-vivo healthy control group: SLF, AF, CG, IFOF, PYT, ILF and MCP. We compared the obtained tract profiles informed with multi-tensor fixel-based metrics with tract profiles informed with single-tensor metrics along these bundles, focusing on their robustness to crossing fibers . The robustness of the multi-tensor fixel-based metrics to crossing fibers is evident across all examined bundles. Besides, our findings indicate that tractometry informed with multi-tensor fixel-based metrics is consistent, reliable, and not significantly affected by random noise or crossing fibers. As expected, single-tensor metrics exhibit a notable fluctuation when the estimated number of crossing fibers per voxel ( N ) along the bundle increases or decreases. This pattern suggests that single-tensor metrics are highly influenced by crossing fibers. According to the literature , tract profiles informed with multi-tensor fixel-based metrics exhibit FA values in a range that is considered normal for healthy WM. This alignment suggests that multi-tensor fixel-based metrics provide more accurate representation of the WM integrity. Contrary, singles tensor metrics fail to estimate FA values considered normal in the WM because they are biased by crossing fibers. In Section 3.2 we quantitatively and qualitatively explored the within-subject and between-subject variability of the tract profiles. The consistent low SDs values for the tract profiles indicate minimal variability within and between subjects. Despite the higher variability in multi-tensor fixel-based tract profiles, they remain within acceptable limits. This suggests that multi-tensor fixel-based informed tract profiles are more accurate, but less precise than DTI informed tract profiles. Moreover, we conducted an ANOVA test to evaluate the differences in mean fixel-FA and fixel-RD metrics across 20 locations of several bundles in sub-015. 
The overall rejection rates across the labels suggest a high level of consistency in the measurements, with an average rejection rate of 40%. However, our findings indicate that the tract profiles of certain bundles are significantly influenced by the anatomical location, revealing significant differences in the means of fixel-FA and fixel-RD across different regions of the brain. These results underscore the importance for careful interpretation of tract profiles as certain bundles, particularly in subjects with pronounced variability. While Rojas-Vite et al. provided a solid foundation for the application of fixel-based metrics provided by MRDS, further validation using animal models remains essential. Particularly, in the context of demyelination and tractometry. Understanding the intricate changes in the obtained fixel-based metrics associated with demyelination is crucial for accurately interpreting the alterations detected by our method. Future studies utilizing animal models have to be driven for a more comprehensive assessment of our approach's sensitivity to demyelination and its correlation with histological outcomes. We compared relapsing-remitting MS patients to a group of healthy subjects with similar age and brain configuration . The proposed pipeline shows to be sensitive to WM anomalies related to relapsing-remitting MS disease. The single MS patient tract profiles exhibit values that clearly deviate from the healthy control group. These deviations are potentially related to MS pathology as they occur around lesion location. A similar behavior is reproduced in the synthetic data simulating demyelination . Therefore, differences between group-averaged and individual MS patients' tract profiles in Figure 8 are assumed to be a consequence of the disease. In general, for all bundles, MS patients consistently show reduced fixel-FA and increased fixel-RD compared to healthy controls. In Section 3.3, we made a comparison between the tract profiles of the HC group and two MS patients (sub-004-ms and sub-022-ms) of the MS group. Although a group comparison (HC vs. MS) may be done, the inherent group-averaging may not be beneficial because of the variability of MS lesions among MS patients. MS lesions can appear in different regions along the brain, and the severity of these lesions varies between patients . Averaging these tract profiles across patients could lead to loss of critical information that is essential for understanding the individual differences within the MS group. Nonetheless, it is important, as a future work, to design an analysis for the entire MS cohort, which will provide a more comprehensive understanding of these dynamics. Moreover, we recognize the importance of developing a framework for explicit comparison of Wallerian degeneration, which would provide valuable insights to the MS research community. Finally, we acknowledge the need for a more comprehensive analysis comparing FA and RD values in the normal-appearing white matter of MS patients with those of healthy controls, which could further enhance our understanding of the integrity of WM in regions without visible lesions. In a previous study , tractometry with dMRI metrics was investigated in young adults with relapsing-remitting MS. They reported significant abnormalities in the WM microstructure in WM bundles similar to those we used. In particular, reduced FA and increased RD were observed, indicating demyelination, which aligns with our reported results. 
Additionally, specific changes in fiber density and complexity were noted, indicating axonal degeneration. In Chamberland et al. a study using tractometry was conducted on MS patients with optic neuritis. It was found a limited ability to differentiate between various types of lesions like demyelination and axon loss using dMRI metrics, which is consistent with our findings. In another example , tractometry informed with single tensor and other advanced fixel-based metrics was used to investigate the association between diffusion MRI-derived measures and neuropsychological symptoms of MS. They focused on WM fascicles that are associated with cognitive dysfunction in the presence of lesions. Our approach could offer several benefits to this kind of studies. For example, MTM metrics may replace standard DTI metrics in their analysis. The integration of these new metrics should be direct, as MTM metrics have the same biological and geometrical interpretation as DTI metrics without the crossing fiber bias. This could provide a more robust and accurate depiction of microstructural WM changes in MS patients. MTM metrics like fixel-RD would allow for a more precise and sensitive characterization of demyelination and other alterations, including axon loss. Robust multi-tensor metrics could improve the reliability of longitudinal studies by providing consistent and accurate measures over time. This would facilitate the monitoring of disease progression. By incorporating multi-tensor fixel-based tractometry analysis, researchers and clinicians may underscore the advantages of multi-tensor fixel-based metrics in improving the fidelity of studies. One of the main limitations in the current literature is that RD metric can be contaminated in regions with crossing fibers and lesions, leading to erroneous interpretations and conclusions, making RD unstable as a biomarker . This work addresses this limitation by offering a tractometry pipeline robust to crossing fibers, suggesting the fixel-RD metric as a more robust biomarker for demyelination. Our pipeline shows that multi-tensor fixel-based methods could be a robust alternative to DTI, in which familiar metrics such as FA or RD are now specific to a particular fixel or track, with similar biological/geometrical interpretation. This facilitates the contextualization of these MTM metrics regarding many studies utilizing DTI metrics. Besides, it is unnecessary to include other fixel-based metrics such as AFD, which has challenging biological interpretability. AFD reflects the density of axonal fibers within a voxel, but not necessarily their functional status or health . Thus, an increase or decrease in AFD does not directly translate to improved or deteriorated neurological function, requiring additional context . Moreover, pathological conditions like demyelination or axon loss can alter diffusion properties in ways that are not straightforward to disentangle, making it hard to pinpoint the exact biological cause of changes in AFD . In this work, we utilized a simulated phantom that incorporates different compartments to simulate WM microstructure to evaluate our proposed method. However, it is important to acknowledge the limitations of this phantom as it only serves as an approximation that does not capture the full complexities of the human WM. Membrane permeability and vascularization are examples of factors that were not considered in these simulations. 
Future work should focus on validating the proposed method using more realistic phantoms, such as the proposed by Callaghan et al. and Villarreal-Haro et al. . Our pipeline uses the closest-fixel-only strategy when relating the streamline's segments to local fixel properties. This does not allow multiple local fixels to contribute to a given streamline and might contribute to erroneous tractometry results if the bundle does not have enough streamlines. This can be improved by employing a fixel angular weighting strategy as the one proposed and used in Delinte et al. . Our results showed a decrease in RD precision when a single fiber population is present. This is concordant with what has been reported in other multi-fiber methods . Including a free water tensor in MRDS enhances results accuracy and mitigates potential biases, particularly when analyzing patient data. Nonetheless, this inclusion decreases sensitivity in the estimated fixel-based diffusivities due to the increased complexity of fitting 4 tensors (3 anisotropic and 1 isotropic) instead of 3 with MRDS. Besides, for acquisition schemes including high b-values the estimation of N and the isotropic volume fraction is affected, see Appendix A . Hence, the isotropic compartment may partially explain the contribution of the extra-cellular part of the dMRI signal, resulting in a reduction of the RD as shown in results with synthetic data. In the future, we consider that it will be important to study in depth the impact of including a free water compartment in MRDS and their implications in other lesions like edema as it is still an open question. In our study, we demonstrated that the proposed method effectively detects variations in tract profiles associated with lesions, both in synthetic simulations and in-vivo data. This capability underscores the potential of our approach for identifying abnormalities in complex fiber crossing regions. However, it is important to note that while our method shows promise in detecting lesions, future work is necessary to further investigate its performance in accurately assessing the actual severity of detected lesions. Although the obtained results underscore the capabilities of the proposed pipeline to identify WM lesions while being robust to crossing fibers, it cannot discriminate between demyelination and axonal damage. This is congruent to previous studies, which found that RD is sensitive to several microstructural changes different from demyelination, such as axonal deterioration, edema, and inflammation . More advanced models like SM can distinguish between changes occasioned by axonal integrity and changes due to demyelination, but they still use one single tissue kernel per voxel, not per fixel. Similar to Dayan et al. , our robust multi-tensor fixel-based metrics can be combined with these advanced methods, leading to a more sophisticated pipeline with a different type of metrics. Additionally, the employment of M agnetization T ransfer I maging (MTI), which is sensitive to myelin content, could help to differentiate between demyelination and axonal injury. However, it is necessary to extend the developed phantom for simulating not only dMRI acquisitions but also MTI acquisitions in order to validate the results on in-vivo data. Another important aspect to consider is the high amount of false-positive streamlines in the tractogram and recognized bundles . 
While segmenting the tractogram and focusing the analysis on known tract bundles, false-positive streamlines can lead to inconsistencies in the tract profiles of the tractometry analysis, for instance by overestimating the tract profiles derived from the estimated fixel-based metrics. Additionally, they can introduce more noise and variability into the analysis, hindering reproducibility. This can reduce the sensitivity of tractometry analysis to detect genuine WM alterations between control subjects and patients, resulting in misinterpretations and erroneous conclusions. Fortunately, there are methods like COMMIT that assign weights to individual streamlines in the tractogram by solving a convex optimization problem. This enables the detection of false-positive streamlines, which can then be removed by discarding streamlines with a weight equal to 0. As future work, COMMIT could be integrated to make the pipeline more robust to false-positive streamlines. In conclusion, our work focuses on creating a robust tractometry framework informed by tractography-regularized multi-tensor fixel-based metrics. It demonstrates its capability to address the crossing-fiber bias and lesions, increasing sensitivity in both simulated and real-world scenarios. This study makes several key contributions to the field of WM imaging analysis. First, we developed a simulated phantom with challenging and customizable geometry, incorporating different WM scenarios by using the standard model (healthy tissue, demyelination, and axon loss). This phantom provides a controlled environment in which to systematically evaluate and compare different imaging techniques and models, allowing us to verify the accuracy and robustness of our proposed methods against various fiber configurations and pathologies. Second, our proposed pipeline, informed by the multi-compartment MRDS framework (three anisotropic and one isotropic compartment), marks a substantial methodological advancement. The pipeline goes from raw data to tract profiles informed by track-specific tensor metrics. By combining tractography robust to lesions with accurate multi-tensor fixel-based metrics, our pipeline achieves more robust, precise, and sensitive representations of the WM microstructure, particularly in regions with complex crossing-fiber configurations or lesions related to pathologies. This approach addresses limitations in current state-of-the-art methods. Third, we evaluated the proposed tractometry pipeline in a cohort of 20 healthy individuals. Our results demonstrate the superiority of MTM over DTI, highlighting MTM's enhanced ability to capture detailed microstructural information and resolve crossing-fiber geometries. The increased sensitivity of MTM metrics provides more accurate assessments of white matter integrity. Finally, applying our tractometry pipeline to a cohort with relapsing-remitting MS further underscores the clinical relevance of our work. Our qualitative analysis demonstrates the sensitivity of the pipeline in detecting WM anomalies related to demyelination. This is particularly important in diseases like MS, where differentiating between crossing fibers and lesion contamination is essential. The pipeline's capability to delineate these anomalies offers an improvement over approaches that include only DTI metrics for studying and monitoring MS and, potentially, other neurological conditions.
In this article, we report our experiences of participant selection in different traditional, modern, and community-oriented qualitative methodologies (auto/ethnography, narrative inquiry, participatory action research, ethnography, case study, grounded theory, and phenomenology) and offer some thinking points for consideration. Qualitative research is gaining popularity in social science and educational research for exploring human experiences and feelings. People aligned with the phenomenon under study are the main sources of information and/or data. Regarding this popularity, Aspers and Corte noted: "Seen in a historical light, what is today called qualitative, or sometimes ethnographic, interpretative research—or a number of other terms—has more or less always existed." (p. 141). Denzin and Lincoln stated, "Qualitative research is a situated activity that locates the observer in the world. It consists of a set of interpretive, material practices that make the world visible. These practices transform the world. They turn the world into a series of representations, including field notes, interviews, conversations, photographs, recordings, and memos to the self." (p. 3). Thus, qualitative research plays a crucial role in exploring complex phenomena and gaining in-depth insights into human experiences. Further, Pyo et al. remarked that "Qualitative research is conducted in the following order: (1) selection of a research topic and question, (2) selection of a theoretical framework and methods, (3) literature analysis, (4) selection of the research participants and data collection methods, (5) data analysis and description of findings, and (6) research validation." (p. 12). In practice, however, there is always movement back and forth between these steps, making the process iterative. Despite the popularity of qualitative research, novice researchers often struggle with the intricacies of selecting participants for exploring human experiences together with their feelings, emotions, and perceptions. Selecting appropriate research participants is therefore crucial to any qualitative research and/or inquiry, as it directly influences the rigor (credibility) of the study and the richness of the data collection and/or generation, whereas an inappropriate choice of participants and data collection methods may lead to methodological flaws and compromised study outcomes. Each with its own advantages, disadvantages, and characteristics, participant selection in qualitative research generally takes the form of purposeful sampling against defined criteria and, in some traditions, the recruitment of co-researchers. These selection procedures are usually based on the problem, purpose, research question, and theoretical referents. In this article, we explore participant selection procedures, drawing from our experiences and understanding. We offer some thinking points for consideration in qualitative methods and highlight the nuanced differences and uniqueness of each of the chosen methodologies, providing a practical guide for novice and/or veteran researchers regarding participant selection in seven qualitative methodologies: auto/ethnography, narrative inquiry, participatory action research, ethnography, case study, phenomenology, and grounded theory. Choosing appropriate participant selection procedures is essential to enhancing the quality of qualitative studies.
This paper serves as a comprehensive guide for novice and/or veteran researchers, offering a step-by-step approach to participant selection in the chosen qualitative research methods, addressing common challenges, and offering practical solutions based on our studies. In this article, we report our experiences of participant selection in each methodological tradition, as the process is essential for ensuring quality in qualitative research findings and/or outcomes. Addressing ethical considerations and practical tips for selecting participants, this article presents a participant selection procedure for seven qualitative methodologies: (1) auto/ethnography, (2) narrative inquiry, (3) participatory action research, (4) ethnographic study, (5) case study, (6) grounded theory, and (7) phenomenology. As the authors have also embedded reflective experiences from our research journeys and some thinking points in relation to participant selection, we use the first-person pronoun "I" in the subsequent sections.

In this section, I, the first and corresponding author, share my experiences of conducting auto/ethnographic inquiry, particularly the participant selection process. In my auto/ethnographic inquiry, the research site was not confined to fixed boundaries because of the unique nature of the approach. Based on my research purpose, my inquiry was confined to myself, four research participants, and a critical friend based in the South Asian context. This limitation, however, suggested exploring the information through postmodern approaches of self and others, connecting life and research. Postmodern approaches encouraged me to uncover my beliefs, thinking, and process of being and becoming in a larger context while thinking qualitatively. Likewise, the main aim of my auto/ethnographic inquiry was to explore anecdotal and personal experiences of self (insider) and others (culture) and to connect the autobiographical story to wider cultural and social meanings and understandings, enriching the meaning-making process with my research participants (Aarati, Kamal, Hari, and Santosh) and my critical friend (Naresh). The names of the participants and of the institutions they serve are pseudonyms. In my PhD inquiry, I primarily generated data from myself while incorporating the research participants and critical observations from the critical friend. My conversations with the four research participants, together with the critical friend's observations, followed the narratives I generated myself while envisioning science, technology, engineering, the arts, and mathematics (STEAM)-based mathematics education in relation to (1) school-community relations, (2) mathematical curricular spaces, (3) professional development, and (4) leadership development in STEAM-based mathematics education. Moreover, I was flexible in terms of the number of research participants: first six, then five, and finally four. These were the basic tenets of my auto/ethnography, in keeping with the emerging nature of the inquiry. Data generation was carried out before, during, and after the field engagements. I employed writing as/for the process of the inquiry to capture the contextual and universal perspectives in my inquiry. Data generation incorporating the research participants and the critical friend's observations took almost two years, alongside other professional engagements, while I continued envisioning STEAM-based mathematics education even at the finalizing stage of the thesis.
In my PhD study, I presented a brief description of my participants as follows: Thus, a brief description of the participants in the method section of an article and/or thesis makes the research method, including the participant selection procedures, more transparent.

In this section, I, the second author, briefly discuss narrative inquiry and then the participant selection process in detail. Narrative inquiry is an emerging research method that is gaining popularity in social science research in general and in educational and teacher education research in particular. Narrative inquiry predominates in research studies exploring "educational experience as lived". In narrative research, we explore participants' lived experiences that are recorded in the form of stories. Highlighting the importance of stories in human life, Silko mentions, "You don't have anything if you don't have stories" in her widely acknowledged novel Ceremony. Human beings are storytelling creatures who explain their own and others' doings through narratives of past, present, and imagined future experiences. According to Kramp, stories "assist humans to make life experiences meaningful. Stories preserve our memories, prompt our reflections, connect us to our past and present, and assist us to envision our future" (p. 107). Barkhuizen argues that "Experiences become narratives when we tell them to an audience, and narratives become part of narrative inquiry when they are examined for research purposes or generated to report the findings of an inquiry" (p. 4). According to Webster and Mertova, narratives allow teachers and researchers to present experiences holistically with their complex situatedness. Narrative inquiry is concerned with analyzing, interpreting, critiquing, and presenting the stories we live by, be they individual storied lives or the myths surrounding us. The critique process means that the research process can indeed become a matter of co-generating imagined future experiences. As people live storied lives and narrate the stories of such lives, the primary responsibility of the researcher in narrative inquiry is to present such lived experiences with their meaning, during which even the researcher becomes part of the meaning-making process, constructing and reconstructing a shared narrative through inquiry. Narratives occur in specific socio-cultural contexts with three main commonplaces: temporality, sociality, and place, which together constitute the concept of narrative inquiry. These commonplaces, always in the process of becoming or transitioning, distinguish narrative inquiry from other methodologies. Contextual factors embedded in participants' narratives make narrative data rich and complex. As Connelly and Clandinin noted, data collection methods ranging from field notes of the shared experience to in-depth interviews, journals, and other sources make the data rich. Most often, narrative inquiry involves a small number of research participants, sometimes just one, primarily in the case of the life history approach, with an in-depth and prolonged period of story generation. The type of narrative inquiry (autobiographical, biographical, life history, arts-based) and the data collection method chosen by the researcher also influence the number of research participants. For instance, if a researcher wishes to conduct a narrative survey first and then select a chosen few as research participants, 30 participants are generally considered an ideal number.
However, for in-depth interviews that aim to explore rich data, six to ten participants suffice for the purpose of the research. In the case of the autobiographical and life history approaches, even just one participant is enough. In the life history approach of narrative inquiry, prolonged engagement is crucial rather than numbers. In my PhD study, which explored the trajectory of identity negotiation of English language teachers in Nepal and subscribed to the life history approach, I took just four secondary-level English language teachers in public schools in Kathmandu Valley as my research participants. Like other qualitative research approaches, even in narrative inquiry, purposive sampling is relevant for selecting participants based on criteria. In my research studies, the participant selection criteria were defined as (a) teachers having at least one education degree, either a Master of Education (M.Ed.) or a Master of Philosophy (MPhil), (b) currently teaching secondary-level students in public schools in Nepal, and (c) having at least seven to ten years of teaching experience. However, before deciding on the four participants who met my criteria, I held a preliminary conversation with seven participants. Out of those seven, the four who best fit the defined criteria were selected. Experienced teachers were purposively selected as research participants because, at this stage, they could be expected to have reached a certain level of maturity, gained ample experience, and attained a certain level of identity construction. In narrative inquiry, participant selection is primarily purposive, and the selection is made against defined criteria to meet the purpose of the study. Next, the type of method chosen also influences the number of participants. In narrative research, prolonged engagement and drawing out detailed experiences are crucial rather than the number of participants. In narrative research, researchers can also embed their own experiences as data. In my PhD research, I included my lived experiences alongside the participants' stories wherever relevant. Besides, mentioning the participant selection criteria and including a brief description of the participants in the method section always adds transparency and rigor to the research.

The general notion of sampling is not appropriate terminology in Participatory Action Research (PAR). In generic qualitative research, sampling refers to the participants selected by the researcher, from whom information and textual data are obtained during data collection. In PAR, by contrast, the researcher who initiates the research shares the ideas and invites people to contribute to the research process; responsibility and commitment to change in the research site are thus shared, and the members are collectively named "co-researchers." In this article, I, the third author, share how co-researchers are invited and how the roles and responsibilities of the PAR team are negotiated. In a general sense, the popular and generic qualitative sampling method, purposive sampling, seems useful in PAR. However, the notion of "purposive" is rich and multilayered and needs further clarification in PAR. Chevalier and Buckles mentioned that the researcher should first know the actors in the research process. In this regard, identifying the stakeholders who can potentially become co-researchers is a complex but crucial task. The authors themselves raised some questions in this respect.
Having raised these questions, the authors finally noted that, in the end, problems, actors, and options are inseparable. This is a powerful and authentic set of questions that guides the process of confirming the "participants" (i.e., co-researchers) in PAR. Livingston and Perkins suggested that the role of the (academic) researcher is to facilitate discussions and understanding among the participants through scoping conversations at first and to support them in agreeing upon the specific methods of inquiry. Here, the method of inquiry refers to the entire process of engagement in the field, not as a passive provision of information but as active engagement in the cycle of action and reflection during knowledge generation. Another popular method in PAR is respondent-driven sampling. This is applicable when the population is "hidden," when no sampling frame exists, and when public acknowledgment of membership in the population is potentially threatening, but there are people in the hidden population who can contribute to some actions and knowledge generation approaches. This can be done through a chain-referral system in which one person refers another and, finally, the academic researcher forms a group of co-researchers. Analyses have shown that, while such sampling, like most chain-referral samples, begins with an arbitrarily defined set of initial subjects, the composition of the ultimate sample is entirely independent of those initial subjects. Milne indicated that the participatory nature of research gives rise to a problem-solving technique that often involves researchers and research participants working together to examine a problematic situation, actions, or issues. So, participants should be mentally and emotionally ready to tackle such situations. In PAR, people sometimes misconceive that, because this is a rigorous process and participants are considered co-researchers, quality data can only come from academic participants. In this context, Mata-Codesal et al. mentioned that non-academic participants themselves come to recognize, reflect on, and express their experiences in a novel way that may not be captured in basic texts through a conventional research method. This becomes possible when artistic and creative practices capture such rich narratives of the field while researching and engaging with wider audiences. In this context, while forming a group of participants, researchers should be aware of those people who can contribute in a noble way to the community, using the research process to seek transformation in the direction of social justice rather than gathering people merely for their cognitive contributions.

Ethnographic studies are inherently context- and culture-sensitive, necessitating careful consideration of various socio-cultural factors in selecting both the research sites in which to "hang around" and the research participants with whom to engage for a prolonged period. Ethnographers prefer "participant selection" to "sampling" or "recruitment of participants" to ensure that the individuals chosen for the study reflect the socio-cultural dynamics of the context being researched. This approach acknowledges the complexity of human societies and emphasizes the importance of understanding cultural intricacies while respecting and honoring the cultural contexts of the participants. First of all, ethnographers select research sites based on specific, justified criteria, as the site significantly influences participant selection.
Once the research site is identified (which, of course, should also be rationally justified against specific criteria), the task of participant selection begins. Sometimes, the participants are clearly laid down in the research questions (e.g., the Mayor, Minister, Secretary or Officer of the district/local education office, the headteacher, the School Management Committee Chair, a specific grade or subject teacher, or students with a disability of a certain kind), in which case the job becomes slightly easier; however, given the participants' right not to participate, we need to be open to moving on to another site. Again, the choice of site followed by participants may not always be linear; it may be the other way round in some cases, since, after all, the participants are the key focus. Therefore, researchers must remain flexible, as the choice of site and participants can be iterative rather than linear, with the potential to shift focus based on emerging insights during the research process. Moreover, the detailed socio-cultural, economic, political, or other local dynamics of the site and participants are to be carefully and minutely considered in an ethnographic study to delve deeply into the intricacies of culture. Culture here means either or both community culture and institutional/group behavior. While there is no watertight answer to how many participants would make an ideal ethnographic study, Angrosino's suggestion may be worthwhile: "the size of a sample depends on the characteristics of the group you are studying, on your own resources (i.e. legitimate limitations on your time, mobility, access to equipment, and so forth), and on the objectives of your study" (p. 48). For pragmatic reasons, ethnographic practice in academic research has shown that the number of participants in ethnographic studies typically ranges from six to ten, particularly when focusing on in-depth interviews and observations. This small size is conducive to generating rich qualitative data while allowing researchers to delve deeply into participants' experiences and interactions within their social contexts. If focus groups are included, the number may increase slightly. Importantly, the selection of participants is not merely about quantity; it is about achieving theoretical saturation, ensuring that the data collected are comprehensive enough to address the research questions effectively. Therefore, it is the researchers' job to claim 'data saturation' and thus to limit the number of participants. In terms of our co-author's (Dhakal's) own experience and practice of engaging in ethnographic research and ethnographic sub-studies, he tried to select participants who best reflected the cultural or social groups being studied, especially focusing on a small, purposefully chosen set of information-rich participants. The actual participants to interview, interact with, or engage in group interactions (such as focus group discussions) may not be the main focus, since most ethnographic studies rely on multiple methods of data generation, some of which do not require one-to-one or group interaction but only observation. Since ethnographers focus on observing daily interactions and participating in cultural activities, the actual interviews with community members to gather narratives and stories may be relatively few. Ethnographers employ various techniques of participant selection tailored to their research goals, such as purposive, snowball, convenience, or theoretical sampling.
However, the purposive selection of participants is most commonly used in ethnographic research, as it facilitates the identification of information-rich participants who can provide deeper insights into the cultural dynamics being studied. While the representation of diverse cultural or social groups is important, it is not the primary aim of ethnographic research. Instead, the focus is on selecting participants who can provide in-depth insights into the norms and behaviors of the community. And "the community" need not be regarded as homogeneous; different participants in the community may offer different accounts of what is valued. Critical ethnography also examines power relations and how these shape social interactions. So, a strong rationale must be clearly laid out for how and why the chosen number of participants would suffice and why they are the best-fit participants. Moreover, a detailed description of each participant's profile can be presented either as a summary table (see the example in Table 1), which I, the fourth author, used in my PhD thesis on women's participation in school governance, or as a detailed bio-sketch of each participant (see Table 1 from Dhakal's PhD). I combined both approaches in my PhD. Table 1 shows how presenting the participants' details aids the transparency of the research process and the participant selection procedure.

The qualitative case study is a popular research method in social science and educational research. A case study can be defined as an in-depth examination of a complex subject, institution, or problem in a real-life setting. The method is appropriate when the research question is a "how" or "why" question and the phenomenon is to be studied in a real-life or natural context. There are two major types of case study research in terms of the selection of case(s): single-case and multiple-case. The subject of investigation in a single case study can be an individual, a family, a household, a community, an organization, an event, or even a decision, whereas in multiple case studies two or more cases are studied. In this section, I, the fifth author, discuss participant selection strategies in case studies for collecting rich data toward a holistic understanding of the studied phenomenon or phenomena. A case study can be explanatory, exploratory, or descriptive, so the choice of case study design depends on the study's overall purpose. An explanatory case study seeks to identify the causal factors that explain a particular case; its primary focus is to explain "why" and "how" certain conditions come into being and why certain consequences of events occur or do not occur. An exploratory case study explores the context of the phenomenon, and its primary purpose is to investigate or identify new research questions that can be used extensively in subsequent research studies. Likewise, the primary purpose of a descriptive case study is to describe a phenomenon in detail in the real-life situation in which it occurred. In terms of the number of cases, the researcher tries to reach a holistic understanding of a unique, extreme, or critical case in a single case study, whereas in multiple case studies the researcher explores similarities or differences across cases. Sampling in a case study involves selecting representatives from a larger population for an in-depth analysis of the issue to be studied.
Qualitative case studies employ purposive sampling to illustrate the phenomenon of interest and present an in-depth understanding of the case under study. In case study research, as in other qualitative research, participants are selected in terms of their relevance to the research topic or question(s). Like other qualitative researchers, case study researchers need to define participant selection criteria clearly and should have a specific purpose behind selecting the case(s) that offer valuable insights into the phenomenon under investigation. Similarly, considering access to and feasibility of the data in terms of availability, willingness to participate, the practicality of data collection methods, and continuity of the data collection process until it reaches the saturation point is also critical for determining the sample. Moreover, case study researchers select their cases gradually, not limiting the number of participants until the data reach saturation. Regarding this, Glesne and Peshkin suggest that if the stories are repeated among the participants and no new information is added to the research by any new participant, researchers should stop selecting new participants. For a sound, unadulterated, and unbiased study of the phenomenon under investigation, a case study involves multiple sources of data collection, such as participant/nonparticipant observation, in-depth interviews, audio/video recordings, field notes, focus group discussion (FGD), conversations in a natural setting, and the study of documents (whether books, archival manuscripts, signs, physical artifacts, and so on). The notion of conducting an "unbiased" study is, however, highly debated, even within qualitative research. In their constructivist credo, Lincoln and Guba noted that researchers do not need to assert their impartiality but should instead embrace a dialogical approach with their participants or co-researchers. Gergen further emphasizes that achieving excellence in qualitative research is less about striving for objectivity and more about fostering strong relationships with research participants. The use of multiple sources of data collection is crucial in the case study; it does not seek to offer a more or less unbiased representation, but it can be used to enhance dialogically generated insights and increase the richness and quality of the findings, which are then likely to be more convincing and accurate. At the same time, because of the bulk of data from multiple sources, there is sometimes a risk of researchers getting lost in the data. Romm expanded the discussion to provide more detailed accounts of what acting responsibly toward research participants means. Therefore, Baxter and Jack suggest proper organization and analysis of the data, as each data source is one piece of the "puzzle" and contributes to the researcher's holistic understanding of the phenomenon. Thus, the emphasis is not solely on the professional researcher's comprehension but on co-creating understanding and insight.

Grounded theory is a highly favored qualitative research methodology in the social sciences because of its distinct theory development process. In contrast to conventional research methods, grounded theory enables theories to arise directly from methodically gathered and examined data. This approach is well suited to studying social interactions, processes, and behaviors in natural environments.
This approach emphasizes the development of theory from the bottom up, which gives researchers a more authentic understanding of the phenomenon they are studying and helps them gain insights into the lived experiences of participants. In this sense, grounded theory has proven to be an effective tool for examining complicated social issues, especially poorly understood ones, because of its versatility and flexibility. As grounded theory aims to build a theory grounded in the data, participant selection is crucial. Grounded theory adopts theoretical sampling as an effective strategy, a dynamic, iterative process driven by the emerging theory. Researchers continuously gather and analyze data using this sampling method, letting the developing theory determine where, when, and from whom further data should be collected. As a result, this method enables researchers to concentrate on their areas of interest, find theoretical gaps as they occur, and ensure that the final theory is thorough and solidly supported by the available data. A broad research question or area of interest is usually the starting point for theoretical sampling. The initial data collection process may involve document analysis, observations, or interviews, depending on the research context. Furthermore, the emerging theory guides the sampling decisions as data are gathered and analyzed. In this regard, the researcher looks for additional subjects or data sources that can elaborate on the emerging ideas. Ensuring that the final theory is thorough and firmly based on the data enables the researcher to expand and improve the theory as the study goes on. Another element to consider during the grounded theory sampling process is reflexivity. In qualitative research, reflexivity is crucial, especially when using techniques like grounded theory, where the researcher's work is closely linked to gathering and analyzing data. By practicing reflexivity, researchers become conscious of their own prejudices and of how these might affect the way they conduct their work, including how they choose to sample. Neill emphasized the importance of reflexivity to ensure that researchers' preconceptions do not influence the sampling process, keeping it aligned with the emerging theory's requirements. Charmaz argued that grounded theory can never be a completely objective representation of phenomena. Researchers should transparently disclose how their theories have been constructed or co-constructed. Mills et al. offer a comprehensive discussion of the nuances within grounded theory and constructivist approaches, including those explicitly promoted by Charmaz and other proponents. By practicing reflexivity, researchers can address potential biases, enhancing the quality and credibility of their data collection. This process strengthens the rigor of grounded theory and the research relationship. The process of theoretical sampling is a continuous cycle of data collection, analysis, and refinement rather than a linear one. To find trends, concepts, and categories, newly acquired data are immediately examined and compared with the data already collected. A key component of grounded theory, constant comparison, ensures that the evidence supports the developing theory. The process of theoretical sampling persists until theoretical saturation is achieved, which occurs when additional data cease to advance the theory. At this stage, the researcher can be confident that the theory appropriately explains the phenomena being studied and is well supported by the data.
According to Thomson, a sample size of about 25 interviews is typical for grounded theory research. A larger sample of up to 30 interviews might be advised in some circumstances, however, to thoroughly develop the patterns, concepts, categories, attributes, and dimensions of the phenomenon being studied. With a larger sample, researchers can examine the subtleties of the data and make sure the developing theory is thorough and well supported. Grounded theory carefully considers sample diversity in addition to sample size. Depending on what the study requires, the sample consists of people with a range of experiences, backgrounds, and viewpoints. Researchers can make sure that the developing theory accurately reflects the complexity of the phenomena they are studying by recruiting a wide range of participants. However, the idea of accuracy is challenged by constructivist grounded theorists: the developing theory is a co-construction that can never be checked for accuracy against some objective reality. So, it is deemed to offer insights in relation to the complexity of the phenomena being researched. Likewise, another important component of grounded theory is the consideration of ethical issues. The requirements of the developing theory dictate the sampling procedure. Because of this, researchers need to consider any potential ethical ramifications before making sampling decisions. According to Conlon et al., researchers should take into account concerns about the impact on participants, informed consent, and confidentiality. By doing so, researchers can ensure that participants are treated fairly and respectfully during sampling. The rigor and credibility of the research are enhanced by attending to these important aspects of sampling in grounded theory: reflexivity, sample size, diversity, and ethical considerations.

Phenomenology is a unique qualitative form of inquiry into the lived experiences of human existence, and it aims to understand those experiences from the participants' perspectives. This methodology is rooted in early 20th-century European philosophy and employs thick descriptions and close inquiry into lived experience to understand how meaning is created through personal insights and perceptions. Digging deep into participants' lived experiences, which reflect their life's pains and gains, is a challenging job for a researcher. For instance, "a study on the lived experiences of pregnant women with psychosocial support from primary care midwives will recruit pregnant women varying in age, parity and educational level in primary midwifery practices". It is essential to appropriately select the research phenomenon and participants and to formulate research questions for a phenomenological study so as to capture the essence of the participants' shared experience and construct meaning from their experiences. The term "sample" within phenomenological methodology should not refer to "an empirical sample as a subset of a population", but to a wisely chosen group of human beings who can share in-depth insights into the essence of the subject being studied, aligned with transformative intents. Phenomenological researchers primarily employ purposive, snowball, and maximum variation strategies for selecting their research participants. These strategies help researchers delve deep into their participants' lived experiences of the phenomenon being studied.
Purposive sampling is a key strategy in phenomenological studies, as it enables researchers to select participants who have a rich array of lived experiences of the phenomenon under study and who are willing to provide rich, thorough, and evocative data. Participants are selected based on their lived experiences, their knowledge of the phenomenon being studied, and their ability to describe their group or (sub)culture verbally. Such participants can provide rich descriptions of their experiences with the phenomenon, collaborating with the researcher to explore its essence and construct meaning. Another important strategy is snowballing. While selecting research participants through snowball sampling, the researcher first selects one or a few participants, considering their knowledge of, and ability to express their experiences with, the phenomenon under study. They then identify other prospective participants who are expected to have in-depth information about the phenomenon being explored, for example a program or community, by asking the initial participants to recommend other people who possess similar characteristics and experiences. Mertens further states that the list of participants grows, like a snowball, as the added participants refer other prospective members. Hence, the initial participants' recommendations help a phenomenological researcher conveniently select appropriate study participants. Maximum variation sampling is another significant strategy for gathering intentionally heterogeneous data in phenomenological research. The researcher selects participants with a wide range of characteristics and experiences related to the phenomenon being explored. The data collected from such a selection of participants can yield varied information from a wide range of perspectives and help identify important common patterns. In a phenomenological study, the participants all share experience of the phenomenon being explored. The phenomenological study focuses on the depth and quality of the information gathered, primarily through interviews and observations, rather than on the number of participants. There are different opinions regarding the number of participants in qualitative research, including phenomenology. For example, Polkinghorne suggests interviewing 10 to 25 participants, and Moustakas recommends that a researcher take between 5 and 25 participants. However, data gathering continues until saturation occurs, that is, when the data no longer reveal new insights or themes from the participants. The researcher encourages and probes the participants to describe their experiences in detail during unstructured and semi-structured interviews, and observes them in the context where the phenomenon being explored is experienced. A researcher can use an unstructured interview when they have a limited understanding of the topic and want to rely on the participants' information to lead the conversation, and a semi-structured interview in order to obtain in-depth data from the participants. The data are expected to reach a point of saturation, ensuring that no new understandings or themes would emerge from further participants. Saturation occurs when the data no longer reveal new information or themes, and further interviews or data collection yield only redundant information.
The primary emphasis of a phenomenological study is on the richness and saturation of the information, so selecting appropriate participants is a critical methodological process. A researcher can obtain in-depth, diverse, and evocative insights into participants' lived experiences by employing purposive, snowball, or maximum variation sampling strategies, ensuring that the selection process benefits both researchers and participants. The number of participants is decided when the insights or themes start to repeat, affirming that the study encapsulates the essence of the phenomenon being explored. Hence, an appropriate selection of participants helps researchers construct an in-depth understanding of human experiences based on the viewpoints of those who have experienced them.

Participant selection is a critical aspect of qualitative research designs that influences the credibility and richness of the data collected. This article has provided a comprehensive guide and some thinking points for consideration for novice and/or veteran researchers, drawing from the authors' extensive experiences in different qualitative methodologies, including auto/ethnography, narrative inquiry, participatory action research, ethnography, case study, phenomenology, and grounded theory. The nuanced differences and unique aspects of participant selection across these methodologies (as shown in Table 2) highlight the importance of a thoughtful and deliberate approach. By considering the problem, purpose, research question, and theoretical framework, researchers can ensure that their participant selection process aligns with the goals of their study and enhances the overall quality of their research without compromising their methodology. This article has also emphasized the iterative nature of qualitative research designs such as auto/ethnography, where participant selection is not a one-time decision but an ongoing process that may require adjustments as the study progresses, in keeping with the emergent nature of qualitative inquiry. The ethical considerations, our experiences, and the thinking points offered here serve as valuable guidelines for researchers navigating the complexities of participant selection. Starting with the participant selection procedure in auto/ethnographic inquiry, the first author discusses and exemplifies the adaptability and depth of qualitative research. This approach blends personal and cultural narratives, allowing researchers to connect individual stories to broader social and cultural contexts. The selection of participants in auto/ethnography is also guided by the research purpose and the need to explore both the self's and others' experiences in a meaningful way. Both narrative and auto/ethnographic inquiries underscore the iterative nature of qualitative research, where participant selection is an ongoing process that may evolve as the study progresses. Ethical considerations and the need for prolonged engagement with participants are essential to ensuring the richness and credibility of the data collected. Next, narrative inquiry, which emphasizes the importance of temporality, sociality, and place, involves a small number of participants, sometimes even just one, especially in autobiographical or life history approaches. The selection criteria are typically based on the research purpose, problem, and theoretical framework, ensuring that the participants' stories are rich and meaningful.
For instance, in the second author's PhD study, the selection of experienced English language teachers in Nepal was based on specific criteria to ensure depth and relevance in the narratives collected. Further, by subscribing to participatory action research (PAR), the third author redefines the traditional sampling concept by emphasizing the collaborative nature of the research process. In contrast to conventional qualitative research, where the researcher chooses the participants, PAR entails inviting people to join as co-researchers who share responsibility for, and dedication to, the research objectives. This approach fosters a sense of ownership and active engagement among all participants, enhancing the relevance and impact of the research. The process of identifying co-researchers in PAR is complex and multifaceted. It requires an understanding of the stakeholders, their shared problems or goals, and the broader vision they aim to achieve. This collaborative approach ensures that the research is grounded in the real-world experiences and aspirations of the community involved. Methods such as purposive sampling and respondent-driven sampling can be adapted to fit the unique needs of PAR, ensuring that the co-researchers are well suited to contribute meaningfully to the research process. The participatory nature of PAR encourages a problem-solving mindset, where researchers and co-researchers work together to address issues and generate knowledge. And this idea of collaboration (and co-generation of meaningful insights) is not confined to PAR; it can enter much qualitative research. This collaborative effort often leads to richer, more nuanced data that might not be captured through traditional research methods alone. The inclusion of non-academic participants, who bring diverse perspectives and experiences, further enriches the research outcomes. PAR transforms the research process into a collective journey of inquiry and action, where roles and responsibilities are shared and the knowledge generated is co-created. This approach not only enhances the quality and relevance of the research but also empowers the participants, fostering a deeper connection to the research outcomes and their potential impact on the community. Researchers can create more inclusive, impactful, and ethically sound research practices by embracing the principles of PAR. Likewise, the fourth author added that ethnographic studies require a deep understanding of the socio-cultural context and careful consideration of participant selection. Unlike other qualitative methodologies, ethnography emphasizes "participant selection" over "sampling" to ensure that the chosen individuals reflect the research site's cultural dynamics and enable (critical) exploration of power relations. This approach respects the complexity of human societies and honors the cultural contexts of the participants. The process begins with selecting a research site based on specific criteria, which significantly influences participant selection. Flexibility is crucial, as the choice of site and participants can be iterative, adapting to emerging insights during the research process. Detailed consideration of socio-cultural, economic, and political dynamics is essential to delve deeply into the intricacies of the culture being studied. The number of participants in ethnographic studies typically ranges from six to ten, focusing on in-depth interviews and observations.
This small size allows for rich qualitative data while ensuring comprehensive coverage of the research questions. Achieving theoretical saturation is key, ensuring that the data collected are sufficient to address the research objectives effectively. Ethnographers employ various participant selection techniques, with purposive sampling being the most common. This method identifies information-rich participants who can provide deep insights into the cultural norms and behaviors of the community. The goal is to select participants who can contribute meaningfully to the study while reflecting the diversity of the groups involved. Providing detailed profiles of participants enhances the transparency of the research process. This can be done through summary tables or detailed bio-sketches, offering a clear rationale for the selection and demonstrating how the participants' characteristics align with the research goals. Ultimately, ethnographic research is about understanding and interpreting the lived experiences of individuals within their cultural contexts. By carefully selecting participants and considering the socio-cultural dynamics, researchers can produce rich, nuanced insights that contribute to a deeper understanding of human societies. Next, the fifth author reported that qualitative case study research offers a robust method for exploring complex phenomena within their real-life contexts. This approach is particularly valuable in social science and educational research, where understanding the intricacies of specific cases can provide deep insights into broader issues. Case studies can be explanatory, exploratory, or descriptive, each serving different research purposes. The selection of cases, whether single or multiple, is guided by the research questions and the need to understand the phenomenon in depth. It is also important to consider who is setting the research questions so that the research can benefit participants, especially those most marginalized in the social fabric (and, for that matter, the ecological fabric). Single case studies focus on unique or critical cases, while multiple case studies explore similarities and differences across several cases. Participant selection in case studies is a critical process that involves purposive sampling to ensure that the participants are relevant to the research questions. Defining clear selection criteria and considering factors such as availability, willingness to participate, and the feasibility of data collection are essential. The goal is to continue selecting participants until data saturation is achieved, ensuring that additional participants add no new information. The richness of case study research lies in its use of multiple data sources, including observations, interviews, recordings, field notes, focus group discussions, and document analysis. This triangulation of data sources enhances the credibility and depth of the findings. However, researchers must be cautious of the potential for data overload and ensure proper organization and analysis to maintain a clear focus on the research objectives. Thus, qualitative case study research provides a comprehensive and nuanced understanding of the studied phenomena. By carefully selecting participants and employing multiple data collection methods, researchers can produce convincing and accurate findings, contributing valuable insights to the field.
In addition to all of the above, the sixth author added that grounded theory stands out as a powerful qualitative research methodology, particularly valued for its ability to generate theories directly from systematically gathered and analyzed data. This bottom-up approach provides researchers with authentic insights into social interactions, processes, and behaviors within their natural environments, making it especially effective for exploring complex and poorly understood social issues. Participant selection in grounded theory is a dynamic and iterative process known as theoretical sampling. This method allows the emerging theory to guide the selection of participants, ensuring that data collection is continuously refined and focused on filling theoretical gaps. This iterative process, combined with constant comparison, ensures that the developing theory is robust and well supported by the data. Reflexivity is another crucial element in grounded theory, helping researchers remain aware of their biases and ensuring that these do not influence the sampling process. By practicing reflexivity, researchers can enhance the quality and credibility of their data collection, thereby strengthening the overall research process. Theoretical sampling continues until theoretical saturation is achieved, meaning no new data significantly advance the theory. This ensures that the final theory is comprehensive and accurately reflects the phenomena being studied. While a typical sample size for grounded theory research is around 25 interviews, it may extend to 30 to develop theoretical constructs thoroughly. Diversity in the sample is also essential, as it ensures that the emerging theory captures the complexity of the phenomena under study. Researchers must consider a range of experiences, backgrounds, and perspectives to develop a well-rounded theory. Ethical considerations are paramount in grounded theory research. Researchers must ensure that participants are treated with respect and fairness, addressing issues such as informed consent, confidentiality, and the potential impact on participants. Researchers must also examine the likely consequences with participants, as well as the ways of setting research questions and proceeding with the research. By adhering to these ethical standards, researchers can enhance the rigor and credibility of their studies. Overall, grounded theory provides a flexible and rigorous framework for developing theories that are deeply rooted in empirical data. By carefully considering participant selection, reflexivity, sample diversity, and ethical issues, researchers can produce robust and meaningful theories that contribute significantly to our understanding of complex social phenomena. Finally, the seventh author adds that phenomenology offers a thoughtful approach to understanding the lived experiences of individuals, focusing on capturing the essence of these experiences from the participants' perspectives. Rooted in early 20th-century European philosophy, phenomenology employs thick descriptions and close inquiry to uncover how meaning is constructed through personal insights and perceptions. Selecting participants for a phenomenological study is a critical process that goes beyond traditional sampling methods; it involves purposive, snowball, and maximum variation strategies to ensure that participants provide rich, detailed accounts of their experiences.
Purposive sampling allows researchers to choose individuals who have deep insights into the phenomenon, while snowball sampling helps identify additional participants through recommendations from the initial participants. Maximum variation sampling ensures a diverse range of perspectives, enhancing the depth and breadth of the data collected. The number of participants in phenomenological research varies, with recommendations ranging from 5 to 25 participants. The key is to continue data collection until saturation is reached, meaning no new themes or insights emerge from additional data. This ensures that the study captures the full essence of the phenomenon being explored. Interviews, both unstructured and semi-structured, are the primary data collection methods in phenomenology. These interviews allow participants to describe their experiences in detail, providing the rich, evocative data needed to understand the phenomenon thoroughly. Observations in the context where the phenomenon occurs further enrich the data. The success of a phenomenological study hinges on the careful selection of participants and the depth of the data collected. By employing appropriate sampling strategies and focusing on the richness and saturation of the information, researchers can construct a comprehensive understanding of human experiences grounded in the authentic perspectives of those who have lived them. These days, many qualitative researchers point out that this construction should not lie solely in the hands of professional researchers but must be a co-construction with participants, with the intent that the research will benefit (marginalized) participants. This approach not only enhances the credibility and depth of the research but also provides valuable insights into the complexities of human existence. In summary, Table 2 provides an overview of participant selection across the selected qualitative methodologies, and Table 3 shows participant selection considerations by methodology. In closing, we hope this article will empower novice and/or veteran, graduate, and postgraduate researchers with the knowledge and tools needed to make informed decisions about participant selection, taking into account the thinking points offered here, thereby contributing to the rigor and richness of qualitative research in educational contexts. With our experiences and insights, we hope to foster a deeper understanding of the participant selection process and to inspire researchers to approach it with the care and consideration it deserves.
PMC11697431 | The term cognitive impairment/dementia refers to a continuum, a progressive and changing syndrome, which leads to successive disabilities and the loss of personal autonomy (i.e., to dependence on third parties) ( 1 ). Due to its enormous biological, psychological, and social complexity (both in terms of the patient and their caregiver), we consider dementia to be one of the best examples of complex chronic psychogeriatric diseases, and it will therefore always be at the center of reflection and psychogeriatric intervention. Geriatrics and Old Age Psychiatry both support a holistic approach as the essential tool to deal with this complexity ( 2 , 3 ). In the health care field, the concept of need can be applied at both the collective and individual level ( 4 ). At the population level, it serves as a tool to organize clinical management and design health policies. Likewise, the study of individual needs, once assessed and prioritized, allows for personalized interventions. The combination of decades of daily experience in the clinical approach to dementia and our research experience with CANE, an instrument used frequently to assess needs in psychogeriatrics, has led us to believe that needs assessment is an inextricable part of the comprehensive psychogeriatric assessment ( 5 , 6 ). We are convinced that a care model based on a sufficiently thorough and operationalized study of the needs of the subject and their caregivers, while not essential, will greatly facilitate the comprehensive approach, the operationalization of the biopsychosocial model (more often advocated than actually put into practice), and, ultimately, person-centered care ( 7 – 10 ). Models that assess the condition of older persons based on needs assessment take these aspects into account, identifying the need itself (understood as a deficit), the relevance or appropriateness of third-party assistance (whether from the family, the community or the health system) and how these needs vary over time. There are several theoretical models that integrate needs and their relationship with morbidity and disability, such as that of Miranda-Castillo et al ( 11 ), Schmidt et al ( 12 ) or the studies conducted by Kitwood ( 13 ). These models are not mutually exclusive; they share common points, while emphasizing different aspects. Thus, the Miranda-Castillo model relates the person's needs to the clinical, social and caregiver spheres. Schmidt focuses on morbidity (in cognitive impairment) as the main element of loss of autonomy, adding the importance of knowing how the person copes with difficulties in maintaining their autonomy and what needs are most important to them. The latter idea is also emphasized in Kitwood's publications, promoting person-centered interventions and, ultimately (along with previous models), highlighting the importance of meeting needs, relating them to autonomy, the state of well-being and quality of life. With the aim of further examining these theoretical models, a reanalysis of data from a community epidemiological study conducted two decades ago was carried out in order to empirically validate the needs assessment model, in this case applied to people with cognitive impairment/dementia.
The aim of this study is to examine the relationship between needs and functional capacity/dependency in people with cognitive impairment/dementia, establishing the hypothesis that people with cognitive impairment will have a greater number of needs and a higher level of disability and dependency, and that the severity of cognitive impairment correlates with an increased number of needs (both met and unmet). This is a community-based, cross-sectional, descriptive epidemiological study of morbidity and other health-relevant conditions. It is based on a reanalysis of data from a community-based epidemiological study conducted in Santiago de Compostela, Spain, of people over 65 years of age ( 14 , 15 ). It complies with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) criteria ( 16 ). The sample for the present study is composed of 368 subjects. The original study was a two-phase epidemiological survey. In the first phase, a community sample of 800 people over 65 years of age, representative of the Santiago-Barbanza Health Area, were interviewed in their homes and screened for cognitive impairment, depression and dependency. The second phase included 368 older persons: a) those with suspected cognitive impairment and/or depression and/or physical problems leading to dependency (N=254), and b) a control subsample of people without cognitive impairment, depression or dependency (N=114). All the subjects in the second phase underwent a further interview to study their physical and mental health and needs, constituting the aforementioned subsample of 368 subjects included in the present paper. The sociodemographic variables collected were: age, sex, level of education, marital status, cohabitation status (number of cohabitants, living alone or with a partner), profession, and rural/urban environment. The clinical variables collected in the research project (of which not all are presented in this paper) can be classified into: a) physical or mental morbidity variables; b) level of functionality/dependency. The main defining morbidity variables in the project were: the Spanish 30-item version of the Mini-Mental State Examination (MMSE) ( 17 ), the 30-item version of Yesavage's Geriatric Depression Scale (GDS-30) ( 18 ), a self-referenced questionnaire of chronic diseases common in older persons ( 19 ) and a brief ad hoc questionnaire of Likert scale type questions on health self-perception. The clinical diagnosis of dementia was made according to the International Classification of Diseases (ICD-10) ( 20 ). The MMSE score was used to classify the level of cognitive impairment. Other clinical variables included the frequency of the subjects' visits to their primary care physician/specialists and whether they had been hospitalized recently. The Katz Index ( 21 ) was used to assess functional level, as well as the Barthel Index ( 22 ) (for the basic activities of daily living) and the Lawton & Brody Scale ( 23 ) (for instrumental activities of daily living). The needs assessment was carried out using the Camberwell Assessment of Need for the Elderly (CANE) ( 24 , 25 ). This instrument analyzes 24 biopsychosocial needs of older persons and, if applicable, the caregiver's overburden and need for information. This needs assessment is carried out from the triple perspective of the professional/researcher, of the caregiver (in the case of a dependent person) and of the subject themselves.
For readers interested in gaining a deeper understanding of the instrument, we refer them to the two editions of the detailed manuals published on the subject ( 24 , 25 ). It is worth noting that this is a semi-structured interview, in which "older people are fully involved in the needs assessment process and there is a special section noting their own views and their satisfaction with the amount of help received" ( 24 ). The professional (in this case, the researcher) bases their assessment of needs for each CANE item on the answers given by both the older person and their caregiver, also incorporating any contextual information from other research instruments. This approach allows each CANE item to be classified into one of three statuses: "no need", "met need" or "unmet need". The quantitative analysis of the CANE for a specific subject provides the total number of needs (differentiating between "met needs" and "unmet needs"). When group data are reported, the mean values of "met needs", "unmet needs" and "total number of needs" are usually analyzed. In the second phase of the study, 66 caregivers of dependent persons were also interviewed. They were assessed on: a) their level of social support; b) their level of stress; c) data on the cognitive impairment of the older person; d) perceived difficulties in their caregiving tasks and their coping strategies and, e) especially relevant for the present study, they filled out the CANE, providing their perspective of the needs of the older person in their care. In the present paper, the quantitative data recorded in the professional/researcher column will be presented. At the statistical level, the association between variables was studied through hypothesis testing, and a multivariate study was performed by means of regression models. In all cases, p<0.05 was considered statistically significant. All the analyses included were performed in R version 4 ( 27 ). At the ethical and legal level, the research data used were taken from the reanalysis of previous epidemiological studies, carried out between 1998 and 2000, accessed through anonymized databases. To carry out these studies, in accordance with the Spanish legislation in force at the time, verbal consent was requested from those interviewed and was approved by the corresponding Research Ethics Committee. The results of the study are organized as follows. Table 1 shows the frequency distributions of the level of cognitive impairment and Table 2 the sociodemographic variables (some variables not analyzed further in this publication are presented for their sociological descriptive value). Tables 3 and 4 show the contingency tables for the MMSE variable (cognitive impairment) against the Barthel Index, the Lawton and Brody scale and the Katz Index ( Table 3 ); the statistical contrast of these variables shows a significant relationship between cognitive impairment and the 3 scales ( Table 4 ). People with cognitive impairment have a greater level of dependency, perceived as a worsening of the ability to perform basic and instrumental activities of daily living. With regard to the relationship between needs and cognitive impairment, three new variables were generated to assess needs: total number of unmet needs (those receiving the response "Severe Problem"), total number of met needs (those receiving the response "No problems due to the help given") and the total number of needs (sum of met and unmet needs).
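To make this scoring step concrete, the sketch below shows how the three derived variables could be computed from item-level CANE ratings recorded in the professional/researcher column. The data frame, item names and rating labels are hypothetical placeholders for illustration only, not the actual study database.

```python
import pandas as pd

# Hypothetical item-level CANE ratings (professional/researcher column):
# one row per subject, one column per CANE item, each item classified as
# "no need", "met need" or "unmet need".
cane_items = pd.DataFrame(
    {
        "memory":          ["no need", "met need", "unmet need"],
        "physical_health": ["met need", "met need", "unmet need"],
        "company":         ["no need", "unmet need", "unmet need"],
    },
    index=["subject_1", "subject_2", "subject_3"],
)

# Derived variables used in the quantitative analysis.
met_needs = (cane_items == "met need").sum(axis=1)      # total number of met needs
unmet_needs = (cane_items == "unmet need").sum(axis=1)  # total number of unmet needs
total_needs = met_needs + unmet_needs                   # sum of met and unmet needs

summary = pd.DataFrame(
    {"met_needs": met_needs, "unmet_needs": unmet_needs, "total_needs": total_needs}
)
print(summary)
```

Group-level reporting then reduces to taking the mean of each of these derived columns, as described above.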
Table 5 presents a descriptive overview of these variables. After analyzing the relationship between cognitive impairment/dementia and number of needs, a statistically significant relationship was found, with an increase in needs in individuals with cognitive impairment with respect to those without. As indicated in the Material and Methods section, for the multivariate analysis a Generalized Linear Model of the negative binomial family with logarithm as the link function was fitted to try to elucidate the important predictors in relation to the unmet needs response variable (a count variable). A set of candidate explanatory variables was retained for this model. After verifying the presence of overdispersion in the data, mainly as a result of the excess of 0 values in the number of needs, this type of model was selected after comparing its fit with that of a Poisson-type model. After making a selection of variables and eliminating those that were not significant, the final model includes the following predictors: GDS score, MMSE score and urological pathology. In this case, depression, dementia, and urological pathology are considered significant predictors of the number of unmet needs. The relationship between pathologies and the global number of needs (the sum of met and unmet needs) was also verified. The analysis followed the same procedure as that for unmet needs, now including met needs (total number of needs). The results are similar to the previous ones; there are significant differences between healthy individuals and those affected by cognitive impairment/dementia or depression and, furthermore, these differences point to an increase in unmet needs in the latter. The difference in means in the case of cognitive impairment is 1.8 ( Table 8 ). The multivariate model was the same as that applied for unmet needs, with the same set of candidate explanatory variables. After making a selection of variables and eliminating those that were not significant, the final model includes the predictors shown in Table 9 and Figure 4 , which are considered significant predictors of the total number of met or unmet needs. The aim of the present study was to show the importance of studying needs as part of the comprehensive psychogeriatric assessment of persons with cognitive impairment/dementia and how increased needs, in particular unmet needs, are related to the level of cognitive impairment and dependency. The most frequent needs found in the study sample are related to physical health (85.5%), visual and auditory deficits (40.9%), distress and anxiety (37.1%) and the state of the home (36.5%), relating in general to the domains of self-care and the physical and psychological sphere. Variable results have been found in other studies, although most are related to the physical, psychological and environmental spheres (social variables). This is the case of the study conducted by Tiativiriyakul et al. ( 28 ), that of Hoogendijk E et al. ( 29 ) and that of Magalhaes Sousa R et al. ( 30 ). With respect to unmet needs in the study conducted, the most frequent were those pertaining to the psychological and social sphere, with particular emphasis on needs related to companionship, financial management, mobility, distress/anxiety and activities of daily living. The same result was also found in other studies (Titiviriyakul P et al. ( 28 ), Passos et al. ( 31 )).
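As a concrete illustration of the multivariate strategy described above (a negative binomial GLM with a log link for the count of unmet needs, preferred over a Poisson model because of overdispersion), the sketch below shows how such a model could be fitted. The original analyses were performed in R version 4; this Python/statsmodels version uses hypothetical variable names and toy data and is intended only to illustrate the modelling logic, not to reproduce the study results.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analysis table: one row per subject, with the count outcome
# (number of unmet needs) and the predictors retained in the final model
# (GDS score, MMSE score, urological pathology).
df = pd.DataFrame({
    "unmet_needs": [0, 0, 1, 3, 0, 2, 5, 0, 1, 4, 0, 2],
    "gds_score":   [4, 6, 11, 15, 3, 12, 18, 5, 9, 20, 2, 14],
    "mmse_score":  [29, 28, 24, 18, 30, 22, 15, 27, 25, 12, 30, 21],
    "urological":  [0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0],
})

formula = "unmet_needs ~ gds_score + mmse_score + urological"

# Baseline Poisson GLM.
poisson = smf.glm(formula, data=df, family=sm.families.Poisson()).fit()

# Negative binomial GLM with a log link, better suited to overdispersed
# counts with an excess of zeros.
negbin = smf.glm(formula, data=df, family=sm.families.NegativeBinomial()).fit()

# Informal comparison of fit: a lower AIC for the negative binomial model
# supports choosing it over the Poisson model when overdispersion is present.
print("Poisson AIC:", poisson.aic)
print("Negative binomial AIC:", negbin.aic)
print(negbin.summary())
```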
Regarding the most frequent unmet needs in people with cognitive impairment, the study found them to be memory, home care, financial management and physical health. In their research based on the Actifcare Cohort Study, Gonçalves-Pereira M et al ( 32 ) reported that unmet needs were mostly related to companionship, stress/anxiety and activities of daily living. These results were also found in the study by Mazurek et al ( 33 ), as well as in the study by Tobis S et al ( 34 ), Kerpershoek K et al ( 35 ) and Miranda-Castillo C et al. ( 11 , 36 ). The most frequent total needs (met and unmet) in our study of people with cognitive impairment were home care, personal care, financial management, nutrition, mobility, memory, anxiety/stress, activities of daily living, and continence. In the study conducted by Bohlken J et al ( 37 ), psychological disorders (related to mood, anxiety/stress), limitation in activities of daily living and memory disorders were highlighted. In their study, Tapia-Muñoz T et al. ( 38 ) described home care, nutrition and self-care as the most frequent needs. In the study carried out by Van der Ploeg ES et al ( 39 ), comparing the needs of people over the age of 65 with and without dementia living in a nursing home, it was found that the most frequent needs were those related to housing, financial management, continence, medication, memory, risk of (accidental) self-injury, companionship and activities of daily living (as in our study). This result was the same as that reported by Worden A et al ( 40 ) or the study by Hancock G et al ( 41 ). In the present study, one of the main results is the relationship between the diagnosis of cognitive impairment/dementia and needs, finding that they are related to an increase in the number of needs (both met and unmet) and an increase in dependency. Furthermore, the severity of the cognitive impairment has been shown to increase needs (both met and unmet). The relationship between having cognitive impairment/dementia and having a higher number of needs has been found in most of the literature reviewed ( 34 , 39 , 42 , 43 ). However, no such relationship was reported in the study by Ballard C et al. or that of Ashaye OA et al. ( 44 , 45 ). The severity of cognitive impairment/dementia is related to an increase in the number of needs (both unmet and total) ( 31 , 43 ) There are three characteristics that should be highlighted within the study of the needs of people with a dementia diagnosis, which were also found in our study (although they have not been contrasted in this paper, since only the perspective of the professional has been used within this reanalysis of data): The main conclusion of our study is that the use of an instrument that allows the analysis of a large number of biopsychosocial needs, such as CANE, provides essential information for a comprehensive psychogeriatric assessment of the person with dementia and will facilitate the implementation of personalized care, as recommended in virtually all good clinical practice guidelines for psychogeriatric and, more specifically, for dementia care ( 2 , 3 , 7 – 10 ). In concluding this discussion, it is essential to consider both the strengths and limitations of the present study, as these factors provide a balanced perspective on the findings and their applicability. A potential weakness is the limited geographical diversity of the sample. 
The study was conducted in a single region (Santiago de Compostela, in the Autonomous Region of Galicia, Spain), which might limit the generalizability of the findings to other populations with different demographic and cultural characteristics. Certainly, the conclusions of any epidemiological survey are limited to the social and cultural context in which the data were generated and, therefore, this study should be repeated in other parts of Spain, in other European countries and in other contexts that are geographically and culturally more distant. On the other hand, we believe that the study is coherent enough to encourage other research groups to participate in such a collaborative project. That said, we feel it is important to emphasize that although the territorial location is relatively limited, this is a field study on a representative sample of the general population. Numerous studies of this nature have been conducted and continue to be conducted with data from populations receiving health and/or social care, often in specific care facilities (memory units, day centers, nursing homes, etc.). Our study analyzes a representative sample of the "Santiago de Compostela-Barbanza Health Area", an administrative division whose entire population receives health and social care from the same health and social services. The merit of this Health Area is that it brings together all the socio-economic and cultural nuances of the Autonomous Region of Galicia. In short, we consider that the fact that this is a field study, together with the socio-cultural richness of the sample, is among the study's strengths. Another potential weakness of the study is its reliance on outdated data. The study is based on the reanalysis of data from an epidemiological study conducted two decades ago, which might affect the relevance of the findings in the current context. Without a doubt, the needs of the population change over time, as a consequence of advances in knowledge and in social protection systems (although they may also regress), and of other social and political changes. But our study is not a descriptive study of the prevalence of the current needs of older persons in Galicia (although it was at the time). The aim of this study is to analyze the structure of the concept of needs, using data to demonstrate something that is conceptually reasonable to expect: that the study of needs is not opposed to, or an alternative to, clinical diagnoses or functional assessment, but rather adds another layer of knowledge about the health status of the older population and helps professionals to make better clinical decisions and social interventions. Because the study analyzes the relationship between different types of variables (diagnoses of mental and somatic disorders, functionality, and formally assessed needs), we consider that the age of the sample does not constitute an impediment. This could have been the case if this analysis had been performed on a very specific sample of patients (for example, patients drawn from health care services that no longer exist or whose functioning has been greatly modified over two decades). But we consider that this analysis is valid because it is based on a community sample that is representative of the general population of Galicia. While many things have changed in the past two decades in this territory, we believe that the sociological and healthcare structure and social resources have not experienced radical changes.
Another limitation could be the cross-sectional design of the study. While longitudinal studies on how needs change over time will undoubtedly provide a greater wealth of knowledge, we consider that this kind of analysis deserves to be known and disseminated, among other reasons, to encourage other groups to conduct such longitudinal (preferably international-collaborative) studies. One of the strengths of the CANE, the instrument chosen to measure needs, is that it includes (in three separate columns) the perspective of the user and the caregiver, as well as the perspective of the professional, making it preferable to other instruments that measure needs. In other words, the caregivers of all the dependent older persons in our study provided insight into the needs of these individuals. Likewise, users always provided their own perspective of their needs, except in cases of dementia so advanced that it prevented dialogue with the patient (which marks the natural limitation of collecting the user's perspective of their needs). In all cases, with the information provided by the subject and the caregiver (where applicable), the research team drew up the professional opinion on the needs of that older person (i.e., the epidemiological study mimicked the use of the CANE in routine clinical practice). It seems appropriate to point out that the field study that generated these data was one of the first, if not the first, in which the CANE was used in an epidemiological survey ( 14 ). As indicated in the brief description of the methods used to prepare the study sample, it includes not only subjects with all levels of severity of cognitive impairment and physical dependence, but also "healthy" subjects (regarding cognitive impairment, depression, Basic Activities of Daily Living and Instrumental Activities of Daily Living). This enables a holistic and detailed analysis of the relationship between needs and the successive stages of cognitive impairment. | Review | biomedical | en | 0.999997 |
PMC11697483 | Schwannomas are tumors originating from the Schwann cells of peripheral nerves. Approximately 25–45% of schwannomas occur in the head and neck region , followed by the limbs . Schwannoma in the pancreas is extremely rare. Pancreatic schwannomas are usually solid or cystic benign tumors, though some may have a tendency for malignant transformation , and their pathogenesis remains unclear. It primarily affects individuals between the ages of 20 and 50, with no gender preference. Most patients present with gastrointestinal symptoms, such as nausea, vomiting, and indigestion, although some cases are asymptomatic. Currently, the treatment for pancreatic schwannomas primarily involves surgical resection. Pancreaticoduodenectomy (PD) and distal pancreatectomy (DP) are the main surgical approaches reported in most cases, with only one report detailing a case where central pancreatectomy (CP) was performed . In this article, we present a report on a 44-year-old female patient with pancreatic schwannoma and diabetes who underwent CP, and conduct a review of the relevant literature. In 2021, a 44-year-old female presented to a local hospital with upper abdominal discomfort. Abdominal Computed Tomography (CT) revealed a pancreatic mass, and she was subsequently transferred to our hospital for further treatment . Physical examination showed a deep mass in the upper abdomen, approximately 7 cm × 7 cm in size, with a hard consistency and poor mobility. No other significant abnormalities were noted on the rest of the examination. The patient had a 2-year history of type 2 diabetes with poor medication control. Tumor markers, including CEA, CA19-9, and CA72-4, were within normal limits, but neuron-specific enolase (NSE) was elevated. Insulin: 36.57 mIU/L, C-peptide: 1.9 nmol/L, albumin: 37.6 g/L and LDH: 201 U/L. Abdominal CT revealed a 64 mm × 54 mm mass in the body of the pancreas, with clear borders and no enhancement on the slice, and mild dilation of the main pancreatic duct was observed, but no evidence of metastasis was found . Due to the patient’s financial constraints, she refused a magnetic resonance imaging (MRI) scan, which hindered the accuracy of our diagnosis. MRI, with its ability to assess tumor characteristics through various sequences such as T1 and T2, can more accurately display the tumor’s morphology and its relationship with surrounding tissues, which is extremely helpful for diagnosing solid tumors. Clinically, pancreatic cystic tumors are more common than pancreatic schwannomas, and the CT features of pancreatic solid pseudopapillary neoplasms (pSPN) can closely resemble those of pancreatic schwannomas. Based on the patient’s symptoms and laboratory results, our preliminary diagnosis was pSPN. The Royal Marsden Hospital score indicated a low-risk group, suggesting a relatively favorable prognosis . Fig. 1 The timeline of key events during patient care Fig. 2 Preoperative abdominal CT. A : Abdominal x-ray plain films; B : Non-contrast enhanced CT; C : Arterial phase of contrast-enhanced CT; D : Venous phase of contrast-enhanced CT; Note: The tumor indicated by the arrow in the figure is a pancreatic schwannoma. It is a low-density solid mass under non-contrast enhanced CT, and heterogeneous enhancement can be seen under contrast-enhanced CT After obtaining informed consent from the patient and her family, our treatment team performed an exploratory laparotomy. During surgery, a mass approximately 8 cm × 7 cm × 4 cm was palpated in the body of the pancreas. 
Therefore, the patient underwent CP and Roux-en-Y pancreaticojejunostomy. The postoperative CT images of the patient are shown in Fig. 3 , the direction indicated by the arrows in pictures A and B shows the pancreatic remnant, while the arrows in pictures C and D indicate the location of Roux-en-Y pancreaticojejunostomy, with the pancreatic stent clearly visible at the central part of the pancreas. Frozen section analysis was performed on the mass that was completely resected. The frozen section labeled “pancreatic mass” revealed tumor cells that were polygonal or round in shape, uniform in morphology, and arranged in cords or blocks. Foam-like stromal cells were observed. We suspected that the mass was a pSPN, which needs to be differentiated from pancreatic neuroendocrine neoplasm. The paraffin section showed that the tumor cells had round or polygonal nuclei, with fine granular chromatin. Some nuclei contained visible nucleoli, and the cellular boundaries were not well defined. The cells were arranged in a palisading pattern in some areas, and the other were arranged in a rope-like or fascicular pattern, or in a pseudo-glandular or sheet-like arrangement . Immunohistochemical results : S100 (+), P53 (+), CK5/6 (-), CD56 (+), CD68 (+), Ki − 67 hot zone (< 5% +), NSE (+). The diagnosis was pancreatic schwannoma. Fig. 3 Postoperative CT images after Roux-en-Y Pancreaticojejunostomy. A : Arterial phase of contrast-enhanced CT; B : Venous phase of contrast-enhanced CT; C : Arterial phase of contrast-enhanced CT; D : Venous phase of contrast-enhanced CT; Note: The arrows in pictures A and B indicate the residual pancreatic head; The arrow in picture C indicates the location of the anastomosis of pancreaticojejunostomy; The arrow in picture D indicates the residual pancreatic tail Fig. 4 Pathological photograph. A : Resected pancreatic schwannoma; B : H&E ×100, a large amount of foamy histiocytes were deposited in the interstitium; C : H&E ×200, the epithelioid tumor cells were arranged in strips and sheets, and the stroma was collagenous; D : H&E ×200, lymphocyte aggregation at the edge of the tumor Fig. 5 Immunohistochemical staining picture. A : S100 (+) ×200; B : S100 (+) ×400; C : NSE (+) ×200; D : NSE (+) ×400 After surgery, the patient developed abdominal pain and fever. Amylase and lipase levels in the abdominal drain fluid were elevated, indicating a Grade B pancreatic fistula. The patient was treated symptomatically with fasting, nutritional support, antibiotics, and gastric lavage. After these treatments, her symptoms resolved, and the amylase levels in the drain fluid returned to normal. Preoperatively, her fasting venous blood glucose was approximately 8–15 mmol/L, controlled by oral medications, but with poor efficacy. Postoperatively, her fasting venous blood glucose fluctuated between 12 and 20 mmol/L with insulin therapy. After her feeding, subcutaneous insulin injections were used to maintain blood glucose levels below 11.1 mmol/L. About 40 days after surgery, her treatment was adjusted to oral hypoglycemic medications, and her venous blood glucose was stabilized at around 10 mmol/L. At a 32-month follow-up after discharge, no tumor recurrence was observed, and the patient’s blood glucose was controlled below 11.1mmol/L with only oral antidiabetic drugs. The patient fully understood the purpose of this case report and its contents, and she signed an informed consent form allowing the publication of her relevant medical information. 
Schwannomas are tumors originating from Schwann cells, which surround the myelinated nerve fibers. Schwannomas are generally benign, with approximately 10–15% undergoing malignant transformation . These tumors are most commonly found in the limbs, neck, mediastinum, retroperitoneum, and posterior nerve roots of the spinal cord . The majority of patients present initially with a painless mass, and other signs and symptoms vary depending on the tumor's anatomical location . Zhang reviewed 75 reported cases of pancreatic schwannoma, in which abdominal pain was the most common symptom (44%), followed by asymptomatic presentation (31%); other symptoms included weight loss, a mass, and jaundice . Pancreatic schwannomas are extremely rare , and their growth pattern is similar to that of schwannomas found in other parts of the body. However, pancreatic schwannomas typically present with nonspecific abdominal pain . The most common location for pancreatic schwannomas is the head of the pancreas, followed by the body, tail, and uncinate process . A literature search was conducted in September 2024. The MeSH term "pancreatic schwannoma" was used in searches on both PubMed and the China National Knowledge Infrastructure (CNKI). The PubMed search for the past decade yielded 38 articles describing 41 detailed cases of pancreatic schwannoma in the English literature. The CNKI search for the past decade identified 4 articles describing 4 detailed cases of pancreatic schwannoma in the Chinese literature (detailed documents are provided in the supplementary materials). We analyzed and summarized the 45 cases of pancreatic schwannoma identified from the searches, with clinical and pathological data summarized in Table 1 .
Table 1 Summary of clinicopathological data from all 45 cases of pancreatic schwannoma reported in the recent 10 years (values are N (%) or Mean ± SD)
Age (years) (n = 45): ≤ 30, 4; 30–60, 22; ≥ 60, 19; mean 55.43 ± 14.839
Sex (n = 45): male, 15; female, 30; male:female ratio 1:2
Symptoms (n = 41): abdominal pain, 20 (48.78%); abdominal bloating, 2 (4.88%); diarrhea, 1 (2.44%); nausea/vomiting, 3 (7.32%); indigestion, 3 (7.32%); weight loss, 4 (9.76%); jaundice, 2 (4.88%); no symptoms, 17 (41.46%)
Tumor location (n = 45): head, 19 (42.22%); head + body, 6 (13.33%); body, 11 (24.44%); body + tail, 1 (2.22%); tail, 8 (17.78%)
Nature of tumor on imaging (n = 44): solid, 28 (63.64%); cystic, 9 (20.45%); solid + cystic, 7 (15.91%)
Preoperative diagnosis (n = 36): pancreatic schwannoma, 16 (44.44%); pancreatic cystadenoma, 8 (22.22%); pancreatic solid pseudopapillary neoplasm, 8 (22.22%); neuroendocrine neoplasm, 1 (2.78%); acinic cell carcinoma, 1 (2.78%); pancreatic cancer, 2 (5.56%); diagnostic accuracy, 35.60%
Surgical methods (n = 40): enucleation of tumor, 10 (25.00%); pancreaticoduodenectomy, 9 (22.50%); distal pancreatectomy, 11 (27.50%); central pancreatectomy, 2 (5.00%); conservative treatment, 8 (20.00%)
Note: Because some patients presented with multiple symptoms, the symptom percentages sum to more than 100%
Due to the lack of specific diagnostic methods, preoperative diagnosis of pancreatic schwannoma is challenging. In the absence of pathological results, imaging is often a key tool for preoperative diagnosis. On CT, pancreatic schwannomas typically present as well-defined, round or oval masses with clear borders, marked cystic degeneration, and punctate calcifications. CT contrast enhancement shows localized cystic changes within the tumor, with areas of low density and no enhancement .
Malignant transformation of pancreatic schwannomas is characterized by rapid growth, infiltration of surrounding tissues, and the presence of irregularly shaped, solid, heterogeneous masses, with possible lymph node metastasis . Additionally, the tumor may show vascular thrombus formation. On MRI, a well-defined pancreatic mass appears as heterogeneous high signal intensity on T2-weighted images, with distinct low signal intensity on T1-weighted images, and high signal intensity on diffusion-weighted imaging. The mass shows mild enhancement in the arterial phase, with further enhancement in the portal venous and delayed phases. These imaging features suggest a possible diagnosis of pancreatic schwannoma . The diagnosis of schwannoma requires differentiation from other pancreatic tumors, such as pancreatic cystic tumors, pancreatic neuroendocrine neoplasms, pancreatic solid pseudopapillary neoplasms (pSPN), and pancreatic cancer. Pancreatic cystic tumors primarily present as cystic lesions on imaging, characterized by fluid-filled dark areas, often with multilocular structures and minimal solid components, which differ significantly from pancreatic schwannomas. Pancreatic neuroendocrine neoplasms share both cystic and solid components, similar to schwannomas, but neuroendocrine neoplasms tend to exhibit a dense vascular pattern, leading to homogeneous enhancement on contrast-enhanced CT , which is not consistent with the imaging features of pancreatic schwannomas. pSPN are also mixed solid-cystic masses, making them difficult to distinguish from pancreatic schwannomas. Moreover, pSPN can also present as cystic masses or calcified cystic tumors . Although pancreatic schwannoma and pSPN have similar imaging findings, pSPN does not express NSE, whereas pancreatic schwannoma does. Therefore, these two diseases can be differentiated through a combination of imaging studies and laboratory examinations. Early pancreatic cancer can present as a solitary solid mass similar to pancreatic schwannoma. However, pancreatic cancer has distinct features, such as elevated CA19-9 levels, significant enhancement on contrast-enhanced CT and clear signs of tissue invasion, which help differentiate it from pancreatic schwannomas. Compared to CT, PET/CT is more sensitive for the diagnosis of pancreatic cancer . Therefore, in our data, the misdiagnosis rate for pancreatic cancer is relatively low. Since the first case of endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) was performed in 1997 , EUS-FNA has been very helpful for the preoperative diagnosis of pancreatic schwannoma [ 20 – 24 ]. With the development of technology, the sensitivity of EUS-FNA for determining the nature of a tumor can exceed 90%, with a specificity of over 97% . This technique plays a crucial role in formulating precise treatment plans, not only optimizing medical decisions but also significantly improving treatment outcomes and prognosis for patients. Currently, the diagnosis of pancreatic schwannoma mainly relies on histopathology and immunohistochemical staining. Pancreatic schwannomas are uniform, yellow-brown nodules with clear boundaries and an intact capsule observed macroscopically [ 27 – 29 ]. Microscopically, they typically exhibit two types of tissue structures: Antoni A and Antoni B. The Antoni A area is characterized by a rich presence of spindle-shaped cells, usually arranged in a palisade pattern or forming Verocay bodies .
Tumor cells in the Antoni A area have very few mitotic figures, typically less than 5 mitotic figures per 10 high-power fields . In contrast, the Antoni B area has fewer tumor cells, which are arranged in a sparse network-like structure. There is a large amount of fluid and mucinous matrix within and between cells, forming cystic structures, typically exhibiting degenerative changes such as myxoid changes, cyst formation, stromal hemorrhage, and calcification . On CT, Antoni A-type pancreatic schwannomas appear as low-density solid masses with an uneven enhancement pattern, occasionally with multiple septal enhancements. Antoni B-type pancreatic schwannomas tend to appear as homogeneous cystic or multiple masses . The more vascularized Antoni A areas typically show enhancement, while Antoni B areas show no enhancement . Almost all benign schwannomas contain abundant S100 (+) cells, while only about 50% of malignant schwannomas show S100 (-), suggesting that S100 can be used as an initial marker to differentiate between benign and malignant schwannomas [ 30 , 35 – 38 ]. NSE is a glycolytic enzyme isozyme primarily found in the cytoplasm of central and peripheral neurons, as well as neuroendocrine cells, and is an important marker for diagnosing various neuroendocrine neoplasm . Through literature review, we found that pancreatic tissue-derived tumors rarely express this enzyme . Therefore, the strong positive staining for S100 and NSE in this case provides solid evidence for the diagnosis of pancreatic schwannoma. Most schwannomas grow slowly, with an average growth rate of 1.2 mm per year . Small schwannomas can be monitored periodically . However, for symptomatic schwannomas, surgical treatment is necessary. Regarding surgical options for pancreatic schwannoma, in cases with a confirmed diagnosis, complete resection can achieve the therapeutic goal. However, if the preoperative diagnosis is unclear, the tumor should be completely resected during surgery, and frozen section pathology should be performed to determine the extent of resection. In a previous review of 65 cases of pancreatic schwannomas, Fukuhara et al. found that schwannomas most commonly occur in the head of the pancreas (40%), followed by the body (23.1%), tail (10.8%), and uncinate process (10.8%). The most common treatment approach is pancreaticoduodenectomy (34%), followed by distal pancreatectomy (25%) and enucleation (14%). The pancreas is a key organ responsible for secreting various hormones and digestive enzymes. Insulin and glucagon are secreted by the β-cells and α-cells of the pancreas, respectively, and play a central role in glucose metabolism . Pancreatic resection can be categorized into two main types: partial and total. Total pancreatic resection results in complete loss of both endocrine and exocrine functions of the pancreas, leading to difficulty in achieving glucose control . In contrast, partial pancreatic resection preserves both the endocrine and exocrine functions of the pancreas, making it easier to manage blood glucose levels compared to total pancreatic resection. Partial pancreatic resection can be further subdivided into pancreaticoduodenectomy (PD), distal pancreatectomy (DP), and central pancreatectomy (CP). After PD, about 50% of the pancreatic tissue remains, which leads to a reduction in the secretion of insulin and glucagon . For patients with preexisting diabetes, this operation may worsen their condition. 
Additionally, PD significantly alters the digestive system and reduces exocrine function , making it unacceptable for patients with non-malignant tumors who do not require radical surgery . After DP, approximately 30-40% of the pancreatic tissue remains . Compared to PD, DP has a relatively smaller impact on the structure of the digestive system. However, this operation inevitably involves the removal of a considerable amount of healthy pancreatic tissue, which can significantly affect the postoperative recovery of pancreatic function . In contrast, CP preserves more pancreatic tissue (and sometimes the spleen), which greatly facilitates the recovery of pancreatic function post-surgery. Studies have shown that the incidence of new-onset diabetes after CP is lower than after PD and DP , suggesting that CP has a lesser impact on pancreatic function and allows better blood glucose control for diabetic patients. However, CP also has certain drawbacks. Due to the necessity of carefully managing both ends of the pancreatic remnant, CP requires longer operating times and is associated with a higher incidence of pancreatic fistula compared to PD and DP. A meta-analysis by Bi et al. comparing the advantages and disadvantages of DP and CP supports this conclusion. The surgical time in the DP group was significantly shorter than in the CP group, but intraoperative blood loss was higher in the DP group. Regarding postoperative complications, the incidence of pancreatic fistula in the CP group (36.9%) was significantly higher than in the DP group (20.2%). The incidence of severe postoperative complications (Clavien-Dindo grade III or higher) in the CP group (21.8%) was also higher than in the DP group (12.8%). However, the incidence of endocrine insufficiency after surgery in the CP group (6.7%) was much lower than in the DP group (20.6%), and the incidence of new-onset or worsened diabetes in the CP group was also lower than in the DP group . On the other hand, another article indicated no significant difference in the probability of pancreatic fistula between the CP and DP groups . This discrepancy may be attributed to the surgeon's technical skills, suggesting that CP can minimize its drawbacks and effectively prevent postoperative metabolic disorders through precise technique and enhanced postoperative care, ultimately ensuring a higher quality of life for patients after surgery. Additionally, after comparing 34 patients in the CP group and 262 patients in the DP group, Chen YW et al. found that no new-onset or worsening diabetes occurred in the CP group, while 40 patients in the DP group developed endocrine insufficiency after surgery ( P < 0.05), and the incidence of exocrine insufficiency was significantly higher in the DP group . Some studies have pointed out that poor blood glucose control increases the risk of surgical site infections . Therefore, CP can preserve both endocrine and exocrine pancreatic functions postoperatively, reducing the incidence of new-onset or worsening diabetes , which offers long-term benefits for the patients. In this case, the patient's diabetes remained stable after surgery, with oral medication treatment, demonstrating the therapeutic value of CP for patients with pancreatic schwannomas and diabetes. In conclusion, pancreatic schwannoma is a rare disease that presents unique challenges in both diagnosis and treatment.
Due to the lack of specific clinical symptoms and typical imaging features, the preoperative misdiagnosis rate remains high, making it a significant challenge to improve diagnostic accuracy. However, once diagnosed, surgical treatment typically yields favorable outcomes and prognosis. In this case, we chose CP and achieved significant therapeutic success. Our treatment experience, combined with findings from previous literature, suggests that CP may be a more ideal surgical approach for patients with pancreatic schwannoma and diabetes. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2 | Clinical case | biomedical | en | 0.999997 |
PMC11697502 | Arteriovenous fistulas (AVFs) of the filum terminale (FTAVFs) are rare vascular malformations that can present with symptoms ranging from low back pain (LBP) to severe radiculopathy . Overall, vascular malformations of the spine are relatively rare (3% of all spinal arteriovenous shunts), with lesions occurring caudal to the conus medullaris infrequently observed . FTAVFs are perimedullary arteriovenous malformations (AVMs) that are found on the surface of the pia mater and are without a capillary bed between arterial and venous systems . These lesions are classified as type IV arteriovenous malformations of the spinal cord and are subcategorized into type IVa, type IVb, and type IVc by Anson and Spetzler . Type IVa lesions are low-flow AVFs supplied by a single anterior spinal artery (ASA) branch. Type IVb lesions are intermediate-flow fistulas with multiple arterial feeders. Type IVc lesions are high-flow fistulas supplied by several ASA or posterior spinal artery branches . Over time, these fistulas contribute to the development of myelopathic or radicular symptoms, secondary to abnormal vascular flow and venous congestion, resulting in arterial insufficiency . Treatment for FTAVFs includes endovascular embolization or open microsurgical resection . The treatment choice is made for each patient individually, depending on vascular characteristics and institutional resources . Importantly, lesions that are not completely obliterated surgically or endovascularly are at high risk of recurring with worsening of symptoms. In this report, we present the case of a 64-year-old male who presented to the hospital with lower back pain and proximal bilateral lower extremity weakness. Additionally, we provide a current literature review of reported cases of FTAVFs. A 64-year-old male of African descent presented to the emergency room with lower back pain and bilateral lower extremity weakness of several months’ duration. His only neurological deficit was 4/5 strength in the bilateral lower extremities, most notably proximally in the hip flexors and extensors. An outpatient MRI of the thoracic spine demonstrated cord edema from T7-conus medullaris and multiple flow voids consistent with intradural vessels overlying the spinal cord, which progressed to T1-L2 cord edema on the preoperative MRI scan . A spinal digital subtraction angiogram (DSA) demonstrated a perimedullary arteriovenous fistula spanning the L2-5 vertebrae supplied by the ASA originating from the artery of Adamkiewicz . Angiographic embolization of the lesion under general anesthesia was offered and scheduled. Somatosensory evoked potentials (SSEPs) and motor evoked potentials (MEPs) were monitored for the procedure. A 5-Fr Cobra tip femoral angiography sheath was introduced through the left femoral artery and advanced cranially through the descending aorta under fluoroscopy during the procedure. Contrast dye and overlay mapping were then utilized to identify the artery of Adamkiewicz, which originated at the level of the left L2 intervertebral foramen. Once the AVF was isolated on fluoroscopy, a preembolization trial with lidocaine and pentobarbital greatly diminished SSEPs in the lower extremities, with a similar loss of MEPs. Due to the loss of neuromonitoring signals, it was considered unsafe to proceed with the embolization, and open surgical treatment was planned. 
In situations where endovascular embolization results in loss of neuromonitoring, open approaches are preferred because occlusion of the feeding artery or arteries can be rapidly reversed by removing the temporary clip, avoiding permanent injury to the spinal cord that may not be readily reversible during embolization procedures. Following team and patient discussions, microsurgical obliteration of the AVF through an open surgical approach was planned. Following the L2-4 laminectomy, and after proper extradural hemostasis was achieved, the dura was opened longitudinally under microscopic visualization and tacked up to the laterally dissected paraspinal musculature. At this point, the cauda equina and filum terminale came into view. A prominent arterialized vein coursing alongside the filum was identified. Indocyanine green (ICG) video angiography confirmed arterialization of the vein at the lower end of L4, with contiguous vessels visualized going caudally and another traveling cephalad. A temporary clip was then applied just cephalad to the site of the AVF, and intraoperative angiography confirmed occlusion of the AVF. No signal change from baseline in neuromonitoring occurred. A permanent clip was then deployed cephalad to the first clip, followed by bipolar cauterization of the filum terminale between the two micro-vascular clips . The filum terminale was divided, and adequate closure was then achieved in a multilayer fashion. No surgical specimen was sent for pathologic diagnosis. The patient tolerated the procedure well and his lower extremity weakness was mildly improved compared to presurgical assessment. Postoperative spinal angiography displayed resolution of the FTAVF. Ten days following discharge, while in acute rehab, the patient experienced severe shortness of breath and was diagnosed with a saddle pulmonary embolism. Interventional thrombectomy was attempted and was successful. A right lower extremity deep venous thrombosis (DVT) was identified with compression ultrasonography (US). Due to contraindications for antiplatelets and anticoagulants, an inferior vena cava (IVC) filter was placed. The patient was stabilized and discharged to subacute rehabilitation. Here, we present the case of a 64-year-old male patient who presented with myelopathic symptoms of the lower extremities. The patient's symptoms had quickly progressed from LBP to lower extremity pain and weakness, for which an MRI with and without contrast of the lumbar spine was appropriately performed, demonstrating spinal cord edema from T1-L2. Further investigation revealed an FTAVF at the level of L2-L5, originating from the artery of Adamkiewicz. Endovascular intervention was planned. However, following changes in neuromonitoring during the endovascular approach, the patient underwent successful open surgical intervention. Spinal AVMs are rare tortuous vascular lesions that often arise in pediatric populations . In 1987, Rosenblum et al. proposed a four-tier classification system for spinal AV shunts . In 1992, Anson and Spetzler further developed the system by adding subclassifications for type IV lesions (Table 1 ) . The lesion in the present case fits with a type IVa AV shunt . These lesions are low-flow, high-pressure systems that are often unstable and unpredictable. Due to the low flow in this system, ischemia can occur in the supplied tissue, a condition known as "Foix-Alajouanine syndrome" or "subacute necrotizing myelopathy."
This involves progressive congestive ischemia of the spinal cord, which develops over months or years . Progressive myelopathy, radiculopathy, LBP, and bladder or bowel incontinence may also occur during the course of the disease. Due to the high pressure of this system, these lesions are vulnerable to rupture, resulting in hemorrhaging into the subarachnoid space. Rapid, excruciating back pain is often the first symptom, classically referred to as “Coup de poignard of Michon" . Efficient diagnosis and treatment are crucial to avoid catastrophic outcomes in these patients, which may involve permanent damage to the spinal cord and possibly death. In cases of FTAVFs, the main cause of neurological symptoms is unlikely to be due to direct ischemia or compression of the AVF on the FT or adjacent nerve roots. The cauda equina typically has adequate space to maneuver and the FT rarely carries any meaningful neurologic signals. The symptoms are thought to be primarily due to the venous congestion caused by the AVF, affecting the levels of cephalad to the fistula, and can cause myelopathic or radicular symptoms . On MRI, venous congestion is visualized in the form of spinal cord edema at the spinal levels, where congestion has impacted normal vascular dynamics . Importantly, this edema, and presumably the venous stasis, is typically improved or eliminated when FTAVFs are promptly treated . Many patients affected by FTAVFs also present with lumbar spinal stenosis, leading some to hypothesize that longstanding neural compression and inflammation can contribute to AVF formation . In the cases indexed in this literature review, 17 cases reported the presence of lumbar spinal stenosis (nine cases reported the absence of lumbar spinal stenosis and 32 cases failed to report the absence or presence of stenosis). The presence of concurrent lumbar stenosis has the potential to mask the true cause of symptoms, especially when symptoms are primarily radicular, causing a delay in diagnosis. Treatment for FTAVFs may include surgical, endovascular, or radiotherapeutic management. The surgical approach has previously been established as the modality of choice, with the first successful treatment in 1916 . This approach involves occlusion of the receiving vein of the shunt, with definitive interruption of other spinal draining veins. This is crucial for successful treatment, as occlusion of arterial feeders may result in re-establishment of the fistula via recruitment of new arterial feeders, which can lead to relapsing symptoms . Surgical management has been shown to be the most definitive treatment . However, endovascular treatment has recently seen a surge in popularity in treating spinal AVFs . Many institutions utilize endovascular techniques as first-line treatment as it is less invasive. While no difference has been seen when comparing complication rates between surgical and endovascular management for spinal AVFs, embolization is associated with a much higher failure rate, with patients often having to return for open surgery or repeat endovascular embolization . Finally, stereotactic radiosurgery has also been described in the literature as a means to treat dural AVFs . However, it has not been established as a mode of treatment for perimedullary AVFs, and with the availability of other effective treatment options, radiosurgery is currently not recommended as a management option in most AVF cases . 
We indexed and reviewed 24 articles with 59 cases in the literature that reported FTAVFs with progressive myelopathy and/or radiculopathy. The identified feeding vessel(s) and subsequent draining vein(s), chosen treatment options, complications, and outcomes are shown in Table 2 . The patients' ages ranged from 3 to 84 years, with 38 males, nine females, and two unidentified. FTAVFs were more common in males, which is consistent with previously published literature . We compared the approaches to treating AVFs (for which both endovascular and microsurgical approaches have been frequently utilized) by observing outcomes and intraoperative or postoperative complications. Both approaches offered positive outcomes, resulting in improvement, if not resolution of symptoms, in a majority of cases. However, in previously reported cases of FTAVF, endovascular treatment was associated with more complications (46.7%), with failed embolization being the reported complication in all cases, requiring repeat embolization or subsequent microsurgical intervention. There were two cases of microsurgical complications in which patients experienced worsening urinary symptoms. Cases treated with microsurgery reported higher success rates, with complete resolution identified in 14 of the 59 cases, compared to endovascular approaches, in which no cases reported complete symptom resolution. Finally, microsurgical management reported two cases where symptoms were unchanged, compared to one case that was approached endovascularly. This case demonstrates the importance of early identification and treatment of AVFs, as well as the importance of a multidisciplinary therapeutic approach. In this case, endovascular embolization was attempted; however, it was aborted due to loss of neuromonitoring signal, and open surgical management was scheduled. Successful treatment was achieved with microsurgery, with improvement immediately postoperatively. While endovascular management is often highly successful in treating FTAVFs, surgeons should be prepared for microsurgical treatment if embolization fails or is unsafe to proceed. | Clinical case | biomedical | en | 0.999998 |
PMC11697511 | Tradeoffs in life-history strategy are key features in animal evolution . These tradeoffs often involve differential investments in life-history traits such as growth rate ; reproductive maturation, timing, and fecundity ; or resistance to stress , predation , or disease . The fitness costs and benefits of these investments are often context-dependent and shifts in ecological or environmental conditions can favor some life-history strategies over others , sculpting trait evolution within animal lineages and reshaping ecological communities. Global climate change is shifting the patterns and prevalence of disease in many animal taxa, while increasing the virulence of some pathogens . Identifying evolutionary tradeoffs and resulting trait correlations associated with disease susceptibility can therefore help predict how species survival will shift with climate change. Although much research on evolutionary tradeoffs focuses on the traits of animals themselves, it is also well documented that the physiology , fitness and even behavior of many animals are influenced by their microbiomes. Animal microbiomes have been linked to multiple key life-history traits, including growth , development rate , fecundity , stress resistance , and disease susceptibility . It therefore seems likely that microbial symbiosis is an important aspect of animal life-history tradeoffs and may correlate with host traits over long periods of animal evolution. However, testing the potential relevance of microbial symbiosis for life-history strategy evolution over long time periods is challenging. The reef-building corals that have evolved over 425 million years represent a diverse group of animals, including an estimated >1600 species , with an extensive fossil record, and a well-known variety in both life-history strategy and microbial symbiosis [ 16 – 18 ]. As such, they present a valuable opportunity to explore connections between microbes and life history strategy. These animals also have special ecological and societal importance, as corals are foundational to reef ecosystems that support some of the most biodiverse assemblages on the planet and the livelihoods of many coastal communities . Yet the ancient diversity of coral reefs is currently threatened by global climate change, which is driving both dramatic mass bleaching events and increased prevalence and severity of disease outbreaks . Alongside research on how coral health is affected by both well-studied (e.g., Symbiodiniaceae [ 20 – 22 ]) and emerging (e.g., corallicolids, fungi ) microbial eukaryotes, extensive research has demonstrated that present-day communities of coral-associated bacteria and archaea (hereafter ‘coral microbiomes’) play a myriad of roles in host biology that could impact disease susceptibility. These include antimicrobial production , predation of pathogens , jamming of quorum-sensing systems , and passive competition for space and resources. Yet these microbiomes are also influenced by host traits , local environmental factors, and ecological context , including host disease susceptibility patterns within and among species . While this supports a connection between present-day coral life-history, microbiome structure and disease susceptibility, these data do not directly allow for statistical testing of evolutionary hypotheses about potential roles of microbial symbiosis in life history tradeoffs. 
Clarifying whether microbiome structure and coral life-history traits correlate over coral evolution globally will contextualize studies of extant coral symbiosis and disease at local or regional scales. Several lines of research have created a strong foundation on which such comprehensive comparative evolutionary analyses can be built. Coral disease patterns have been intensively researched, and an increasing number of datasets are now openly available. Well-curated global databases of coral physiological traits have been established and mapped to coral life-history strategies. Finally, several large cross-species studies of corals and their microbiomes have been launched. These advances provide an opportunity to compare host trait data and microbiome structure from across the coral tree of life. Here, we test whether microbiome structure correlates with two key aspects of coral life-history strategy: disease susceptibility and growth rate. To address this question quantitatively, we first characterized microbiome composition from visibly healthy samples of 40 coral genera using 16S rRNA gene amplicon sequencing results from the Global Coral Microbiome Project (Supplementary Data Table S1a), and subsequently combined these data with coral growth rates from the Coral Trait Database and genus-level long-term disease prevalence data from several tropical regions around the globe. These long-term disease datasets included the Florida Reef Resilience Program data (FRRP, https://frrp.org/), the Hawaiʻi Coral Disease Database (HICORDIS), and new data covering eastern Australia (this study; Supplementary Data Table S1b). With the resulting microbiome structure, coral growth rate, and disease data across a global distribution of coral genera (Supplementary Data Table S1c), we compared these traits using methods that account for phylogenetic correlations, based on a time-calibrated multi-gene reference tree of corals. Fig. 1 Conceptual overview of data sources integrated for the project. (A) Map of sampling locations for coral microbiomes analyzed in the manuscript. Pie charts show the proportion of coral samples from families in the Complex clade (cool colors) and Robust clade (warm colors). Samples were collected from coral mucus, tissue, and endolithic skeleton (see Methods). (B) Schematic representation of data integration for the project. Coral microbiome data (as shown in A) were combined with long-term disease prevalence data from 3 projects (the Florida Reef Resilience Program (FRRP), the Hawaiʻi Coral Disease Database (HICORDIS), and data from Australia (this study)), as well as coral trait data from the Coral Trait Database, and a molecular phylogeny of corals (see Methods). To integrate data from these disparate sources, all annotations were pooled at the genus level. The end product was a trait table of microbiome, taxonomic, physiological, and disease data across diverse coral genera. The microbiome of corals is often dominated by a few highly abundant taxa that demonstrate species-specificity, though why these highly abundant microbial taxa differ across coral diversity is unknown. To begin addressing this question, we first identified a restricted set of dominant bacterial or archaeal taxa in visibly healthy corals retrieved from mucus, tissue, and skeleton samples of 40 coral genera. ('Dominant taxa' were defined as those that are most abundant on average within all samples from a given portion of coral anatomy in a given coral genus).
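The 'dominant taxon' definition above amounts to a group-and-argmax over mean relative abundances. Below is a minimal sketch of that bookkeeping; the long-format table, its column names, and the example values are invented for illustration and are not the project's actual file layout.

```python
import pandas as pd

# Hypothetical long-format table of per-sample relative abundances.
# Column names and values are illustrative only.
df = pd.DataFrame({
    "host_genus":      ["Acropora"] * 4 + ["Porites"] * 4,
    "compartment":     ["tissue", "tissue", "mucus", "mucus"] * 2,
    "sample_id":       ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"],
    "microbial_genus": ["Endozoicomonas", "Endozoicomonas", "Pseudomonas", "Vibrio",
                        "Candidatus Amoebophilus", "Endozoicomonas", "Pseudomonas", "Pseudomonas"],
    "rel_abundance":   [0.45, 0.60, 0.30, 0.20, 0.35, 0.25, 0.40, 0.50],
})

# Mean relative abundance of each microbial genus within each
# (host genus, compartment) combination, averaged across samples.
mean_abund = (
    df.groupby(["host_genus", "compartment", "microbial_genus"])["rel_abundance"]
      .mean()
      .reset_index()
)

# The 'dominant taxon' is the microbial genus with the highest mean
# relative abundance for each host genus / compartment combination.
idx = mean_abund.groupby(["host_genus", "compartment"])["rel_abundance"].idxmax()
dominant = mean_abund.loc[idx].rename(columns={"microbial_genus": "dominant_taxon"})

print(dominant[["host_genus", "compartment", "dominant_taxon", "rel_abundance"]])
```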
Thirty-eight of the coral genera were dominated by the bacterial classes 𝛼 - or γ-proteobacteria, which are known to include common coral associates , with more detailed taxonomy revealing that the number of dominant bacterial and archaeal genera across compartments is also somewhat limited . For example, only 17 genera of bacteria or archaea accounted for the dominant microbial genus in the tissue microbiomes of all 40 coral genera (this number excludes 4 unclassified ‘genera’ that could not be classified to at least the order level). Mucus and skeleton showed similar trends, with only 16 and 25 dominant genera, plus 2 or 4 unclassified genera, respectively. Across coral-associated bacterial or archaeal genera, Pseudomonas was most commonly dominant in mucus (31.4% of coral genera), while Endozoicomonas was most commonly dominant in tissue (18%) and Candidatus Amoebophilus (13.5%) was most commonly dominant in skeleton microbiomes. However, whether differences in microbiome structure and dominant microbes across coral diversity influence differences in coral physiology is not yet well understood. Fig. 2 Dominant microbes in the coral microbiome. ( A ) Dominant bacterial or archaeal genera in coral mucus (cyan), tissue (orange), or skeleton (purple) microbiomes. Pie wedges represent the fraction of coral host genera in which the labeled bacterium is more abundant than all other bacterial or archaeal taxa. Cyan shades represent microbes dominant in mucus, oranges represent microbes dominant in tissue (but not mucus), purple shades represent microbes dominant in skeleton (but not mucus or tissue). Endozoicomonas , which is of special significance later in the paper, is highlighted in aqua. ( B ) Bar charts showing correlations between microbiome alpha and beta diversity metrics and disease, represented by the R 2 for PGLS correlations. Alpha diversity metrics include richness, evenness (Gini index), and dominance (Simpson’s index), and weighted UniFrac beta diversity metrics including the three principal component axes (PC1, PC2, PC3) that represent measures of community structure. Significant relationships ( p < 0.05, Supplementary Data Table S4 ) are marked by an asterisk (*). ( C ) Bubble plot showing correlations between dominant microbial taxa and coral disease prevalence. The size of each triangle represents the R 2 for PGLS correlations between disease susceptibility and microbial relative abundance for each listed taxon in either all samples (top row), mucus samples (cyan row), tissue samples (orange row), or skeleton samples (purple row). Colored points were significant ( p < 0.05, FDR q < 0.05) and hashed points were nominally significant ( p < 0.05, FDR q > 0.05; Supplementary Data Table S7 a). Points that were not significant or had too little data ( n < 5) for reliable testing are marked in white. Taxa whose relative abundance is significantly correlated with disease are marked in bold on the x-axis We visualized the evolution of coral disease susceptibility and multiple measures of microbiome diversity using ancestral state reconstruction , then tested whether microbial alpha or beta diversity correlated with disease susceptibility using phylogenetic generalized least squares (PGLS). 
We found no evidence for an effect of microbiome ecological richness or evenness (considered individually) on disease susceptibility (Supplementary Data Table S3 ), and limited evidence for an effect of microbiome composition on disease susceptibility (Supplementary Information; Supplementary Data Table S4 ). However, given that cross-species differences in a limited number of dominant microbes were very notable in the data, we hypothesized that corals with highly abundant bacterial taxa might display more disease vulnerability. To quantify this, ecological dominance among identified amplicon sequence variants (ASVs) was calculated using Simpson’s Index, which estimates the probability that two species drawn from a population belong to the same group, and thereby incorporates aspects of both richness and evenness simultaneously. We correlated Simpson’s Index against coral disease prevalence for either all coral samples, or those in mucus, tissue, or skeleton considered individually. In coral tissue, microbiome dominance significantly correlated with disease, explaining roughly 27% of overall variation in disease susceptibility across coral species . No other combination of alpha diversity measure and compartment correlated with disease after accounting for multiple comparisons . Thus, microbiome dominance as measured by Simpson’s Index was a far stronger predictor of coral disease susceptibility than 𝛼 -diversity measures that considered either richness or evenness individually. Regionally-specific analysis, which eliminates potential confounders due to the global nature of the comparison, recaptured this dominance-disease relationship (Supplementary Information; Supplementary Data Table S3 b). Further testing showed that corals dominated by γ-proteobacteria drove the dominance-disease trend, suggesting a specific microbial genus (rather than a general ecological feature) might be responsible for this striking correlation (Supplementary Information; Supplementary Table S3 c). Bacteria in the genus Endozoicomonas are among the most-studied γ-proteobacterial symbionts of corals. In several species Endozoicomonas forms prominent aggregates known as CAMAs (coral associated microbial aggregates) in coral tissue . In species where members of genus Endozoicomonas are common, decreases in relative abundance during coral bleaching or disease are frequently observed , suggesting a commensal or mutualistic rather than opportunistic relationship with host health, although evidence exists for the potential of Endozoicomonas to form relationships with corals along the entire spectrum of symbioses (i.e., beneficial, commensal, and/or antagonistic; see ). Further, it has previously been observed that the family Endozoicomonadaceae shows by far the strongest signal of cophylogeny with coral hosts among tested bacterial families in coral tissue . In the present dataset, Endozoicomonas was also the single genus that most typically dominated coral tissue microbiomes . We therefore tested whether the signal of microbiome dominance on disease susceptibility could be explained by the abundances of dominant taxa, and found that across all corals in our dataset (regardless of whether Endozoicomonas was present and/or dominant; n = 40 genera), Endozoicomonas relative abundance explained the majority of variation in ecological dominance among coral tissue microbiomes . 
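As a concrete illustration of the dominance metric used throughout this section, the sketch below computes Simpson's Index (the probability that two reads drawn at random belong to the same taxon) directly from a vector of relative abundances and contrasts a highly dominated community with an even one. This is a self-contained toy example, not the project's actual QIIME2-based computation, and the abundance vectors are made up.

```python
import numpy as np

def simpson_index(abundances):
    """Simpson's Index: sum of squared relative abundances.
    High values indicate that one or a few taxa dominate the community."""
    p = np.asarray(abundances, dtype=float)
    p = p / p.sum()          # convert counts (or proportions) to proportions
    return float(np.sum(p ** 2))

# Illustrative communities (taxon counts are invented for the example).
dominated = [90, 3, 3, 2, 2]   # one taxon holds ~90% of reads
even      = [20, 20, 20, 20, 20]

print("dominated:", round(simpson_index(dominated), 3))  # ~0.81
print("even:     ", round(simpson_index(even), 3))       # 0.20
# Simpson's Diversity, as noted in the Methods, is 1 - Simpson's Index.
```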
Further, the relative abundance of Endozoicomonas in coral tissue alone explained 30% of variance in overall disease susceptibility , exceeding the signal from ecological dominance. Endozoicomonas remained significantly correlated with disease susceptibility after testing multiple linear models with depth, temperature, extent of turf algae contact, latitude and overall microbiome richness as confounders (Supplementary Data Table S5 b & c). Neither commonly opportunistic microbes in corals (Supplementary Data Table S6 ), nor other dominant microbes (Supplementary Data Table S7 ) showed similar patterns ( Supplementary Information ). Thus, our prior results linking ecological dominance and overall disease susceptibility appear to be largely explained by changes in Endozoicomonas relative abundance over coral evolution. Fig. 3 Endozoicomonas correlates with growth and disease. Phylogenetic independent contrast in Endozoicomonas relative abundance in coral tissue , correlated against ( A ) contrast in microbial dominance in coral tissue (assessed by Simpson’s Index), ( B ) constrast in coral disease susceptibility (estimated from integrated long-term coral disease prevalence data) and ( C ) coral growth rate (mm per year) from the Coral Traits Database. Dotted red lines in panels A-C indicate the null expectation that if traits are uncorrelated, change in the x-axis trait will not correlate with changes in the y-axis trait, with contrasts instead distributed equally above or below the dotted line. Statistics from phylogenetic generalized least squares (PGLS) regression for A-C are available in Supplemental Data Tables 5 and 9. ( D ) Modeled strength and direction of causality between Endozoicomonas relative abundance, disease susceptibility and growth rate during coral evolution using both Brownian Motion (blue) and Pagel’s Lambda (green, dotted) evolutionary models. The thickness of the lines represents the averaged standardized path coefficients of the top competing models based on CICc values (Supplementary Data Table S11 ) Endozoicomonas is often linked to metabolic benefits to the coral host (but see ), including a potential role in steroid processing . Experimental studies have shown that decreases in its relative abundance are typical with disease or other health stressors such as bleaching . This suggests that the striking correlation between Endozoicomonas and disease is not due to pathogenesis by Endozoicomonas . There are several possibilities for how a non-pathogen might nonetheless increase disease, including opportunity costs in host biology (e.g., in innate immunity, permissiveness to CAMA formation), tradeoffs in microbial symbiosis (e.g., dominance of Endozoicomonas vs. more diverse and potentially flexible microbiome associates with benefits for pathogen defense or resilience to environmental change), or tradeoffs driven by host physiological changes induced by Endozoicomonas (e.g., in steroid hormone processing). However, regardless of mechanism, if maintenance of high relative abundances of Endozoicomonas has fitness costs, they may be balanced by benefits to the host – at least under some conditions. If symbiosis with Endozoicomonas did play a causal role in coral life-history tradeoffs, we hypothesized that we would see a positive correlation between a beneficial coral trait and Endozoicomonas that counterbalances the correlation between Endozoicomonas and disease. 
Given that Endozoicomonas is thought to be a metabolic mutualist of corals, and it has recently been suggested to facilitate faster coral growth , growth rate seemed like a likely candidate for a potential benefit explaining the persistence of coral- Endozoicomonas associations. Depending on the mechanism of action, any such Endozoicomonas - growth correlations might depend merely on the presence of Endozoicomonas , or alternatively on its relative abundance. Using data from the Coral Trait Database (CTDB) we tested whether Endozoicomonas relative abundance was correlated with growth rate in corals where we detected Endozoicomonas (i.e., the effect of relative abundance alone) and in all corals (i.e., the combined effect of presence and relative abundance). In both cases, we limited this analysis to only corals with replicated growth rate data ( > = 5 replicates in the CTDB). While the relative abundance of Endozoicomonas was not correlated with growth rate across all coral genera (tissue PGLS: R 2 = 0.11, p = 0.17, FDR q = 0.37; Supplementary Data Table S8 a), across coral genera where Endozoicomonas was detected and replicated growth rate data were available ( n = 17 genera), its relative abundance in tissue was strongly correlated with growth rate (tissue PGLS: R 2 = 0.31, p = 0.024, FDR q = 0.024; Supplementary Data Table S8 b). Unlike for disease susceptibility, several additional microbes showed anatomically-specific correlations with the growth rate of their coral hosts, including strong positive correlations between growth and uncultured Rhodobacteria (Family: Terasakiellaceae) and negative correlations between growth rate and the archaeal genus Nitrosopumilis . However, Endozoicomonas appears unique in its association with both growth and disease. Overall, Endozoicomonas may in part explain, or at least correlate with, about a third of known growth rate differences between coral genera. Across the coral genera surveyed in our dataset, initial, low-level symbiosis with Endozoicomonas does not correlate with growth rate, but subsequent expansions of the relative abundance of Endozoicomonas within coral microbiomes co-occur with both higher average growth rates and greater disease susceptibility. Having seen that Endozoicomonas is correlated with both disease susceptibility and growth-rate in corals, we investigated if these correlations were stronger or weaker than any direct correlation between disease and growth rate in our dataset. Across genera with both growth rate and disease prevalence data, the correlation between growth and disease susceptibility had only a modest effect size and was not statistically significant. Thus, in this dataset Endozoicomonas showed stronger associations with both growth and disease than these factors showed with one another, regardless of whether the analysis was conducted across all coral genera (tissue PGLS: R 2 = 0.12, p = 0.17, FDR q = 0.17; Supplementary Data Table S10 a) or just those where Endozoicomonas was present (tissue PGLS: R 2 = 0.06, p = 0.37, FDR q = 0.37; Supplementary Data Table S10 b). This suggested that Endozoicomonas relative abundance might not merely mark tradeoffs between growth and disease but may play some causal role in one or both processes. The univariate correlations between Endozoicomonas , host disease susceptibility and growth rate raise the question of the direction of causality by which these factors have become non-randomly associated during coral evolution. 
Using phylogenetic path analysis (Methods), we compared 14 models of the relationship between Endozoicomonas relative abundance, disease susceptibility, and growth rate . As is common in this type of analysis, more than one model was consistent with the data. However, none of the top models using either Brownian Motion (Supplementary Table S11 b) or Pagel’s lambda (Supplementary Data Table S11 c) suggested that disease influenced growth rate or vice versa without the influence of Endozoicomonas , and all significant models include Endozoicomonas . Thus, while the precise feedback remains to be determined, causality analysis suggests that, in some capacity, Endozoicomonas likely mediates growth rate and disease. Our comparative results across coral genera suggest that the total relative abundance of microbes in genus Endozoicomonas is linked to shifts in host disease susceptibility and growth rate over coral evolution. However, Endozoicomonas is comprised of many strains that may differ in their interactions with coral hosts. For example, Endozoicomonas phylotypes in nearby corals may differ in genomic features like capacity for reactive oxygen species scavenging that could have implications for host-microbial symbiosis . Moreover, our cross-compartment analysis showed anatomically-specific differences in associations between Endozoicomonas and host traits: Endozoicomonas relative abundances were significantly associated with disease susceptibility and growth rate in tissue, but only disease susceptibility in mucus. In past literature and our results, Endozoicomonas are most abundant in tissue . Therefore, differences in associations between host traits and mucus- or tissue-associated Endozoicomonas may simply reflect somewhat less statistical power in mucus (where Endozoicomonas is less abundant) vs. tissue, and in our growth rate analysis ( n = 17 genera) vs. disease susceptibility analysis ( n = 40 genera). However, these results also raise the question of whether stable sub-populations of Endozoicomonas in mucus vs. tissue have distinct effects on host physiology. To test for any differences among mucus- vs. tissue-associated Endozoicomonas , we characterized the distribution of Endozoicomonas ASVs across coral compartments. Our dataset contained 123 Endozoicomonas ASVs. Of these, 23 abundant ASVs explained 95% of total Endozoicomonas reads, while the remainder were relatively rare. After removing ASVs with < 10 counts, we sorted the remaining Endozoicomonas ASVs according to the compartment in which they showed highest abundance. This yielded 15 ASVs that were most prevalent in mucus, 42 in tissue and 3 in skeleton. We then analyzed the relative abundance of these compartment-specific pools separately to see which, if any, would recapture associations between genus Endozoicomonas and host disease susceptibility. In this more nuanced analysis, the pool of Endozoicomonas ASVs associated with tissue showed a strong relationship with disease susceptibility , while ASV pools associated with both mucus and skeleton showed no association with disease (PGLS mucus R 2 = 0.02, p = 0.37, FDR q = 0.56; skeleton R 2 = 0.008, p = 0.57, FDR q = 0.57) (Supplementary Data Table S12 a). 
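The compartment-specific pooling just described amounts to assigning each Endozoicomonas ASV to the compartment where its mean relative abundance is highest and then summing within those pools. A minimal pandas sketch of that step is shown below; the toy ASV table and compartment labels are illustrative stand-ins for the real feature table rather than the study's actual data.

```python
import pandas as pd

# Toy relative-abundance table: rows are samples, columns are Endozoicomonas ASVs.
abund = pd.DataFrame(
    {"ASV_a": [0.20, 0.25, 0.02, 0.01, 0.00, 0.01],
     "ASV_b": [0.01, 0.00, 0.15, 0.12, 0.02, 0.01],
     "ASV_c": [0.00, 0.01, 0.01, 0.02, 0.05, 0.04]},
    index=["t1", "t2", "m1", "m2", "s1", "s2"],
)
compartment = pd.Series(
    ["tissue", "tissue", "mucus", "mucus", "skeleton", "skeleton"],
    index=abund.index,
)

# Assign each ASV to the compartment where its mean relative abundance is highest.
mean_by_comp = abund.groupby(compartment).mean()   # compartments x ASVs
asv_pool = mean_by_comp.idxmax(axis=0)             # ASV -> compartment label

# Sum each sample's relative abundance over the ASVs belonging to each pool.
pooled = {}
for comp in ["mucus", "tissue", "skeleton"]:
    members = asv_pool[asv_pool == comp].index
    pooled[f"{comp}_pool"] = abund[members].sum(axis=1)
pooled = pd.DataFrame(pooled)

print(asv_pool)   # which pool each ASV was assigned to
print(pooled)     # per-sample summed abundance of each compartment-specific pool
```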
Thus, associations between Endozoicomonas relative abundance in mucus and coral disease susceptibility appear to derive from ASVs that have highest relative abundance in bulk tissue samples but appear in mucus at lower relative abundance – consistent with evidence from fluorescence imaging showing that Endozoicomonas can aggregate within multiple coral tissues, including tentacles, mesenteries, and calicodermis. In contrast to the strong association between total Endozoicomonas relative abundance in coral tissues and host growth rate, the association between tissue-enriched Endozoicomonas ASVs and growth rate was not significant in the top model (PGLS FDR q > 0.05; Supplementary Data Table S12b). This may indicate that ASVs excluded in this analysis are important to the Endozoicomonas – growth rate association, perhaps due to contributions from ASVs common in multiple compartments or the summed influence of multiple rare ASVs. Experimental tests on diverse Endozoicomonas strains will be important to track the dynamics of Endozoicomonas across coral anatomy and to delineate any direct, strain-specific effects on disease susceptibility or growth rate. We found positive correlations between the total relative abundance of Endozoicomonas in coral tissue and the host traits of growth rate and disease susceptibility. This finding complements and contextualizes ongoing work on the mechanisms underlying the coral- Endozoicomonas symbiosis and the potential role of Endozoicomonas as a metabolic mutualist. It also echoes findings of correlations between life-history strategy and microbiome structure in other important marine invertebrates, such as that between predator defense and microbial abundance in marine sponges. The mechanisms by which corals with high proportions of Endozoicomonas become more vulnerable to disease are not yet known, but identifying them may shed light on the role of Endozoicomonas in coral symbiosis. Because these results rely on relative abundance, it is not yet clear whether absolute abundances of Endozoicomonas vary in the same way. Importantly, anatomically specific variation in true abundances may complicate the interpretation of relative abundance in bulk tissue – for example, if coral taxa vary greatly in absolute microbial abundances outside of CAMAs (similar to low vs. high microbial abundance sponges), those differences could alter apparent Endozoicomonas relative abundance. If the pattern of relative abundance reported here corresponds to absolute Endozoicomonas abundances, potential explanations fall into three main categories: ecological, structural, or immunological. Many coral microbes (but not Endozoicomonas ) are thought to protect against pathogenic disease by mechanisms such as antibiotic secretion, direct predation, and jamming of quorum signaling, and by physically occupying space close to host tissues that may restrict binding sites for opportunists and pathogens. In theory, it is possible that high dominance of Endozoicomonas may impact the overall diversity or richness of the coral microbiome, effectively restricting the diversity of potential microbial defenses that may benefit the health of the coral. Similarly, Endozoicomonas may interact directly or indirectly with other microbiome members in a way that reduces microbially derived host defenses.
However, that Endozoicomonas are frequently observed in discrete CAMAs complicates this possibility, as any effects on microbes outside the local area of these CAMAs would have to rely on indirect consequences of Endozoicomonas -coral interactions or secreted factors. Nevertheless, if this hypothesis were correct, the reductions in the abundance or relative abundance of Endozoicomonas that are often reported in diseased coral phenotypes (e.g., ) would then be adaptive on the part of the host, by allowing proportionally greater growth of other, more protective microbes. This hypothesis could be tested by microbial inoculation experiments that increase Endozoicomonas abundances prior to or concurrent with disease exposure, with the prediction that this would increase disease severity (although care must be taken to exclude nutritional benefits from corals directly eating the Endozoicomonas confounding the results). More systematic studies of whether high abundances of Endozoicomonas are exclusively found in visible CAMAs could also speak to the plausibility of this ecological hypothesis, by clarifying the likely routes for interaction between Endozoicomonas and other coral-associated microbes. In addition to ecological interactions, the Endozoicomonas - disease susceptibility correlation may also arise as a result of host traits that are permissive for the formation of microbial aggregates. As the cellular processes involved in establishing mutualism, commensalism and pathogenesis often overlap, the same host-microbe interactions that allow Endozoicomonas and some other microbes like Simkania to aggregate within coral tissues may also be more permissive towards invasion by pathogens. So far known coral pathogens have not been reported to be present within CAMAs. However, other structural mechanisms are possible. For example, the density, morphology, or diversity of septate junctions — which form epithelial barriers similar to tight junctions in chordates — might, in theory, influence the ability of both Endozoicomonas and pathogenic microbes to enter coral tissues. This idea could be tested by examining cellular morphology, sequence similarity, and/or gene expression of septate junctions and their constituent components in coral species in which CAMAs did or did not form. Finally, it is possible that coral immunological strategies that permit symbiosis with high abundances of Endozoicomonas also tend to make corals more vulnerable to pathogens. Coral species vary in immune investment (as measured by immune parameters like melanin abundance, phenoloxidase activity, etc.), and low immune investment has been observed to correlate with disease susceptibility . Some theory predicts that the evolution of more permissive immunological strategies is favored by symbionts that provide metabolic benefits to the host . In corals specifically, immune repertoires in key gene families such as TIR-domain containing genes vary greatly between species, which has been hypothesized to influence microbiome structure . Indeed, in sequenced coral genomes the copy number of some of these, such as IL-1R receptors, appear to correlate with several features of coral microbiomes, including Endozoicomonas abundance . Thus, symbiosis with Endozoicomonas may promote lower immune investment in corals, which in turn increases disease susceptibility. 
This hypothesis could be tested by comparing the length of coral- Endozoicomonas associations, to see whether longer histories of association lead to low immune investment, or by examining selection on innate immune genes in low vs. high Endozoicomonas coral lineages (e.g., by dN/dS ratios). A related immunological explanation would occur if Endozoicomonas itself achieves high relative abundances by suppressing aspects of host immunity. Genomic studies of host-associated Endozoicomonas identified variation in the proportion of eukaryote-derived genes and domains as a key feature of strain variation, including some domains thought to suppress immunity-induced apoptosis . Endozoicomonas has also recently been suggested to play a role in coral hormone homeostasis , which could have multiple physiological effects on coral tissues (even those not in direct contact with CAMAs), including potentially influencing both growth rate and immunity. If representatives of diverse strains could be cultured, experiments adding exogenous Endozoicomonas might clarify whether Endozoicomonas strains have any direct effects on coral immunity, and if so whether they differ from strain to strain. Animals evolved in a microbial world. The resulting interactions between animal hosts and their associated microbes influence organismal fitness, and the history of these interactions across generations may influence eco-evolutionary patterns. Using evolutionary analyses of coral microbiomes, we provide evidence that symbiosis with Endozoicomonas may mediate growth vs. disease resistance tradeoffs. While further manipulative studies are necessary to confirm this finding and determine the directionality of the relationship, evidence for this trend across the coral tree of life is compelling. Our comparative approach suggests that Endozoicomonas -dominated lineages of corals may grow more quickly under ideal conditions but are more likely to succumb to coral disease. Because much other work has shown that coral disease is exacerbated by global and local stressors such as climate-change driven heat waves or local pollution events , this may make Endozoicomonas- dominated coral especially vulnerable to environmental change . It has even been suggested that high dominance of one microbial taxon in the coral microbiome may have a stabilizing effect on the rest of the community , thereby limiting the flexibility of the microbiome to functionally adapt through restructuring when exposed to environmental stressors . Fig. 4 Endozoicomonas dominance facilitates life history tradeoffs. Conceptual hypothesis on the role Endozoicomonas dominance in coral microbiomes (teal icons, top row) plays in the tradeoff between growth and defense under varying environmental conditions. Endozoicomonas -dominated microbiomes may ( A ) provide a metabolic advantage for growth under normal environmental conditions (top left), but ( B ) lack the ecological, structural or immunological defenses against pathogen invasion, and therefore become susceptible to disease under stressful environmental conditions (top right). In contrast, microbiomes not dominated by Endozoicomonas (bottom left) grow slower, but may have lower disease susceptibility in stressful environmental conditions (bottom right) If microbial symbiosis does play a causal role in coral life history tradeoffs in the present day, then identifying microbes underlying those tradeoffs may benefit microbiome manipulation for targeted coral conservation and restoration strategies. 
For example, microbial screening (e.g., ) could help identify Endozoicomonas -dominated coral species or populations that may be more susceptible to disease and drive the conservation and protection of these individuals or their habitats. Identification of these target corals is perhaps most relevant for coral restoration initiatives that include breeding, nursery propagation and out-planting, where coral health is monitored closely and predicting disease susceptibility can inform decision-making. Depending on the mechanism underlying the Endozoicomonas- disease susceptibility correlations reported here, Endozoicomonas -dominated corals may further represent strong candidates for microbiome engineering (e.g., human-assisted manipulation of host-associated microbes or the application of probiotics ) to enhance host resilience in anticipation of stress events by decreasing microbiome dominance. That said, we emphasize that microbiome manipulation and other restoration initiatives are not replacements for efforts to decarbonize global economies to limit greenhouse gas emissions. The results presented here provide the first evidence of a likely microbe-mediated life-history tradeoff in Scleractinian corals. Further exploration of this and other such potential tradeoffs may shed light on the evolutionary interplay between microbes and the physiology and ecology of their animal hosts. 16S rRNA sequence data were obtained from visibly healthy coral DNA extractions collected and processed for the Global Coral Microbiome Project (GCMP). This included coral samples taken from Eastern and Western Australia that were used in a previous study by Pollock and co-authors in addition to coral samples taken from the Red Sea, Indian Ocean, Coral Triangle, Caribbean, and Eastern Pacific. All samples compared in this study were collected, processed, and sequenced using consistent protocols as outlined below. In total, 1,440 coral, outgroup, and environmental samples were collected. Of these GCMP samples, the 1,283 scleractinian coral and outgroup samples were used in the present study (Supplementary Data Table S1 a). These comprise 132 species and 64 genera of corals originating from 42 reefs spanning the Pacific, Indian, and Atlantic oceans. Excluding outgroups, these data included an average of 22.3 ± 3.3 samples per genus, with a minimum of n of 2 in the genus Lithophyllon (Supplementary Data Table S1 a, d). The collection and processing of these coral samples followed the methods outlined in Pollock et al. and are compatible with samples processed for the Earth Microbiome Project . Briefly, three coral compartments were targeted for each sample: tissue, mucus, and skeleton. Mucus was released through agitation of coral surface using a blunt 10mL syringe for approximately 30 s and collected via suction into a cryogenic vial. Small coral fragments were collected by hammer and chisel or bone shears for both tissue and skeleton samples into sterile WhirlPaks (Nasco Sampling, Madison, WI). All samples were frozen in liquid nitrogen on immediate return to the surface prior to processing. In the laboratory, snap frozen coral fragments were washed with sterile seawater and the tissue was separated from skeleton using sterilized pressurized air at between 800 and 2000 PSI. 
Tissue and skeleton samples were then preserved in PowerSoil DNA Isolation kit (MoBio Laboratories, Carlsbad, CA; now Qiagen, Venlo, Netherlands) bead tubes, which contain a guanidinium preservative, and stored at -80℃ to await further processing. Outgroup non-scleractinian Anthozoans were also opportunistically collected and stored similarly, including healthy samples of the genera Millepora (hydrozoan fire coral), Palythoa (zoanthid), Heliopora (blue coral), Tubipora (organ pipe coral), and Xenia and Lobophytum (soft corals). Bacterial and archaeal DNA were extracted using the PowerSoil DNA Isolation Kit (MoBio Laboratories, Carlsbad, CA; now Qiagen, Venlo, Netherlands). To select for the 16S rRNA V4 gene region, polymerase chain reaction (PCR) was performed using the following primers with Illumina adapter sequences (underlined) at the 5’ ends: 515 F 5′− TCG TCG GCA GCG TCA GAT GTG TAT AAG AGA CAG GTG YCA GCM GCC GCG GTA A − 3′ and 806R 5’− GTC TCG TGG GCT CGG AGA TGT GTA TAA GAG ACA GGG ACT ACN VGG GTW TCT AAT − 3′). PCR, library preparation, and sequencing on an Illumina HiSeq (2 × 125 bp) was performed by the EMP . All raw sequencing data and associated metadata for the samples used in this study are available on Qiita (qiita.ucsd.edu) under project ID 10895, prep ID 3439. 16S rRNA sequencing data were processed in Qiita using the standard EMP workflow. Briefly, sequences were demultiplexed based on 12 bp Golay barcodes using “split_libraries” with default parameters in QIIME1.9.1 and trimmed to 100 bp to remove low quality base pairs. Quality control (e.g., denoising, de-replication and chimera filtering) and identification of amplicon sequence variants (ASVs) were performed on forward reads using deblur 1.1.0 with default parameters. The resulting biom and taxonomy tables were obtained from Qiita and processed using a customized QIIME2 v. 2020.8.0 pipeline in python (github.com/zaneveld/GCMP_global_disease). Taxonomic assignment of ASVs was performed using vsearch with SILVA v. 138 (see below). Coral mitochondrial reads obtained from metaxa2 were added to the SILVA repository to better identify host mitochondrial reads that may be present in the sequencing data . We refer to this expanded taxonomy as “silva_metaxa2” in code. After taxonomic assignment, all mitochondrial and chloroplast reads were removed. The bacterial phylogenetic tree was built using the SATé-enabled phylogenetic placement (SEPP) insertion technique with the q2-fragment-insertion plugin to account for the short-read sequencing data, again using the SILVA v. 138 database as reference taxonomy. The final output from this pipeline consisted of a taxonomy table, ASV feature table and phylogenetic tree that were used for downstream analyses. Potential contaminants from extraction and sequence blanks ( n = 103 negative controls) were identified and removed using the decontam package in R v. 4.0.2 with a conservative threshold value of 0.5 to ensure all ASVs that were more prevalent in negative controls than samples were removed ( n = 662 potential contaminants). The final feature table consisted of a total of 1,383 samples, 195,684 ASVs, and 37,469,008 reads. Disease data were gathered from long-term multi-species surveys in the Florida Keys (the Florida Reef Resilience Program (FRRP), https://frrp.org/ ), Hawaiʻi (HICORDIS ), and Australia (this study). Disease counts for Australian corals were collected over a period of 5 years across 109 reef sites and 65 coral genera (Supplementary Data Table S1 b). 
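Before turning to the field-survey methods below, the contaminant-removal rule described above (decontam with a conservative threshold of 0.5, i.e., flag any ASV more prevalent in negative controls than in true samples) can be approximated in a few lines of pandas. The sketch reproduces that prevalence rule only; it is not the decontam R package itself, and the toy count matrix and column names are invented for illustration.

```python
import pandas as pd

# Toy ASV count matrix: rows are samples, columns are ASVs (values invented).
counts = pd.DataFrame(
    {"ASV1": [120, 80, 95, 0, 2],
     "ASV2": [0, 3, 0, 50, 40],     # appears mostly in blanks -> contaminant
     "ASV3": [10, 0, 25, 0, 0]},
    index=["coral1", "coral2", "coral3", "blank1", "blank2"],
)
is_control = pd.Series([False, False, False, True, True], index=counts.index)

presence = counts > 0
# Prevalence = fraction of samples in which an ASV is detected,
# computed separately for negative controls and true samples.
prev_in_controls = presence[is_control].mean()
prev_in_samples = presence[~is_control].mean()

# Flag ASVs that are more prevalent in negative controls than in samples,
# mirroring the conservative 0.5 threshold described above.
contaminant = prev_in_controls > prev_in_samples
cleaned = counts.loc[:, ~contaminant]

print("flagged as contaminants:", list(counts.columns[contaminant]))
print(cleaned)
```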
At each of the 109 reefs, we surveyed coral health using 3 replicate belt transects laid along reef contours at 3–4 m depth and approximately 20 m apart, using globally standardized protocols. Depending on the reef location, belt transects were 10, 15, or 20 m in length by 2 m in width, making the area surveyed at each reef between 60 and 120 m². Within each belt transect, we identified each coral colony over 5 cm in diameter to genus and classified it as either healthy (no observable disease lesions) or affected by one or more of six common Indo-Pacific coral diseases (according to Lamb and co-authors). Together with the FRRP and HICORDIS data, the combined disease dataset contained 582,342 coral observations across 99 coral genera (Supplementary Data Table S1c). Because many of these disease observations identified corals only to genus, disease prevalence data were summarized at the genus level. All three resources represent coral surveys over time, ranging from 5 to 16 years. We chose such long-term datasets in an attempt to minimize the potential effects of specific events (e.g., bleaching in a single summer) and instead to capture more general trends in disease susceptibility across species, if such trends were present. Summarizing these data at the genus level was thus part of a comparative strategy, enabling us to extract overall trends and average out local circumstances, so that we could identify holobiont features controlling disease resistance that may protect some corals but not others. When summarizing at the genus level, individual counts of healthy corals or corals with specific diseases were summed within coral genera across these datasets. To ensure sufficient replication, we excluded coral genera with fewer than 100 observed individuals. This minimum count was selected because it is the lowest sample size at which diseases with a reasonably high prevalence (e.g., 5%) can be reliably detected. (With 100 counts, there is a >95% chance of detecting at least one case of any disease present at >=5% prevalence; cumulative binomial, 100 trials, success chance = 0.05.) Because only very rarely observed taxa were removed, this filtering preserved 99.8% of total observations. Ultimately, our genus-level summary produced a table with 581,311 observations across 60 coral genera (Supplementary Data Table S1d). For a breakdown of disease susceptibility by coral host genus, see Supplementary Fig. S5A. Statistical summaries of microbiome community composition were calculated for each sample in QIIME2 and then summarized within anatomical compartments and coral genera. These summaries of coral microbiome alpha diversity were richness, evenness (the Gini Index), and Simpson's Index, which combines both richness and evenness. Thus, each combination of coral genus and anatomical compartment — such as Acropora mucus — was assigned an average α-diversity value. Simpson's Index, which is of particular importance in these results, is at its highest when a single taxon is the only one present in the microbiome, and at its lowest when many taxa are present and all have equal abundance (or relative abundance). Thus, this measure is reduced both by community richness and by community evenness (Simpson's Index is closely related to Simpson's Diversity, calculated as 1 − Simpson's Index, such that richer or more even communities produce higher values).
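The cumulative-binomial rationale quoted above for the 100-observation cutoff can be verified with a one-line calculation. The sketch below, using scipy, is only a check of that arithmetic (100 observed colonies, 5% true disease prevalence), not part of the original analysis code.

```python
from scipy.stats import binom

n_colonies = 100       # minimum number of observed individuals per genus
prevalence = 0.05      # disease present at 5% prevalence

# Probability of observing at least one diseased colony:
# 1 - P(zero diseased colonies out of 100)
p_detect = 1.0 - binom.pmf(0, n_colonies, prevalence)
print(f"P(detect >= 1 diseased colony) = {p_detect:.3f}")   # ~0.994, i.e., >95%

# Equivalent closed form: 1 - (1 - 0.05) ** 100
print(1.0 - (1.0 - prevalence) ** n_colonies)
```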
The summarized, genus-level disease susceptibility data compiled from all disease projects and the summarized genus-level microbiome diversity data (see above) were combined to form a trait table that was used in subsequent evolutionary modeling. Additionally, the relative abundance of 'dominant' microbes analyzed in this study was averaged within genera and added to this genus-level trait table. Starting with a published multigene time-calibrated phylogeny of corals that we had previously used to demonstrate phylosymbiosis in corals, we randomly selected one representative species per genus to produce a genus-level tree. This approach was preferred over several alternatives — such as trimming the tree back to the last common ancestor of each genus and reconstructing trait values — because it required fewer assumptions about the process of trait evolution. As microbiome data were not available for all genera on the coral tree (e.g., temperate deep-sea corals), the tree was further pruned (preserving branch lengths) to include only the subset of branches that matched those with microbiome data. To examine the influence of microbiome structure on coral traits, we pulled growth data from the Coral Trait Database for all coral genera that also had both microbiome and disease data and whose growth had been recorded using consistent metrics (mm/yr). This resulted in growth rate data from 18 coral genera, which were subsequently combined with our genus-level trait table. Shared evolutionary history induces correlations in traits between species that violate the requirement of standard statistical tests that observations be independent and uncorrelated. Thus, special care must be taken to account for phylogeny in comparative analysis. We first applied Felsenstein's phylogenetic independent contrasts (PIC) to visualize our cross-genus trait correlations using the phytools R package. This method removes the effect of shared evolutionary history by calculating differences in trait values (contrasts) between sister taxa. We next examined the relationships between traits using information-theoretic model selection (that is, comparison of AICc scores) to identify phylogenetic generalized least squares (PGLS) models of evolution that best explained the observed distribution of microbiome α- or β-diversity and disease susceptibility (as continuous evolutionary characters) in extant species. We tested 4 evolutionary models in the caper R package. In the first model, we used PGLS with no branch length transformation (i.e., holding λ, δ, and κ at 1). Thus, this first model is equivalent to PIC. In the next 3 models, we transformed branch lengths on the tree by allowing the model to fit either λ, δ, or κ (see below) using maximum likelihood estimation, while fixing the other 2 parameters at 1. We refer to these 4 models as PGLS, PGLS + λ, PGLS + δ, and PGLS + κ. For detailed explanations of each parameter, please refer to Supplementary Data Table S13. Typically, these models estimated very low λ (~0), indicating little phylogenetic inertia. Multiple comparisons were accounted for by calculating q values for false discovery rate (FDR) control. Significant relationships between two traits suggest that they are evolutionarily correlated. All statistics reported represent the best PGLS model results.
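The core of PGLS is ordinary least squares with an error covariance matrix derived from the phylogeny: under Brownian motion, the expected covariance between two taxa is proportional to the length of their shared path from the root. The sketch below shows that estimator in plain numpy for a made-up four-taxon tree and invented trait values; it is a conceptual illustration of PGLS without branch-length transformation, not a replacement for the caper analyses described above.

```python
import numpy as np

# Brownian-motion covariance for a hypothetical 4-taxon tree:
# V[i, j] = shared branch length from the root for taxa i and j (values invented).
V = np.array([
    [1.0, 0.6, 0.2, 0.2],
    [0.6, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.5],
    [0.2, 0.2, 0.5, 1.0],
])

# Invented genus-level trait values (predictor x, response y).
x = np.array([0.10, 0.30, 0.55, 0.70])   # e.g., Endozoicomonas relative abundance
y = np.array([0.05, 0.12, 0.30, 0.38])   # e.g., disease prevalence
X = np.column_stack([np.ones_like(x), x])  # intercept + slope design matrix

# Generalized least squares: beta = (X' V^-1 X)^-1 X' V^-1 y
Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
resid = y - X @ beta
sigma2 = float(resid @ Vinv @ resid) / (len(y) - X.shape[1])

print("intercept, slope:", beta)
print("residual variance (phylogenetically corrected):", sigma2)
```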
Additionally, ancestral state reconstructions of key traits were visualized using the contMap function in the phytools R package, which in turn estimates internal states using fast maximum-likelihood (ML) ancestral state reconstruction as implemented in the fastAnc phytools function. Annotated code for all phylogenetic correlations is available within the run_all_PICs.ipynb script on GitHub: https://github.com/zaneveld/GCMP_Global_Disease/blob/master/analysis/core_analysis/procedure/run_all_PICs.ipynb with correlation results and stats organized by analysis number (A1-15). Observing that A and B are correlated famously does not guarantee that A causes B. However, non-random correlation between A and B does imply some causal association, though there are many possibilities (A causes B, B causes A, a positive feedback loop exists between A and B, some external factor C causes both A and B, etc.). Path analysis represents hypotheses of causality as directed acyclic graphs, then compares the strengths of association predicted under different hypotheses of causation to determine which are consistent with the data. The cross-species nature of these data further necessitated the use of phylogenetic path analysis, which also accounts for expected trait correlations among related genera. Hypotheses about the direction of causality between the microbiome (specifically Endozoicomonas ), disease, and growth rate were tested using a phylogenetic causality analysis performed in the R package phylopath. This analysis tests the ability of different models to explain correlations in trait data. For example, does selection for a high growth rate in turn drive selection for increased Endozoicomonas relative abundance, which then increases disease susceptibility, or does symbiosis with Endozoicomonas itself separately increase disease and growth? Fourteen potential causality models were tested to incorporate all biologically plausible pathways between Endozoicomonas relative abundance, disease susceptibility, and growth rate. The top-performing causality models according to CICc values (using both Pagel's λ and Brownian Motion models of evolution) were averaged for interpretation and visualization. ASVs annotated as Endozoicomonas at the genus level were extracted from the rarefied QIIME2 coral microbiome feature table. Differences in ASV diversity within the Endozoicomonas genus were assessed by PERMANOVA of weighted UniFrac or Aitchison beta-diversity distance matrices. For analysis of Endozoicomonas ASVs by compartment, Endozoicomonas ASVs were pooled according to whether they had greatest relative abundance in mucus, tissue, or skeleton. The relative abundance of these compartment-specific pools was then regressed against host traits using PGLS, as outlined above. Electronic supplementary material (Supplementary Materials 1–14) accompanies the online version of this article. | Study | biomedical | en | 0.999997
PMC11697514 | Central pancreatectomy has emerged as an effective therapeutic alternative for the management of benign and low-grade pancreatic tumors, especially in cases where the preservation of pancreatic function is crucial, and minimizing morbidity associated with more radical resections, such as distal pancreatectomy or pancreatoduodenectomy, is desired . Unlike these traditional techniques, central pancreatectomy allows for the resection of localized neoplasms without significantly compromising the surrounding pancreatic tissue, resulting in a reduced risk of postoperative pancreatic insufficiency . Recent studies support central pancreatectomy as a valid therapeutic option in selected clinical contexts, where multidisciplinary preoperative evaluation plays a fundamental role in case selection. This article presents a clinical case of central pancreatectomy to contribute to the understanding of its role as a therapeutic option in the treatment of low-grade pancreatic tumors and its impact on pancreatic tissue preservation. A 45-year-old female patient with a history of conversion from sleeve gastrectomy to Roux-en-Y gastric bypass three years ago due to gastroesophageal reflux disease (GERD) presented with a pancreatic cystic lesion found incidentally on abdominal ultrasound screening . Further investigation with magnetic resonance imaging (MRI) revealed a 20 mm cystic lesion in the neck of the pancreas without features of malignancy . Endosonography showed no findings suggestive of aggressiveness. Pancreatic fine-needle aspiration (FNA) was performed (Table 1 ), with citrine-colored fluid aspirated and carcinoembryonic antigen (CEA) < 1.8 ng/mL. The biochemical analysis of the fluid showed amylase of 144 U/L; glucose of 102 mg/dL; immunohistochemistry, chromogranin A diffusely positive in neoplastic cells; synaptophysin, diffusely positive in neoplastic cells; and Ki-67, proliferation index estimated at <1%, consistent with well-differentiated grade 1 neuroendocrine tumor (G1 NET). Preoperative laboratory tests were performed, showing serum chromogranin A within the normal reference range (Table 2 ). PET-CT with octreotide showed uptake in the pancreas with no other lesions. Given the suspicion of a neuroendocrine pancreatic neoplasm, the case was discussed in a clinical committee, and an open central pancreatectomy was decided. The surgery was performed through a midline laparotomy, with the opening and section of the gastrocolic ligament, providing access to the lesser sac and full exposure of the pancreas. Macroscopically, a soft pancreas with a well-defined, partially exophytic cystic lesion in the neck, approximately 20 mm in its largest diameter, was identified, involving almost the entire thickness of the pancreatic parenchyma . The lymph node dissection of group 8 was performed, along with the dissection of the common hepatic artery and splenic artery. The dissection of the pancreatic groove to the left of the mesenteric vessels allowed the creation of a retro-pancreatic tunnel without complications. To the left, the splenic vein was identified in its usual position. The tunnel was completed using blunt dissection and an esophageal retractor to encircle the pancreatic body, achieving wide proximal and distal margins . A macroscopic view revealed a well-defined, partially exophytic cystic neoplasm located in the neck of the pancreas. The pancreatic neck was fully exposed, ensuring wide distal and proximal margins. 
The transection of the pancreatic neck was performed using an Endo GIA (Covidien, Dublin, Ireland) 60 mm purple cartridge, and distal pancreas resection was completed with monopolar energy . The Wirsung duct was identified with a diameter of approximately 2-3 mm . The jejunum was transected 20 cm distal to the previous entero-entero anastomosis of the gastric bypass. A transmesocolic loop was brought up, and a Blumgart pancreatojejunostomy was created with 10 separate duct-to-mucosa Prolene 5-0 stitches . A side-to-side mechanical entero-entero anastomosis was performed. Two Blake drains were placed in the pancreatic bed, with distal ends adjacent to the pancreatic stump and exteriorized through the right flank. On postoperative day 4, both drains had minimal output (drain I, 10 cc; drain II, 59 cc) with amylase levels of 5878 and 59 mg/dL, respectively (Table 3 ). Drain II was removed, and the patient was discharged on postoperative day 5 due to favorable clinical progress. She was evaluated eight days post-discharge; drain I output was 3-5 cc/day, leading to its removal. Biopsy confirmed a well-differentiated 2.1 cm G1 neuroendocrine tumor. Surgical margins were negative, with no vascular, lymphatic, or perineural invasion (pT2N0). At the five-month follow-up, the patient was asymptomatic, with a control abdominal PET-CT showing no abnormalities. Traditional approaches, such as distal pancreatectomy and pancreatoduodenectomy, involve more extensive resections, which may result in a higher risk of postoperative complications, including diabetes and malabsorption . Authors such as Iacono et al. support the idea that central pancreatectomy may be superior to distal pancreatectomy in certain contexts, particularly for patients with benign or low-grade tumors. Central pancreatectomy offers greater pancreatic tissue preservation and a lower rate of severe complications compared to distal pancreatectomy, making it the preferred surgical option in selected cases . The choice of surgical technique should consider not only the type of tumor but also the patient's clinical characteristics and the surgical context. Multidisciplinary clinical evaluation is essential for decision-making, where surgical planning plays a fundamental role. In terms of pancreatic function preservation, a recent study by Lee et al. directly compared central pancreatectomy, distal pancreatectomy, and duodenopancreatectomy. Central pancreatectomy demonstrated significant advantages in terms of pancreatic functional preservation, with favorable long-term outcomes not only for pancreatic function but also in preserving pancreatic mass, which ultimately translates into a lower incidence of postoperative diabetes and an improved quality of life for patients . Regarding the effectiveness and safety of central pancreatectomy, a recent systematic review and meta-analysis established its feasibility for both open and minimally invasive techniques. Although minimally invasive techniques offer additional benefits in terms of reducing surgical trauma, recovery time, and hospital stay, central pancreatectomy remains effective in open surgery . The adoption of minimally invasive techniques developed in recent years represents a current challenge, with laparoscopic approaches being a safe technique and an important advancement in surgical practice . 
Despite the numerous advantages of central pancreatectomy, the technique requires a high level of skill and experience from the surgeon, which may limit its application in centers with less experience in pancreatic surgery. In our case, the surgical team had extensive experience in pancreatic surgery, enabling central pancreatectomy to be considered as a therapeutic option, with thorough preoperative evaluation and planning. We believe that continuous training in minimally invasive techniques and the establishment of standardized protocols play a fundamental role in improving surgical outcomes in these cases. The reviewed studies support central pancreatectomy as a valid therapeutic option and, in certain cases, a preferred choice, particularly with the current advancements in minimally invasive techniques. | Clinical case | clinical | en | 0.999996 |
PMC11697539 | Dupilumab is a fully human monoclonal antibody against the interleukin (IL)-4 receptor alpha subunit (IL-4Rα). Binding of the monoclonal antibody to the IL-4Rα inhibits the signaling of IL-4 and IL-13, the 2 major cytokines secreted by CD4 + T-helper 2 (Th2) cells. 1 Dupilumab has been approved for the treatment of moderate-to-severe atopic dermatitis (AD) not adequately controlled by topical therapies and has become the first monoclonal antibody for the treatment of AD. 1 Cutaneous T-cell lymphomas (CTCLs) are characterized by mature CD4 + T-helper cells that are remarkably Th2-biased with strong inhibition of Th1 responses. 2 , 3 Blocking IL-4/IL-13 signaling pathways by anti-IL-4 neutralizing antibody reduces the proliferation of mycosis fungoides (MF) cells. 2 IL-4 and IL-13 are the major cytokines transforming the tumor-associated macrophages (TAMs) to M2 macrophages that promote cancer progression and treatment resistance, and dupilumab reduces the pro-tumor phenotype of M2 macrophages. 4 However, previous studies have shown that IgG4 is highly expressed in various types of tumor tissues, such as pancreatic cancer, 5 gastric cancer, 6 and others. IgG4 reduces the expression of CD206, CD163, and CD14 on the surface of M2 macrophages, increases the production of CCL-1, IL-10, and IL-6, induces the M2b-like macrophage phenotype, which impairs the tumor cell phagocytosis function and the function of anti-cancer effector cells. 7 Therefore, dupilumab, a fully human IgG4 monoclonal antibody, may induce macrophage polarization to M2b, mediating tumor tolerance and ultimately leading to cancer progression. Previous studies have reported that dupilumab may cause the worsening of existing tumors prior to the antibody therapy or may drive the appearance of typical tumors, in AD or refractory pruritus patients during or after dupilumab treatment. These tumors include CTCLs, other skin tumors, hematologic tumors, and solid tumors. For example, dupilumab treatment unmasked the atypical lymphoid infiltrates or MF in patients with refractory presumed AD and pruritus. 8 In this article, we collected and analyzed the relevant cases reported in the literature to explore the safety of dupilumab treatment for AD or refractory pruritus and the possible mechanisms of dupilumab on tumors. A systematic search in the PubMed database was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines . We used the keywords “dupilumab AND cancer” and “dupilumab AND tumor” without limitation of article type to search the English publications in PubMed from 2017, the beginning of dupilumab for clinical use, until August 2024. A total of 148 full-text and eligible articles were retrieved, of which 43 articles were analyzed in this study after excluding the eligible articles without reported case(s) (n = 105). Fig. 1 PRISMA flow diagram of systematic review Fig. 1 A total of 90 patients, including AD, presumed AD, and other diseases with refractory pruritus, were reported in the 43 articles. Their demographics, preliminary diagnosis and treatment, dupilumab treatment method, the tumor diagnosed before or after dupilumab treatment, tumor type, TNM (tumor, node, and manifestation) classification and stage, changes of tumor and skin lesions after dupilumab treatment, and outcome of the patient were extracted from the 43 articles. A 600 mg dupilumab loading dose followed by 300 mg dupilumab every 2 weeks was used in most cases. 
The therapeutic effect was evaluated mainly according to the changes in skin rash, pruritus score, and quality of life index. We paid special attention to the occurrence of tumors and changes in tumor progression to evaluate the effect of dupilumab treatment on tumors. The summary statistics of 90 patients are shown in Table 1 , and the specific clinical characteristics are shown in Table S1 . They were aged 22–82 years old. Except for 7 patients whose gender was not specified, 50 patients were males and 33 were females, with a male-to-female ratio of 1.52:1. The course of dupilumab treatment ranged from 1 injection to several years. It is important to note that all patients had concomitant tumors or newly developed tumors after the use of dupilumab. Table 1 Demographic characteristics and changes of disease in patients with concomitant or newly emerging tumors treated with dupilumab. Table 1 Demographic characteristic Patients, No. (%) Male sex 50 (55.6) Female sex 33 (36.7) Unknown sex 7 (7.8) Age 22–82 years old Dupilumab treatment duration Once to 30 months Tumor characteristic Pre-exist before dupilumab treatment 62 With primary tumors 57 (63.3) CTCL misdiagnosed as AD 3 (3.3) With primary solid tumors, new CTCLs 2 (2.2) Present after dupilumab treatment 30 With new tumors 28 (31.1) With primary solid tumors, new CTCLs 2 (2.2) Tumor type CTCL 34 (37.8) Other skin tumors 9 (10) Hematological 24 (26.7) Solid tumors 26 (28.9) Tumor changes Stable 14 (15.6) Progression 10 (11.1) Response 13 (14.4) Not available 53 (58.9) Primary dermatological changes No response 24 (26.7) Response 40 (44.4) Not available 26 (28.9) AD, atopic dermatitis; CTCL, Cutaneous T-cell lymphoma. A total of 62 patients had the tumors before dupilumab treatment. 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 Most patients showed improvement in AD rash and pruritus, and only 8 patients showed aggravation of rash and pruritus. 8 , 13 , 37 The changes in tumors after dupilumab treatment are listed in Table 2 . It seems that dupilumab has no significant negative effects on these tumors, except that more CTCL patients under dupilumab treatment are required to be observed to draw a conclusion. Table 2 Patients with pre-existing tumors and changes in the tumor after dupilumab treatment. Table 2 Tumor No. Stable Death or progression Partial response Remission or very good response Changes not available Skin (CTCL: MF, SS, CTCL-NOS) 10 5 (case 1,5,6,7,25) 3 (case 19,21,26) 2 (case 33,34) Skin (melanoma, squamous cell carcinoma, angiosarcoma) 9 3 (case 39,40,41) 2 (case 38,42) 2 (case 35,36) 2 (case 33,37) Hematological (multiple myeloma, lymphoma) 21 1 (case 53) 3 (case 54,59,61) 2 (case 38,62) 2 (case 52,60) 13 (case 43,44,45,46,47,48,49,50,51,55,56,57,58) Solid tumors 25 10 (case 78,79,82,83,84,85,86,87,88,90) [16,17,27.28,34] 1 (case 89) 2 (case 42,76) 2 (case 77,80) 10 (case 15,31,66,67,68,69,70,71,72,75) P.s.: case 15 and case 31 were originally diagnosed with solid tumors and presented CTCL after dupilumab treatment. Case 33,38 and 42 had overlapping multiple tumor types, so they were counted twice. A total of 30 patients presented tumors after dupilumab treatment (1 patient subjected to only 1 injection of dupilumab is excluded from the analysis). 8 , 10 , 14 , 21 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 Interestingly, 23 of the 29 patients developed CTCLs. 
Most authors believed that the pre-existing malignant T-cell clone may overgrow in a changed immune microenvironment. For example, some patients were first diagnosed with presumed AD (cases 1, 2, 3, 4, 16) and actually were the presentations of CTCL at an early stage; they developed typical CTCL symptoms after dupilumab treatment. 8 , 49 Therefore, careful examination of the refractory patients with atypical AD lesions to identify the possibility of concomitant tumors, especially CTCLs, before dupilumab treatment is recommended. On the other hand, in some patients with long-standing AD, eczematous or psoriasiform lesions confirmed by multiple biopsies, CTCLs occurred following dupilumab treatment (cases 24,27,28,29,30,63,64). 36 , 41 , 42 , 46 , 48 Three patients (cases 73,74,81) developed testicular tumors, embryonic cancer, or bladder cancer after dupilumab treatment. 10 , 39 Whether these tumors were coincidental events or a specific correlation with the inhibition of IL-4/IL-13 is uncertain. Type 2 immunity, illustrated by T helper 2 lymphocytes (Th2) and downstream cytokines (IL-4, IL-13, IL-31) as well as group 2 innate lymphoid cells (ILC2), is important in host defense and wound healing. 50 In CTCL, the expression of STAT4, an important transcription factor of Th1 lymphocyte subsets, is upregulated in the early stage. However, with the development of the disease, the expression of signal transducer and activator of transcription 4 (STAT4) is usually lost, leading to the switch from Th1 to Th2, causing cancer progression and immunosuppression, which is associated with worse clinical prognosis. 51 The 2 most common CTCLs, advanced MF and SS, are often associated with eosinophilia and high IgE levels. 52 IL-4 and IL-13 are the main cytokines that drive the Th2 response and inhibit Th1/Th17 differentiation. 53 , 54 They are also important growth factors in primary cutaneous lymphoma, where IL-13 acts on tumor lymphocytes in an autocrine manner. 2 Both are involved in stimulating B-cell differentiation, IgE production, eosinophilic growth, and aggregation 53 , 54 , 55 and have been confirmed as irritants of chronic pruritus. 53 , 54 Therefore, inhibition of the IL-4/13 pathway can theoretically improve the clinical symptoms of CTCL. It has been reported that IL-13 is an autocrine factor in CTCL. CTCL cells produce IL13 and express IL13 receptors, which can induce the growth of CTCL cells. Moreover, pSTAT6 was highly expressed in CTCL lesions, implying the activation of the IL4/IL13 pathway. It was confirmed that blocking IL-4 and IL-13 had a synergistic effect on inhibiting the growth of CTCL cells. Interestingly, blocking IL-13Rα2 revealed an even stronger inhibition of tumor growth, considering that IL-13 binds to IL13Rα2 more strongly than IL13Rα1. 2 , 56 Therefore, blocking the heterodimer formed by IL-4Rα and IL-13Rα1 may increase the binding of IL-13 to the IL-13Rα2 site. Effectively increasing the IL-13 shunt in the tumorigenic pathway may achieve a tumor promotion effect. TAMs are abundant tumor-associated macrophages in the tumor microenvironment (TME). 4 Macrophages can account for more than 50% of solid tumors and play an important role in cancer progression. 57 , 58 The high permeability of TAMs is associated with poor prognosis. 59 , 60 , 61 , 62 , 63 They are usually classified as either an antitumor phenotype (M1-like) or a tumor-friendly phenotype (M2-like). 
Most TAMs exhibit an M2 phenotype that supports tumor growth, immune escape, and metastasis 4 and promotes therapeutic resistance through various mechanisms. 57 , 60 , 64 , 65 , 66 , 67 M2 TAMs can also counteract the effect of cytotoxic agents on cancer cells through the secretion of survival signals and cathepsins. 64 , 65 IL-4 and IL-13 are the main cytokines that polarize macrophages into the M2 subpopulation. 4 Therefore, blocking the IL-4/IL-13 pathway may have anticancer effects. However, the tumor microenvironment is complex and dynamic and cannot be fully simulated by in vitro models. In an animal model of prostate cancer, drug inhibition of IL4Rα did not affect tumor growth. 4 Therefore, further in vitro and in vivo tests are needed to evaluate the effect of targeting the IL4/IL13 pathway in different tumors. Dupilumab is a fully human monoclonal antibody of the immunoglobulin G4 (IgG4) subclass. IgG4 antibody have a unique affinity profile for Fc gamma receptors (FcγRs) and support phenotypical macrophage changes towards an M2b-like state. 68 Macrophages express FcγRIIa which is involved in antibody-dependent cellular phagocytosis (ADCP). 69 Since IgG4 has a low affinity for FcγRIIa and a higher affinity for inhibitory FcγRIIb than for other IgG subclasses and only acts as an inhibitory effect when other FcγRs are co-involved, 70 it is possible that IgG4 may dampen FcγR immune activation by co-engaging FcγRIIb together with the engagement of any other FcγRs by antigen-specific IgG1. Furthermore, since IgG4 is not able to trigger complement-dependent cytotoxicity (CDC), 71 any tumor specific IgG4 antibody competing with tumor specific IgG1 antibody indirectly reduces IgG1-mediated CDC. Therefore, IgG4 is a key to immune tolerance in cancer. Previous study has found that IgG4 inhibited IFNγ signaling via FcγRI, and favoring an M2b-like phenotype, 72 which plays a role in the formation of CTCL by secreting various chemokines, 52 such as CCL-1, IL-10, and IL-6. 68 CCL1 secretion is critical to maintain the M2b phenotype in mice and humans, 73 while IL-10 has been found to impair the differentiation of infiltrated monocytes into mature dendritic cells (DCs), thereby compromising the competent host anti-tumor immune response. 74 Through the analysis of CTCL patients and animal experiments, it has been proved that M2-like phenotype macrophages play an important role in the tumorigenesis of CTCL, and the depletion of macrophages inhibits tumor growth in a mouse model. 75 In conclusion, as an IgG4 monoclonal antibody, dupilumab may promote tumor immune escape by affecting macrophage polarization and cytokine secretion. In our study, most AD patients with tumors showed improvement in AD symptoms, tumor stabilization, or regression after treatment with dupilumab. Only a few patients showed tumor progression, which was mainly MM and CTCL. Combined with the above analysis, we suggest that in most cases, dupilumab has no effect on tumor progression or even prevents tumor progression by blocking the IL-4/IL-13 pathway and/or inhibiting the transformation of TAMs to the M2 phenotype. However, dupilumab may promote tumor progression by blocking IL-13Rα1 and then increasing IL-13 binding to IL13Rα2, thus promoting tumor progression in some tumors, especially CTCLs. The effect of dupilumab on tumors may be determined by whether IL4/IL13 signaling plays a dominant role in tumors. Different tumors have distinctive signaling pathways with pro-tumor and tumor-suppressive roles. 
Furthermore, advanced CTCLs are aggressive. It is unclear whether dupilumab treatment worsens the disease or whether it is the natural course of the disease's progression. Due to the limited sample size, more observations and studies are needed to determine the effect of dupilumab on primary tumors. Some patients with dupilumab treatment developed new tumors, including CTCL in the majority, as well as other skin tumors, hematological tumors, and solid tumors such as bladder cancer. For individual patients with new solid tumors such as bladder cancer, the authors noted that there was no significant correlation between the occurrence of tumors and the use of dupilumab. 39 However, we should pay special attention to CTCL. A group of patients with an initial diagnosis of AD confirmed pathologically atypical lymphocyte infiltration following dupilumab treatment. 44 Over an average of 9.8 months after dupilumab treatment, the density, distribution pattern, and composition of lymphatic infiltrates gradually changed from reactive to neoplastic patterns. 44 Previous data have supported the progressive development of CTCL in the context of chronic inflammatory processes such as AD driven by exogenous and endogenous factors. 76 Therefore, dupilumab may be a potential trigger for the initial progression of benign lymphocyte tissue infiltration, leading to clonal expansion of T lymphocytes and subsequent malignant transformation. Although this study was limited by its retrospective design and sample size, it reminds us that careful clinical, histopathological, and immunohistochemical evaluation should be performed before and during the treatment of refractory AD and that continuous skin biopsy is necessary. As described above, individual patients were misdiagnosed with AD and were given dupilumab treatment. After the treatment, the clinical symptoms worsened, and the diagnosis of CTCL was confirmed by multiple biopsies and other relevant examinations. Because CTCL can simulate multiple clinical manifestations of inflammatory skin diseases, especially in the early stages, repeated biopsies are required to assist in a definitive diagnosis. The prognosis and treatment regimens of AD and CTCL are largely different. Therefore, it is necessary to perform multiple or repeated pathological biopsies on refractory AD patients or AD patients with atypical skin lesions to determine whether there is a diagnosis of CTCL. We summarized the literature on tumors in the setting of dupilumab use and came up with the following conclusions: dupilumab is theoretically safe in patients with concomitant tumors, but a small number of patients, especially those with CTCLs, developed progression of the primary tumor after treatment. Although a clear correlation between dupilumab therapy and tumor progression cannot be demonstrated, dupilumab does not have a definite effect on preventing tumor progression. In clinical studies of dupilumab for other diseases, such as asthma, chronic rhinosinusitis with nasal polyps, and eosinophilic oesophagitis, a small number of patients had serious adverse events related to neoplasms, which was ultimately determined without significant relationship to dupilumab treatment. 77 , 78 , 79 , 80 Due to the small sample size of relevant studies and the characteristics of the advanced malignant behavior of CTCL itself, it is still uncertain whether dupilumab causes tumor progression, and it is necessary to pay close attention to tumor changes during treatment. 
In addition, early CTCL is easily misdiagnosed as AD, which has a distinctive prognosis and treatment, so it is necessary to make a clear and definitive diagnosis. In a cross-sectional study of dupilumab-associated MF, the more advanced disease stage at the time of MF diagnosis during dupilumab use, the shorter the treatment duration to MF onset. In addition, older age and male sex seem to have a higher risk of advanced MF. 81 Therefore, close monitoring of elderly men and late-stage MF patients with serial biopsies and close observation of clinical changes may be warranted. If patients show refractory or atypical lesions of AD, the possibility of CTCL should be considered. We suggest that biopsy criteria should be lowered before the application of biologics, and close follow-up should be conducted during treatment to evaluate the presence of CTCL based on clinical manifestations, pathological biopsies, TCR rearrangement, and immunological tests. Once a diagnosis of CTCL is established, caution is recommended to discontinue dupilumab and aggressively pursue lymphoma-related therapy. Thus, a multidisciplinary committee with oncologists is recommended to jointly assess the patient's condition and guide more precise treatment. In conclusion, dupilumab, as the first monoclonal antibody approved for the treatment of moderate to severe AD, makes a significant contribution to improving the quality of life of patients with AD and AD-like symptoms. However, its safety and efficacy in the context of cancer remain unclear. In this study, we conclude that tumors are not an absolute contraindication for dupilumab, but careful evaluation before and during treatment is warranted. Limitations of this review include small sample size, incomplete clinical data of patients, short mean follow-up time, and inability to know the long-term prognosis of patients. More clinical reports and mechanistic studies are needed to clarify the safety and efficacy of dupilumab in the tumor setting. Here, we summarize the use of dupilumab in the tumor setting and cases of new tumors after dupilumab treatment in the literature and describe the demographics, clinical characteristics, therapeutic responses, and clinical outcomes of these patients. Based on these findings, we conclude that dupilumab is not an absolute contraindication for tumor use, but also does not have a definite tumor therapeutic effect. We recommend early and repeated testing in refractory AD patients with atypical lesions to identify the possibility of concomitant tumors, especially CTCLs. We suggest that the criteria for biopsy may be lowered appropriately and note that negative or ambiguous results do not preclude a diagnosis of CTCL. Dupilumab can be discontinued out of caution when the tumor is detected and active tumor-related therapy is initiated. It is necessary to follow up closely before and during treatment and to monitor the occurrence and development of tumors. AD, atopic dermatitis; CTCL, Cutaneous T-cell lymphoma; IL, interleukin; IL-4Rα, interleukin-4 receptor alpha subunit; Th2, T-helper 2; MF, mycosis fungoides; TAM, tumor-associated macrophage; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; ILC2, group 2 innate lymphoid cell; STAT4, signal transducer and activator of transcription 4; TME, tumor microenvironment; CLL, chronic lymphocytic leukemia; Dpl, dupilumab treatment; EDHM, eosinophilic dermatosis of hematologic malignancy; MM, multiple myeloma; MTX, methotrexate; SS, Sézary syndrome. 
All authors give their consent to the publication of this manuscript. Data is available on request from the authors. Not applicable. This study was supported by the Guangzhou Science and Technology Program Basic Research Program - City School (Hospital) Jointly Funded Project - Yat-Sen Youth Medical Talent Plan ; National Nature Science Foundation of China . None. | Other | biomedical | en | 0.999996 |
PMC11697550 | The incidence of hepatocellular carcinoma (HCC), the predominant form of primary liver cancer, is increasing more rapidly than that of any other cancer in the United States. 1 HCC is frequently diagnosed at advanced stages, making surgical resection or liver transplantation for curative purposes challenging. 2 , 3 Recent advancements in first-line treatments for advanced HCC involve combination therapies that include immune checkpoint blockade (ICB), specifically anti-programmed cell death protein/ligand 1 (PD-1/PD-L1) antibodies, combined with anti-angiogenics. 4 Despite these advances, the response rates to these therapies hover around 15%–20%, with median survival for patients with advanced unresectable HCC ranging from 1 to 2 years. 5 For patients who respond, tumor-infiltrating lymphocytes (TILs) correlate strongly with better outcomes. Patients with high TILs in their tumors typically experience better responses to the combination of ICB and anti-angiogenic therapy, leading to increased overall survival rates. 6 , 7 , 8 , 9 However, HCC tumors often lack tumor-associated antigens that are strongly and consistently expressed and capable of triggering anti-tumor immune responses or being targeted through adoptive immunotherapy or vaccination. In addition, the presence of cells that suppress T cell responses in the tumor microenvironment (TME) further limit immune response against tumors. 6 In this context, developing strategies to increase immune responses within HCC tumors may open new opportunities for maximizing therapeutic potential for patients with advanced HCC. Oncolytic viruses (OVs) have emerged as a promising strategy to enhance the immunogenicity of tumors by directly lysing cancer cells and inducing an immune response. 10 , 11 OVs promote tumor cell death and increase the recruitment and activation of TILs within the TME. This dual action can potentiate the effects of standard therapies, such as ICB and anti-angiogenic agents, by modifying the immune landscape of the tumor and making it more receptive to treatment. 10 , 11 Among the attractive class of OVs, the Rhabdoviridae family, specifically the vesiculovirus genus, has garnered significant interest due to their inherent advantages over other viral vectors, including board tropism, fast replication in cancer cells, cytoplasmic replication, genetic manipulability, and low human seroprevalence. 11 Here, we show that Jurona virus (JURV), 12 a member of the vesiculovirus genus, effectively induces cytolytic activity in HCC cells in vitro and delays tumor progression in vivo . Moreover, we demonstrate that JURV is safe and elicits systemic anti-tumor immunity, inhibiting growth in both virus injected and distal tumors in a syngeneic HCC model. Furthermore, administration of JURV remodeled the TME by enhancing the activation of tumor-specific cytotoxic T cells and, when combined with ICB, improved survival in an aggressive orthotopic murine model of HCC. These results lay a foundational basis for further exploration of JURV and the combination of JURV with ICB as a novel therapeutic approach in HCC treatment. We obtained JURV from the University of Texas Medical Branch World Reference Center for Emerging Viruses and Arboviruses (Galveston, TX). It has been isolated from Haemagogus sp. and a human in northern Brazil. 12 A laboratory-adapted viral clone of JURV was generated using sequential plaque purifications in Vero cells . 
RNA sequencing was applied to confirm the full-length JURV genome (10,993 bp) as described previously. 13 Analysis of the genome of JURV showed an identical genome organization as that observed in Vesicular stomatitis virus (VSV) and Morreton virus (MORV) , two other members of the Rhabdoviridae family. 5 Infectious JURV was recovered from a full-length cDNA clone (GenScript) comprising genes encoding for the nucleoprotein (JURV-N), phosphoprotein (JURV-P), matrix protein (JURV-M), glycoprotein (JURV-G), and RNA-directed RNA polymerase L protein (JURV-L), as described in the materials and methods . We assessed the in vitro cytotoxicity of JURV in various human and murine HCC lines, including HEP3B, PLC, HuH7, HEPA 1–6, and RILWT. These cell lines were infected with JURV at multiplicities of infection (MOIs) of 0.1, 1, and 10 . With an MTS cell viability assay at 72 h post-infection, we observed a reduction in cell viability across all cell lines, with differences in response to each cell type. HEP3B and PLC cells showed a ∼30% reduction in cell viability irrespective of the MOI , while the other cell lines showed MOI-dependent cell cytotoxic effects, mostly reaching ∼30% at high MOI. Crystal violet staining was performed 3 days post-infection with an MOI of 0.1. It showed that JURV infection resulted in the substantial loss of adherent cells in most cell lines, except HuH7 , indicating that the MTS assay might have underestimated JURV’s oncolytic impact. In addition, the viral kinetic analysis revealed that JURV amplification reached around 10 6 plaque-forming units (PFU)/mL viral titers in HCC cell supernatants as early as 10 h post-infection , indicating JURV’s high infectivity and fast replication capability in these HCC cells. Figure 1 Oncolytic JURV is effective at inducing oncolysis in HCC cell lines (A) Monolayers of human HCC (HEP3B, PLC, HuH7), murine HCC (HEPA 1–6 and RILWT) were seeded at a density of 1.5 × 10 4 /well in 96-well plates and infected with JURV at an MOIs of 10, 1, or 0.1, respectively. The percentage of cell viability was determined 72 h post-infection using a colorimetric assay (MTS, Promega) and calculated as percent of noninfected control cells. The discontinued lines on the graphs indicate the cutoff percentage for resistance (>50% cell viability above the line) and sensitivity (<50% of cell viability, below the line). Data were collected from multiple replicates over three independent experiments. Bars indicate mean ± SEM. (B) Crystal violet staining. Cancer cells were plated at 5.0 × 10 5 /well in a 6-well plate and rested overnight. The following day they were infected with JURV at an MOI of 0.1. Cells were fixed and stained with crystal violet 72 h post-infection, and images were captured at 10× magnification on an Olympus IX83 Inverted Microscope System. (C) HCC cells were plated in 6-well plates at 2.0 × 10 5 /well and infected with JURV at an MOI of 0.1. Supernatants from infected cells were collected at different time points, and viral titer was determined using a TCID 50 (50% tissue culture infective dose) or PFU method on Vero cells (1.5 × 10 4 ). Data are plotted from two independent assessments of TCID 50 for each point with mean ± SEM. We assessed the safety profile of JURV using two doses (1.0 × 10 7 or 1.0 × 10 8 TCID 50 [50% tissue culture infective dose]) of JURV that are around 5- to 50-fold higher than the toxic threshold for VSV (1 × 10 6 PFU). 14 Doses were administrated to non-tumor-bearing healthy mice either intranasally (i.n.) 
or intravenously (i.v.). Our analyses, including post-infection body weight monitoring and histological examination of key organs (brain, liver, or spleen), revealed a mild weight loss (10%–15%) in the initial 3 days but no significant histopathological changes in the brain, liver, or spleen . Importantly, there were no marked differences in clinical signs such as paralysis, death, fur condition, or serum markers of drug-induced toxicity between the JURV-treated and control groups, indicating that high-dose JURV administration is not associated with severe adverse effects in this model. Figure 2 Effects of low and high doses of oncolytic JURV on body weight and hemogram in mice Non-tumor-bearing female C57BL6/J of age 6–8 weeks were administered single doses of PBS, 1 × 10 7 TCID 50 of JURV, or 1 × 10 8 TCID 50 of JURV (A) intranasally (i.n.) or (B) intravenously (i.v.). Body weight was recorded twice a week in both the i.n. and i.v. cohorts to assess drug-related toxicity. Three mice per group in each cohort (i.n. or i.v.) were sacrificed 3 days post-infection, and blood, brain, and liver were harvested to assess the short-term toxicity. Hematoxylin and eosin (H&E) staining (brain, spleen, and liver) are shown for i.n. and i.v. administration (C), where black arrows indicate that samples were within normal limits. Green arrows indicate necrosis, single cell, macrophage, sporadic. Yellow triangles indicate pigmentation increased in macrophages, red pulp, and white pulp. Next, we evaluated whether the observed in vitro cell killing capacity of JURV is associated with its capacity to induce an oncolysis-dependent tumor cell killing in vivo . We injected intratumorally (i.t.) three doses of JURV into human HEP3B xenografts. We used luciferase-tagged HEP3B cells to monitor tumor growth during the first 3 weeks of treatment. Bioluminescence imaging revealed significant tumor inhibition in JURV-treated mice compared with phosphate-buffered saline (PBS) controls, evident from the first week post-injection . We observed a significant reduction in tumor growth (>90%) in the JURV-treated group . However, while the HEP3B xenograft mice exhibited tumor reduction, we also noted some weight loss , which could be due to tumor volume reduction. In addition, NOD scid mice, being severely immunocompromised, are susceptible to viral infections, which likely contributed to this effect as well. In contrast, in immunocompetent HCC models, JURV-treated mice maintained stable body weight, further supporting the safety and tolerability of JURV in hosts with intact immune systems as described elsewhere in this manuscript. Figure 3 Assessment of JURV-mediated oncolysis in Hep3B xenografts Female NOD.Cg-Prkdcscid/J mice ( n = 6/group) were inoculated subcutaneously with HEP3B cells tagged with a luciferase reporter protein. When the average tumor volume reached 80–120 mm 3 , mice were divided into two groups and received i.t. injections with either PBS or JURV at a dose of 1.0 × 10 7 TCID 50 (days 0, 7, and 14). (A) Tumor volume was recorded twice weekly until the humane endpoint, or end of the study (day 21). HEP3B tumors treated with PBS or JURV were harvested and analyzed for changes in protein expression. (B) Volcano plot of protein expression differences in HEP3B tumors treated with PBS vs. 1 × 10 7 TCID 50 of JURV. (C) 3D pie slices of the numbers of differentially expressed proteins (DEPs) in HEP3B tumors injected with PBS vs. 1 × 10 7 TCID 50 of JURV. 
(D) Heatmap of the top 20 DEPs upregulated or downregulated in HEP3B tumors injected with PBS vs. 1.0 × 10 7 TCID 50 of JURV. DEPs were determined using the limma-voom method as described in material and methods section. A fold-change |logFC| ≥ 1 and a false discovery rate (FDR) of 0.05 were used as a cutoff. The logFC was computed using the difference between the mean of log2(JURV) and the mean of log2(PBS), that is, mean of log2(JURV) – mean of log2(PBS). (E) Graph showing top-scoring canonical pathways significantly enriched by treatment with 1.0 × 10 7 TCID 50 of JURV in the HEP3B tumors. We have previously demonstrated that the responsiveness to type I IFN production or viral kinetics in vitro by infected cancer cell lines does not always correlate with the in vivo efficacy of OVs. 13 Consequently, we conducted a proteomics analysis of tumor tissues to identify changes, specifically focusing on proteins involved in the anti-viral pathway, following intratumoral delivery of JURV in HEP3B tumors. The analysis of 2,088 proteins showed that a storm of 160 differentially expressed proteins (DEPs) were upregulated, and 170 DEPs were downregulated in the JURV-treated vs. control group tumors . Key upregulated proteins, including VIM, 15 LCP1, 16 COL6A3, 17 HSPG2, 18 NAMPT, 18 and STAT1, 19 are associated with the activation of the mTORC2/AKT pathway, whose inhibition reduces the expression of type I IFN genes (IFN-α/β) during TLR triggering . A subcutaneous syngeneic HEPA 1–6 HCC model was used to evaluate the anti-tumor efficacy of JURV. The treatment regimen included three i.t. doses of JURV within 3 weeks. A significant delay in tumor growth was observed in mice treated with JURV compared with PBS-injected control , with no adverse effects . In addition, to investigate further the potential abscopal effect and the broader systemic immune response triggered by JURV, we implanted bilateral Hepa 1–6 tumors subcutaneously on both flanks of the mice. JURV was administered i.t. exclusively to the right flank tumors. Interestingly, this treatment led to tumor regression on both the treated and untreated sides, indicating a potential systemic anti-tumor response . However, we recognize the complexity of accurately evaluating the abscopal effect. Further studies are required to thoroughly assess JURV’s ability to induce local and systemic immune responses capable of eradicating distant tumors. Figure 4 Evaluation of the anti-tumor efficacy of oncolytic JURV in an immuno-competent murine HCC model HEPA 1–6 cells were implanted into the right flanks of female C57BL6/J ( n = 7/group; Jackson Laboratory). (A) When the average tumor volume reached 80–120 mm 3 , mice were administered 50 μL i.t. injections containing PBS (vehicle) or 1 × 10 7 TCID 50 units of JURV were injected (inj.) into tumor-bearing mice at days 0, 7, and 14. Tumor volume was recorded twice weekly. Tumors were harvested at the end of the study for downstream analysis. (B) In the abscopal model (dual flanks), HEPA 1–6 cells (1 × 10 6 cells/mouse) were first subcutaneously grafted into the right flanks and were categorized as “primary” tumors. Simultaneously, we performed distant HEPA 1–6 tumor grafts (1 × 10 6 cells/mouse) into the left flanks of these mice. Mice in the dual-flank group received 50 μL i.t. injections of 1 × 10 7 TCID 50 units of JURV only on their right flanks once a week for 3 weeks. Data plotted as mean ± SD; ∗∗ p < 0.001, ∗∗∗ p < 0.0001. 
Area under the curve for tumor growth was compared by one-way ANOVA with Holm-Sidak correction for type I error. The first day of JURV or PBS injection was defined as day 0. t-SNE (t-distributed stochastic neighbor embedding) plot showing variable composition of tumor-infiltrating lymphocytes in JURV-treated tumors. Viable CD45 (12,500 events per tumor) were clustered by t-SNE. (C) Global cell density by t-SNE for each tumor treatment group. (D) Heatmap level of expression of each cellular marker across all groups. (E and F) Analysis of tumor-infiltrating immune cells following i.t. injection of oncolytic JURV in murine HCC tumors. The parent gate used is the live CD45+CD3+ population. We analyzed the changes in the immune landscape in murine HCC treated with JURV by flow cytometry. With t-SNE analysis , we observed that JURV treatment-induced tumor growth delay was associated with a significantly altered TME to favor a more robust immune response. This effect was evidenced by increased markers of activated and proliferating T cells (CD44, Ki67), cytotoxic markers (CD8, GzmB), and IFN-γ production , with PD-1 expression suggesting a potentially active immune response. Our results indicate that JURV effectively recruits cytotoxic T lymphocytes and modulates immunosuppression, a key feature of durable immunotherapy responses. Hepa 1–6 HCC tumors were subjected to transcriptional profiling to discern the gene and pathway alterations occurring following treatment with JURV, compared with controls treated with PBS. Differentially expressed genes (DEGs) were analyzed using the limma-voom method. 20 Our data showed that, among the 22,786 genes, 203 DEGs were upregulated and 464 DEGs were downregulated (2-fold change >2, p < 0.055). Several of the top 10 upregulated DEGs, Myo3a, 21 Cd209c, 22 Trim67, 23 St8sia2, 24 and Wnt5b 25 are associated with immune response pathways . Many of the enriched cellular signaling pathways, such as the B cell receptor signaling, IL-15 signaling, and phagosome formation, identified by IPA analysis are related to the activation of the host’s innate and adaptive immune responses . Furthermore, to better understand the mechanism of JURV-induced anti-tumor activity, we analyzed the DEPs and DEGs from the transcriptomic and proteomic data . In the associated DEGs/DEPs, we identified the top 30 enriched features that are significantly upregulated or downregulated in the JURV group compared with the PBS-treated control group. Among the upregulated features, S1pr3, 26 Tnpo1, 27 Psmb1, 28 Ddt, 29 Ncor2, 30 and Slc04c1 31 have been identified in inflammation, host immune response against microorganisms (virus, bacteria), and tumorigenesis. These studies reveal potential molecular mechanisms involved in the JURV-induced anti-tumor activities. Figure 5 Proteogenomic changes in murine HCC injected with oncolytic JURV (A) Volcano plot of murine HCC tumor mRNA expression differences for PBS vs. JURV (1.0 × 10 7 TCID 50 ). (B) 3D pie slices of the numbers of differentially expressed genes (DEGs) between PBS vs. JURV. (C) Heatmap of the top 20 DEGs upregulated or downregulated in PBS vs. JURV. DEGs were determined using the limma-voom. (D) Graph showing top-scoring canonical pathways significantly enriched by treatment with PBS vs. JURV. A MixOmics supervised analysis was carried out between DEPs and DEGs based on Log2 fold change values. Log2 fold change of DEG × Log2 fold change of DEP > 0 with a p value of DEG and DEP < 0.05 were considered associated DEGs/DEPs. 
(E) DEG/DEP expression heatmap of the 30 most upregulated and downregulated DEG/DEP features in PBS vs. JURV. To comprehensively evaluate the therapeutic efficacy of oncolytic JURV across diverse TMEs, we employed distinct experimental approaches tailored to each model: i.t. injections for the non-metastatic Hepa 1–6 model and intraperitoneal (i.p.) injections for the metastatic RILWT model. Building on the observed effects of JURV in delaying tumor growth, modulating the TME, and activating immune effectors critical for anti-tumor immunity, we further investigated the synergistic potential of combining i.p. administration of JURV with anti-PD-1 therapy in an orthotopic RILWT mouse model. Employing immunocompetent C57BL6/J mice with RILWT HCC cells implanted orthotopically, we administered i.p. injections of JURV (1.0 × 10 7 TCID 50 ) weekly for 3 weeks from day 7 post-tumor implantation, either alone or combined with anti-PD-1 antibody (5 mg/kg given twice weekly for 3 weeks). Kaplan-Meier survival analysis showed significant improvements in survival for mice treated with anti-PD-1 antibodies, JURV, and notably the combination of JURV and anti-PD-1 antibodies, compared with PBS-treated controls. The combination therapy notably outperformed both anti-PD-1 antibody alone and JURV alone without inducing adverse clinical events. Furthermore, RILWT-cured and treatment-naive mice were rechallenged subcutaneously with 5.0 × 10 5 RILWT cells to evaluate long-term immunity. Interestingly, all mice previously treated with JURV, anti-PD-1, or their combination successfully rejected the implanted RILWT cells, contrasting with the tumor development in treatment-naive mice. These findings suggest the induction of a robust tumor-specific immune response by the treatments, highlighting the potential of JURV in combination with anti-PD-1 therapy as a potent strategy for HCC treatment. Figure 6 JURV synergizes with checkpoint inhibitors to significantly control tumor growth and prolong survival compared with single treatments in the metastatic HCC orthotopic mouse model (A) Kaplan-Meier survival curves illustrate the probability of survival over time for RILWT tumor-bearing mice ( n = 10/group) treated with PBS (vehicle), JURV alone, anti-PD-1 antibodies alone, and the combination of JURV and anti-PD-1. Median survival times are indicated for each treatment group, with the combination therapy showing significantly extended survival compared with all other groups. (B) Body weight changes of the mice are plotted over time post-treatment, serving as an indirect measure of general health and treatment tolerability. Data points represent mean body weights with error bars indicating standard deviation. (C) Tumor growth post-rechallenge demonstrates individual tumor progression for each treatment cohort. JURV, αPD-1, and their combination notably inhibit tumor growth, which correlates with enhanced survival rates and suggests induction of tumor-specific immune responses. Statistical significance for survival rates was calculated using log rank (Mantel-Cox) tests, with the following notations: ns, not significant; ∗ p < 0.05, ∗∗ p < 0.01, ∗∗∗ p < 0.001, ∗∗∗∗ p < 0.0001. Tumor volume and body weight data were analyzed using repeated measures ANOVA with post hoc tests appropriate for multiple comparisons.
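As a companion to the survival analysis summarized in the Figure 6 legend, the snippet below sketches a generic Kaplan-Meier and log rank (Mantel-Cox) comparison. It assumes the Python lifelines package and uses invented per-mouse survival times; it illustrates the statistical approach only and is not the authors' analysis code.

```python
# Hedged illustration of the Kaplan-Meier / log-rank workflow (lifelines), with made-up data.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-mouse records: time to death (days), event (1 = died, 0 = censored), group.
df = pd.DataFrame({
    "time":  [35, 42, 40, 61, 58, 90, 88, 90],
    "event": [1, 1, 1, 1, 1, 0, 1, 0],
    "group": ["PBS", "PBS", "JURV", "JURV", "aPD-1", "aPD-1", "JURV+aPD-1", "JURV+aPD-1"],
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["time"], event_observed=sub["event"], label=name)
    print(name, "median survival:", kmf.median_survival_time_)

# Pairwise log-rank (Mantel-Cox) test, e.g. combination therapy vs. vehicle.
a = df[df["group"] == "JURV+aPD-1"]
b = df[df["group"] == "PBS"]
result = logrank_test(a["time"], b["time"],
                      event_observed_A=a["event"], event_observed_B=b["event"])
print("log-rank p-value:", result.p_value)
```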
In this study, we have demonstrated the oncolytic efficacy of JURV in targeting murine and human HCC cell lines in vitro , as well as its capability to delay tumor growth and prolong survival in murine cancer models of HCC. Our data show that JURV modulated the TME by enhancing the infiltration of cytotoxic T cells and recruiting diverse immune effectors. When used in combination with anti-PD-1 antibodies, JURV greatly enhances tumor regression and improves survival rates in orthotopic HCC models. This survival benefit remained effective when surviving mice were rechallenged with subsequent tumor implantations, strongly indicating a tumor-specific immune response. This work further investigated the safety profile of JURV, underscoring its lack of neurotoxic and hepatotoxic effects, thus making it a promising candidate for oncolytic viral therapy. This safety, combined with its effectiveness, was demonstrated in HEP3B xenografts, where JURV’s anti-tumor activity led to the complete eradication of human HCC in tumor-bearing mice. This outcome correlated with our in vitro cytotoxicity assays and the activation of IFN-associated proteins, as described in our earlier publication. 13 , 32 Moreover, our data reveal the protective effect of exogenous type I IFN, which reduces JURV-induced cell killing in a dose-dependent manner. This suggests that normal tissues with intact IFN responses are likely protected from viral infection, further supporting JURV’s tumor specificity and safety. Our comprehensive analysis of proteomic and transcriptomic data uncovered various molecular pathway alterations and changes in gene expression in HEPA 1–6 tumors following treatment with i.t. injections of JURV. Transcriptional profiling identified several DEGs associated with immune response pathways, such as Myo3a, Cd209c, Trim67, St8sia2, and Wnt5b. Enrichment analysis highlighted major immune-related signaling pathways, including B cell receptor and IL-15 signaling. By integrating transcriptomic and proteomic data, we observed the upregulation of proteins such as S1pr3, Tnpo1, and Psmb10, which are involved in inflammation, immune response, and tumorigenesis. However, we acknowledge several limitations in the models used. The subcutaneous Hepa 1–6 tumor model, while providing important insights into localized tumor-immune interactions, does not fully replicate the complex TME or metastatic behavior typical of HCC. In addition, the use of human cell lines in xenograft models presents challenges due to species-specific immune system differences, which may affect the translational relevance of our findings. To address these limitations, future studies employing orthotopic or patient-derived models are necessary to validate our observations and refine therapeutic strategies. Our results also align with the concept of locoregional oncolytic virotherapy as reported in other therapies. 33 It shows that JURV-mediated oncolysis effectively induces tumor growth delays in both primary and distant tumors, demonstrating its ability to trigger an abscopal effect that is less commonly observed in other therapies for HCC. 34 In summary, we demonstrated that JURV effectively induces cancer cell death and stimulates anti-tumor immunity in HCC. Moreover, we showed that the combination of JURV with anti-PD-1 antibodies provides additional survival benefits in preclinical HCC models. 
This study not only highlights the potential of JURV as a potent therapeutic option for HCC treatment but also introduces an innovative strategy with the potential to overcome challenges such as low immunogenicity and immunosuppression safely and potently. The addition of JURV to the field of oncolytic viral therapy promises to broaden the clinical application of OVs in cancer treatment, providing new avenues for therapy optimization. The procedure used for JURV recovery was as in Lawson et al. 35 In short, BHK cells were plated in 6-well plates at a density of 5 × 10 5 cells/well. The cells were infected at an MOI of 10 with a vaccinia virus that encodes T7 polymerase. Following a 1 h incubation, excess vaccinia was removed and cells were transfected with 2 μg pJURV, 1 μg pN, 0.8 μg pP, and 0.4 μg pL (the N, P, and L plasmids were constructed in the pCI vector) using 12.5 μL of Lipofectamine LTX transfection reagent (Life Technologies, Grand Island, NY) following the manufacturer's instructions. The cells were incubated in Opti-MEM Reduced-Serum Medium (Gibco) at 37°C for 48 h. The culture medium was then removed, filtered twice through a 0.2-μm filter, and placed on top of fresh BHK cells in a 6-well plate. After a further 48 h, the culture medium was removed, centrifuged at low speed, filtered through a 0.2-μm filter, titrated on fresh Vero cells, and stored at −80°C. This study used a panel of three human HCC cell lines (HEP3B, PLC, HuH7) and two murine HCC cell lines: HEPA 1–6 and RILWT (RRID: CVCL_B7TK). We also used several murine solid tumor cell lines, including colon carcinoma, skin melanoma, and prostate cancer cells. All cell lines were cultured at 37°C with 5% CO 2 in medium supplemented with antibiotics (100 μg/mL penicillin and 100 μg/mL streptomycin). HEP3B, PLC, and HuH7 were maintained in Dulbecco's modified Eagle's medium (DMEM) with 10% fetal bovine serum (FBS). We maintained HEPA 1–6, RILWT, BHK-21, and Vero cells in DMEM with 10% FBS. BHK-21, Vero, HEP3B, PLC, HuH7, HEPA 1–6, CT26, and B16-F10 cells were obtained from the American Type Culture Collection (Manassas, VA). The RILWT cell line, derived from RIL-175 cells, was from Dan G. Duda, PhD, Massachusetts General Hospital, Boston, MA. Viral amplification was done by infecting confluent (∼80%) Vero cells in T-175 flasks with JURV at an MOI of 0.001. At 48 h post-infection, or when cytopathic effects were observable, supernatants of virus-infected cells were collected from the flasks. The viral stocks were purified using 10%–40% sucrose-density gradient ultracentrifugation followed by dialysis. The titer (TCID 50 ) of the rescued virus was determined by the Spearman-Kärber algorithm using serial viral dilutions in BHK-21 cells (a brief numerical sketch of this estimator is given below). For all cytotoxicity assays (96-well format), 1.5 × 10 4 HEP3B, PLC, HuH7, HEPA 1–6, or RILWT cells were infected with JURV at the indicated MOIs of 10, 1, or 0.1 in serum-free Gibco Minimum Essential Medium (Opti-MEM). Cell viability was determined using a Cell Titer 96 AQueous One Solution Cell Proliferation Assay (Promega, Madison, WI). Data were generated from six replicates from two independent experiments ± SEM. Five hundred thousand HEP3B, PLC, HuH7, HEPA 1–6, or RILWT cells were infected with oncolytic JURV in 6-well plates at an MOI of 0.1 for 1 h.
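Regarding the Spearman-Kärber titration step mentioned above, the following is a minimal, generic implementation of the estimator for an endpoint-dilution assay. The dilution scheme, well counts, and inoculum volume are invented for illustration and do not correspond to any titration reported in this study.

```python
import numpy as np


def spearman_karber_log10_endpoint(log10_dilutions, positive, total):
    """
    Spearman-Kärber estimate of the log10 dilution at which 50% of wells are infected.
    log10_dilutions: e.g. [-1, -2, ..., -8] for 10-fold serial dilutions.
    positive, total: infected wells and wells inoculated at each dilution.
    Assumes the least dilute sample scores 100% positive and the most dilute scores 0%.
    """
    x = np.asarray(log10_dilutions, dtype=float)
    p = np.asarray(positive, dtype=float) / np.asarray(total, dtype=float)
    order = np.argsort(x)          # sort towards increasing virus "dose" (less dilute)
    x, p = x[order], p[order]
    d = np.mean(np.diff(x))        # log10 spacing between dilutions (e.g. 1.0 for 10-fold)
    return x[-1] + d / 2 - d * p.sum()


# Hypothetical plate: 8 wells per 10-fold dilution, scored for cytopathic effect.
dils     = [-1, -2, -3, -4, -5, -6, -7, -8]
infected = [ 8,  8,  8,  8,  6,  2,  0,  0]
m = spearman_karber_log10_endpoint(dils, infected, [8] * len(dils))

inoculum_ml = 0.1                          # assumed inoculum volume per well
tcid50_per_ml = 10 ** (-m) / inoculum_ml
print(f"50% endpoint dilution: 10^{m:.2f}; titer ~ {tcid50_per_ml:.2e} TCID50/mL")
```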
Supernatants of virus-infected cells were removed, and cells were washed with PBS and incubated at 37°C until analysis. At 72 h after infection, cells were fixed with 5% glutaraldehyde and stained with 0.1% crystal violet to visualize the cellular morphology and remaining adherence indicative of cell viability. Pictures of representative areas were taken. Two hundred thousand HCC cells were plated in each well of a 6-well plate in 2 mL of complete DMEM. After allowing cells to rest overnight, we infected them with JURV at an MOI of 0.1 for 1 h. Supernatants of virus-infected cells were removed, cells were washed with PBS, and fresh medium was added. At 10, 24, 48, and 72 h, the supernatant was collected and stored at −80°C. Viral titers (PFU/mL) were determined with serial dilutions of the supernatant on Vero cells. Data were generated as means of two independent experiments ± SEM. The following antibodies were used for flow cytometry analysis: CD45-FITC , CD3-BUV395 , CD4-BUV737 , CD8-Percp-Cy5.5 , CD44-BV711 , CD335-PE/Dazzle594 , PD-1-PE , Ki67∗-BV605 , Granzyme B∗-APC , IFN-γ∗-BV421 , CD11b-PE-Cy7 , F4/80-BV51.00 , CD206-AF700 , I-A/I-E-BV786 , and L/D-efluor780 . Female mice C57BL/6J , BALB/cJ , and NOD.Cg-Prkdc scid /J were purchased from Jackson Laboratories at age 6–8 weeks. Male C57BL6/J mice were also obtained from Jackson Laboratories. All mice were housed at the Division of Laboratory Animal Medicine at the University of Arkansas for Medical Sciences (UAMS), which employs a full staff of veterinarians and veterinary technicians who supervised and assisted in animal care throughout the studies. All animal studies were approved by the Institutional Animal Care and Use Committee at UAMS. Female C57BL/6J mice ( n = 6 mice/group) were administered PBS, a moderately high viral dose (1.0 × 10 7 TCID 50 ), or a high viral dose (1.0 × 10 8 TCID 50 ) i.n. (25 μL in each nostril) or i.v. (50 μL/mouse). Body weight, temperature, behavior, and clinical signs were monitored by a board-certified veterinarian at least three times a week to detect any signs of toxicity. At 3 days post-infection, three mice per group were sacrificed, and blood and animal tissues (brain, liver, and spleen) were collected and subjected to hematoxylin and eosin staining to assess short-term toxicity and viral biodistribution. The remaining mice were monitored for 30 days. Female NOD.Cg- Prkdc scid /J mice were subcutaneously inoculated with HEP3B cells expressing a firefly luciferase reporter gene on the right flanks ( n = 6–7/group). When the average tumor volume reached 80–120 mm 3 , mice were administered 50 μL i.t. injections of JURV (1.0 × 10 7 TCID 50 ) or 50 μL of PBS (controls) once weekly for 3 weeks. Tumor volume was measured twice weekly until the end of the study (day 21), or the humane endpoint as described above. We also recorded mouse body weight and clinical observations twice per week. Tumor-bearing (HEP3B) mice were anesthetized with isoflurane and imaged once a week (days 0, 7, and 14) with an IVIS Xenogen imaging system to assess virus-induced changes in tumor growth. Anesthesia was induced in an induction chamber (2%–5% isoflurane), after which the mice were placed in the imaging instrument and fitted with a nose cone connected to a vaporizer to maintain the isoflurane concentration (0.5%–2%) during the procedure. This range of concentrations produces a level of anesthesia that prevents animal movement during scanning. 
If the respiratory rate accelerates or slows, the isoflurane concentration is increased or decreased accordingly. We used a heated animal bed, heating pads, and, if necessary, a heating lamp to ensure that body temperature was maintained both before imaging and during the procedure. Each mouse received an i.p. injection of D-luciferin. Anesthetized mice were placed into the IVIS Xenogen imaging system on their stomachs. Imaging of each group of mice took less than 10 min. This was a non-invasive imaging procedure, and no restraints were needed. To evaluate the in vivo therapeutic efficacy of oncolytic JURV in a syngeneic mouse HCC model, we injected 1 × 10 6 HEPA 1–6 cells in 100 μL of cold RPMI into the right flanks of immunocompetent female C57BL6/J mice ( n = 7–8/group; Jackson Laboratory) using 1 mL syringes. Mice were monitored weekly for palpable tumors or any changes in appearance or behavior. When average tumors reached a treatable size (80–120 mm³), mice were randomized into the respective study groups: PBS (controls) and JURV. Dosing began within 24 h of randomization. Depending on the treatment regimen, mice were administered 50 μL i.t. injections of either PBS or JURV (1 × 10 7 TCID 50 units) on days 0, 7, and 14. To establish syngeneic bilateral HCC tumors (dual flanks), in additional groups of mice, HEPA 1–6 cells (1 × 10 6 cells/mouse) were first subcutaneously grafted into the right flanks (resulting in tumors at ∼14 days) and categorized as "primary" tumors. Simultaneously, we performed distant HEPA 1–6 tumor graft injections (1 × 10 6 cells/mouse) into the left flanks of these mice. Mice in the dual-flank groups only received 50 μL i.t. injections of 1 × 10 7 TCID 50 units of JURV on their right flanks once a week for 3 weeks. Tumor volume and body weight were measured twice weekly using a digital caliper and balance following randomization and initiation of treatment. Tumor volume was calculated as (longest diameter × shortest diameter²)/2 (see the short calculation sketch below). During the first week of treatment and after each injection, mice were monitored daily for signs of recovery for up to 72 h. Mice were euthanized when body weight loss exceeded 20%, when tumor size was larger than 2,000 mm³, or for adverse effects of treatment. Mice were sacrificed 28 days following the first JURV dose administration, at which time tumors and blood were collected for downstream analysis. To evaluate the in vivo therapeutic efficacy of oncolytic JURV in a syngeneic orthotopic HCC mouse model, 1.0 × 10 6 luciferase-expressing RILWT cells were surgically implanted into one of the liver lobes of immunocompetent C57BL/6J mice. Fourteen days after tumor implantation, mice were randomized ( n = 10/group) into treatment groups. To determine their safety and efficacy, JURV (1.0 × 10 7 TCID 50 ) and/or anti-mPD-1 antibodies were administered i.p. Tumor size was measured by bioluminescent imaging 14 days after tumor implantation for animal randomization and then once weekly for 60–90 days. Body weight was measured twice weekly. During the first week of treatment and after each injection, mice were monitored daily for signs of recovery for up to 72 h. Mice were euthanized when body weight loss exceeded 20% or for adverse effects of treatment. Mortality during the survival study was assessed using the log rank test to compare the differences in Kaplan-Meier survival curves. RILWT-cured or treatment-naive C57BL/6J mice were rechallenged by subcutaneously inoculating 5.0 × 10 5 RILWT cells.
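For reference, the caliper formula quoted above is the common modified-ellipsoid estimate; the one-line helper below, with arbitrary example measurements, simply restates it.

```python
def tumor_volume_mm3(longest_mm: float, shortest_mm: float) -> float:
    """Modified ellipsoid formula used above: V = (L x W^2) / 2, in mm^3."""
    return longest_mm * shortest_mm ** 2 / 2


# Example: a 14 mm x 10 mm tumor is ~700 mm^3, well below the 2,000 mm^3 euthanasia limit.
print(tumor_volume_mm3(14, 10))
```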
Tumor growth was monitored for 30 days post-implantation. Hepa 1–6 tumors ( n = 3 samples/group) were excised and dissociated on day 18, 3 days after the last JURV injection, using a mouse tumor dissociation kit (Miltenyi, cat. no. 130-096-730) with a gentleMACS Octo Dissociator (Miltenyi) according to the manufacturer’s protocol. CD45 + cells were isolated with mouse CD45 (TIL) microbeads (Miltenyi). Cells were incubated with Fixable Viability Stain 510 for 15 min at 4°C, followed by anti-Fc blocking reagent for 10 min before surface staining. Cells were stained, followed by data acquisition with a BD LSRFortessa X-20 flow cytometer. All antibodies ( Table S1 ) were used following the manufacturer’s recommendation. Fluorescence Minus One control was used for each independent experiment to establish gating. For intracellular staining of granzyme B, cells were stained using an intracellular staining kit (Miltenyi), and analysis was performed using FlowJo (TreeStar). Forward scatter and side scatter cytometry were used to exclude cell debris and doublets. Hepa 1–6 ( n = 3 samples/group) FFPE scrolls were processed for DNA and RNA extraction using a Quick-DNA/RNA FFPE Miniprep Kit with on-column DNase digestion for the RNA preps . RNA was assessed for mass concentration using the Qubit RNA Broad Range Assay Kit with a Qubit 4 fluorometer . RNA quality was assessed with a Standard Sensitivity RNA Analysis Kit on a Fragment Analyzer System . Sequencing libraries were prepared using TruSeq Stranded Total RNA Library Prep Gold . RNA DV200 scores were used to determine fragmentation times. Libraries were assessed for mass concentration using a Qubit 1X dsDNA HS Assay Kit with a Qubit 4 fluorometer . Library fragment size was assessed with a High Sensitivity NGS Fragment Analysis Kit on a Fragment Analyzer System . Libraries were functionally validated with a KAPA Universal Library Quantification Kit . Sequencing was performed to generate paired-end reads (2 × 100 bp) with a 200-cycle S1 flow cell on a NovaSeq 6000 sequencing system (Illumina). We examined the mRNA and protein expression profiles of Hepa 1–6 tumors treated with PBS, JURV, anti-PD-1, or JURV + anti-PD-1. Three replicates were used to analyze each of the untreated (PBS) and treated groups. The tumor samples were sequenced on an NGS platform. The files containing the sequencing reads (FASTQ) were then tested for quality control using MultiQC. 36 The Cutadapt tool trims the Illumina adapter and low-quality bases at the end. After the quality control, the reads were aligned to a mouse reference genome (mm10/GRCm38) with the HISAT2 aligner, 37 followed by counting reads mapped to RefSeq genes with feature counts. We generated the count matrix from the sequence reads using HTSeq-count. 38 Genes with low counts across the samples affect the false discovery rate, thus reducing the power to detect DEGs; thus, before identifying DEGs, we filtered out genes with low expression utilizing a module in the limma-voom tool. 39 Then, we normalized the counts by using TMM normalization, 40 a weighted trimmed mean of the log expression proportions used to scale the counts of the samples. Finally, we fitted a linear model in limma to determine DEGs and expressed data as mean ± standard error of the mean. All p values were corrected for multiple comparisons using Benjamini-Hochberg FDR adjustment. After identifying DEGs, enriched pathways were performed using the Ingenuity Pathway Analyses (IPA) tool to gain biological insights. 
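The differential-expression workflow described above (low-count filtering, TMM normalization, limma-voom linear models, Benjamini-Hochberg FDR) is an R pipeline; the fragment below is only a simplified Python analogue of the downstream filtering and multiple-testing logic, run on a hypothetical count matrix to make the thresholds concrete. It substitutes a plain per-gene t-test and CPM scaling for limma-voom and TMM, so it should be read as a sketch rather than a reimplementation.

```python
# Simplified, illustrative Python analogue of the downstream steps described above
# (low-count filtering, library-size normalization, per-gene testing, Benjamini-Hochberg FDR).
# The actual analysis used HISAT2/featureCounts and limma-voom with TMM normalization in R;
# the count matrix and group labels below are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
counts = rng.negative_binomial(5, 0.3, size=(500, 6))      # 500 genes x 6 samples (fake data)
groups = np.array(["PBS", "PBS", "PBS", "JURV", "JURV", "JURV"])

# 1) Drop genes with consistently low counts (crude stand-in for limma/edgeR filtering).
keep = (counts >= 10).sum(axis=1) >= 3
counts = counts[keep]

# 2) Library-size normalization to log2 counts-per-million (TMM is more robust; CPM shown for brevity).
lib_size = counts.sum(axis=0)
logcpm = np.log2((counts + 0.5) / lib_size * 1e6)

# 3) Per-gene two-group test and log2 fold change (mean log2 JURV minus mean log2 PBS),
#    then Benjamini-Hochberg adjusted p values.
jurv, pbs = logcpm[:, groups == "JURV"], logcpm[:, groups == "PBS"]
logfc = jurv.mean(axis=1) - pbs.mean(axis=1)
pvals = stats.ttest_ind(jurv, pbs, axis=1).pvalue
_, fdr, _, _ = multipletests(pvals, method="fdr_bh")

deg = (np.abs(logfc) >= 1) & (fdr < 0.05)
print(f"{deg.sum()} putative DEGs at |log2FC| >= 1 and FDR < 0.05")
```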
The statistical difference between groups was assessed using the nonparametric Mann-Whitney U test R module. The limma-normalized transcript expression levels and the normalized protein intensities were integrated using two independent methods. Firstly, the mixOmics package (Omics Data Integration Project R package, version 6.1.1) was implemented to generate heatmaps of the associated DEPs/DEGs as described previously. 41 Secondly, the MOGSA package was used to generate heatmaps of the top 30 upregulated or downregulated DEPs/DEGs between the various groups. 42 All numerical variables were summarized using mean ± standard error. A one-way ANOVA model assessed the association of the numerical variable to an experiment factor. Post hoc means were compared between experiment groups after adjusting for multiple comparisons using Turkey’s method. Sequencing data were analyzed after controlling for false discovery rate using a Benjamini-Hochberg method. Time-to-event data were analyzed using Kaplan-Meier curves and compared between groups using a log rank test. Paired comparisons were conducted using paired t tests and/or Wilcoxon signed rank tests. Statistical analyses were performed using GraphPad Prism . p values <0.05 were considered statistically significant. The RNA sequencing data are freely available via GEO GSE199131 , and the proteomics data are available via ProteomeXchange with the identifier PXD035806 . We thank the personnel of the DNA Damage and Toxicology, Proteomics, Genomics, and Bioinformatic Cores at the University of Arkansas for Medical Sciences for their assistance during these studies. We also thank Dr. Musa Gabere for his assistance in analyzing the proteomic data. This work was supported by the 10.13039/100000002 National Institutes of Health (NIH) through a 10.13039/100000054 National Cancer Institute (NCI) grant , an NIH New Innovator Award , a grant from the 10.13039/100000043 American Association for Cancer Research (AACR) to B.M.N., the Winthrop P. Rockefeller Cancer Institute and the Barton Pilot Award program of UAMS College of Medicine to B.M.N. The UAMS Bioinformatics Core Facility is supported by the Winthrop P. Rockefeller Cancer Institute and NIH/ 10.13039/100000057 NIGMS grant P20GM121293 . This research is supported in part by a seed grant from the Vice Chancellor of Research & Innovation at UAMS . The IDeA National Resource for Quantitative Proteomics is supported by 10.13039/100000057 NIGMS grant R24GM137786 . Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. M.Z.T., A.B., M.J.B., M.J.C., and B.M.N. contributed to the study concept and design, data acquisition, data analysis, data interpretation, and manuscript drafting. M.Z.T., Y.Z., M.T., C.S.S., J.C.C., C.D., O.B., A.L.G., R.S.S., M.E.F.-Z., C.Y.C., D.G.D., B.M., O.M., N.M.E., S.R.P., J.Y., and T.J.K. contributed to data acquisition, data analysis, data interpretation, drafting, and critical revision of the manuscript. C.L.W., D.A., A.G., and S.D.B. contributed to bioinformatic analysis. All authors approved the final, submitted version of the manuscript. The authors declare no competing interests. | Other | biomedical | en | 0.999999 |
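The transcript-protein association rule used in the integration above (concordant sign of the log2 fold changes with p < 0.05 at both the DEG and DEP level) can be expressed in a few lines. The data frames below are hypothetical and only illustrate the filter; the published analysis used the mixOmics and MOGSA R packages.

```python
# Illustrative filter for "associated DEGs/DEPs": concordant log2 fold-change sign and
# p < 0.05 at both the transcript and the protein level. Example values are invented.
import pandas as pd

rna = pd.DataFrame({"gene": ["S1pr3", "Tnpo1", "Ddt", "Ncor2"],
                    "logFC_rna": [1.4, 0.9, -1.2, 0.3],
                    "p_rna": [0.01, 0.03, 0.02, 0.40]})
prot = pd.DataFrame({"gene": ["S1pr3", "Tnpo1", "Ddt", "Ncor2"],
                     "logFC_prot": [0.8, 1.1, -0.7, -0.5],
                     "p_prot": [0.04, 0.01, 0.03, 0.02]})

merged = rna.merge(prot, on="gene")
associated = merged[
    (merged["logFC_rna"] * merged["logFC_prot"] > 0)   # same direction of change
    & (merged["p_rna"] < 0.05)
    & (merged["p_prot"] < 0.05)
]
print(associated[["gene", "logFC_rna", "logFC_prot"]])
```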
PMC11697552 | Aluminium (Al) does not occur naturally, it is rather found with other elements and third commonest element that exists, next to silicon (Si) and oxygen (O) and occupies about 7.3 % of the earth's crust . It exists as fluoride, oxides, double silicates, and basic sulphates. For the most of its existence, Al and related compounds have as well been the main building material used by the automotive and aviation sectors owing to their high thermal and electrical conductivity, good appearance, low density, cost-effectiveness and light weight. Other usage of aluminium includes lithographic plates, domestic and office furniture, road barriers and signs, sporting goods, machined components, high pressure gas cylinders and ladders and access equipment. Usefulness of aluminium is hindered by corrosion. It corrodes by reacting with the environment [ , , ]. Corrosion is the process by which materials deteriorate because of reactions with their environment. It is often seen as the degradation or irreversible destruction of the surface of metal due to chemical reactions involved in the translation of pure metal to a more chemically unchanging form (such as hydroxides, oxides, sulphides) in a corrosion-prone environment. Corrosion-prone setting may be of solid, liquid or gas form. Corrosion of metals is quite a complex and a worldwide phenomenon [ , , , ]. The settings are called electrolytes, while transference of ions (anions and cations) forms two reactions. In a situation of two different metals in an electrolyte, the less noble metals perform as anode and become corroded while the more noble metal perform as cathode and become protected. In a conducting solution, zinc tends to corrode while copper is more likely to remain protected. This is due to the electrochemical series; zinc has a lower electrode potential, making it more susceptible to oxidation and corrosion, while copper has a higher electrode potential and is less prone to corrosion . The high tendency of aluminium and its alloys to resist corrosion is due to the development of a compact, adherent inert oxide film, which is amphiprotic and dissolves to a great extent on exposure of the metal to alkaline or acids solutions. The deterioration of aluminium and its compounds in aqueous solutions comes with substantial cost implication. It is therefore essential to introduce inhibitors to shield the metal from corrosion. Many organic compounds are deployed as corrosion inhibitors for aluminium and its compounds in alkaline and acidic media. The inhibitive performance of these compounds hinge on the chemical composition of the inhibitor, the metal's surface charge, and the type of contact between the molecules of the inhibitor and the surface of the metal. Often, inhibitors perform by sticking to the surface of the metal and creating a coating that shields it. Typically, inhibitors are dispersed from a solution; a portion are part of the preparation of protective coatings [ , , , , ]. Majority of the organic materials deployed as inhibitors are very costly and naturally toxic. Hence, there is need to find non-toxic, eco-friendly, natural and low-cost inhibitors for shielding of alloys and metals from corrosion in aqueous solutions. A viable substitute to these organic compounds has been found to be expired drugs, since they possess the above desirable properties, and have been found to adsorb on metallic surfaces. 
They create layers and precipitates on the surface of the metal, leading to the obstruction of anodic and cathodic sites. Some of the drugs that have been successfully applied include atenolol , cimetidine and antithyroid drugs , among others. The current study intends to advance the use of expired drugs as corrosion inhibitors by applying danacid for this purpose. Danacid tablet is a compound of magnesium trisilicate which is used for the management of hyperacidity, heartburn, dyspepsia, peptic ulcer disease and reflux esophagitis. Owing to its adsorptive and eco-friendly properties, expired danacid has been identified as a good candidate for corrosion inhibition. In our previous publication, danacid was applied as a corrosion inhibitor of aluminium in sulphuric acid medium . In the present study, it is deployed as a corrosion inhibitor of aluminium in HCl media with the aid of electrochemical impedance spectroscopy (EIS), potentiodynamic polarization (PDP) and quantum chemical computations, together with modelling and optimization using an artificial neural network (ANN) and response surface methodology (RSM). In this work, varied concentrations of the inhibitor were prepared: ten grams of the expired drug was mixed with 1 L of HCl solution, and inhibitor solutions of 0.1–0.9 g/L were prepared from this stock solution [ , , ]. Aluminium metal of dimension 3 cm × 3 cm was cut into coupons. The aluminium has the composition: Mg (0.03 %), Al (99.3 %), Fe (0.02 %), Cu (0.03 %), Zn (0.07 %), V (0.04 %), Ti (0.12 %), Si (0.25 %), Mn (0.14 %). The metal preparation procedure had been reported . Chemical analysis of the expired danacid was done with GC-MS as well as FTIR spectroscopy (Cary 630 model from Agilent Technologies), as previously reported . The thermometry of the process had been reported . The reaction number (RN) and the inhibitor efficiency (IE) were respectively obtained with Equations (1) and (2) : (1) $RN = \frac{T_m - T_i}{t}$ (2) $IE(\%) = \left(1 - \frac{RN_{inh}}{RN_{uninh}}\right) \times 100$ The gravimetric method had previously been reported. The corrosion rate (CR), weight loss (Δw), surface coverage and IE were respectively computed through the application of Equations (3) , (4) , (5) , (6) , as previously reported [ , , , , , , , , ]. (3) $CR = \frac{w_i - w_f}{At}$ (4) $\Delta w = w_i - w_f$ (5) $\theta = \frac{\omega_0 - \omega_1}{\omega_0}$ (6) $IE(\%) = \frac{\omega_0 - \omega_1}{\omega_0} \times 100$ where ω 1 and ω 0 correspondingly represent the weight loss values in the danacid–HCl medium and the HCl-only medium, w f and w i represent the final and initial weights of the metal, A represents the entire specimen area (cm 2 ), t represents the immersion time (h), and θ designates the degree of surface coverage. The linearized form of the Arrhenius model was deployed to estimate the activation energy of the inhibition process, as shown by Equation (7) . (7) $\ln(CR) = \ln A - \left(\frac{E_a}{R}\right)\frac{1}{T}$ where CR, E a , A, T and R respectively denote the corrosion rate, activation energy, frequency factor, temperature and gas constant. By denoting the metal's corrosion rates at T 2 and T 1 as CR 2 and CR 1 , Equation (8) is obtained . (8) $\log\left(\frac{CR_2}{CR_1}\right) = \frac{E_a}{2.303R}\left(\frac{1}{T_1} - \frac{1}{T_2}\right)$ As previously reported , Q ads (kJ mol −1 ) was calculated with Equation (9) . (9) $Q_{ads} = 2.303R\left[\log\left(\frac{\theta_2}{1-\theta_2}\right) - \log\left(\frac{\theta_1}{1-\theta_1}\right)\right] \times \frac{T_2\,T_1}{T_2 - T_1}$ where R denotes the gas constant, and θ 2 and θ 1 correspondingly designate the degrees of surface coverage at T 2 and T 1 .
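As a worked illustration of Equations (3) and (5)–(9), the sketch below computes a corrosion rate, surface coverage, inhibition efficiency, an Arrhenius activation energy and a heat of adsorption from weight-loss inputs. The 9 cm² exposed area and the coupon weights are illustrative assumptions; the coverage pair fed to the Q_ads function (0.7835 at 303 K, 0.6512 at 323 K) corresponds to the 0.9 g/L, 5 h entries reported later in the results and is used purely as a numerical check, not to reproduce the tabulated E_a or Q_ads values.

```python
import numpy as np

R = 8.314  # gas constant, J mol-1 K-1

def corrosion_rate(w_i, w_f, area_cm2, time_h):
    """Eq. (3): CR = (w_i - w_f) / (A t), in g cm-2 h-1."""
    return (w_i - w_f) / (area_cm2 * time_h)

def coverage_and_ie(w_blank, w_inhibited):
    """Eqs. (5)-(6): theta and IE (%) from weight losses without/with inhibitor."""
    theta = (w_blank - w_inhibited) / w_blank
    return theta, 100.0 * theta

def activation_energy_kJ(temps_K, rates):
    """Eq. (7): the slope of ln(CR) against 1/T equals -Ea/R."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_K, float), np.log(rates), 1)
    return -slope * R / 1000.0

def q_ads_kJ(theta1, theta2, T1, T2):
    """Eq. (9): heat of adsorption from coverages theta1 (at T1) and theta2 (at T2)."""
    term = np.log10(theta2 / (1.0 - theta2)) - np.log10(theta1 / (1.0 - theta1))
    return 2.303 * R * term * (T2 * T1) / (T2 - T1) / 1000.0

print(corrosion_rate(20.097, 20.000, area_cm2=9.0, time_h=5))        # ~0.0022 g cm-2 h-1
print(coverage_and_ie(w_blank=0.097, w_inhibited=0.021))              # theta ~0.78, IE ~78 %
print(activation_energy_kJ([303, 313, 323], [0.0022, 0.0024, 0.0029]))
print(q_ads_kJ(0.7835, 0.6512, 303, 323))                             # negative -> exothermic
```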
The θ data was deployed to evaluate the usability of different isotherm models, such as the Langmuir, Flory-Huggins, Frumkin, and Temkin model as respectively depicted by Equations (10) , (11) , (12) , (13) , as previously reported . (10) C θ = 1 K a d s + C (11) log [ ( θ C ) ] = log K a d s + x log ( 1 − θ ) (12) log [ ( C ) × ( θ 1 − θ ) ] = 2.303 log K a d s + 2 α θ (13) θ = − 2.303 log K a d s 2 a − 2.303 log C 2 a where C, θ, K a d s , x, α respectively denote the concentration of the inhibitor, degree of surface coverage, adsorption equilibrium constant, size parameter and the lateral interaction term. The free energy of adsorption ( Δ G a b s ) was estimated with Equation (14) . (14) Δ G a b s = − 2.303 RT log ( 55.5 K ) PDP and EIS, three electrochemical techniques, were used in this study, as previously applied by Omotioma et al. . Temperature was maintained at 30 ± 1 °C . Molecular modeling and quantum chemical techniques were used to determine the molecular composition and adhesive properties of the inhibitor (danacid) as previously reported [ 19 , , , ]. RSM was deployed in the experiment design of the weight loss procedure. Inhibitor concentration (IC), Temperature, and time, were the variables used in the design while the response was the inhibition efficiency . Comparison of ANN and RSM was done to evaluate their analytical and valuation capabilities with statistical tools namely, root mean square error (RMSE), standard error of prediction (SEP), mean absolute error (MAE), as shown in Equations (15) , (16) , (17) ) . (15) R M S E = ( 1 n ∑ i − 1 n ( Y p r e d . , i − Y exp . , i ) 2 ) 1 / 2 (16) S E P = R M S E Y exp . a v e ∗ 100 (17) MAE = 1 n ∑ i = 1 n | ( Y exp . , i − Y p r e d . , i ) | The functional groups found in danacid had previously been reported . Table 1 depicts the effect of the concentration of the expired danacid on the RN and IE. The RN was determined by the fraction of change in temperature to the maximum time attained. The expired danacid's concentration range stretched from 0.1 to 0.9 g/L. The RN decreased with increase in inhibitor concentration (IC). The IE was evaluated as a function of reaction number (in the presence and absence of the inhibitors). The IE increased with increase in IC and reduction in RN . Table 1 Influence of danacid concentration on the IE and RN. Table 1 IC (g/L) RN ( o C/min) IE (%) 0.0 0.3409 0.1 0.1052 69.14 0.3 0.0703 79.38 0.5 0.0414 87.86 0.7 0.0156 95.41 0.9 0.0231 93.22 The loss in weight of a metal sample in its area multiplied by the duration the experimental work was carried out defines the rate of metal dissolution. The main merit of this method is that it is convenient and simple to determine corrosion conditions and little inhibitor dosage is needed for additional experiments. The disparities of protection efficiency and dissolution rate in the protected and unprotected media are presented in Table 2 . Results displayed in Table 2 shows that danacid is a possible candidate for aluminium protection in acidic environments indicating a slowdown in reaction rate in the inhibited solution in comparison to the uninhibited solution. Close inspection of Table 2 indicates that dissolution rates increased as the temperature was made to rise with the highest values obtained at 323 K in all the systems studied. The corrosion IE rises by increasing the danacids's concentration and is further evident as a result of the large part of active constituents of the inhibitor on the corroding surface of the metal. 
Conversely, protection efficiency reduced to a great extent as the temperature was increased. This is due to the fact that rise in temperature scatters the extract molecules from the aluminium surface (breaks the heterocyclic bonds found in the danacid, hence, decreasing the surface coverage) . Table 2 Results of weight loss of Al in HCl. Table 2 Time (h) Temperature (K) Inhibitor conc. (g/L) Weight loss (g) CR (g/cm 2 h) IE (%) SC (θ) 5 303 0.0 0.097 0.0022 0.3 0.050 0.0011 48.45 0.4845 0.7 0.036 0.0008 62.89 0.6289 0.9 0.021 0.0005 78.35 0.7835 313 0.0 0.109 0.0024 0.3 0.059 0.0013 45.87 0.4587 0.7 0.042 0.0009 61.47 0.6147 0.9 0.035 0.0008 67.89 0.6789 323 0.0 0.129 0.0029 0.3 0.069 0.0015 46.51 0.4651 0.7 0.051 0.0011 60.47 0.6047 0.9 0.045 0.0010 65.12 0.6512 4 303 0.0 0.091 0.0025 0.3 0.049 0.0014 46.15 0.4615 0.7 0.036 0.0010 60.44 0.6044 0.9 0.028 0.0008 69.23 0.6923 313 0.0 0.100 0.0028 0.3 0.056 0.0016 44.00 0.4400 0.7 0.041 0.0011 59.00 0.5900 0.9 0.035 0.0010 65.00 0.6500 323 0.0 0.114 0.0032 0.3 0.065 0.0018 42.98 0.4298 0.7 0.054 0.0015 52.63 0.5263 0.9 0.046 0.0013 59.65 0.5965 3 303 0.0 0.069 0.0026 0.3 0.041 0.0015 40.58 0.4058 0.7 0.034 0.0013 50.72 0.5072 0.9 0.023 0.0009 66.67 0.6667 313 0.0 0.080 0.0030 0.3 0.049 0.0018 38.75 0.3875 0.7 0.039 0.0014 51.25 0.5125 0.9 0.033 0.0012 58.75 0.5875 323 0.0 0.085 0.0031 0.3 0.053 0.0020 37.65 0.3765 0.7 0.044 0.0016 48.24 0.4824 0.9 0.036 0.0013 57.65 0.5765 The Q ads and E a for the corrosion control of Al in HCl solution with danacid are shown in Table 3 , Table 4 . The E a was computed using the Arrhenius model. The E a attained in this study is > 80 kJ/mol, which indicates that the inhibitor molecules’ adsorption on the surface of the metal conforms to the physical mechanism of adsorption . Heat of adsorption is an important thermodynamic property since it shows the straight connection with the degree of surface coverage. Negative values were recorded for the Q ads in this work as shown in Table 4 . This shows that the adsorption of the inhibitor on the surface of the metal is exothermic. Table 3 E a for the corrosion control process. Table 3 Temperature (K) CR (mg/cm 2 h) E a (kJ/mol) 303 0.889 32.81 313 0.444 323 1.306 333 2.083 343 2.806 Table 4 Q ads for the corrosion control process. Table 4 IC (g/L) Q ads (kJ/mol) 0.1 −85.822 0.3 −102.246 0.5 −105.096 0.7 −98.062 0.9 −108.677 Langmuir, Frumkin, Temkin, and Flory-Huggins models were used to examine the experimental results for controlling aluminium corrosion in HCl media with expired danacid as inhibitor as presented in Table 5 . The Langmuir, Temkin, Frumkin, and Flory-Huggins plots are correspondingly depicted in Fig. 1 , Fig. 2 , Fig. 3 , and Fig. 4 (a, b). Table 5 Adsorption parameters. Table 5 Adsorption Isotherm Tempe-rature (K) R 2 K ads ΔG ads (kJ/mol) Isotherm property Langmuir Isotherm 313 0.999 0.9649 −10.360 323 0.9807 0.360 −8.043 Temkin Isotherm 313 0.9572 9606008.00 −52.300 a −8.3929 323 0.8513 9921.7702 −35.504 −5.6034 Frumkin Isotherm 313 0.9939 0.0043 3.729 α 3.4717 323 0.9691 0.0613 −3.288 2.0616 Flory-Huggins Isotherm 313 0.8308 14.6690 −17.443 x 0.9252 323 0.6115 4.6946 −14.941 0.9097 Fig. 1 Langmuir model graph. Fig. 1 Fig. 2 Temkin model graph. Fig. 2 Fig. 3 Frumkin model graphs: (a) 313K, (b) 323K. Fig. 3 Fig. 4 Flory-Huggins model graphs: (a) 313K, (b) 323K. Fig. 4 Correlation coefficient values of 0.999 and 0.999 respectively recorded at 313 K and 323 K show that Langmuir model term gave the finest fit to the results of the experiment. 
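Since the Langmuir form of Equation (10) is a straight line of C/θ against C, the regression behind the reported correlation coefficients can be sketched in a few lines, and Equation (14) then converts the resulting K_ads into a free energy of adsorption. The snippet below uses the 313 K, 5 h coverages from Table 2 as inputs; it is only a rough demonstration of the procedure and is not expected to reproduce the exact Table 5 entries.

```python
import numpy as np

R = 8.314  # J mol-1 K-1

def langmuir_fit(conc, theta):
    """Eq. (10): C/theta = 1/K_ads + C; a linear fit of C/theta on C gives
    K_ads from the intercept, with R^2 as the goodness of fit."""
    C = np.asarray(conc, float)
    y = C / np.asarray(theta, float)
    slope, intercept = np.polyfit(C, y, 1)
    resid = y - (slope * C + intercept)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return 1.0 / intercept, r2

def delta_g_ads_kJ(K_ads, T_K):
    """Eq. (14): dG_ads = -2.303 R T log10(55.5 K_ads), in kJ/mol."""
    return -2.303 * R * T_K * np.log10(55.5 * K_ads) / 1000.0

C = [0.3, 0.7, 0.9]                    # inhibitor concentrations, g/L
theta_313 = [0.4587, 0.6147, 0.6789]   # surface coverage, Table 2 (313 K, 5 h)
K_ads, r2 = langmuir_fit(C, theta_313)
print(K_ads, r2, delta_g_ads_kJ(K_ads, 313))
```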
By comparison of equations (10) , (11) , (12) , (13) ) with the isotherm plots, the adsorption parameters K, a, x, and α were evaluated . The isotherms and their corresponding parameter values are displayed in Table 5 . PDP measurements were carried out in uninhibited and inhibited acid media containing different concentrations of expired danacid to gain further insight about the behaviour of Al in 1 M HCl. From Fig. 5 , it is clear that the presence of danacid suppressed the anodic and cathodic reactions. The Tafel polarization factors recorded from the PDP experiments such as corrosion current density (i corr ), corrosion potential (E corr ), cathodic (β c ) and anodic (β a ) Tafel slopes are all presented in Table 6 . No certain trend is seen in the corrosion potential shifts in danacid's presence; hence, the inhibitor can be regarded as mixed-type inhibitor. Fig. 5 PDP curves of Al in 1 M HCl in the uninhibited and inhibited solution. Fig. 5 Table 6 Polarization parameters. Table 6 System E corr (mV) I corr (μA/cm 2 ) b a (mVdec −1 ) b c (mVdec −1 ) sc (θ) IE (%) 1 M HCl −478.4 208.7 88.6 46.8 1 M HCl+ 0.5 g/L DNC −477.6 27.4 93.6 54.7 0.8687 86.87 1 M HCl + 0.7 g/L DNC −472.4 12.8 87.6 42.5 0.9387 93.87 As depicted in Table 6 , there are no considerable changes in the β c and β a values in the presence of the inhibitor, hence, cathodic and anodic reactions are not affected and the danacid's inhibition action is majorly due to the geometric obstructive effect implying the decrease of the reaction area on the aluminium surface by obstructing the active reaction sites, which does not affect the corrosion reaction mechanism during the inhibition process. Table 6 reveals that the addition of danacid decreased both cathodic and anodic currents and did not reveal any considerable shift in E corr , which also prove that expired danacid is a mixed-type inhibitor . The inhibition efficiency (η) of expired danacid was computed with Equation (18) . (18) η = [ 1 − i c o r r i c o r r 0 ] × 100 where i c o r r 0 and i corr respectively denote the corrosion current densities in the absence and presence of danacid. Measurements of EIS were implemented in 1 M HCl and with different danacid concentrations to give insight into the corrosion behaviour and the adsorption mechanisms. Additionally, EIS was undertaken as rapid and precise technique to evaluate corrosion rates at the aluminium/1 M HCl boundary in the presence and absence of inhibitors. Fig. 6 a shows the Nyquist plot while Fig. 6 b and c respectively shows the Bode phase angle and Bode modulus plots without and with two concentrations of the inhibitor. Table 7 gives the values of the EIS parameters calculated by fitting the EIS spectra along with the inhibition efficiency (IE, %) values computed with Equation (19) . (19) I E ( % ) = R c t i − R c t 0 R c t i × 100 where R c t i and R c t 0 are charge transfer resistances in presence and absence of inhibitor, respectively. Fig. 6 EIS spectra of Al in 1 M HCl in the uninhibited and inhibited media: (a) Nyquist (b) Bode phase angle and (c) Bode modulus plots, respectively. Fig. 6 Table 7 Impedance parameters of Al in HCl. Table 7 System R s (Ωcm 2 ) R ct (Ωcm 2 ) n C dl (Fcm 2 ) IE (%) 1 M HCl 1.723 39.7 0.88 7.124E-5 1 M HCl + 0.5 g/L DNC 1.686 532.4 0.88 7.221E-5 92.54 1 M HCl + 0.7 g/L DNC 1.716 802.7 0.89 7.172E-5 95.05 As presented in Fig. 
6 , the Nyquist plots demonstrate similarity in behaviour with and without the inhibitor signifying that their existence in 1 M HCl did not change the mechanism of the process. It is worthy of note that the Nyquist semicircles diameter in the inhibited media rise gradually, this became more noticeable upon adding cumulative amounts of danacid to the 1M HCl as demonstrated by a substantial rise in charge transfer resistance values, with associated reductions in C dl values as shown in Table 7 . The decrease in the value of C dl in danacid's presence is due to the reduction in local dielectric constant and a rise in the electrical double layer's thickness, owing to the shift taking place between inhibitor and water molecules throughout the adsorption process . From the results displayed in Table 7 , the IE increased from 92.54 to 95.05 % as the concentration of danacid was increased from 0.5 to 0.7 g/L. GC-MS results of danacid had previously been reported. The peaks show numerous heterocyclic compounds found in danacid. The components found include 1-methyl-4-(1-methyl ethyl)-Cyclohexanol, dl-Menthol, trans-13-Octadecenoic acid, Dotriacontane, 9,12-Octadecadienoic acid, 1-chloro- Hexadecane, n-Hexadecanoic acid, 1-chloro-Octadecane, eicosyl vinyl ester, buty l,2-methylpropyl ester, Carbonic acid, tetradecyl ester, cis-Vaccenic acid, 9-Octadecenoic acid, among others . The IE of expired danacid is credited to its adsorption on aluminium surface. In order to show the relationship between the inhibiting property and quantum chemical parameters of expired danacid, DFT computations were made with DFT electronic structure programme DMol 3 executed in Materials Studio Software. The HOMO and LUMO obtained from the optimized molecular structure are depicted in Fig. 7 . It has been well established that the reactivity of an inhibitor can be characterized in terms of its HOMO and LUMO. The HOMO depicts the electron donation while LUMO depicts the electron acceptance capability of the inhibitor molecules. From the frontier orbital theory's postulation, E HOMO represents a species' aptitude to donate electrons, signifying that species with higher value of E HOMO are more likely to achieve the best inhibition efficiency. On the other hand, E LUMO depicts a species' ability to accept electrons, hence, an effective inhibitor usually has low values of E LUMO . A molecule's energy gap (ΔE) is represented by the difference between E HOMO and E LUMO of the molecule. Low values of ΔE signifies that a molecule is likely to give high inhibition efficiency. The electron density, optimized structure, LUMO, HOMO, side view, top view, and front view of danacid (Mg 2 O 8 Si 3 ) molecule on the Al surface are correspondingly shown in Figures (7(a, b, c, d, e, f, g)) . The DFT parameters are depicted in Table 8 . Fig. 7 Danacid (magnesium trisilicate (Mg 2 O 8 Si 3 ) model: (a) Electron density, (b) Optimized structure, (c) LUMO, (d) HOMO, (e) Side view, (f) Top view, (g) Front view. Fig. 7 Table 8 DFT parameters. Table 8 Inhibitor (molecule) E HOMO E LUMO Energy gap (eV) Molecular mass (gM −1 ) Adsorption Energy (eV) Danacid −6.026 −3.703 2.323 260.857 −137 To assess the collaboration between the inhibitor molecules and the Al surface, the adsorption energy (E ads ) of each scheme was computed with Equation (20) . 
(20) E Interact = E total – ( E DNC + E Al ) E total is considered as the total energy of the area investigated, comprising molecules of danacid and the aluminium surface, E DNC , E Al and E total represent an individual molecule's strength on the Al slab. From Table 9 , the maximum inhibitor efficiency for the corrosion protection of Al in HCl was documented as 94.65 %, at the IC of 0.7 g/L, temperature of 313 K, and time of 4 h. The high IE value signifies that the inhibitor is fitting for checkmating corrosion of aluminium in HCl media. There is also an observed rise in the concentration of the inhibitor with rise in IE. It may be related to the nature and effect of molecular construction on their inhibition properties. Table 9 RSM result for corrosion protection of Al in HCl with danacid. Table 9 Std Run Factor 1 A: IC g/L Factor 2 B: Temperature K Factor 3 C: Time h Response IE % 2 1 0.9 303 3 81.03 1 2 0.5 303 3 68.11 7 3 0.5 323 5 66.14 9 4 0.5 313 4 87.63 12 5 0.7 323 4 84.64 14 6 0.7 313 5 94.23 5 7 0.5 303 5 79.55 10 8 0.9 313 4 92.98 20 9 0.7 313 4 94.65 3 10 0.5 323 3 56.97 11 11 0.7 303 4 88.77 19 12 0.7 313 4 94.65 18 13 0.7 313 4 94.65 17 14 0.7 313 4 94.65 15 15 0.7 313 4 94.65 16 16 0.7 313 4 94.65 13 17 0.7 313 3 91.67 4 18 0.9 323 3 77.01 6 19 0.9 303 5 83.14 8 20 0.9 323 5 79.95 Table 10 displays the ANOVA model of the inhibitor efficiency of Al in HCl. The F-value of 47.59 indicates the model is vital since there is 1 out of 100 probabilities that an F-value of up to 47.59 could ensue owing to noise. P-values <0.0500 specify model components are vital. In this study C 2 , B 2 , A 2 , AC, AB, A, B, C are vital model terms. The predicted R 2 of 0.8447 is in decent vicinity with the Adjusted R 2 of 0.9567; i.e. the variance is < 0.2. Adequate Precision ratio of 22.767 designates a satisfactory signal. The model obtained can be deployed to explore the design space [ , , , , , , , , ]. Table 10 ANOVA of Quadratic model. Table 10 Source Sum of Squares df Mean Square F-value p-value Model 2284.92 9 253.88 47.59 <0.0001 significant A-Inhibitor concentration 310.36 1 310.36 58.18 <0.0001 B-Temperature 128.81 1 128.81 24.15 0.0006 C-Time 79.64 1 79.64 14.93 0.0031 AB 37.58 1 37.58 7.05 0.0241 AC 30.26 1 30.26 5.67 0.0385 BC 0.2592 1 0.2592 0.0486 0.8300 A 2 126.09 1 126.09 23.64 0.0007 B 2 295.80 1 295.80 55.45 <0.0001 C 2 46.82 1 46.82 8.78 0.0142 Residual 53.35 10 5.33 Lack of Fit 53.35 5 10.67 Pure Error 0.0000 5 0.0000 Cor Total 2338.27 19 Std. Dev. 2.31 R 2 0.9772 Mean 84.99 Adjusted R 2 0.9567 C.V. % 2.72 Predicted R 2 0.8447 Adeq Precision 22.7673 The coded mathematical model for this study is given as Equation (21) . It is valuable for classifying the comparative impact of the factors by comparison of the factor coefficients. The model in terms of coded factors could be deployed to make estimates about the response for specified levels of each factor . (21) IE = + 95.62 + 5.57 A − 3.59 B + 2.82 C + 2.17 AB − 1.95 AC − 6.77 A 2 − 10.37 B 2 − 4.13 C 2 The equation in terms of actual factors is presented as Equation (22) (22) IE = − 9944.83889 − 35.46102 ∗ Inhibitor concentration + 63.87921 ∗ Temperature + 48.27441 ∗ Time + 1.08375 ∗ Inhibitor concentration ∗ Temperature − 9.72500 ∗ Inhibitor concentration ∗ Time − 169.28409 ∗ Inhibitor concentration 2 − 0.103714 ∗ Temperature 2 − 4.12636 ∗ Time 2 Fig. 8 (a–d) show the graphical results of IEs of danacid for controlling aluminium corrosion in HCl media. 
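The efficiency definitions of Equations (18) and (19) and the coded RSM model of Equation (21) are all closed-form expressions that can be evaluated directly. The sketch below does so, using the 0.7 g/L rows of Table 6 and Table 7 and the design centre point (0.7 g/L, 313 K, 4 h, i.e. coded A = B = C = 0) as inputs; it is simply a numerical cross-check of those equations.

```python
def ie_from_icorr(i_corr_blank, i_corr_inh):
    """Eq. (18): PDP inhibition efficiency from corrosion current densities."""
    return 100.0 * (1.0 - i_corr_inh / i_corr_blank)

def ie_from_rct(r_ct_blank, r_ct_inh):
    """Eq. (19): EIS inhibition efficiency from charge-transfer resistances."""
    return 100.0 * (r_ct_inh - r_ct_blank) / r_ct_inh

def ie_coded_rsm(A, B, C):
    """Eq. (21): coded quadratic RSM model; A, B, C are the coded levels of
    inhibitor concentration, temperature and time."""
    return (95.62 + 5.57 * A - 3.59 * B + 2.82 * C
            + 2.17 * A * B - 1.95 * A * C
            - 6.77 * A ** 2 - 10.37 * B ** 2 - 4.13 * C ** 2)

print(ie_from_icorr(208.7, 12.8))   # Table 6, 0.7 g/L -> about 93.9 %
print(ie_from_rct(39.7, 802.7))     # Table 7, 0.7 g/L -> about 95.1 %
print(ie_coded_rsm(0, 0, 0))        # design centre point -> 95.62 %
```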
Feasibility of the inhibitor was defined with predicted against actual IE and 3-D graphs. Fig. 8 a shows the plot of predicted versus actual IE. The points aligned on the best fitted line, signifying that the model attained successfully defined the experimental IE of danacid . Fig. 8 (a) Experimental against predicted values (b) IE against IC and temperature (c) IE against IC and time (d) IE against temperature and time. Fig. 8 The 3-D graphs reveal the combined effect of temperature, IC and time on the IE of danacid. At the different optimum values of the parameters, the IE of danacid for the corrosion protection of aluminium in HCl medium was recorded as 93.57 %. The high IE value recorded confirms the appropriateness of the inhibitor. The performance graph for controlling Al corrosion in HCl is displayed in Fig. 9 , showing 4 epochs with the finest validation value of 5.3946 at the 1st epoch signifying that ANN is fitting for calculating the IE of danacid . The regression graphs are shown in Fig. 10 (a–d). These show that ANN is an appropriate predictive tool for the inhibition efficiencies obtained. Fig. 9 Performance plot. Fig. 9 Fig. 10 Regression plots showing: (a) training, (b) validation, (c) testing, and (d) overall plot. Fig. 10 The evaluation of ANN and RSM results is displayed in Table 11 . A comparative analysis was implemented to establish the predictive abilities of ANN and RSM using some statistical models as presented in Table 12 . From the statistical investigation performed, ANN gave improved estimate compared to RSM, as authenticated by the lesser values of error parameters such as MAE, RMSE, and SEP . Table 11 ANN and RSM predicted results for corrosion protection of Al in HCl. Table 11 Std Run Factor 1 A: IC g/L Factor 2 B: Temperature K Factor 3 C: Time h Response 1 IE % Experimental RSM values ANN values 2 1 0.9 303 3 81.03 80.29 80.8682 1 2 0.5 303 3 68.11 69.59 68.7234 7 3 0.5 323 5 66.14 67.61 66.8716 9 4 0.5 313 4 87.63 83.28 87.0722 12 5 0.7 323 4 84.64 81.66 84.2616 14 6 0.7 313 5 94.23 94.32 93.2762 5 7 0.5 303 5 79.55 79.48 79.477 10 8 0.9 313 4 92.98 94.42 92.1012 20 9 0.7 313 4 94.65 95.62 93.671 3 10 0.5 323 3 56.97 58.44 58.2518 11 11 0.7 303 4 88.77 88.84 88.1438 19 12 0.7 313 4 94.65 95.62 93.671 18 13 0.7 313 4 94.65 95.62 93.671 17 14 0.7 313 4 94.65 95.62 93.671 15 15 0.7 313 4 94.65 95.62 93.671 16 16 0.7 313 4 94.65 95.62 93.671 13 17 0.7 313 3 91.67 88.67 90.8698 4 18 0.9 323 3 77.01 77.80 77.0894 6 19 0.9 303 5 83.14 82.40 82.8516 8 20 0.9 323 5 79.95 79.20 79.853 Table 12 Comparison of ANN and RSM models. Table 12 Parameters RSM ANN RMSE 1.6330 0.7617 SEP 1.9215 0.8963 MAE 1.2630 0.6698 The results depicted in Table 13 shows the optimum concentration of the inhibitor, temperature, time and IE of danacid. Value of optimum IE was obtained as 93.57 %, which indicates that the inhibitor is appropriate for controlling aluminium corrosion in HCl media. The result obtained here was validated with a percentage deviation of 0.68 %. Table 13 Optimum values. Table 13 Media Optimum IC (g/L). Optimum temperature (K) Optimum time (h) Optimum IE (%) Al in HCl with danacid 0.67 313.36 3.80 93.57 From the results obtained in this work, expired danacid revealed excellent IE for aluminium in 1 M HCl medium. The IE of danacid increased with increase in its concentration and got to 78.35 % at 0.9 g/L, from the gravimetric study. The IE was however found to reduce with increase in temperature. 
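The RMSE, SEP and MAE definitions of Equations (15)–(17) used for the ANN/RSM comparison are straightforward to compute. The sketch below evaluates them on the first three runs of Table 11 only, so the printed values illustrate the calculation rather than reproduce Table 12.

```python
import numpy as np

def error_metrics(y_exp, y_pred):
    """Eqs. (15)-(17): RMSE, SEP (%) and MAE of predicted vs experimental IE."""
    y_exp = np.asarray(y_exp, float)
    y_pred = np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_pred - y_exp) ** 2))
    sep = 100.0 * rmse / y_exp.mean()
    mae = np.mean(np.abs(y_exp - y_pred))
    return rmse, sep, mae

ie_exp = [81.03, 68.11, 66.14]        # first three runs of Table 11
ie_rsm = [80.29, 69.59, 67.61]
ie_ann = [80.8682, 68.7234, 66.8716]
print("RSM:", error_metrics(ie_exp, ie_rsm))
print("ANN:", error_metrics(ie_exp, ie_ann))
```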
Polarization results show that the expired danacid acts as a mixed-type inhibitor. The adsorption study demonstrates that the inhibition process followed the Langmuir isotherm. PDP measurements indicate a rise in IE with increasing inhibitor concentration, reaching 93.87 % at 0.7 g/L, while EIS showed a corresponding rise in charge transfer resistance, with the IE reaching 95.05 % at 0.7 g/L. The activation energy recorded in this work indicates that adsorption of the inhibitor molecules on the aluminium surface follows a physical adsorption mechanism. Computational calculations using density functional theory show good reactivity of expired danacid on the aluminium surface and correlate with the results obtained by electrochemical measurements. Optimization of the process parameters established the optimum points for the inhibition process, while ANN modelling predicted the inhibition efficiencies more accurately than RSM. Hence, danacid proved to be a viable inhibitor for aluminium corrosion. O.D. Onukwuli: Visualization, Supervision, Project administration. I.A. Nnanwube: Writing – review & editing, Writing – original draft, Software, Formal analysis, Data curation. F.O. Ochili: Validation, Resources, Methodology, Investigation, Conceptualization. J.I. Obibuenyi: Resources, Investigation. Data will be made available on request. This research received no funding. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Other | other | en | 0.999997
PMC11697557 | Recently, novel materials called high entropy alloys (HEAs) were developed by Yeh et al. [ , , , , ]. Two definitions were provided for HEAs. The basis of the first definition is the chemical composition and the basis of the second one is their configurational entropy. In the first definition, HEAs must contain at least five elements, where each of their concentrations varies between 5 and 35 %. In the second definition, the configurational entropies of HEAs must exceed 1.5R (R is gas constant), regardless of whether they are single-phase or multiphase at room temperature. High hardness, good wear properties and excellent corrosion resistance are observed properties of high entropy materials [ , , , , ]. Casting , powder metallurgy , cladding , spraying , mechanical alloying and welding procedures are the current methods used to produce the HEA systems that have been routinely considered in different studies. Among these, AlNiCoCrFe coatings produced via the tungsten inert gas (TIG) process represent a promising avenue for enhancing the performance of materials in corrosive environments. The TIG process, known for its ability to produce high quality welds and coatings, allows for precise control over the microstructural characteristics of the deposited alloy . However, there is abundant literature that notes that HEAs contain multiple elements as a bulk material. Furthermore, the dimensions and form of bulk ingots made using pointed methods were limited. Hence, some scientists have been trying to probe the two-dimensional production of a layer of high entropy alloy on cheap metallic substrates [ , , ]. One of the current methods is laser processing [ , , ] and another current and low-cost methods of surface hardening and alloying is the use of tungsten inert gas (TIG), which can easily provide the strong and sound bond between layer and substrate . The layer preparation of multi element alloys has seldom been considered. In situ synthesized high-entropy alloy coatings have been focused with the production approach of the multicomponent system with the TIG procedure . For instance, Jie-Hao Chen et al. have fabricated a HEA layer on steel substrate using the tungsten inert gas process in which they used Ni, Co, Cr, Mo and Al. In another experiment, Y.C. Lin had successfully blended two powder mixtures of NiCrAlCoW and NiCrAlCoSi as an initial material. The coating was produced by TIG on plain carbon steel substrate to produce an in situ synthesized layer. The TIG process is a low-cost method for producing HEA coatings, making it a more economical alternative to traditional coating methods. The importance of the HEA coatings in recent research lies in the evaluation of properties and behaviour of this kind of material in different conditions like harsh environments, especially in corrosive solutions. Furthermore, there is still a need to understand the effects of processing parameters, such as electrical current, on the microstructure, phases, and properties of HEA coatings produced by the TIG process. The AlNiCoCrFe alloy has unique combination of properties, including high corrosion resistance, good wear resistance, and high hardness. This alloy has been shown to exhibit a single-phase face-centered cubic (FCC) structure, which is beneficial for achieving high corrosion resistance and mechanical properties. Qiu et al. investigated the electrochemical properties of Al2CrFeCoxCuNiTi HEAs coatings prepared on Q235 steel by laser cladding. 
The results showed that the coating with x = 2 showed better corrosion resistance than 304 stainless steel in the H 2 SO 4 solution. The effect of Al on corrosion properties of AlxFeCoNiCuCr HEAs coatings prepared on AISI 1045 steel was reported by Ye et al. . The results indicated that AlxFeCoNiCrTi coatings prepared by laser cladding are better than 314L stainless steel in corrosion resistance, in which Al1.8FeCoNiCrTi coatings serve the best. This study aims to investigate the possibility of the production of coatings containing high entropy alloys on plain carbon steel substrate using the tungsten inert gas process under controlled conditions to achieve a suitable coating and improve surface properties and increase the corrosion resistance of the substrate. In this way, the same equimolar powder mixtures were used on AISI 1050 medium carbon steel. The effect of electrical current on the microstructure, phases and properties of the coatings during the TIG process has not yet been investigated. The substrate samples were AISI 1050 carbon steel with thickness of 20 mm, length of 400 mm and width of 20 mm. the samples were cut from a slab. The elements Aluminium, Cobalt, Chromium and Nickel were used as the principal elements in the powdered form. The powders were with purity higher than 99.5 % and particle size of approximate 60 to 120 μm. These elements were blended to prepare a base material for the layer. After blending for 8 h, 5 wt% polyvinyl alcohol was added to the mixtures as a binder. The ball milling machine was used to blend and mix the powders with 250 r/min speed. Then, a uniformed mixture was applied as a pre-layer on the surface of 20 × 40 mm 2 of AISI 1050 carbon steel. The thickness of the pre-layer was 1 mm. In the next step, samples were heated to 70 °C for 4 h to dry, so that the moisture would evaporate. Finally, surface alloying was carried out by using TIG. As a side note, it is important that the powder mixture does not contain Iron. The steel substrate provided the Iron element during surface melting. An inverter TIG welding machine (Pars-Digital PSQ 250 AC/DC) was used with a constant voltage of 220 V an electrical current of 90, 110 and 130 A to heat and melt the pre-layer and substrate. The substrate was moved by a CNC table at a speed of 4.3 mm s−1. At the same time, Argon gas was used to protect the arc. The torch moved several times on all the points of part of the surface with an overlap of 50 %. The specimens after being cut, mounted and polished were characterized by optical microscopy (OM), a scanning electron microscope , an energy dispersive X-ray spectroscope (EDS, Oxford Inca, Oxford Instruments) and X-ray diffraction (XRD, X'pert) to determine the present phases in the layer. Also, the hardness was measured by a hardness tester (Hysitron Inc., TriboScope® nanomechanical test instrument) in a Vickers unit with a load of 0.5 kg. All electrochemical measurements were performed using the ivium vertex potentiostat in 1M HCl solution at ambient temperature after 30 min immersion. A conventional three-electrode cell was used with the bare and coated samples as the working electrodes, the platinum electrode as the counter electrode and the saturated calomel electrode as the reference electrode. It should be noted that all the reported potential in this text refers to SCE. The EIS measurements were carried out by impressing a 10 mV amplitude of ac signal and a wide frequency spectrum of 100 kHz–0.01 Hz on the OCP. 
Polarisation measurements were performed at a scan rate of 1 mV/s. Fig. 1 shows the effect of the applied current on the thickness of the layer. As shown in Fig. 1 , the thickness of the layer increased with an increase in current. As the current increased from 90 to 130 A, the depth of the molten layer increased from 192 to 1862 μm. The change was considerable at the same time with the increase in current from 110 to 130 A, an increase in the depth of the layer was observed from 717 to 1862 μm. Fig. 2 a-c shows the SEM images of the section of the substrate and layer at various electrical currents. Fig. 1 Influence of the TIG current on the depth of the melt pool. Fig. 1 Fig. 2 SEM images of the TIG current on the depth of the melt pool of (a) 90, (b) 110 and (c) 130 A. Fig. 2 This thickness was not completely uniform. At various measures of current such as 90, 110, and 130 A, the average depths of the layers were about 192, 717, and 1862 μm, respectively. The SEM (BSE) images of the layer specimens ( Fig. 3 (a–c)) show the various phases that were formed in the layers. Layer, interface, and substrate are clear in these images. The chemical composition gradient in the coatings was also measured using the EDS line scans . Fig. 3 SEM microstructure of the cross-section of the coating/substrate at TIG current of (a) 90, (b) 110 and (c) 130 A. Fig. 3 Fig. 4 EDS of the cross-section of the coating produced using TIG current of (a) 90, (b) 110 and (c) 130 A. Fig. 4 At 90 A , the points showed about 9 at% Al, 10 at% Cr, 10 at% Co, 11 at% Ni and 60 at% Fe. At 110 A , the concentration of Fe was decreased, and the concentration of other elements was increased. The points displayed about 17 at% Al, 18 at% Cr, 16 at% Co, 17 at% Ni and 32 at% Fe. Unlike 110 A, at 130 A , with the increase of the electrical current, the concentration of Fe increased while the concentration of other elements decreased. The points displayed about 6 at% Al, 6 at% Cr, 7 at% Co, 7 at% Ni and 74 at% Fe. Finally, with regard to high entropy alloy definition, the results showed that high entropy alloy coating was produced at the electrical current of 110 A. SEM image of the coatings obtained at 110 A determined the stable phases in the coating. Their chemical compositions (characterised by EDS) are shown in Fig. 5 . A phase (shown by letter A) in a matrix with another phase (shown by letter B) was observed at 110 A. The average concentration of Fe at points A and B was 32 and 27 at%, respectively. Fig. 6 represents the XRD results that were obtained from the layer formed using an electrical current of 110 A. The results illustrated that at the current of 110 A, the surface layer included BCC and FCC phase structures. One of the main problems in this kind of coating is the possibility of the formation of intermetallic compounds. But a significant presence of intermetallic compounds was not observed in samples. The optical microscope (OM) images of the layer at different electrical currents are displayed in Fig. 7 a-c, which indicate a dendritic structure throughout the entire coating under different conditions. Also, no voids or cracks were observed in the entire coating. Fig. 5 BSE micrographs obtained from the cross-section of the coatings produced at a TIG current of 110 A. Fig. 5 Fig. 6 XRD spectra obtained from the coating that was produced using a TIG current of 110 A. Fig. 6 Fig. 7 OM microstructure of the cross-section of the coating/substrate at TIG currents of (a) 90, (b) 110 and (c) 130 A. Fig. 7 Fig. 
8 shows the average microhardness of the surfaces of the layer that were formed at the electrical current of 110 A. The layer coated by 110 A current had a microhardness of about 518–658 HV at the surface. There was a significant difference in microhardness between the coating and substrate. In all tests, the hardness of the substrate was in the range of 180–190 HV. Fig. 8 Average micro-hardness of the cross-section of the coating/substrate produced using a TIG current of 110 A. Fig. 8 In order to investigate the corrosion behaviour of the samples, an electrochemical impedance test was performed on the samples. Electrochemical impedance is regarded as one of the practice tests to evaluate the behaviour of the interface between coatings and solution. The Nyquist plots for the corrosion of bare and coated samples in 1M HCl at ambient temperature are depicted in Fig. 9 . As can be seen, in the HCl solution, all electrochemical Nyquist curves had the same shape, such that the compact semicircle could be clearly seen. Fig. 9 also displays that the diameters of the electrochemical Nyquist curves were visibly different from each other. The analysis of these curves was performed based on the equivalent circuit, as shown in Fig. 10 . In this circuit, R-Q is attributed to the charge transfer reaction; it has been replaced by the constant phase element due to the non-ideal dual-layer capacitor. R L is the resistance of the induction process and L represents pseudo-inductance. The results of this analysis are presented in Table 1 . Fig. 9 EIS Nyquist curves recorded on the bare and coated samples in 1M HCl solution at ambient temperature. Fig. 9 Fig. 10 Equivalent circuit model for impedance data fitting. Fig. 10 Table 1 The result of Nyquist plots for the corrosion of bare and coated samples in 1M HCl. Table 1 TIG Current Rs Rp CPE n R L L (Hcm2) (Ωcm 2 ) (Ωcm 2 ) (μSs n cm -2 ) (Ωcm 2 ) substrate 10.5 62.5 545.3 0.84 4.35 9.77 90 20.5 271.1 68.2 0.85 44.16 1.95 110 16.3 834.2 23.7 0.77 419.6 13.56 130 22.9 566.2 45.7 0.82 290.7 8.17 In order to more accurately investigate the electrochemical properties of the coatings on the substrate, an electrochemical polarisation test was performed on the samples. Fig. 11 shows the results of this test in 1M HCl solution. As can be seen, by applying the coating to the steel specimen, the polarisation curves were shifted to the left and the lower current densities. On the other hand, by increasing the current in the TIG process from 90 to 110 A, the polarisation curve first moved to the left; with a further increase in current, it moved slightly to the right. As shown in Fig. 11 , in the HCl solution, the passive layer was not formed even on the 110 A sample. Table 2 shows the results of the analysis of electrochemical polarisation curves. Fig. 11 Potentiodynamic polarisation curves of bare and coated samples in 1M HCl at ambient temperature. Fig. 11 Table 2 The results of the analysis of electrochemical polarisation curves in HCl solution. Table 2 sample i corr (μA/cm 2 ) E corr vs. SCE (mV) substrate 572.2 −489.5 90 127.2 −407 110 39.7 −450 130 63.6 −429 Surface features of the substrate and 110 A samples were examined under SEM after 48 h of immersion in HCl solution. Fig. 12 shows SEM images of substrate and 110A coated samples. Fig. 12 SEM images and EDS analysis of substrate and 110 A samples after 48 h immersion in HCl solution. Fig. 
12 The increase in the electrical current in the TIG welding process would lead to an increase in heat input, thus, increasing the depth of the molten pool. The relationship between the depth of the molten pool and the electrical current is not linear, because Eq. (1) shows the correlation between the parameters of welding voltage (E), electric current (I), electrode movement speed (V) and thermal efficiency (η) with the heat input (Q). In this case, the heat is directly related to the input current and voltage, and the increase in current causes a simultaneous increase in the voltage. In other words, in addition to directly increasing the heat, the current also increases the heat by increasing the voltage. Therefore, the increase in the depth of the molten pool was significantly greater from 110 A to 130 A compared to 90 A–110 A . (1) Q = η × E × I The relationship between the electric current and the depth of the melted layer is significant because it determines the chemical composition of the coating, especially the part of the coat composition provided by the substrate iron which melts along with the pre-coat. Also, the increase in the depth of the molten pool results in a longer solidification time, which allows time for the distribution of the alloy elements. Therefore, the forces applied to the molten weld pool in the TIG welding process, such as Buoyant forces, the Lorentz force, and Shear stress are increased, and there is sufficient time to apply them. As a result, the liquid mixing will be improved reasonably in the molten weld pool to provide the necessary conditions for creating a uniform layer. The volume of the molten weld pool plays a critical role in creating a desirable layer, especially for high entropy coatings. This is because if the molten pool is shallow the amount of iron entering the pool will not be sufficient, and the iron required to create the high entropy alloy, which should be at least 5 at%, will not be available. On the other hand, if the electric current (and consequently the depth) of the molten pool is very high, the amount of iron entering the molten pool exceeds the maximum permissible value of 35 at% and does not meet the requirement for a high entropy alloy. Therefore, it is essential to achieve optimum electric current. Contrary to the previous supposition that formation of multiple intermetallic compounds, as suggested by statistical thermodynamics and also the Boltzmann equation, indicates that increasing the number of elements in an alloy reduces the possibility of solid solution formation , it is stated that as the number of elements increases, the entropy of the system also increases. This reduction in Gibbs free energy (ΔG) variation enhances the possibility of solid solution formation. Accordingly, another requirement for the formation of high entropy alloys is the presence of at least five elements in the compound. A comparison of these conditions with the actual governing conditions of the coatings shows that the layer created under the 90A current cannot be a high entropy alloy because its iron content is greater than 60 at% . However, contrary to the expectation that at 110 A with increased electric current and depth of the molten weld pool the iron content would increase, instead the iron content decreased to 32 at% and other elements were in the range of 16–17 at% . 
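Two of the points made above lend themselves to a short numerical sketch: the heat-input relation of Equation (1) and the 5–35 at.% compositional window used to decide whether a coating qualifies as a high entropy alloy. In the snippet below the arc efficiency (0.7) and arc voltage (14 V) are assumed illustrative values, since neither is reported in the article (the 220 V quoted earlier appears to be the machine's supply voltage rather than the arc voltage); the compositions are the average EDS values given for the three currents.

```python
def heat_input_J_per_mm(eta, arc_voltage_V, current_A, travel_speed_mm_s):
    """Eq. (1) gives the arc power Q = eta*E*I; dividing by the travel speed
    expresses it as heat input per unit length of track."""
    return eta * arc_voltage_V * current_A / travel_speed_mm_s

def satisfies_hea_window(at_percent):
    """Compositional HEA definition: at least 5 principal elements, each 5-35 at.%."""
    return len(at_percent) >= 5 and all(5.0 <= x <= 35.0 for x in at_percent.values())

eds = {   # average EDS compositions of the coatings, at.%
    90:  {"Al": 9,  "Cr": 10, "Co": 10, "Ni": 11, "Fe": 60},
    110: {"Al": 17, "Cr": 18, "Co": 16, "Ni": 17, "Fe": 32},
    130: {"Al": 6,  "Cr": 6,  "Co": 7,  "Ni": 7,  "Fe": 74},
}
for current, comp in eds.items():
    q = heat_input_J_per_mm(0.7, 14.0, current, 4.3)   # eta and arc voltage assumed
    print(f"{current} A: ~{q:.0f} J/mm, HEA composition window: {satisfies_hea_window(comp)}")
```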
At 90 A, the force applied by the electric arc on the pre-coat was not probably sufficient and the powders compacted on the substrate moved forward, so that a small portion of the initial layer melted and the majority of the molten pool was formed from the iron of substrate. However, at 110 A the electric arc provided the force required to prevent the motion of the pre-shield, and in addition to the substrate, a majority of the pre-coat also melted and formed the compounds of the molten weld pool. By increasing the current to 130A and increasing the depth of the molten weld pool, the iron content increased to 73 at% and exceeded the threshold determined for the high entropy alloys. The results of the XRD on the coat created by the 110A current showed that the coat consists of two-phase structures of FCC and BCC solid solutions, BCC being the matrix and FCC the second phase structure. The XRD analysis and calculation of Bragg's law reveal the presence of a BCC phase with a lattice constant of a = 0.286 nm and an FCC phase with a lattice constant of a = 0.357 nm. Cai et al.’s results confirm obtained results because they calculated lattice constant for FCC and BCC 0.366 and 0.287, respectively. The SEM images in Fig. 5 are also in full agreement with the XRD results, as two distinct phases (one shown by A and the other by B) can be distinguished. As shown in Fig. 5 , phase A has approximately 32 at% of iron and Phase B has approximately 27 at% of iron. The phase richer in iron has a BCC phase structure while the phase with a low amount of iron has an FCC phase structure. The atomic percentage of iron in two mentioned phases is similar, which is why the color contrast between the two phases is not significantly different . However, according to the evidence obtained and the findings of Cai et al., the FCC phase is stable for iron amounts below 30 atomic percent while the BCC phase is stable for higher amounts of iron. Both of them have five major elements, and all the elements have concentrations between 5 and 35 at%, and thus are high-entropy alloys. On the other hand, previous studies show that the AlCrCoNiFe compound is prone to the formation of a solid solution. Also, in the calculations based on Eqs. (2) , (3) , (4) ) and the atomic percentages obtained from the shielding elements , the value of Ω the parameter introduced by Yang and Zhang was 1.7, and the formation of the solid solution could be predicted from the diagrams presented by Yang et al. . (2) ΔS mix = RΣX i LnX i (3) Ω=(T m .ΔS mix ) / |ΔH mix | (4) T m = ΣX i (T m ) i The main reason for the lack of intermetallic compounds is rapid solidification. Because different atomic radii in the HEA composition leads to the increase of the solid-liquid interface energy and the difficulty of the long-range diffusion of atoms in the crystal lattice, thus favouring the nucleation of solid solution and decreasing the growth rate of intermetallic compounds. Zhang et al. delicately studied the influences of laser rapid solidification on the microstructure and phase structure in the HEA coatings. They calculated the nucleation incubation time for various competing phases and indicated that the growth of intermetallic compounds will be hampered if the solidification rate is sufficiently high. As shown in Fig. 7 , the microstructure of the shield was completely dendritic. Similar microstructures have been observed in various studies . 
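Equations (2)–(4) can be evaluated directly from the measured composition of the 110 A coating. In the sketch below the pure-element melting points are standard literature values and the mixing enthalpy is a placeholder argument (in practice it would be obtained from Miedema-model binary enthalpies), so the printed Ω only illustrates the calculation that gave the reported value of 1.7; the ideal configurational entropy is written here with the conventional negative sign so that ΔS_mix comes out positive.

```python
import math

R = 8.314  # J mol-1 K-1
T_MELT = {"Al": 933, "Cr": 2180, "Fe": 1811, "Co": 1768, "Ni": 1728}  # K, literature values

def omega_parameter(x, dH_mix_kJ_per_mol):
    """Eqs. (2)-(4): dS_mix = -R*sum(x_i ln x_i), T_m = sum(x_i T_m_i),
    Omega = T_m * dS_mix / |dH_mix|."""
    dS = -R * sum(xi * math.log(xi) for xi in x.values() if xi > 0)
    Tm = sum(xi * T_MELT[el] for el, xi in x.items())
    omega = Tm * dS / abs(dH_mix_kJ_per_mol * 1000.0)
    return dS, Tm, omega

x_110A = {"Al": 0.17, "Cr": 0.18, "Co": 0.16, "Ni": 0.17, "Fe": 0.32}  # EDS, 110 A coating
dS, Tm, omega = omega_parameter(x_110A, dH_mix_kJ_per_mol=-12.0)       # dH_mix assumed
print(f"dS_mix = {dS:.1f} J/mol K ({dS / R:.2f} R, above the 1.5R threshold)")
print(f"T_m = {Tm:.0f} K, Omega = {omega:.1f}")
```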
The major determinants in the formation of microstructures by solidification are the temperature gradient ratio and growth rate . In the molten pool resulting from the TIG process, with the development of solidification, the ratio of the gradient to the rate of formation is reduced and the cells are formed on the surface rather than deeper. The direction of cell growth is toward the substrate, which is influenced by the transfer of heat into the molten weld pool. The hardness predicted in the studies [ , , , , ] for the high-entropy AlCrCoNiFe alloy has been between 200 and 700 Vickers based on the exact chemical composition, production method and subsequent heat treatment. As shown in Fig. 8 , the hardness of the layer is between 500 and 700 Vickers and is significantly different from the substrate due to the precipitation hardening and the presence of a second phase in the layer structure. Compact semicircles in electrochemical Nyquist curves indicate that the electrochemical behaviour of the samples is controlled by the charge transfer process . In addition to the compact semicircle at high frequencies, there is a small loop at the low frequencies. This semicircle, which is below the y-axis, can be the result of the release of the adsorbed ions on the surface and the separation of the corrosion products from the surface . The diameter of Nyquist curves has a direct relation to corrosion resistance; so, the higher diameter of the curves indicates higher corrosion resistance . According to Fig. 9 , in the HCl solution, the diameter of the semicircle of the substrate was smaller than that of the coated samples, thus indicating improvement in corrosion resistance by creating a high entropy coating on the surface. The corrosion resistance of the samples can be predicted as follows: Substrate< 90A < 130A < 110A The polarisation resistance can be used to evaluate the corrosion resistance of the samples. The higher the value of Rct, the higher the density of protective coating, which means the better the effect of isolating corrosive medium. It is clear from Table 1 optimum coating (formed via 110 A) has a higher Rp value of 834 cm 2 compared to the other HEA coating. As shown in Table 1 , by creating the coating on the substrate, the values of the constant phase element were decreased, thus indicating a decline in the access of corrosive ions to the substrate surface, and therefore, the corrosion process . The lack of a passive layer in electrochemical polarisation curves can be explained through the reactions occurring on the surface. In fact, after placing the sample with the coating in the solution, an oxide film consisting of iron oxides, nickel oxide, aluminium oxide, cobalt oxide and chromium oxide was formed on the surface. However, in the acidic solution at 25 °C, the ΔG 0 value related to the reaction of these oxides with hydrochloric acid was negative (except aluminium). For example, the reaction for nickel oxide is as follows: NiO + 2HCl → NiCl 2 + H 2 O, ΔG 0 = −90 kJ/mol Following this reaction, the oxide form of oxygen is replaced by a chlorine ion, leading to the formation of a metal chloride, which is a water-soluble compound . According to the results of the polarisation test, by creating a coating with a current of 90 A, the corrosion current density of the steel substrate in the HCl solution was decreased from 572 to 127 μA/cm 2 ; by increasing the current up to 110 A, the corrosion current density in the HCl solution was decreased to reach its minimum. 
Indicating that this coating provides the best corrosion protection in comparison to the other coatings and suggest a much slower dissolution of the 110A sample coating in 1M HCl solution than other samples. According to Table 2 , by creating the coating, the corrosion potential was shifted to the positive values; from a thermodynamic point of view, this indicated a lower corrosion tendency for the substrate [ , , , , , ]. According to Fig. 12 , the SEM image of the substrate reveals a generously corroded surface and the major type of corrosion is mainly uniform corrosion. Coated substrate represents the less-corroded area with a relatively flat surface remaining and there is only some white phase on the surface; Chemical composition analysis was conducted by EDS. As presented in Fig. 12 , five alloy elements are well distributed inside the coating, furthermore, the major element of the white phase is oxygen. The comparative analysis indicates that the TIG process for AlNiCoCrFe coating on plain carbon steel offers favorable outcomes in terms of coating thickness and bond strength, making it a viable option for applications requiring robust surface protection. The TIG process resulted in a thicker coating about 700–800 μm compared to PVD and plasma spraying about 5–10 and 100–150 μm, respectively, which can enhance durability but may also affect thermal performance. About hardness, While PVD exhibits superior hardness about 1200–1300 HV , the hardness of the TIG-coated samples about 600–700 HV is still adequate for many applications, providing a good balance between toughness and wear resistance and it is similar to plasma spraying and laser cladding methods. The bond strength achieved with the TIG method is competitive about 30–40 MPa, ensuring good adhesion to the substrate, which is crucial for performance in demanding environments. While, other current methods such as plasma spraying , PVD and laser cladding [ , , ] have an average bonding strength about 25, 15 and 35 MPa respectively. (1) The TIG welding method was utilised successfully to form high entropy alloy coating of AlNiCoCrFe with high density and hardness on a steel substrate. (2) The electrical current and depth of the melted layer produced during the surface treatment had a significant effect on the formation of a high entropy alloy. (3) In this study, the optimum electrical current was 110 A, at which a high entropy alloy with BCC and FCC phase structures was produced. Lower currents could not escape the powders from the steel substrate surface. Higher currents also formed a layer with more depth, which resulted in an increase in the Fe concentration above the critical depth needed to produce a high entropy alloy. (4) The layer coated had a microhardness of about 518–658 HV at the surface. There was a significant difference between the coating and substrate. (5) Electrochemical results showed that coating prepared by 110A electrical current exhibited the optimum corrosion resistance in 1M HCl solution, which had higher corrosion potential, lower corrosion current density and higher charge-transfer resistance than the other coatings. Mahmoud Ardeshir: Writing – original draft, Data curation. Mardali Yousefpour: Writing – review & editing, Supervision, Methodology, Funding acquisition, Data curation. Seyad Mohammad Sadegh Nourbabksh: Supervision. Mansoor Bozorg: Supervision. Data will be made available on request. 
• The work described has not been published previously except in the form of a preprint, an abstract, a published lecture or academic thesis. See our policy on multiple, redundant or concurrent publication. • The article is not under consideration for publication elsewhere. • The article's publication is approved by all authors and tacitly or explicitly by the responsible authorities where the work was carried out. • If accepted, the article will not be published elsewhere in the same form, in English or in any other language, including electronically, without the written consent of the copyright-holder. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Study | biomedical | en | 0.999997
PMC11697579 | Infectious diseases remain a leading cause of morbidity and mortality worldwide, estimated to cause more than 10% of deaths and 28% of disability‑adjusted life‑years (DALYs) attributed to all causes in 2019, with human immunodeficiency virus (HIV), tuberculosis and malaria being the key contributors . Outbreaks of Ebola and coronavirus disease 2019 (COVID‑19) in recent years have led to unprecedented numbers of deaths and cases. New pathogens continue to emerge in animal and human populations, as demonstrated by the emergence of severe acute respiratory syndrome (SARS) in 2003, highly pathogenic avian influenza in poultry and humans in 2004/2005, swine flu in 2009, Middle East respiratory syndrome coronavirus (MERS‑CoV) in 2013, Zika in 2016, severe acute respiratory syndrome coronavirus 2 (SARS‑CoV‑2) in 2019 and recently, the monkeypox virus in 2022 and 2024 . Mathematical models are being increasingly used to understand the transmission of infections and to evaluate the potential impact of control measures or interventions in reducing morbidity and mortality. Mathematical modelling underpinned most of the critical decisions made by the UK government during the COVID‑19 pandemic, including the decision to implement a nationwide lockdown in March 2020, lay down a road map for release from lockdown in February 2021 and implementation of public health interventions in December 2021 during the omicron wave . In the USA, modelling projections for different COVID‑19 scenarios by the Institute for Health Metrics and Evaluation COVID‑19 Forecasting Team also informed crucial policy decisions . The use of mathematical disease models in public health policy is well adapted to the decision‑making process for epidemic and endemic diseases in high‑income countries . Even organisations such as the World Health Organization (WHO) and the Joint United Nations Programme on HIV/AIDS (UNAIDS) have relied on findings from mathematical modelling studies to make crucial choices around selection of the intervention and vaccination strategies for diseases such as influenza, Ebola, HIV and COVID‑19 . It is encouraging to see African countries demonstrating global leadership in infectious disease mathematical modelling through successful north–south collaborations. Organisations such as South Africa’s Modelling and Simulation Hub Africa (MASHA), the South African Centre of Excellence in Epidemiological Modelling and Analysis (SACEMA) and the Centre for Infectious Disease and Epidemiology Research (CIDER) have played a key role in increasing outputs related to disease modelling studies by African researchers. They have also supported national governments in making model‑informed evidence‑based policies . Training and mentorship have been identified as key approaches to strengthening mathematical modelling capacity in Africa and elsewhere . A deliverable‑driven mentor‑led learning‑by‑doing model of capacity building for policy‑makers and public health professionals in Africa was an effective training model for building the capacity in mathematical modelling of diseases . In India, these disease models are not fully integrated into the policy‑making process, primarily due to limited capacity in building mathematical models, lack of trust in the findings given the many assumptions and data limitations and the reluctance of policy‑makers to apply the model findings to formulate policies . 
Much of this lack of trust, or reluctance to adopt the model findings, stems from the lack of knowledge about how these models are conceived, constructed and calibrated. During the COVID‑19 pandemic, although several mathematical models were proposed to understand the evolving disease in India, they did not feed into the process of decision‑making. The possible reasons could be: (i) large variations in the model predictions and assumptions, breeding mistrust in the model results; (ii) criticism of the use of simple mathematical models to describe complex processes; (iii) the use of models to describe real‑life epidemic scenarios being relatively new; and (iv) lack of knowledge about what goes into building such models. Thus, there was a perceived need to create a critical mass of trained infectious disease experts and modellers within the public health and clinical domain so that they could work closely with and support the district, state and national governments in understanding disease spread and transmission dynamics during epidemics and in model‑informed decision‑making. This need was felt even more acutely during the recent COVID‑19 pandemic. Previous training models in India were short‑term courses of 2 to 5 days, predominantly based on didactic teaching, covering only the basics of infectious disease modelling, not deliverable‑driven and without long‑term mentorship. Following the COVID‑19 pandemic, several disease modelling experts have come together to form groups such as the National Disease Modelling Consortium and the Indian Scientists' Response to CoViD‑19 (ISRC) to develop India‑specific disease models that help national policy‑makers make informed decisions and improve disease control and elimination efforts. However, training and mentorship have never been at the forefront of their agenda. Recognising this gap in training, the Department of Health Research (DHR), which is the Government of India body for health research, released a call for applications under the Human Resource Development Scheme to design long‑term capacity‑building programs in key priority areas, including infectious disease modelling. In response to the call, we proposed to the DHR a 3‑month post‑graduate (PG) certificate course in infectious disease modelling in hybrid mode, in order to build a team of infectious disease modellers who could be a great asset in tackling future pandemics and emerging threats. Following sanction by the DHR, we developed the course structure and curriculum and delivered the first cycle of the course during July to September 2024, producing the first cohort of 20 infectious disease modellers in the country. The course curriculum was guided by Kolb's experiential learning theory, which is an andragogical approach to learning focussing on real‑world experiences and practical applications. This was the first such course on infectious disease modelling in India. The structure, content and key components of the first course, along with the strengths, challenges and way forward from the participants' and facilitators' perspectives, are discussed in this paper. Study design: A mixed‑methods approach was used to evaluate a capacity‑building program on infectious disease modelling. Study setting: We describe the design and development of the capacity‑building model below.
This is a learning‑by‑doing model of andragogical training conceived and designed by faculty from the All India Institute of Medical Sciences (AIIMS) Nagpur, Post Graduate Institute of Medical Education and Research (PGIMER), Chandigarh, and the Indian Council of Medical Research (ICMR), New Delhi, India. The faculty are experienced epidemiologists with special interest and expertise in infectious disease epidemiology and modelling. The development of the curriculum was guided by Kolb’s experiential learning theory . There are four stages, which begin with having a concrete learning experience, followed by reflective observation and abstract conceptualisation, and ending with them actively experimenting with the knowledge they gained. We delivered concrete learning experiences through a series of online lectures, recorded videos, self‑reading materials and practical exercises with reflections after each practical exercise, open discussion forums and Q&As. We built in opportunities for the participants to conceptualise the process through biweekly assignments which were reviewed, and in‑depth inputs were provided. Every participant was supposed to submit a project by the 11th week involving designing and optimising a specific infectious disease model by applying the knowledge learnt during the course. This provided the participants with the chance to experiment with their newly gained insights in a practice situation in a highly mentored environment. The overall goal of this initiative was to strengthen the capacity in mathematical disease modelling to enhance their use in decision‑making and effective communication of modelling outputs to policy‑makers in India. The course participants included regular faculty/scientists/PhD students/post‑doctoral students from medical colleges and research institutes, biostatisticians, veterinarians, public health and clinical researchers from government institutes, non‑governmental organisations (NGOs) or other organisations from India and policy‑makers and disease control professionals with interest and background in infectious disease modelling. Specialist mathematical training was not a prerequisite. However, some familiarity with spreadsheet packages (Microsoft Excel) was desirable. Selection of participants was competitive and individuals with prior experience in the infectious disease domain and those who committed to taking this capacity‑building initiative forward in their respective institutions were preferred. The applications were scored using a structured scoring sheet. The criteria used to score were: highest educational qualification, graduation marks, work experience in the field of infectious diseases, publications and research projects in the domain of infectious diseases and any fellowship/diploma/PG or any equivalent course in infectious diseases. No course fee was charged to the participants. A participant successfully completes the course and gets the certificate if they fulfil all the following criteria: Attends at least 75% of the online sessions Submits all four assignments and the project work to the satisfaction of the facilitators before the deadline Attends all the offline contact sessions Completes the final exit examination scoring at least 50% marks The details of the course structure and the delivery of the course are described in Table 1 and Figure 1 . Supplementary appendix 1 shows the details of the curriculum including the week‑wise course content and the teaching learning methods. 
The course was delivered via a six‑step process:
1. 8‑week online course: delivered through live online video lectures, online software demonstrations and exercises, once weekly live discussion forums, bi‑weekly assignments and reading materials. The total duration of teaching was around 45 hours per week.
2. Bi‑weekly assignments: after the completion of every 2 weeks, assignments were given. All four assignments had to be submitted within the specified deadlines (Milestones 1‑4).
3. Project work: a project‑based assignment was given wherein the participants practically applied the principles learnt. The final project report had to be submitted before the completion of the 10th week of the course. The format in which the project report is to be submitted is given in Supplementary appendix 2 (Milestone 5).
4. Online revision and discussion classes in Week 11.
5. 3‑day contact programme: a 3‑day contact programme was held in the 12th week of the course to discuss and revise the key concepts, clarify doubts and give the participants mentored hands‑on practice on the key exercises.
6. Exit examination: the contact programme was followed by an exit examination the very next day.
Table 1 Details about the course structure and delivery.
Approach: Deliverable‑driven hands‑on approach to training with intensive mentorship during practicum and in‑person sessions.
Mode of delivery: Hybrid mode (online video lectures, discussions and demonstrations, offline contact programme and exit examination).
Target trainees: Public health professionals, medical college faculty, biostatisticians, microbiologists and scientists working in the domain of infectious diseases.
Duration of the course: 12 weeks (including 8 weeks of online training, followed by assignments and project submission, face‑to‑face contact session and examination).
Deliverables: Four assignments and project work.
Course advertisement: Advertised within the priority organisations, professional networks and on social media such as LinkedIn, Facebook and Twitter.
Trainee selection and number: A total of 24 participants were selected on the basis of their previous clinical, programmatic or research experience in the domain of infectious diseases, out of 224 applications received.
Course format: Live online video lectures, online hands‑on practical exercises and software demonstrations, live discussion forums and Q&A, case studies, journal clubs, bi‑weekly assignments and project submission, and a 3‑day contact session followed by a final exit examination.
Course fee: No course fee was charged to the participants. However, the participants had to bear their cost of travel, accommodation and other expenses during the face‑to‑face contact session and the exit examination.
Assessment: Formative assessment: four assignments (25 marks each). Summative assessment (100 marks): end‑of‑course exit examination (75 marks; theory 50 marks, practical 25 marks) and end‑of‑course project submission (25 marks).
Course feedback and evaluation: Participants evaluated the structure and content of the training at the end of each week of training through formal and informal feedback mechanisms to inform subsequent sessions. In addition, trainees provided an overall evaluation of the course at the end of the training, including training logistics.
Figure 1 Details about the course structure, modes of delivery, milestones, deliverables and assessment methods.
Course structure, teaching/learning methods, milestones, deliverables, and assessment methods Table 2 provides details about the week‑wise course topics and milestones. Study participants: The study population included all participants ( n = 20) who completed all milestones (required online and offline session attendance, submission of assignments and project work) and were eligible for the final exit examination and the course facilitators. Self‑administered, semi‑structured questionnaires were emailed to the course participants ( n = 20) via Google form after completion of the face‑to‑face offline sessions. Anonymous feedback was collected to get appropriate responses without any desirability bias. Identifying information and email IDs were not collected. The questionnaire included closed‑ended quantitative and open‑ended qualitative variables. The quantitative variables included feedback on the overall course content, learning objectives, balance between theory and hands‑on, delivery of the course, contribution of the course towards learning and skill and responsiveness of the facilitators. A five‑point Likert scale was used to record the responses. The qualitative variables included open‑ended questions assessing strengths (what worked well?), weaknesses (what did not work well?) and suggestions to improve the delivery of the course in subsequent cycles. Facilitators’ feedback regarding the strengths and weaknesses of the course and suggestions for improvement in the subsequent cycles was also taken in a meeting of the facilitators after the course. The data (both quantitative and qualitative) were captured in MS Excel format. Quantitative variables were summarised using proportions. The responses ‘very good’ and ‘excellent’, as well as ‘agree’ and ‘strongly agree’, were combined to form a single category. Manual descriptive content analysis of the textual responses to the open‑ended questions was carried out by two authors (J.P.T. and P.D.), who are experienced in qualitative research. Themes were generated in consensus using standard procedures by a deductive approach . Any disagreement between the two authors was resolved by mutual discussion. The participants were contacted again by email or telephone in case any clarification was required. Statements in italics represent direct quotes from the participants. Obtaining feedback from the participants was performed as part of routine evaluation of the training mandated by the funding agency. Thus, approval from the ethics committee was not deemed necessary. Feedback was completely anonymous and participants were free not to respond to the questionnaires. Out of 224 applicants, a total of 24 participants were selected for the first cohort. The mean age of the participants was 37.7 years (standard deviation 4.9), ranging from 29 to 48 years. About 42% ( n = 10) were female. Most of them belonged to a public health background ( n = 16, 66.6%), followed by biostatisticians ( n = 4, 16.7%) and microbiologists ( n = 4, 16.7%). Those from a public health background came from diverse domains including medical college faculty, scientists from ICMR institutes, junior and senior residents, national consultants working with the World Health Organization, state‑level public health administrators, etc. Only one of them (4.2%) had attended a course on mathematical modelling of infectious disease before. 
Of the 24 selected participants, 20 (83.3%) successfully completed the course; 1 dropped out of the course very early in the first month and the remaining 3 could not attend the contact workshop due to other competing personal or professional commitments, and thus were ineligible for the final exit examination. Out of 20 participants, about three‑fourths ( n = 15, 75%) felt that the contribution of the course towards enhancing their knowledge was ‘very good’ or ‘excellent’. Most of them felt (‘agree’ or ‘strongly agree’) that the learning objectives were clear ( n = 18, 90%), course content was well organised and delivered ( n = 19, 95%) and the course structure allowed all participants to fully participate ( n = 19, 95%) in the learning process. They believed that the course instructors were effective teachers ( n = 20, 100%), stimulated student interest ( n = 19, 95%) and were available and helpful ( n = 20, 100%). All the participants ( n = 20, 100%) found this course useful and would recommend it to their colleagues enthusiastically. The following broad themes emerged: strengths of the course, challenges and way forward from a participants’ perspective. COURSE CONTENT: THEORY FOLLOWED BY HANDS‑ON SESSIONS Most of the participants felt that practical exercises after the basic theory lecture were extremely helpful in understanding the concepts and their applications. The practical hands‑on sessions and the discussions following that were especially useful in the understanding of complex concepts. The participants suggested more such exercises and discussions in subsequent courses. Most of the participants found the 3‑day contact workshop very useful as there were many practical hands‑on activities, small group activities, interactive discussions and less theory lectures, which was not possible during the online sessions. It also helped them revise and consolidate the concepts learnt earlier, especially the complex topics during the latter half of the course. Some of them even said that some complex topics such as modelling HIV/sexually transmitted infections (STIs) and mixing of populations were difficult to follow online, but the in‑person sessions were useful in clarifying them. The participants reported that support from trainees’ institutions to pursue the course and attend all online and offline sessions was important. The application process mandatorily required applicants to submit a letter of support from their employers, which meant that they could focus on the course and devote sufficient time without having to worry about their full‑time work commitments. The facilitators reported that trainer–trainee communication through various forums and the trainee’s commitment were critical to the success of this cohort. A key feature of this program critical for trainees’ success was the regular communication between trainees and trainers through regular online sessions, online discussion forums, Q&As and the practical hands‑on sessions which provided trainees the space to implement the concepts learned and to receive feedback. We created a WhatsApp group to facilitate easier communication between trainers and trainees as well as knowledge sharing and networking among trainees. Additionally, during in‑person sessions, trainers were available for discussions at the end of each training day. 
Another facilitator of success was the trainees’ commitment, demonstrated by completing assignments and project work on time and attending evening online sessions regularly amidst competing commitments from their full‑time work. SCANTY COVERAGE OF THE BASICS OF INFECTIOUS DISEASES The participants also gave useful feedback on the challenges they encountered during the course. A participant from a non‑medical background commented that the basics of common infectious diseases and their natural history should be discussed thoroughly. More real‑life examples of mathematical models from the literature for common diseases should be discussed. LESS TIME FOR DISEASE‑SPECIFIC MODELLING Some of the participants felt that the disease‑specific modelling topics such as TB, HIV and sexually transmitted infections (STIs) were difficult to grasp and needed more time. TIMING OF ONLINE SESSIONS AND NON‑AVAILABILITY OF RECORDED VIDEOS Timing of the evening online sessions and non‑availability of recorded videos were also reported by some participants as a challenge. Table 3 presents the challenges and recommendations given by the participants and suggested a plan of action for future courses. This is the first study using a mixed methods approach to evaluate learner’s perceptions of an innovative 3‑month hybrid training program in infectious disease modelling targeting mid‑career professionals in India. This paper describes the structure, curriculum and delivery of the course and also highlights the strengths and challenges in training the first cohort of disease modellers along with recommendations for the subsequent cohort. Some of the participants who belonged to the non‑medical background suggested that more focus should be on the basics of infectious disease epidemiology, disease transmission, natural history of diseases and their prevention and management. Accordingly, we plan to include more recorded lectures and discussions on those topics, including the clinical aspects of these diseases in the first 2 weeks, so that everyone is on the same page irrespective of their educational background before we move into the modelling of these diseases. The working professionals reported that attending evening online lectures around 6 PM, 5 days a week was challenging, as it was their commuting time. Non‑availability of recorded videos was also reported by many as it did not allow them to revise the concepts and make up for their missed classes, if any. To offset these challenges, we are designing a web‑based course portal for the subsequent courses wherein lecture videos and other resource materials will be uploaded and the participants can complete the course at their own pace. The number of applications ( n = 224) far exceeded the number anticipated by the team, which demonstrated the demand of the course. These applications were processed objectively using a structured scoring sheet which took longer than planned and required substantial effort from the trainers. Additional personnel support for program coordination might have helped. Further, given that most of the content was developed originally for this course, the time and effort required to prepare the course content was substantial. However, we can leverage the course materials of the first cohort for future courses, although the materials need to be tailored to specific trainee populations. A major limitation in this study was the self‑reporting of strengths and weaknesses by the authors of the papers, who were also the respondents in this study. 
Thus, responder bias cannot be ruled out. However, responses from the study participants were completely anonymised to minimise social desirability bias. In addition, responses to the open‑ended questions were obtained online, leaving no scope for probes and further in‑depth exploration, thus affecting the richness of the qualitative data. This is the first structured 3‑month PG certificate course in India attempting to build the capacity of researchers in the field of infectious disease modelling and its applications. The first cycle of the course yielded 20 trained infectious disease modelers in the country. There were some challenges and recommendations from the first cycle which will feed into the subsequent course cycles. Future courses are planned to be hosted on an online platform to facilitate the completion of the course at the participants’ own pace and be able to access the course materials and online videos at any time. More collaboration with various stakeholders, nationally and internationally, will be sought to improve the content, delivery and robustness of the program. | Review | biomedical | en | 0.999995 |
PMC11697582 | Individual differences exist in the experience of interacting with others. The speed and intensity of reaction, the amount of information perceived, and the way information is processed are influenced by the degree of sensitivity one possesses. Building upon the studies of Aron & Aron , it is proposed that the ability to detect and respond to internal and external stimuli, including interactions with others, may be particularly pronounced in individuals with Sensory Processing Sensitivity (SPS). This temperament trait is characterized by 1) deep cognitive processing of stimuli, 2) heightened emotional reactivity, 3) vulnerability to sensory overstimulation, and 4) heightened awareness of subtleties in the environment, including people’s emotional states . In detail, fMRI studies on individuals with SPS demonstrate that this trait is associated with deep processing and heightened responsiveness to emotional signals from others. These studies highlight increased activation in brain regions associated with reward processing, memory, emotion, empathy, and awareness. For this reason, the role of the parenting environment in the development of emotional regulation strategies in individuals with SPS has also been studied. Specifically, it is reported that adverse environments, poor parenting, or lack of social support have a worse impact on children with SPS , often resulting in depression and anxiety problems in adulthood . Conversely, environments with low levels of stress and adequate emotional support promote greater creativity , improved social skills , and more effective emotional regulation strategies . As noted, interpersonal interaction, especially in the early years, plays a more crucial role in individuals with SPS than those without this trait, largely influencing their well-being or psychological development. Furthermore, SPS characteristics such as heightened awareness of subtleties and emotional states of others and intense emotional experiences suggest that this trait is also accompanied by high interpersonal sensitivity. This characteristic refers to an increased sensitivity to the emotional states of individuals with whom one interacts. But unlike the emotional reactivity proposed by Davis , which involves the activation of various cognitive and emotional processes to understand the emotional state of others, as well as adopt an appropriate perspective and respond correctly, high interpersonal sensitivity refers to enhanced responsiveness to the perception of the emotional components of interaction with others (input) rather than an emotional and/or cognitive reaction as a result (output). On the other hand, the most used definitions regarding interpersonal sensitivity are: “the ability to feel, perceive accurately, and respond appropriately to one’s personal, interpersonal, and social environment” (p. 3) and “an undue and excessive awareness of, and sensitivity to, the behavior and feelings of others” (p. 342) . Both refer to observing behaviors that also occur in people with SPS. However, they are insufficient to describe high interpersonal sensitivity because they are based on a specific and distinct theoretical foundation. In other words, neither definition likely considers the existence of the trait of SPS with its innate condition. Bernieri refers to an acquired skill or capacity. Although Boyce & Parker describe characteristics related to SPS, they are likely those observable in the presence of psychopathology. 
Additionally, the literature search yielded no studies explicitly exploring the relationship between high interpersonal sensitivity and SPS. It did not reveal a specific reference within the conceptual framework of SPS as a distinctive feature within the construct’s definition. However, the results reported in studies on SPS and its relationship with caregiving environments highlight the need to understand how high interpersonal sensitivity seems to enhance the beneficial or harmful effects of interactions with others from the earliest years of life. Therefore, establishing the theoretical and empirical link between high interpersonal sensitivity and SPS is a necessary and pertinent contribution. It is worth noting that scales designed to assess interpersonal sensitivity, such as the Perceptual Decoding Ability Test (PDA) , the Test of Sensitivity to Social Interactions (TESIS) , or the Interpersonal Sensitivity Test , are instruments that evaluate an adaptive social ability or, in the case of the latter, a maladaptive personality trait. Regarding the SPS, it is valid to assume that since it is a temperament trait that appears to be innate, high interpersonal sensitivity does not either correspond to an acquired capacity or skill, or an alteration or malfunction but rather to an inherent characteristic. Therefore, assessing interpersonal sensitivity in individuals with SPS should focus on identifying the effect that interactions with other people can have on their emotional and psychological functioning, and the mentioned instruments were not designed with that objective. In the same way, the most widely used scale in studies on SPS, the Highly Sensitive Person Scale (HSPS) , does not include a dimension or subscale specifically investigating interpersonal sensitivity. This instrument only features two items related to interaction with others: Do other people’s moods affect you? When people feel uncomfortable in a physical space, do you usually identify or know what needs to be done to make them feel more comfortable? (e.g., changing the light or the seating arrangement) . Though it considers SPS characteristics like emotional reactivity, awareness of subtleties, and empathetic responsiveness, it does not explore interpersonal interaction and its effects with depth. Recently, the Sensory Processing Sensitivity Questionnaire (SPSQ) was developed to incorporate the positive aspects of the SPS that the HSPS failed to include, as noted by some studies . To do this, items from both the HSPS and the Adult Temperament Questionnaire (ATQ) were taken . The social and affective dimensions included a social-affective sensitivity and an understood associative sensitivity. This dimension is made up of seven items. Through statements such as: Sometimes I notice sad eyes hidden by a smile ; I’m usually surprised when a person’s tone of voice doesn’t match their words , or; When people are uncomfortable, I know how to calm them down , it seems focused primarily on recognition of emotional states and heightened awareness of subtleties. These characteristics of the SPS are fundamental to understanding the effect of interactions with other people. Still, it is necessary to consider different elements of interest that have not been sufficiently explored in this sense, such as overstimulation and intense emotional experience. The aim of this research is to develop and validate an instrument for assessing the presence of high interpersonal sensitivity in adults. 
Based on in-depth interviews with 20 adults identified with SPS, 45 statements were drafted related to their emotional experiences in interactions with others since early childhood. Participants were asked to reflect on experiences that were pleasant, unpleasant, significant, insignificant, memorable, emotionally impactful, or that affected their way of relating to others. It should be noted that since cutoff scores for the scale used to assess SPS presence have not been defined, low, medium, and high levels were determined using total scores obtained from the HSPS (Highly Sensitive Person Scale) (minimum, maximum, first and third quartiles, mean, and median). The defined levels were low, 22 to 62 points; medium, 63 to 88 points; and high, 89 to 114. The statements resulting from the interviews were grouped considering their potential similarity to SPS characteristics outlined in the Introduction section as part of its definition. They were categorized as follows: 1) awareness of subtleties, referring to heightened awareness of subtle environmental cues, including the emotional states of others; 2) emotional reactivity, based on increased emotional responsiveness; 3) empathic response, involving greater empathy towards others; and 4) overstimulation, centered on vulnerability to sensory overstimulation. It is important to note that a set of statements was crafted based on a common theme observed across all interviewees: a prolonged emotional effect accompanying interpersonal interactions where intense, mostly unpleasant emotions were experienced. Although this specific reference was not found in the reviewed empirical evidence related to SPS, it was decided to include it as well, termed persistent effect. Among the 45 drafted statements, it was noted that some expressed similar ideas in different ways. Therefore, the versions that were clearer and easier to understand were selected. Additionally, statements describing characteristics such as gratitude, responsibility, or commitment were discarded. Once a version of 25 items (five dimensions with five items each) was finalized, it underwent content validity assessment by five expert judges. They reviewed each statement's wording and syntax to ensure each item corresponded accurately to its designated dimension or characteristic. The 25 proposed items were retained because they achieved Aiken's V and Kendall's W scores above 0.80, indicating high agreement regarding relevance and clarity. Using convenience sampling, 429 university students from various educational programs (excluding psychology, to avoid potential bias in the results) participated. The sample consisted of 250 (58.27%) women, 172 (40.09%) men, and 7 (1.63%) individuals of a different gender, aged between 18 and 29 years, with a mean age of 20.41 and an SD of 1.91. To participate in the study, subjects had to meet the following inclusion criteria: (a) be over 18 years of age and (b) sign an informed consent form agreeing to participate voluntarily in the research. High Interpersonal Sensitivity Scale (HISS). This self-report instrument was developed for the present research to assess levels of high interpersonal sensitivity in adults. It consists of 25 items evenly distributed across five dimensions: 1) awareness of subtleties, 2) emotional reactivity, 3) empathic response, 4) overstimulation, and 5) persistent effect. For each statement, respondents are asked to indicate on a 4-point Likert scale the extent to which the statement describes them.
The response options are: 1 = not at all, 2 = slightly, 3 = moderately, and 4 = completely. Highly Sensitive Person Scale (HSPS) . Translated and adapted for the Mexican population . It is a self-report scale designed to measure the degree of sensitivity in adults. It consists of 17 items with Likert-type responses ranging from 1 (Not at all) to 7 (Extremely), which are answered based on the person’s feelings. All items are scored in the same direction, so higher scores indicate higher sensitivity. Principal component analysis suggested a two-factor solution that explained 30% of the variance: 1) processed sensitivity (PS) with 13 items, and 2) low sensory threshold (LST) with 4 items. Reliability analysis reported an α coefficient of 0.89. Interpersonal Reactivity Index (IRI) .Translated and adapted for the Mexican population . It is a self-report scale with 28 items designed to assess empathy in a multidimensional manner. It comprises a cognitive component with two dimensions, perspective taking and fantasy , and an affective component with two dimensions: empathic concern and personal distress . These two components are distributed across four subscales with 7 items each. Each dimension has 7 items, and responses are given on a Likert scale with five response options, where higher scores indicate a higher presence of the measured dimension. The reported reliability for the total scale was α = 0.81. Once authorized by the directors and professors of Schools and Faculties at a public university, students were invited to participate in the study during class time. Those who signed the informed consent form completed the HSPS, IRI, and HISS instruments. The administration of the scales took approximately 25 minutes. Before conducting the research, the project was reviewed and approved by the Faculty of Psychology ethics committee at the Universidad Michoacana de San Nicolás de Hidalgo. Participation was voluntary and ensured by signing an informed consent form that guaranteed the confidentiality and anonymity of personal data. Data analysis was conducted using R statistical software version 4.0.2 . Exploratory Factor Analysis (EFA) was performed using the psych package , and Confirmatory Factor Analysis (CFA) was conducted using the lavaan package . Internal consistency (Cronbach’s alpha and McDonald’s omega) was assessed using JASP software version 0.8.5.1 . The total sample was randomly split into two parts. Exploratory Factor Analysis (EFA) was conducted on the first half of the sample, with Confirmatory Factor Analysis (CFA) reserved for the second half. Before conducting the Exploratory Factor Analysis (EFA), Kaiser-Meyer-Olkin (KMO) and Bartlett’s test of sphericity were performed as measures of the adequacy of the data for factor analysis. A KMO value above 0.5 and a significance level below 0.05 for Bartlett’s test were set as criteria. The EFA was conducted using the principal axis factoring method with varimax rotation. Criteria for retaining items included factor loading greater than 0.45 on a single factor, factor loadings not exceeding 0.30 on other factors, and item content congruence with the factor. CFA was conducted using the maximum likelihood method. Model fit was evaluated using various indices. Absolute fit indices included χ 2 and the standardized root mean square residual (SRMR). Comparative fit indices included the Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), and Akaike Information Criterion (AIC). 
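The analysis pipeline described above was run in R (psych for the EFA, lavaan for the CFA) and in JASP for reliability. Purely as an illustration of the exploratory step, a broadly analogous workflow in Python using the factor_analyzer package might look like the sketch below; the CSV file name and item columns are hypothetical placeholders, not materials from the study.

```python
# Rough Python analogue of the EFA workflow described above (the study itself used R).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("hiss_items.csv")  # hypothetical: one column per HISS item (1-4 Likert)

# Sampling adequacy checks (criteria used in the study: KMO > 0.5, Bartlett p < 0.05)
chi2, p_bartlett = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett p = {p_bartlett:.4f}, overall KMO = {kmo_total:.2f}")

# Principal-axis factoring with varimax rotation, three factors as in the final solution
efa = FactorAnalyzer(n_factors=3, method="principal", rotation="varimax")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns)

# Item-retention rule from the text: loading > 0.45 on one factor, < 0.30 on the others
primary = loadings.abs().max(axis=1)
cross = loadings.abs().apply(lambda row: sorted(row)[-2], axis=1)
retained = loadings[(primary > 0.45) & (cross < 0.30)]
print(retained.round(2))
```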
The Root Mean Square Error of Approximation (RMSEA) was also used to assess model parsimony . For the χ 2 test, a non-significant result indicates a good fit. CFI and TLI values should exceed 0.90 (higher values indicate better fit). SRMR values below .08 are considered acceptable, with lower values indicating better fit. RMSEA values should be below .08 for acceptable fit or close to .05 for good fit . AIC, an unbounded selection criterion, compares models fitted to the same data; smaller values indicate a better fit. Reliability was estimated using Cronbach’s alpha and McDonald’s Omega coefficients, with a confidence level of 95%. Values above 0.7 were considered acceptable . Convergent validity was assessed through correlation analysis, evaluating the relationship between the HISS’s total scores and its factors and the HSPS’s and IRI’s total scores and factors. The KMO index (0.85) and Bartlett’s test (p < 0.001) indicated appropriate values for conducting exploratory factor analysis (EFA). The principal axis factoring method with varimax rotation was used for the 25 items initially included in the HISS. Firstly, five dimensions were obtained based on criteria of eigenvalues greater than 1. However, considering the factor loadings of each item, their theoretical coherence, and their grouping within factors, 11 items describing dimensions of emotional reactivity and empathetic response were discarded. With the remaining 14 items, EFA revealed a three-factor solution using the same extraction method and rotation. These items exhibited loadings above 0.45 in their respective factors, collectively explaining 47% of the total variance (see Table 1 ). The three derived dimensions from the analysis were: factor 1, awareness of subtleties , explaining 19% of the variance with 5 items; factor 2, overstimulation , explaining 15% of the variance with 5 items; and factor 3, persistent effect , explaining 13% of the variance with 4 items. Confirmatory factor analysis compared four different models: The findings are presented in Table 2 . Regarding the χ 2 test, it is important to note that none of the models showed a satisfactory result, as this test should be non-significant to confirm a good fit. However, Littlewood and Bernal point out that χ 2 is prone to error when applied to large samples, i.e., more than 200 observations. Therefore, other indices are recommended for model confirmation. Models 1, 3, and 4 exhibit a χ 2 /df value <5, suggesting an acceptable fit . Additionally, CFI and TLI indices were adequate for models 1 and 3. Concerning SRMR and RMSEA, the best values are reported for models 1 and 3. However, AIC confirms that model 1 (derived from exploratory factor analysis) demonstrates the best parsimony. Therefore, the model derived from EFA with 14 items and three factors ( awareness of subtleties, overstimulation , and persistent effect ) shows the best fit . Internal consistency for the total HISS score and each factor was assessed using Cronbach’s alpha and McDonald’s omega coefficients (see Table 3 ). The total HISS score yielded α = 0.803 and ω = 0.804. Factor 1, awareness of subtleties , showed the highest indices, similar to those obtained for the total score. The other factors ( overstimulation and persistent effect ) demonstrated values above 0.6 for Cronbach’s alpha and close to 0.7 for McDonald’s omega, which are considered acceptable (see Table 3 ). The results regarding the relationship between factors indicated that their correlations were significant but low. 
Each factor also showed a significant, moderate correlation with the total scale score. As evidence of convergent validity, a correlation analysis was conducted between the final version of HISS and its factors with HSPS and IRI scores and their respective dimensions. As observed in Table 4 , only the correlations between the overstimulation dimension (HISS) and empathic concern (IRI) were not statistically significant. The results of correlations between the total scores of the scales show moderate correlations, with the highest correlation observed between HISS and HSPS, followed by the correlation between HSPS and IRI (see Table 4 ). The results showed weak and moderate correlations between the HISS dimensions and other variables. Weak correlations were observed between the awareness of subtleties dimension and the rest of the variables (HSPS and its factors, IRI and its factors), as well as between the overstimulation dimension and IRI and its dimensions, except for the personal distress dimension, which showed a moderate correlation. Moderate correlations were also found between the overstimulation dimension and HSPS and its factors. The persistent effect dimension exhibited moderate correlations with the rest of the variables (HSPS and its factors, IRI and its factors), except for the IRI dimensions of perspective taking and fantasy , which showed weak correlations (see Table 4 ). Finally, concerning the descriptive data, it should be noted that factor scores were derived from the sum of item values divided by the number of items in the factor, and the total score was obtained from the sum of the scores of the three factors. The data are presented in Table 5 . Factor and total scale scores cover practically all possible scores, and the skewness and kurtosis values for all scores indicate a normal distribution. The main objective of this research was to develop an instrument to assess high interpersonal sensitivity. It has been noted that individuals with Sensory Processing Sensitivity (SPS) are also more sensitive in their interactions with others , especially in parenting contexts . However, an instrument specifically investigating this aspect of SPS still needs to be developed, hence the relevance of this research. The goodness-of-fit indices obtained in the initial exploratory factor analysis and the distribution of items across factors provided decisive elements for excluding 11 items from the initially proposed version, which led to the omission of two of the first considered dimensions: emotional reactivity and empathetic response . The final structure of the HISS retains three of the five initially considered dimensions: awareness of subtleties, overstimulation , and persistent effect . The reliability index of the awareness of subtleties factor is adequate. However, the reliability of the other two factors falls just short of being acceptable; nevertheless, some authors consider values above six and a half acceptable for the McDonald’s omega coefficient . The low reliability of these factors could be attributed to the small number of items in each factor. The correlation between the factors was significant but low; however, each factor’s correlation with the total scale was moderate. These results suggest that these dimensions are independent but contribute to a larger construct. 
Specifically, the three dimensions comprising the final internal structure of the HISS capture distinctive characteristics of SPS that describe the process of exposure to stimuli in individuals with high interpersonal sensitivity. Increased awareness of subtleties can lead to overstimulation of the nervous system when exposed for prolonged periods, thereby producing a persistent emotional effect from the experience. The results of the exploratory factor analysis suggest that the items designed to capture dimensions of emotional reactivity and empathic response are grouped into a single factor. This may indicate a need for more specificity in item wording and potential shared content between these two dimensions. Given that these aspects of SPS have primarily been studied using methods such as fMRI, it is crucial to explore them with more precise and differentiated assessments, considering they refer to heterogeneous constructs. Similarly, these results reinforce the proposal that high interpersonal sensitivity describes an increased responsiveness to the perception of emotional components present in interactions with people rather than an emotional response derived from this perception. Therefore, since it is not an empathic or emotional reactivity, it can explain why the empathic response and emotional reactivity dimensions were not part of the HISS. Regarding convergent validity, the moderate correlations between the HISS and HSPS and between the HISS and IRI indicate a relationship but no overlap. This pattern is also reflected in the correlations between the factors of the three scales, where only certain factors showed moderate correlations, most of which were weak values. In particular, the moderate correlation between the dimension of overstimulation in the HISS and the HSPS with its factors aligns with expectations based on findings reported by Montoya-Pérez et al. . Their study revealed a moderate correlation between the total score of the HSPS and the DESR-E, an instrument designed to assess difficulties in emotional regulation. Consequently, they suggested a potential bias in the HSPS structure towards problems associated with SPS. The results of the current study appear to confirm this hypothesis, especially considering that the dimension of personal distress in the IRI also moderately correlated with the HSPS and its factors, similar to the overstimulation dimension in the HISS. Moreover, the moderate correlation between the dimension of persistent effect in the HISS and the HSPS with its factors and with the IRI and its empathic concern and personal distress factors strengthens the notion that SPS amplifies emotional experience. High interpersonal sensitivity can lead to both adaptive and comforting social experiences and overwhelming and demanding experiences, depending on the quality of interactions and the context in which they occur. Similarly, these moderate correlations between the HSPS and the HISS support the notion that high interpersonal sensitivity can be considered an integral aspect of the SPS trait. The current research findings suggest that a comprehensive assessment of SPS should consider sensitivity to physical stimuli and sensitivity in interpersonal relationships. Furthermore, the positive relationship between interpersonal sensitivity and empathy, as measured by the IRI, supports the idea that the SPS trait could confer an evolutionary advantage under favorable environmental conditions (such as parenting). 
Within the limitations of this study, one notable aspect is the lack of control over variables related to SPS, such as social anxiety , social skills , and emotional intelligence , which could enhance the evidence of validity. Additionally, test-retest reliability has yet to be investigated. It would be beneficial for future studies to explore the relationship between the scale and these variables and assess test-retest reliability. Future studies should also examine the HISS’s discriminant validity and establish an optimal cutoff point based on its sensitivity and specificity for distinguishing between individuals with high personal sensitivity traits and those without. A standard based on classification through in-depth interviews conducted by expert evaluators would be necessary. Another significant limitation is that the sample comprised only young people and university students. Therefore, the results cannot be generalized to older populations or those with lower levels of education. Future research should consider expanding the age range and educational levels to increase the representativeness of the findings. Finally, the results of this study indicate that the HISS (High Interpersonal Sensitivity Scale) has adequate psychometric properties to discriminate in adults the presence of high interpersonal sensitivity, a distinctive characteristic of HSP expressed through heightened sensitivity in interactions with others. HISS (High Interpersonal Sensitivity Scale) and supplementary data to this article can be request to the corresponding author. | Other | biomedical | en | 0.999997 |
PMC11697585 | Migraine is a primary headache disorder typically characterized by recurrent attacks of disabling headache and associated with relevant personal and societal burden ( 1 ). The detailed pathophysiological mechanisms causing migraine are still elusive, however, there is evidence that central iron metabolization might play a role ( 2 , 3 ). Evidence suggests that iron imbalance, particularly iron deficiency or overload, may be associated with migraines through mechanisms involving oxidative stress, neurotransmitter regulation, and vascular health ( 4 , 5 ). Iron is a metabolically very active component. Some of the reasons for high iron levels in brainstem structures include overproduction of transferrin, increased iron uptake reflecting increased activity, and sequestered iron following cell damage regardless of the mechanism, abnormally high or low iron affects homeostasis and is a marker of altered function ( 5–7 ). Various magnetic resonance imaging (MRI) methods enable measurement of iron in vivo in the human brain ( 8 , 9 ). R 2 (=1/ T 2 ) or R 2 ∗ (=1/ T 2 ∗ ) relaxometry are one of the most commonly used MRI based iron mapping techniques, as there is a strong linear relationship between R 2 and R 2 ∗ values with the underlying iron content in brain structures ( 10 ). Several MRI studies observed altered iron sensitive quantitative MRI measures in various brain structures of patients with migraine, indicating an iron accumulation compared to healthy controls ( 6 , 7 , 11–14 ). This increase in iron content was associated with pain processing and the frequency of migraine attacks ( 7 , 11 , 14 ). Although, there is a strong correlation between R 2 ∗ and iron content in gray matter, several confounding factors, such as variations in tissue microstructure, myelin content and water content, exist which counteract the effect of iron on R 2 ∗ in white matter ( 15 , 16 ). In white matter, the high amount of myelin leads to a strong confounding effect on R 2 ∗ , as both, an increase in iron and an increase in myelin, leads to higher R 2 ∗ values and vice versa. Furthermore, R 2 ∗ in white matter is sensitive to the orientation of anisotropic tissue structures with respect to the B 0 field of the MRI system ( 17 , 18 ). There two main sources of R 2 ∗ anisotropy in the brain are (I) myelinated nerve fibers ( 17 , 18 ) and (II) the anisotropic component of the vasculature (larger vessels tend to run in parallel with nerve fiber tracts) ( 19 ). However, the fiber orientation dependency of R 2 ∗ can be utilized to separate the effect of iron and anisotropic structures on R 2 ∗ in white matter. Therefore, R 2 ∗ is combined with the fiber angle, estimated using diffusion tensor imaging (DTI), within each white matter voxel to compute the fiber orientation independent (isotropic) and fiber orientation dependent (anisotropic) R 2 ∗ components ( 20 , 21 ). Although, there are many studies on structural MRI in patients with migraine ( 22 , 23 ) there is limited data on MRI measurements during an acute migraine attack. Therefore, the aim of this study was to investigate if there are dynamic fluctuations in isotropic and anisotropic R 2 ∗ , which can be related to tissue components, such as iron, in the brain during a migraine attack. To achieve this, we acquired, quantitative MRI, including R 2 ∗ relaxometry and DTI of a patient with migraine on 21 consecutive days, comparing migraine-free days and 2 days with an acute migraine attack. 
A 26-year-old male patient diagnosed with episodic migraine with aura according to ICHD-3 ( 1 ) criteria since the age of 15 years participated in this study. He reported aura symptoms as mainly flashing lights lasting 15–20 min in the left visual field, mostly before the onset of the migraine headache. The participant had a history of 4 to 5 migraine attacks per month before entering the study. During the study the participant experienced two migraine attacks (on day 12 and day 16), which fulfilled the criteria for a migraine attack as defined by the ICHD-3 ( 1 ). The attacks lasted up to 24 h and were always localised on the left frontal side. Pain maxima ranged from 70–80 on a visual analogue pain scale of 0–100, with 100 representing maximum pain. Individual migraine attacks were always preceded by a prodrome, characterised mainly by tiredness and yawning. After the migraine attacks had subsided, a postdrome phase was clinically observed, characterised by fatigue and impaired concentration. During the 21 consecutive scanning days, the participant voluntarily decided not to take any preventive or acute medication and refrained from any other medication. The participant was a non-smoker, did not drink alcohol during the study period and maintained a constant daily routine prior to each measurement. The subject’s informed consent was obtained in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Medical University of Innsbruck. Written informed consent was obtained from the subject for the publication of any potentially identifiable images or data included in this article. MRI was performed at the same time on each day on a 3 T MR system (MAGNETOM Skyra, Siemens Healthineers, Erlangen, Germany) using a 64-channel head coil. The following sequences were acquired in this study: For structural overview and tissue segmentation, a 3D T 1 weighted magnetization prepared rapid acquisition gradient echo (MPRAGE) sequence with echo time (TE) = 2.1 ms, repetition time (TR) = 1,690 ms, inversion time (TI) = 900 ms, flip angle = 8° and a 0.8 × 0.8 × 0.8 mm 3 isotropic resolution. For the estimation of the white matter fiber angle θ , a diffusion weighted single-shot echo-planar imaging DTI sequence with TE = 92 ms, TR = 9,600 ms, flip angle = 90°, 30 isotropically distributed diffusion directions, b -value = 1,000 s/mm 2 , three images with b -value = 0 s/mm 2 and a 2 × 2 × 2 mm 3 isotropic resolution. For quantification of R 2 ∗ relaxation, a multi-echo gradient echo (GRE) sequence with TE = 4.92, 9.84, 14.7, 19.6, 24.6 and 29.51 ms, TR = 35 ms, flip angle = 15°, and 0.9 × 0.9 × 0.9 mm 3 isotropic resolution. R 2 ∗ maps were computed voxel by voxel assuming a mono-exponential relaxation using MATLAB 2019a (The MathWorks Inc., Natick, Massachusetts, United States). DTI data was analyzed with the FMRIB Software Library (FSL version 6.0.5.1) using FSL DTIFIT ( 24 , 25 ) to calculate the diffusion tensor model and estimate the eigenvalues and eigenvectors. To correct for distortions induced by eddy currents and head motion FSL’s eddy_correct was used. The fiber angle θ was calculated as the angle between the first eigenvector and the direction of the main magnetic field B 0 for each voxel, where θ = 0° represents fibers parallel to B 0 and θ = 90° represents fibers perpendicular to B 0 . For plotting fiber orientation dependent R 2 ∗ , the fiber angle θ was divided into 18 intervals of 5° and voxels from the entire white matter were pooled. 
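As a rough illustration of the two voxel-wise computations described above (the study performed the R 2 ∗ fit in MATLAB and the DTI processing in FSL), the mono-exponential fit and the fiber-angle calculation could be sketched in Python as follows; the synthetic signal and array names are assumptions made for the example, not study data.

```python
# Illustrative sketch of the voxel-wise computations described above
# (the study used MATLAB for the R2* fit and FSL for the DTI processing).
import numpy as np

def fit_r2star(signal: np.ndarray, te_ms: np.ndarray) -> float:
    """Mono-exponential R2* (1/s) from multi-echo GRE magnitudes via a log-linear fit."""
    te_s = te_ms / 1000.0
    slope, _ = np.polyfit(te_s, np.log(signal), 1)  # ln S = ln S0 - R2* * TE
    return -slope

def fiber_angle_deg(first_eigenvector: np.ndarray, b0_direction=(0.0, 0.0, 1.0)) -> float:
    """Angle between the principal diffusion direction and B0 (0 deg = parallel to B0)."""
    v = first_eigenvector / np.linalg.norm(first_eigenvector)
    cos_theta = abs(np.dot(v, np.asarray(b0_direction, dtype=float)))  # fiber sign is arbitrary
    return float(np.degrees(np.arccos(np.clip(cos_theta, 0.0, 1.0))))

# Example with the echo times of the GRE protocol above and a synthetic voxel
te = np.array([4.92, 9.84, 14.7, 19.6, 24.6, 29.51])   # ms
s = 1000.0 * np.exp(-21.0 * te / 1000.0)                # synthetic voxel with R2* = 21 1/s
print(f"Fitted R2* = {fit_r2star(s, te):.1f} 1/s")
print(f"Fiber angle = {fiber_angle_deg(np.array([1.0, 0.0, 1.0])):.1f} deg")  # ~45 deg

# Pooling white-matter voxels into 18 bins of 5 deg, as described above:
# bin_index = np.clip((theta_deg // 5).astype(int), 0, 17)
```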
R 2 ∗ anisotropy was calculated based on the orientation dependent R 2 ∗ according to Equation 1 ( 20 ), where R 2 , max ∗ represents the maximum and R 2 , min ∗ the minimum value of R 2 ∗ as a function of fiber angle. T 1 weighted images were used for automated segmentation of white matter, deep gray matter and cortical brain structures using FreeSurfer software (version 7.3.2). Automated tissue segmentation was performed independently for each day. A list of all segmented brain regions can be found in Table 1. Statistical analysis was performed using R (version 4.0.3, The R Foundation for Statistical Computing, Vienna, Austria). A Shapiro–Wilk test was used to test for normal distribution of the data. Depending on the data distribution, a t -test or Mann–Whitney U test was used to assess differences of R 2 ∗ in various brain regions of the left and right hemisphere. To test whether R 2 ∗ was different on days with migraine, an analysis of variance (ANOVA) or Kruskal–Wallis test was used depending on the distribution of the data. As post-hoc tests, pairwise t -tests or Mann–Whitney U tests were applied. For p -value adjustment, Bonferroni correction was used. We repeated the entire analysis, excluding days right after the migraine attack from the migraine-free days, to investigate whether there are potential changes on these days. Study personnel were fully blinded regarding migraine status (migraine-free vs. migraine) until data acquisition and computation of the quantitative MRI parameter maps were completed. The transfer/disclosure of raw MRI data to third parties was not covered by the ethical approval obtained for this study. Prominent focal veins were visualized in the left hemisphere, and the cortical vessels appeared asymmetric, being more prominent on the left side, on susceptibility weighted imaging (SWI). Enlarged and clustered perivascular spaces (PVS) were visualized in 3D T 1 weighted images in the deep grey matter bilaterally, mildly accentuated on the left side. Accentuated PVS were also present in the supratentorial white matter, specifically in the centrum semiovale, with no clear side difference. PVS in this patient were considered radiologically within normal range. No diffusion restrictions suggesting ischemic processes were present, and the cerebrospinal fluid spaces were normal. Overall, structural MRI was without pathological findings and considered radiologically within normal range. Figure 1 shows representative T 1 weighted images, average R 2 ∗ maps of all migraine-free days, average R 2 ∗ maps of all migraine days and the R 2 ∗ difference (Δ R 2 ∗ map) between the migraine-free and migraine condition for an axial (top row) and a coronal (bottom row) slice. The Δ R 2 ∗ map highlights areas that are altered during a migraine attack, indicating an increase in R 2 ∗ in red and a decrease in R 2 ∗ in blue. During migraine attacks, R 2 ∗ was found to be significantly altered in various brain regions. Overall, an increase in R 2 ∗ is predominantly observed in brain regions of the left hemisphere, whereas a decrease of R 2 ∗ is predominantly observed in brain regions of the right hemisphere. In the caudate, R 2 ∗ increased by 4.9% from 20.6 1/s to 21.6 1/s ( p = 0.021) in the left hemisphere and decreased by 5.3% from 20.8 1/s to 19.7 1/s ( p = 0.114) in the right hemisphere, from migraine-free days to days with migraine, respectively. These alterations in R 2 ∗ are also evident in the Δ R 2 ∗ map shown in Figure 1 .
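Since the normalization used in Equation 1 is not reproduced in the text above, the following sketch should be read as an illustration only: it pools white-matter voxels into the 5° fiber-angle bins described earlier and derives an anisotropy index from the maximum and minimum of the binned R 2 ∗ curve, assuming normalization by the maximum. The bin width, the normalization and all variable names are assumptions for demonstration, not the published definition.

```python
import numpy as np

def binned_r2star(r2s, theta, bin_width=5.0):
    """Average R2* within fiber-angle bins of `bin_width` degrees (0-90 deg)."""
    edges = np.arange(0.0, 90.0 + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([
        r2s[(theta >= lo) & (theta < hi)].mean()
        if np.any((theta >= lo) & (theta < hi)) else np.nan
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return centers, means

def r2star_anisotropy(binned_means):
    """Anisotropy index from the orientation-dependent R2* curve.

    Assumed form: (R2*_max - R2*_min) / R2*_max; the published Equation 1
    may use a different normalization."""
    r_max, r_min = np.nanmax(binned_means), np.nanmin(binned_means)
    return (r_max - r_min) / r_max

# Synthetic white-matter voxels: isotropic baseline plus a sin^2(theta) term.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 90, 20000)
r2s = 19.6 + 2.5 * np.sin(np.radians(theta)) ** 2 + rng.normal(0, 0.3, theta.size)

centers, means = binned_r2star(r2s, theta)
print(round(r2star_anisotropy(means), 3))  # ~0.11 for this synthetic curve
```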
R 2 ∗ increased in the left ventral diencephalon by 5.7% from 23.0 1/s to 24.3 1/s ( p = 0.011) and in the left cerebral white matter by 1.9% from 21.0 1/s to 21.4 1/s ( p = 0.021) on days with migraine. During a migraine attack, R 2 ∗ decreased in the right superior frontal cortex by 1.9% from 15.6 1/s to 15.3 1/s ( p = 0.026), in the right caudal middle frontal cortex by 3.0% from 16.6 1/s to 16.1 1/s ( p = 0.021) and in the right pericalcarine cortex by 4.6% from 19.7 1/s to 18.8 1/s ( p = 0.046). All other structures showed no statistically significant changes in R 2 ∗ during a migraine attack compared to migraine-free days. A summary of all R 2 ∗ values in each region, grouped by condition, is given in Table 1. R 2 ∗ orientation dependency was assessed to separate isotropic and anisotropic R 2 ∗ contributions in cerebral white matter on each day. On average, R 2 ∗ increased with increasing fiber angle from 19.6 ± 0.3 Hz at 0° to 22.1 ± 0.3 Hz at 90° (12.6%, p < 0.001) in the left cerebral white matter and from 19.8 ± 0.3 Hz at 0° to 21.9 ± 0.4 Hz at 90° (10.7%, p < 0.001) in the right cerebral white matter. Grouping by condition revealed alterations in isotropic and anisotropic R 2 ∗ during a migraine attack compared to migraine-free days, as shown in Figure 3 . In the left cerebral white matter, R 2 ∗ increased by 1.8% ( p = 0.021) and R 2 ∗ anisotropy decreased by 1.9% ( p = 0.853), whereas in the right cerebral white matter R 2 ∗ decreased by 1.0% ( p = 0.286) and R 2 ∗ anisotropy decreased by 16.6% ( p = 0.011). R 2 ∗ anisotropy differs between left and right cerebral white matter by −9.2% ( p = 0.009) on migraine-free days and by −23.6% ( p < 0.001) on days with migraine. In contrast to the literature, where quantitative MRI in patients with migraine is mainly acquired in cross-sectional studies, we aimed to identify potential short-term changes in quantitative MRI to study tissue composition during the migraine cycle, and in particular during an acute migraine attack. To the best of our knowledge, this is the first study in which quantitative MRI was acquired on multiple consecutive days in a patient with migraine, including days with imaging during an acute migraine attack. The results of this study suggest that R 2 ∗ relaxometry is suitable for detecting short-term changes in brain tissue composition that are indicative of central iron involvement during an acute migraine attack. We propose that the observed changes in R 2 ∗ in deep grey matter and cortical brain regions are related to changes in iron content, whereas in white matter an increase in iron content is accompanied by microstructural changes related to anisotropic tissue components, such as vascular structures. Fluctuations in iron content and anisotropic tissue components during a migraine attack are fully reversible within the time period observed. Several studies observed higher R 2 , R 2 ∗ or magnetic susceptibility values in patients with migraine compared to healthy controls, indicating an increased iron accumulation in various regions of the brain ( 22 ). An increased iron content was mainly observed in deep gray matter and cortical gray matter ( 6 , 7 , 11–14 , 26 ). Higher iron content in migraine patients was correlated with disease duration and the frequency of migraine attacks ( 6 , 7 , 12 , 26 ). Furthermore, it was shown that iron content in the basal ganglia differs between patients with chronic migraine and patients with episodic migraine ( 11 , 14 , 26 ).
Studies conducted by Dominguez et al. ( 11 ) and Chen et al. ( 26 ) showed that patients with chronic migraine have an increased accumulation of iron in areas involved in the nociceptive network, such as the red nucleus and periaqueductal gray (PAG). The role of the PAG as a contributory generator of migraine attacks warrants further investigation, since several studies, such as the one conducted by Welch et al. ( 27 ), point to its importance in the development of migraine attacks. Welch et al. ( 27 ) demonstrated that iron homeostasis in the PAG may be affected by recurrent migraine attacks by observing a significant increase in mean R 2 ′ (= R 2 ∗ − R 2 ) and R 2 ∗ in patients with both episodic migraine and chronic daily headache, although there was no significant difference between the episodic migraine and chronic daily headache groups. A recent study investigating changes in iron deposition after treatment with erenumab showed lower R 2 ∗ values in the PAG and anterior cingulate cortex (ACC), indicating less iron deposition, in responders compared to non-responders after 8 weeks of treatment ( 28 ). Overall, the majority of studies investigating iron content in migraine indicate a generally increased iron accumulation and differences in iron content between migraine types ( 22 ). In contrast to the literature, our study allowed us to investigate short-term dynamic alterations in iron content during the migraine cycle. By measuring R 2 ∗ daily, we could observe both an increase and a decrease in R 2 ∗ in various regions of the brain, depending on migraine attack status. In deep gray matter structures and in the cortex, these R 2 ∗ alterations are highly likely to be driven by iron changes. These dynamic alterations in R 2 ∗ differed not only between regions but also between hemispheres, indicating a shift in regional iron content. An increase in R 2 ∗ is predominantly observed in the left hemisphere, which was also the hemisphere where the pain was located. This could indicate that in these brain regions a higher demand for energy metabolism, and thus a higher need for iron, is present during a migraine attack ( 29 ). The accompanying decrease of R 2 ∗ in contralateral brain regions could indicate a shift of iron between the hemispheres. However, after the migraine attack, iron content in deep gray matter and white matter reached the same level as on migraine-free days. Our results indicating a short-term alteration in brain iron levels during a migraine attack do not contradict an overall abnormal long-term iron accumulation in patients with migraine. In white matter, the effect of iron on R 2 ∗ is overshadowed by the effect of diamagnetic myelin ( 15 , 16 ) and by orientation effects of anisotropic tissue components. In white matter, there are two main sources of orientation dependency of R 2 ∗ : (I) the orientation of myelinated white matter fibers with respect to B 0 ( 17 , 18 ) and (II) the anisotropic part of the vasculature ( 30 ). Blood vessels have an isotropic component (the capillary bed) and an anisotropic component (larger vessels). There is evidence that these larger blood vessels in white matter run in parallel with the main white matter fiber tracts and therefore contribute to the orientation dependent MR signal ( 31–34 ). Therefore, we acquired orientation dependent R 2 ∗ to differentiate between the isotropic effects of iron and the anisotropy effects of myelin and vascular components contributing to R 2 ∗ in cerebral white matter.
By separating isotropic and anisotropic contributions to R 2 ∗ , it was possible to identify an increase in white matter iron content on days with migraine. To the best of our knowledge, this study is the first of its kind to report alterations in white matter iron content in patients with migraine. Our results could potentially indicate a shift in iron content from deep gray matter structures to white matter during an acute migraine attack. Besides changes in iron content, migraine is also associated with changes in vascular structures, including veins and perivascular spaces (PVS) ( 22 ). Breiding et al. ( 35 ) observed a higher total cerebral vein volume in patients with migraine compared to healthy controls. Furthermore, in patients with unilateral migraine, the veins were more prominent in one hemisphere ( 35 ). This is in line with our observation of more prominent venous structures in the left hemisphere of our patient. We observed that R 2 ∗ anisotropy is approximately 10% higher in the left hemisphere compared to the right hemisphere on migraine-free days, indicating a higher venous and/or PVS volume. On days with migraine, the difference in R 2 ∗ anisotropy between the left and right hemisphere increased up to approximately 30%. This indicates an involvement of vascular mechanisms during a migraine attack in addition to a slight increase in iron content in cerebral white matter. The decrease in R 2 ∗ anisotropy during migraine could be explained by a lower venous volume, by altered perfusion, which is commonly observed in migraine ( 36 , 37 ), or by a reduction of the PVS volume. A closure of the PVS causing an impaired glymphatic flow during migraine was observed in a previous study ( 38 ). Studies investigating PVS in patients with migraine showed inconclusive results, with both increased and decreased PVS volume observed compared to healthy controls ( 38–41 ). Overall, the majority of the studies indicated an increase in PVS volume. This would be in line with the observation of enlarged PVS in our patient. However, an overall higher PVS volume in patients with migraine does not contradict a decrease in PVS volume during an acute migraine attack. Altered mean arterial blood pressure can induce cerebral blood volume shifts, detectable through quantitative susceptibility mapping (QSM) ( 42 ). Changes in blood volume or deoxygenated hemoglobin concentration during acute migraine attacks may decrease R 2 ∗ anisotropy, recovering post-attack. Besides vascular factors, R 2 ∗ anisotropy reflects axonal and myelin-related alterations. Granziera et al. ( 13 ) noted thalamic changes in migraine patients, including myelin and cellularity differences, along with iron content changes. Palm-Meinders et al. ( 6 ) observed elevated R 2 values in migraine patients initially, which decreased after 9 years. Thus, migraine-related iron changes might be obscured by age- or disease-related tissue changes. Numerous studies used DTI to assess white matter alterations, showing heterogeneous results in migraine ( 43 ). Alterations of tissue microstructure in patients with migraine will clearly affect R 2 ∗ anisotropy in general. However, we observed that after migraine attacks, the altered R 2 ∗ anisotropy values reach the same level as on migraine-free days. These short-term alterations in R 2 ∗ anisotropy are more likely to be explained by vascular effects, rather than by microstructural tissue changes linked to myelin, as the dominant source of R 2 ∗ anisotropy.
To investigate whether there are brain changes on migraine-free days right after the migraine attack, we repeated the entire analysis excluding these days. We did not observe any change in our overall results and present a summary of all results without the days right after the migraine attack as Supplementary material. Although our results are solely based on a single patient, this study is the first of its kind to acquire MRI on multiple consecutive days, comprising migraine-free days and days with acute migraine attacks. Acquiring MRI of multiple patients with migraine on 21 or more days, including days with a migraine attack, would be very challenging. It is worth noting that it is extremely difficult to recruit patients who are willing to undergo an MRI examination during an acute migraine attack, especially if refraining from any preventive or acute medication is required for study purposes. Furthermore, results of multiple subjects could not directly be averaged, as multiple factors, such as unilateral or bilateral location of the pain, duration and frequency of the migraine attacks, age and sex, will influence the observed R 2 ∗ alterations. However, further multi-parametric MRI studies with a larger number of patients, ideally with image acquisition during migraine attacks, will be needed to further identify the cellular mechanisms contributing to isotropic and anisotropic R 2 ∗ changes in migraine and to obtain reproducible results. Furthermore, future studies should also consider the severity of the disease, e.g., attack frequency and duration. Moreover, the exact onset and end of the migraine attack cannot always be assessed with high accuracy due to clinical considerations such as sleep terminating the migraine attack. In our study, we observed a relatively short time period of 21 days in comparison to years of disease duration, thus no conclusions about long-term alterations in iron content or vascular structures can be made from our results. Although our results indicate that changes in iron content related to migraine attacks are reversible, a long-term alteration in iron content can still accompany short-term dynamic fluctuations in iron content. It is worth noting that plotting R 2 ∗ for each day in combination with the boxplots is important, as every day was assigned to only one condition (i.e., migraine-free or migraine), yet, as shown in Figure 2A , altered R 2 ∗ values can sometimes be observed already on days prior to or after the migraine attack. This can lead to a bias in the boxplots and statistical analysis. In addition, potential outliers, as shown in Figure 2A , can be identified by combining day plots with boxplots. While R 2 ∗ is highly sensitive to iron content, it cannot be directly converted to an absolute iron concentration. Although studies have shown a strong correlation between R 2 ∗ values and iron concentration in brain tissue, particularly in deep gray matter structures ( 44 ), the exact relationship is not perfectly linear and can vary across brain regions. An approximation of the altered iron content can be made based on a post-mortem study, in which R 2 ∗ was validated as a reliable measure of iron content using mass spectrometry ( 10 ). Langkammer et al. ( 10 ) reported a slope of 0.27 1/s per mg/kg iron for gray matter. Based on this study, the R 2 ∗ change of around 5% in the caudate would correspond to a change in iron content of approximately 3.7 mg/kg wet tissue.
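As a quick illustration of that approximation (using only the slope and baseline value quoted above), the arithmetic can be written in a few lines; the function name and rounding are incidental.

```python
# Approximate conversion of a relative R2* change into an iron change using
# the post-mortem gray-matter calibration of ~0.27 (1/s) per mg/kg wet tissue.
SLOPE = 0.27  # 1/s per mg/kg iron (gray matter)

def delta_iron(baseline_r2s, relative_change):
    """Approximate iron change (mg/kg wet tissue) for a relative R2* change."""
    delta_r2s = baseline_r2s * relative_change
    return delta_r2s / SLOPE

# Caudate example from the text: ~5% change at a baseline of roughly 20.6 1/s.
print(round(delta_iron(20.6, 0.05), 1))  # ~3.8 mg/kg, close to the ~3.7 mg/kg quoted above
```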
We conclude that migraine attacks lead to short-term changes in R 2 ∗ in specific brain regions, which further differ between the left and right hemispheres. Our study identified both specific brain regions with increased and brain regions with decreased iron content during a migraine attack. This suggests that different metabolic processes have an increased need for iron, which could potentially be resolved by shifting iron between brain structures. Furthermore, by separating isotropic and anisotropic R 2 ∗ components, we were able to distinguish between iron and non-iron related tissue changes in the cerebral white matter. Our observed decrease in R 2 ∗ anisotropy during a migraine attack suggests the involvement of vascular components, such as a decrease in PVS volume, a change in venous volume, or a blood pressure-induced shift in magnetic susceptibility during an acute migraine attack. However, the observed R 2 ∗ changes fully return to baseline after the migraine attack has resolved. This supports the involvement of vascular structures rather than changes in axonal fibre architecture and myelin content as the dominant source of R 2 ∗ anisotropy. In conclusion, the time-dependent mapping of R 2 ∗ during a migraine cycle opens new possibilities to study short-term changes in the brain during a migraine attack, which appear to be partially different from long-term tissue changes in migraineurs. Taken together, our results indicate dynamic alterations in iron metabolism and vascular processes during an acute migraine attack. | Study | biomedical | en | 0.999999 |
PMC11697586 | Diabetes mellitus is a metabolic disease characterized by hyperglycaemia that affects a large population, is highly dangerous and is difficult to treat ( 1 ). Diabetic peripheral neuropathy (DPN) is one of the most common complications in people with diabetes. It is characterized by numbness, pain, burning or other abnormal sensations in the limbs. The WHO predicts that by 2030 there will be approximately 360 million diabetic patients worldwide, and more than 50% of them may develop DPN symptoms; most diabetic amputations and disabilities are caused by DPN ( 2 , 3 ). Once DPN occurs, the quality of life of affected patients is seriously reduced, and the disease can even lead to death. Studies have shown that the relative likelihood of death within 5 years of lower limb amputation due to diabetic foot ulcers is greater than for diseases such as prostate and breast cancer ( 4 ). In addition, amputation imposes a significant financial burden on both the healthcare system and society. In the United States, the total annual cost of care for symptomatic DPN (pain) and its complications (foot ulcers and lower limb amputations) is estimated to be between 460 million and 1,370 million US dollars. As much as 27% of the direct medical costs associated with diabetes are attributed to DPN ( 5 ). Diabetic peripheral neuropathy is usually irreversible, and the medical community does not have a consistent and effective treatment plan to manage the disease. Treatment options at this stage are generally used to prevent disease progression and complications. Most treatment options are symptomatic, such as nerve nutrition and improvement of neural microcirculation ( 6 ). Commonly used drugs include the antioxidant alpha-lipoic acid; neurotrophic agents such as vitamins B1 and B12, gangliosides and nerve growth factor; and drugs to improve neural microcirculation, such as prostaglandin E1 and scopolamine, among others ( 7 ). Drugs are mainly used to relieve neuropathic pain and sensory abnormalities, but they cannot solve the problem of decreased nerve function. Specific therapeutic measures are still lacking, there are no consistently effective treatments or medications, and most of the medications have certain side effects that patients must be able to tolerate ( 8 , 9 ). In addition, there are individual differences between patients, and Western medical treatment is associated with more adverse effects and poor long-term results ( 10 ). Acupuncture is a characteristic external treatment method of traditional Chinese medicine, which exerts its therapeutic effect mainly through various forms of physical stimulation of acupoints. Nowadays, this therapy has been widely used in the treatment of diabetic complications, including diabetic foot, diabetic bladder and diabetic peripheral neuropathy. Clinical study reports indicate that acupuncture can significantly reduce the symptoms of numbness, pain and superficial sensory impairment of the extremities in patients with DPN, with demonstrable efficacy and few side effects ( 11 ). In addition, acupuncture has the advantages of a multi-target mode of action and bidirectional regulation ( 12 ).
It is currently believed that the pathological mechanism of DPN is closely related to inflammation, oxidative stress, endoplasmic reticulum stress, microvascular lesions, neurotrophic disorders and immune dysfunction ( 13 ), and its pathological changes are peripheral nerve demyelination, axonal degeneration ( 14 ), or both. Acupuncture can modulate inflammatory reactions, oxidative stress and endoplasmic reticulum stress, increase peripheral nerve blood flow, ameliorate microangiopathy, increase neurotrophic factor content, improve peripheral nerve electrophysiological function, and promote axonal and myelin repair. The key to the therapeutic effects of acupuncture in DPN may be found in the above mechanisms ( 15 ). A recent systematic review of acupuncture for the treatment of diabetic peripheral neuropathy concluded that acupuncture can effectively improve the neurological and clinical symptoms of diabetic peripheral neuropathy, but further work is needed to develop a uniform standard for the treatment of diabetic peripheral neuropathy with acupuncture ( 16 ). Although we also use acupuncture for the treatment of diabetic peripheral neuropathy in the clinic, there is a lack of systematic data on the efficacy of acupuncture for the treatment of diabetic peripheral neuropathy. Therefore, a meta-analysis was performed in this paper to summarize the randomized controlled trials of acupuncture for diabetic peripheral neuropathy published by previous investigators. The search covered major Chinese and English databases, including PubMed, Web of Science, Cochrane Library, AMED, CINAHL, the China National Knowledge Infrastructure (CNKI), the Wanfang Database and the VIP (Weipu) Database, supplemented by the references of included trials, clinical trial and research registry platforms, expert consultation and gray literature. All publications in Chinese and English from database inception to 30 December 2023 were searched, regardless of country or article type. The key search terms were composed of the following groups of terms: "acupuncture," "electroacupuncture," "fire needle," "plum blossom needle," "acupoint," "auricular acupuncture" and "peripheral nervous system diseases," "diabetic peripheral neuropathy," "DPN" and "randomized controlled trial," "RCT," "random," "blind," "control." Included studies had to meet the following eligibility criteria. English- and Chinese-language RCTs; case studies, case series, qualitative studies and uncontrolled studies were excluded. Trials that did not report detailed outcomes were also excluded, and there were no restrictions on time of publication or geographical location of the study. Patients had to meet established or self-developed Chinese or Western diagnostic criteria for DPN. There were no restrictions on baseline information such as sex, age, race and region of patients, but the groups had to be comparable. The experimental group received acupuncture therapy, acupuncture therapy plus drug therapy, or acupuncture therapy plus usual care. The control group received drug therapy (details of the interventions are described in Table 1 ), sham acupuncture, or usual care (maintaining fasting blood glucose within the normal range without using any other therapeutic approach). The outcomes were treatment efficacy; sensory nerve conduction velocity (SNCV) of the median, common peroneal and tibial nerves; motor nerve conduction velocity (MNCV) of the median, common peroneal and tibial nerves; the visual analogue scale (VAS) for pain; and symptom scores.
The following were excluded: duplicate publications; unavailability of original literature; literature with incomplete or questionable data; non-RCT literature such as case studies, case series, qualitative studies and uncontrolled trials; comorbidities with other causes of peripheral neuropathy; and patients in the study group who had received other TCM drugs or therapies during the course of their disease. The literature was screened independently by two researchers in accordance with the inclusion and exclusion criteria and the literature search strategy, and the retrieved literature was imported into Endnote20 software for duplicate checking and removal. The title and abstract were read first to exclude literature that clearly did not meet the inclusion criteria. The remaining literature was read in full and screened again to identify literature that met the inclusion criteria. In the case of conflicting opinions, the third researcher was asked to participate in the discussion for assessment. The extraction of data was conducted independently by two investigators, encompassing fundamental details pertaining to the study, such as title, author, publication date, and journal, alongside essential study characteristics, including mean age, gender, sample size, subgroups, measures, treatment duration, follow-up duration, and outcome measures. The quality of the included trials was assessed by the researchers using the risk of bias assessment tool recommended in the Cochrane Handbook for Systematic Evaluators, and the results of the assessment included the following six items: whether the random allocation method was appropriate, whether allocation concealment was correctly applied, whether blinding was correctly applied, whether there was no selective reporting of results, whether outcome data were complete, whether there was any other risk of bias. Meta-analysis was performed with the use of RevMan 5.3 software. For dichotomous variables (e.g., clinical effectiveness), the relative risk (RR) was used as the effect size, and for continuous variables (e.g., sensory nerve conduction velocity SNCV), the mean difference (MD) was used as the effect size, and the 2 effect sizes were expressed by 95% CI. The degree of heterogeneity was determined by the I2 and p values; when the heterogeneity between studies was small (I2 ≤ 50%, p > 0.05), a fixed-effects model was chosen; when heterogeneity between studies was present (I2 ≥ 50%, p < 0.05), a random-effects model was chosen. A p value less than 0.05 signifies a statistically significant difference. Sensitivity analysis was performed to verify the robustness of the results of the heterogeneity tests by excluding studies case-by-case. In addition, the Baujat plot was used to further characterize the contribution of each study to overall heterogeneity and identify high heterogeneity studies. If more than 10 studies were available, publication bias was assessed using funnel plots. In addition, Egger test or Peters test was used to further formally test for potential publication bias. A total of 1,423 studies were identified in eight databases. A total of 446 articles were removed due to duplication. A total of 884 studies were screened by reading titles and abstracts, leaving 93 articles. After reading the full text of 93 articles, 73 articles were excluded for the reasons described in Figure 1 . Finally, 20 studies ( 17–36 ) met the inclusion criteria and were meta-analyzed . 
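To make the pooling and bias checks described in the statistical analysis above concrete, the sketch below shows a generic inverse-variance meta-analysis of log relative risks (with Cochran's Q, I², and a DerSimonian–Laird random-effects fallback) and a basic Egger regression. It illustrates the general methods only — it is not the RevMan 5.3 or Peters-test computation actually used — and the example study data are invented.

```python
import numpy as np
from scipy import stats

def pool_log_rr(log_rr, se):
    """Inverse-variance pooling of log relative risks with Cochran's Q, I^2
    and a DerSimonian-Laird random-effects fallback when I^2 > 50%."""
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2
    fixed = np.sum(w * log_rr) / np.sum(w)

    q = np.sum(w * (log_rr - fixed) ** 2)                  # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # I^2 in percent

    # DerSimonian-Laird between-study variance tau^2 and random-effects weights.
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_r = 1.0 / (se**2 + tau2)
    random_est = np.sum(w_r * log_rr) / np.sum(w_r)

    model = "random" if i2 > 50 else "fixed"
    est = random_est if model == "random" else fixed
    se_pool = np.sqrt(1.0 / np.sum(w_r if model == "random" else w))
    ci = np.exp([est - 1.96 * se_pool, est + 1.96 * se_pool])
    return {"RR": float(np.exp(est)), "95% CI": ci, "I2": round(i2, 1), "model": model}

def egger_test(effect, se):
    """Egger's regression for funnel-plot asymmetry: regress effect/SE on 1/SE;
    an intercept far from zero suggests small-study or publication bias."""
    effect, se = np.asarray(effect, float), np.asarray(se, float)
    res = stats.linregress(1.0 / se, effect / se)
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(effect) - 2)
    return res.intercept, p

# Hypothetical per-study log RRs and standard errors (not data from this review).
log_rr = [0.18, 0.25, 0.10, 0.22]
se = [0.08, 0.10, 0.09, 0.12]
print(pool_log_rr(log_rr, se))
print(egger_test(log_rr, se))
```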
A total of 20 papers were included in this review, all of which were published, including 6 in English ( 17–20 , 26 , 36 ) and 14 in Chinese ( 21–25 , 27–35 ), involving a total of 1,455 patients. All studies reported comparable baseline data between groups. All trials included adults, and the mean age of participants ranged from 45 to 81 years, with all being middle-aged to older adults. Eleven trials compared AT with medication ( 19 , 21 , 23–27 , 31–34 ), and four trials used AT + medication as an intervention and medication alone as a control ( 22 , 26 , 30 , 35 ), with one trial having two intervention groups: AT and AT plus medication ( 26 ). Three trials compared AT with sham AT ( 18 , 20 , 36 ). Three trials compared AT + conventional treatment with conventional treatment ( 17 , 28 , 29 ). The results of the risk of bias assessment are displayed in Figure 2 . According to the Cochrane Risk of Bias Assessment Tool, 20 studies mentioned randomization and all of the literature used the random grouping method, of which 11 articles used the random number table method ( 20 , 22 , 23 , 25 , 26 , 28 , 30 , 32 , 34–36 ), three used the method of using computer random grouping ( 17 , 18 , 29 ), and the rest of the articles did not mention the specific random grouping method; one study ( 18 ) mentioned allocation concealment by using sealed opaque envelopes, one study ( 17 ) had randomization performed by a single research nurse and notified the study doctors and patients of the allocation results by telephone, and the rest did not describe the method of allocation concealment; 2 studies ( 18 , 20 ) blinded patients using a sham-needle technique, the remaining studies referred to in-process blinding during the trial, and blinding of patients or clinicians was considered to be at high risk due to the variable differences between the intervention and control groups; 1 study ( 17 ) referred to blinding of data analysts, the remaining studies did not; 2 studies ( 17 , 18 ) reported off-case results, the remaining studies did not report off-case results, with good data completeness; no selectivity was reported in all studies, with a low risk of other biases. Eleven RCTs were included ( 19 , 21 , 23–27 , 31–34 ), and the heterogeneity test showed that the heterogeneity between studies was small (I 2 = 7%, p = 0.37), and a fixed-effects model was adopted. The results of the meta-analysis showed that the overall efficacy rate of patients treated with acupuncture was significantly better than that of patients treated with drugs, and the difference was statistically significant . Four RCTs were included ( 22 , 26 , 30 , 35 ). Heterogeneity analysis revealed substantial heterogeneity among the studies (I 2 = 77%, p = 0.004), necessitating the application of a random-effects model. The meta-analysis demonstrated that the combined use of acupuncture and drug therapy yielded a higher overall efficacy rate compared to drug therapy alone, with a statistically significant difference . Two RCTs were included ( 28 , 29 ), with a high level of heterogeneity observed (I 2 = 88%, p = 0.003), prompting the adoption of a random-effects model. The meta-analysis revealed no significant difference in the overall efficacy rate between patients treated with AT plus usual care and those receiving usual care alone . Four studies ( 21 , 23 , 26 , 34 ) were analyzed, demonstrating low heterogeneity (I 2 = 40%, p = 0.17), thus supporting the use of a fixed-effects model. 
The meta-analysis indicated that patients in the acupuncture group experienced a statistically significant enhancement in median nerve sensory nerve conduction velocity compared to the drug group, with a mean difference of 2.61. Two studies were included ( 26 , 35 ), and the heterogeneity analysis indicated minimal inter-study variability (I 2 = 49%, p = 0.16), leading to the adoption of a fixed-effects model. The analysis demonstrated that the protocol combining acupuncture and drug therapy was superior to drug therapy alone in enhancing median nerve sensory nerve conduction velocity in patients with diabetic peripheral neuropathy, with the difference being statistically significant. In one study ( 20 ), the acupuncture treatment group protocol was found to outperform the sham needle group in improving sensory nerve conduction velocity of the median nerve in patients with diabetic peripheral neuropathy (DPN). This superiority was accompanied by a statistically significant increase in median nerve conduction velocity, indicating a notable difference in treatment outcomes. Four studies ( 19 , 23 , 26 , 34 ) were included (I 2 = 68%, p = 0.03), and a fixed-effects model was used. The comparison revealed that the acupuncture treatment group outperformed the drug treatment group in improving motor nerve conduction velocity of the median nerve in patients with diabetic peripheral neuropathy, with the observed difference being statistically significant. Three studies ( 22 , 26 , 35 ) were analyzed, revealing negligible between-study heterogeneity (I 2 = 0%, p = 0.53), which supported the utilization of a fixed-effects model. The meta-analysis outcomes demonstrated that the acupuncture plus medication group was superior to the medication group in improving motor nerve conduction velocity of the median nerve in diabetic peripheral neuropathy patients, with a statistically significant difference noted. One study was included ( 20 ), and the meta-analysis outcomes indicate that the acupuncture group protocol does not exhibit a significant difference compared to the sham needle group in enhancing the motor nerve conduction velocity of the median nerve in patients suffering from DPN. Four studies ( 23 , 26 , 31 , 34 ) were incorporated (I 2 = 73%, p = 0.01), and a fixed-effects model was used. The meta-analysis outcomes indicated that the acupuncture treatment group outperformed the drug treatment group in enhancing the sensory nerve conduction velocity of the common peroneal nerve in diabetic peripheral neuropathy (DPN) patients, with a statistically significant difference observed. Six studies ( 19 , 21 , 23 , 24 , 26 , 34 ) were included (I 2 = 0%, p = 0.60), and a random-effects model was adopted. The comparison indicated that the acupuncture treatment group outperformed the drug treatment group in enhancing the motor nerve conduction velocity of the common peroneal nerve in diabetic peripheral neuropathy (DPN) patients, with a statistically significant difference. Two studies ( 22 , 26 ) were included, with substantial heterogeneity found between them, prompting the use of a random-effects model.
The meta-analysis did not identify a statistically significant difference between the protocols of the acupuncture plus medication group and the medication group in terms of improvement in motor nerve conduction velocity of the common peroneal nerve in patients with diabetic peripheral neuropathy (DPN). Three studies ( 24 , 32 , 33 ) were examined, and the heterogeneity test indicated a negligible degree of between-study heterogeneity (I 2 = 0%, p = 0.69), prompting the utilization of a fixed-effects model. The meta-analysis findings revealed that the acupuncture treatment group was superior to the drug treatment group in enhancing the sensory nerve conduction velocity of the common peroneal nerve in DPN patients, with a statistically significant difference detected. Three studies were included ( 21 , 32 , 33 ), and the heterogeneity test showed that there was a large heterogeneity between studies (I 2 = 52%, p = 0.15), so a random-effects model was adopted. All three studies showed that, compared with medication, the acupuncture group had a statistically significant effect on the improvement of motor nerve conduction velocity of the tibial nerve in DPN patients. Two studies were included ( 30 , 35 ), and the heterogeneity test showed a large heterogeneity between studies (I 2 = 84%, p = 0.01), so a random-effects model was adopted. Meta-analysis results showed that the acupuncture plus drug group protocol was superior to the drug group in improving VAS in DPN patients, and the difference was statistically significant. One study ( 18 ) showed no statistically significant difference between the acupuncture group protocol and the sham-needle group in improving VAS in DPN patients. Only one study ( 17 ) met the inclusion criteria, and it was noted that the acupuncture plus conventional group protocol exhibited a statistically significant improvement in VAS for DPN patients compared to the conventional group. Four studies ( 23 , 27 , 31 , 34 ) were selected, showing significant heterogeneity, which led to the use of a random-effects model. The meta-analysis results showed that the acupuncture group was better than the drug group in improving the symptom score of DPN patients. Only one study ( 28 ) was deemed eligible for inclusion, and the comparative analysis outcomes revealed that the acupuncture plus conventional group protocol was more effective than the conventional group in improving VAS in DPN patients, with a statistically significant difference observed. A total of 47 adverse reactions were reported in the four included studies ( 20 , 30 , 33 , 36 ), including 2 cases of fainting during needling, 18 cases of small hematomas, 1 case of localized swelling, 6 cases of pain, 1 case of itching, 7 cases of transient paresthesia, 1 case of cramps, 4 cases of transient worsening of DPN-related symptoms, 2 cases of mild dizziness, 1 case of chest pain, and 6 cases of tiredness. The total clinical effective rate is an important indicator of clinical efficacy. Therefore, a sensitivity analysis was conducted on the effective rate results for the acupuncture and drug groups, which included the largest amount of data. By excluding studies individually, there was no significant change in the pooled effect size of the effective rate. From the results of the Baujat plot, we found that three studies ( 21 , 27 , 31 ) contributed most to the heterogeneity. As regards the high heterogeneity found in the comparison of acupuncture plus drug vs.
drug on the effective rate (I 2 = 77%) and acupuncture vs. drug on MNCV of the tibial nerve (I 2 = 93%), we performed sensitivity analyses. By excluding studies individually, there was no significant change in the pooled effect size of the effective rate, but a marked decrease in heterogeneity was observed when one study ( 22 ) was excluded. Moreover, from the results of the Baujat plot, we found that two studies ( 22 , 30 ) unduly influenced the heterogeneity as well as the pooled effect of the MNCV of the tibial nerve. In the sensitivity analysis of the tibial nerve, the results showed a marked decrease in heterogeneity when one study ( 32 ) was excluded, and from the results of the Baujat plot two studies ( 32 , 33 ) contributed disproportionately to the heterogeneity. We drew the funnel plot and used Peters' test for the total effective rate outcome, which indicated no publication bias. However, publication bias in the effective rate outcome may still exist, given the asymmetrical funnel distribution and the result of Egger's test. The trim-and-fill method showed that it was necessary to fill four potentially unpublished studies into the funnel plot. The meta-analysis was re-performed for all studies; the heterogeneity remained low, a fixed-effects model was used, and the combined results of the effect indicators did not change significantly, indicating that the results were still statistically significant, with no reversal, so the combined results were robust. Our review aims to evaluate and refine the evidence from recent randomized controlled trials on acupuncture for the treatment of diabetic peripheral neuropathy. When comparing acupuncture with medication, conventional therapy, and sham acupuncture, our findings suggest that acupuncture is more effective in treating DPN and in improving nerve conduction velocity. Additionally, the combination of acupuncture and medication demonstrates a more significant improvement in nerve conduction velocity compared to medication alone. The combination of acupuncture and usual care improves DPN symptoms more effectively than usual care alone. In this meta-analysis, all included trials used acupuncture as a treatment option, and the clinical forms of acupuncture used for the treatment of DPN are varied, including manual acupuncture, electroacupuncture, acupoint injection, and warm needling. It can be seen that physical stimulation of acupoints is a potentially safe and effective treatment for diabetic peripheral neuropathy, and acupoint selection is an important basis for the efficacy of treatment; BL18, BL20, BL23, BL25, BL60, GB30, GB34, SP6 and ST36 were the most commonly used acupoints in the literature included in this study. Acupressure is an external Chinese medicine treatment, and both acupressure and acupuncture use physical stimulation of acupoints to achieve therapeutic effects. Therefore, these two therapies have similar mechanisms of action and clinical efficacy. According to a previous review published by Fu et al. ( 37 ), the combination of herbal foot bath and acupressure therapy effectively improved sensory nerve conduction velocity (SNCV), motor nerve conduction velocity (MNCV), the overall efficacy rate and neuropathy syndrome scores compared with various types of control groups, such as Western medicine, oral Chinese medicine, other Western symptomatic treatments and blank control, and there were no case reports of adverse effects.
The results of that review are consistent with the findings of the present study, suggesting that herbal foot baths combined with acupressure may be a safe and effective treatment for DPN. In contrast to previous systematic reviews, we identified 20 new randomized controlled trials ( 17–36 ) and successfully assessed the treatment evidence. Acupuncture, as a special therapy of Chinese medicine, has a wide range of indications, notable efficacy, good safety and few side effects, and this review identifies the advantages and possibilities of using acupuncture in the treatment of DPN. In the early stages of DPN, the main manifestation is abnormal sensation in the limbs in a stocking-and-glove distribution, accompanied by numbness, pins and needles, burning, a crawling (formication) sensation, coldness, or a feeling of walking on cotton ( 38 ). This is followed by pain in the limbs, which may be dull, tingling or burning, and is worse at night and in cold weather. In advanced stages, clinical manifestations of motor nerve involvement occur, such as hypotonia and muscle weakness progressing to paralysis. Therefore, assessments such as electromyography (EMG) with measurement of sensory nerve conduction velocity (SCV) and motor nerve conduction velocity (MCV) ( 39 ), the Michigan Diabetic Neuropathy Score (MDNS) ( 39 ) and the Toronto Clinical Scoring System (TCSS) ( 40 ) are important indicators of the patient's condition. In a comparison of acupuncture with pharmacological therapies, the evidence showed that acupuncture had a significant effect on increasing the overall effective rate and was effective in improving sensory and motor nerve conduction velocities in the tibial and median nerves. Acupuncture was more effective than oral medication in relieving pain symptoms and improving quality of life in patients with painful diabetic peripheral neuropathy. Acupuncture showed better results than neurotrophic agents in improving circulation and significantly reducing clinical symptoms in patients with diabetic peripheral neuropathy of the lower extremities. The combination of acupuncture and medication was more beneficial than medication alone in improving nerve conduction velocity. The pathogenesis of DPN is highly complex. Current research suggests that it is primarily caused by metabolic disorders resulting from hyperglycemia, dyslipidemia, and insulin resistance. These disorders include abnormal glycolytic pathways ( 41 ), increased advanced glycation end products (AGEs) ( 42 ), and alterations in protein kinase C signaling pathways. These metabolic disturbances further enhance oxidative stress ( 43 ) and inflammatory responses ( 44 ), leading to endoplasmic reticulum stress, mitochondrial dysfunction, DNA damage, and inflammation, collectively contributing to the onset of DPN. There is a certain connection between the pathogenesis of DPN and the therapeutic mechanism of acupuncture. Acupuncture may exert its therapeutic effects by regulating the pathophysiological processes of DPN. Studies have shown that the mechanisms by which acupuncture treats DPN may include the regulation of neurotrophic factor expression, such as nerve growth factor (NGF) and calcitonin gene-related peptide ( 45 ). Additionally, acupuncture improves glycolipid metabolism, for example by reducing the accumulation of advanced glycation end products (AGEs) ( 46 ), and inhibits the secretion of inflammatory mediators, such as interleukin-6 (IL-6), interleukin-8 (IL-8), and tumor necrosis factor-α (TNF-α) ( 47 ).
However, some of the analyses of acupuncture for DPN did not show a significant combined effect, partly due to the limited number of trials included. Therefore, the role of acupuncture in the treatment of DPN needs to be further investigated in the future. We also observed that more acupuncture sessions may introduce some heterogeneity, possibly due to increased reporting bias and poor compliance with long-term protocols. Screening scores such as the TCSS and MDNS were significantly lower after acupuncture treatment than before, and the pre–post changes were greater than those seen with drug therapy; traditional Chinese medicine symptom scores and overall symptom scores also decreased significantly, increasing the overall effectiveness of treatment, and acupuncture in combination with drugs or conventional therapy was more favorable than drugs alone. Furthermore, the incidence of adverse effects was significantly lower with acupuncture compared to drug therapy. Based on our assessment, there is a high risk of bias in most of the included studies, which could lead to false positives, especially with regard to blinding of participants, personnel and outcome assessors. Blinding is difficult due to the specific nature of acupuncture therapy: needles need to penetrate the skin and remain in place for some time, whereas medications are taken orally as prescribed, so patients can easily distinguish whether they are receiving needles or medications, and practitioners need to physically manipulate the patient's skin and judge the location and depth of the needles when performing acupuncture. Currently, the placebo needle methods commonly used in the design of acupuncture clinical trials include superficial skin needling at acupoints or non-acupoints, deep needling at non-acupoints, shallow needling at non-acupoints, blunt-tipped retractable needles placed on the skin surface over specific acupoints, simulated surface electrical stimulation and laser needling, but they all have certain limitations ( 48 ). There is still no accepted, mature method of placebo needling that can simulate the sensation of needle puncture without producing a therapeutic effect, which makes it difficult to achieve strictly blinded controls in the experimental design of acupuncture clinical trials. In addition, the clinical trials of acupuncture that have been conducted to date have not been able to blind the practitioners ( 49 ). Of the randomized controlled trials included in this review, only three trials ( 18 , 20 , 36 ) compared acupuncture with sham acupuncture. Two trials found that acupuncture was more effective than sham acupuncture in improving nerve conduction velocity. Another trial found that acupuncture was well tolerated and had no significant side effects. However, because of the differences in the outcomes between the trials, they could not be analyzed together. Therefore, the placebo effect of acupuncture needs to be investigated further. There are also some limitations to this review. First, many of the included trials had an unclear risk of bias, and most of the literature did not describe the specific method of random allocation or whether allocation was concealed. Second, despite the large number of randomized controlled trials, the outcome measures were very heterogeneous, which prevented large-scale meta-analyses. As a result, the level of evidence was mostly low or very low. Finally, most of the literature does not mention follow-up, and long-term effectiveness is therefore difficult to assess.
In the future, studies of acupuncture for diabetic peripheral neuropathy should be more rigorously designed, focusing on randomization methods, allocation concealment, blinding, the selection of objective and comprehensive outcome indicators, and long-term follow-up, to provide high-level clinical evidence. Clinical trials with a large sample size should also be carried out to clearly demonstrate the benefits of acupuncture and to provide robust evidence for clinical decision making. In addition, further studies should be conducted to provide guidelines for the clinical use of acupuncture for diabetic peripheral neuropathy in terms of acupuncture points, duration, frequency and treatment cycles. In conclusion, acupuncture has the potential to be used as a routine treatment for diabetic peripheral neuropathy. Acupuncture has been shown to have better outcomes and fewer adverse effects than conventional Western medicine. The combination of acupuncture and pharmacological therapy is superior to pharmacological therapy alone for diabetic peripheral neuropathy. However, the level of evidence is low due to a high risk of bias and small sample sizes. To obtain high-quality and comprehensive evidence, data from more rigorous future clinical trials are needed. In addition, acupuncture has demonstrated better clinical outcomes for other complications of diabetes, such as diabetic nephropathy, diabetic foot, diabetic bladder and diabetic retinopathy, and has played an integrative role in improving the subsequent quality of life of diabetic patients. | Review | biomedical | en | 0.999995
PMC11697590 | Infectious diseases pose a long-term threat to human health and disrupt the normal social order. Infectious diseases are continuous and fast-spreading diseases that can be transmitted by an infected person to more than one person, exponentially increasing the total infected population. Common infectious diseases include swine influenza, avian influenza in birds, severe acute respiratory syndrome (SARS), coronavirus disease (COVID-19), dengue fever, and malaria. People are at risk of exposure to viruses and diseases that can affect their normal lives during daily commuting or by participating in social activities. The spread of infectious diseases has accelerated because of population growth and improved transportation systems. COVID-19 aroused extensive worldwide attention on infectious diseases during the related global pandemic. COVID-19 was rapidly dispersed internationally because of its wide distribution and difficulties with protection. Research into the impact of transportation on the spread of epidemics had increased during the SARS period ( 1 ). Some scholars observed that the contact rate was a key parameter in the study of the evolution of diseases ( 2 ). After the outbreak of COVID-19 in Wuhan, many scholars paid close attention to the pandemic ( 3 ) and conducted research using mathematical models ( 4 , 5 ). To predict the trend of a pandemic slowdown, there are articles which studied the outbreak of COVID-19 in Greece using a time series model, probability distribution, and a susceptible–infected–recovered (SIR) model ( 6 ). Some researchers noted that the COVID-19 pandemic could spread in family settings ( 7 ). Among the first scholars to study the spread of COVID-19 on buses, Edwards et al. ( 8 ) confirmed the effectiveness of surgical masks and the use of air conditioning systems to suppress the spread of the virus. Moghadas et al. ( 9 ) believed that the vast majority of COVID-19 incidences were related to a silent transmission caused by a combination of pre-symptomatic diagnosed patients and asymptomatic infected patients. The susceptible–exposed–infected–recovered (SEIR) model is suitable for the study of transmission trends because susceptible individuals do not always develop symptoms immediately after infection. Tang et al. ( 10 ) used the SEIR model to calculate and analyze data during the outbreak of the pandemic in Wuhan and explored the implementation effects of various intervention measures. Xue et al. ( 11 ) observed that the Omicron variant of COVID-19 was more infectious than the Delta variant and seasonal influenza; however, its mortality rate was lower. The high infectivity of the Omicron variant has ensured the continuation of COVID-19 infections, increasing the risk of infection among people. Prem et al. ( 12 ) used an age-structure-based SEIR model and observed that measures used to maintain a physical distance had different implementation effects in different age groups. Maintaining a certain social distance effectively reduced the incidence rate of infections in school-age children and the older adult. Transportation as a requisite for daily commuting should not only provide travel but also prevent the large-scale spread of epidemics ( 13 , 14 ). It is imperative to adopt effective epidemic prevention measures in transportation. 
There are researchers who have found that epidemic prevention measures such as city closures and travel restrictions on domestic airlines were effective ( 15 ), based on passenger volume data from Japan's public transportation network. Anderson et al. ( 5 , 16 ) affirmed the contribution of vaccine development, patient isolation, and self-protection to suppressing the spread of epidemics. Some researchers posited that the channels connecting an epidemic area to other areas should be controlled during the early stages of an epidemic ( 17 ). When an epidemic spreads, relevant departments can effectively prevent the further spread of infections such as COVID-19 through the control of transportation hubs. Liu et al. ( 18 ) observed that the spread of an epidemic could effectively be prevented by implementing traceability measures to promptly isolate infected individuals and their close contacts. Rail networks are an important aspect of an urban public transportation system; they are also a critical component in epidemic prevention measures ( 19 ). Research into the transmission of infections by asymptomatic individuals in urban rail networks has received little attention in the context of epidemic prevention measures. In this study, we included asymptomatic infected patients and explored the spread of infectious diseases and related factors within the subway system according to subway passenger flow. The SIR model ( 20 ), introduced in 1927, marked the inception of mathematical modeling within the field of epidemiology. Since that time, a variety of mathematical models built upon this foundational framework have been continuously developed and extensively discussed. During the SARS epidemic, many researchers used the SEIR model, which was developed from the SIR model, as a basis to further explore and modify mathematical models to deal with the problems at that time ( 21 , 22 ). In the COVID-19 era, the SEIR model is still a common tool for many scholars seeking ways to prevent the epidemic in the face of a more complex environment. Several studies have used SEIR models to describe the changing patterns of the global pandemic ( 23 , 24 ) or to predict the effectiveness of government anti-epidemic policies ( 25 , 26 ). In this study, taking the subway in Z city as an example, the SEIR model was further developed, and a model that can be used to simulate the spread of COVID-19 in the subway was proposed. We constructed an improved SEIR model called the SEIA (susceptible–exposed–infected–asymptomatic infected) model. The SEIA model considered asymptomatic infected patients to be virus spreaders. We studied the transmission mechanism of infectious diseases in subway systems with highly concentrated populations, based on the impact of changes in passenger flow and infection rates on the spread of infectious diseases. The key elements influencing the spread of infectious diseases in the subway were analyzed in the model calculations on the basis of the scale of exposed people. This enabled us to understand the spread of infectious diseases in a subway and further analyze and predict trends in the spread of diseases. Our research extends and complements prior theoretical research on the spread of infectious diseases in urban rail systems. Mathematical models of infectious diseases classify the population into categories according to their states and assign each category to a compartment, conventionally drawn as a square or rectangular box.
Figure 1 depicts a warehouse model in which susceptible (S) is classified as a susceptible warehouse, infected (I) is classified as an infected warehouse, etc. The state of different types of people changes in a warehouse model, so researchers can reassign people to corresponding warehouses according to their new transformed state. If a susceptible (S) individual is infected, that individual is transferred to the infected warehouse. After treatment and recovery, infected (I) individuals enter the recovered warehouse. This model is usually represented by differential equations that can be used to predict the number of infected individuals, the scale of infections, and the duration of the epidemic. Common warehouse models include SI, SIR, and SEIR. The SIR warehouse model ( 20 ) is a relatively basic infectious disease model that is suitable for the study of diseases such as smallpox and parotitis, which occur quickly but produce antibodies after recovery to ensure no immediate re-infection. The initial total population is assumed in the model without considering the migration status of the population and increases or decreases in birth and death rates. The number of infected (I) individuals increases by β × S N × I within a certain period of time, and the total number of people is N = S + I . Recovered (R) comprises people who contain antibodies in their body after rehabilitation and who will not immediately be re-infected, so this number is not included in the total number of people. Susceptible (S) individuals are transformed into infected (I) individuals with a probability of β after contact with infected (I) individuals in the SIR model. Assuming that the recovery rate of infected (I) individuals from the state of illness to the state of recovery is γ, then infected (I) individuals are cured with a probability of γ after a period of treatment to become a recovered (R) individual. The specific formula for the SIR model is as follows: During the transmission process, infected (I) individuals have the ability to transmit disease after being infected and can spread to R 0 individuals on average during the period of disease ( R 0 = β / γ in the absence of any intervention measures). When R 0 > 1, the number of infected (I) individuals monotonically increases toward the highest value; when R 0 < 1, the number of infected (I) individuals monotonically decreases, leading to the final elimination of the disease. Certain diseases such as COVID-19 have an incubation period. After a susceptible (S) individual encounters an infected (I) individual, the susceptible person is not immediately infected with the disease; a period of incubation is required to develop the disease. This group is known as exposed (E). Figure 2 presents the state transition diagram of the SEIR model. Exposed (E) individuals transform into infected (I) individuals based on infection rate of σ. The infection rate σ is usually the inverse of the average incubation period. The differential equation of the SEIR model is represented as follows: Susceptible (S) individuals are transformed into exposed (E) individuals with a probability of β after contact with infected (I) individuals in the SEIR model. There is an infection rate of σ in a population of exposed (E) individuals that causes exposed (E) individuals to be infected with the disease. Thus, infected exposed (E) individuals move from the exposed warehouse to the infected warehouse. 
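The SIR and SEIR equations referred to above (the transition rates β × S/N × I, σE, and γI) are not reproduced in the text, but the dynamics they describe can be illustrated with a short numerical sketch. The code below is a minimal illustration rather than the authors' implementation: the Euler time step, the helper name seir_step, and all parameter values are assumptions introduced here only for demonstration.

```python
def seir_step(S, E, I, R, N, beta, sigma, gamma, dt=1.0):
    # One Euler step of the standard SEIR equations, using the parameter
    # meanings given in the text: beta = infection probability per contact,
    # sigma = 1 / mean incubation period, gamma = recovery (cure) rate.
    new_exposed = beta * S / N * I * dt      # S -> E
    new_infected = sigma * E * dt            # E -> I
    new_recovered = gamma * I * dt           # I -> R
    return (S - new_exposed,
            E + new_exposed - new_infected,
            I + new_infected - new_recovered,
            R + new_recovered)

# Illustrative run; all numbers are assumptions, not data from the study.
S, E, I, R, N = 990.0, 0.0, 10.0, 0.0, 1000.0
beta, sigma, gamma = 0.4, 1 / 5.0, 0.1       # R0 = beta / gamma = 4 > 1
for day in range(60):
    S, E, I, R = seir_step(S, E, I, R, N, beta, sigma, gamma)
print(round(S), round(E), round(I), round(R))
```

With β/γ > 1 the infected count first rises, matching the threshold behavior of R 0 described above; with β/γ < 1 it decays toward elimination of the disease.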
Patients in the infected warehouse are cured based on a probability of γ after treatment and become recovered (R) individuals. Consequently, they move from the infected warehouse and enter the recovered warehouse. We primarily considered infectious diseases with latent periods that result in the creation of antibodies within a short period of recovery such as H1N1 and COVID-19. The SEIR model provides a greater alignment with research requirements than infectious disease models such as SI and SIR because of the addition of the exposed (E) and recovered (R) categories of population segmentation. The basic assumptions of classical infectious disease models do not consider factors such as population migration and natural death, so they are suitable only for the study of the short-term process of virus transmission in subway carriages. We chose the SEIR model as the basic model to study the transmission mechanism of infectious diseases in subways. The traditional SEIR model involves four types of people: susceptible (S), exposed (E), infected (I), and recovered (R). Infected (I) individuals may not experience a secondary transmission in a single ride because passengers may have a limited time traveling on a subway and there is a certain incubation period for exposed (E) individuals to transform into infected (I) individuals. The transmission model considers only the process of susceptible (S) individuals becoming exposed (E) individuals after they encounter infected (I) individuals. Susceptible (S) individuals may not actually be infected after coming into contact with infected (I) individuals. In this study, this latent population was categorized as exposed (E). The asymptomatic infected (A) parameter was added to the model to construct the improved SEIR model because asymptomatic infected (A) individuals also have the ability to spread infections. Our model included the following four population types: susceptible (S), exposed (E), infected (I), and asymptomatic infected (A), abbreviated as SEIA. According to the particularities of the subway environment, u t was introduced as the number of people boarding at time t (a certain stop) and g t was used to depict the number of people alighting at time t (a certain stop). We studied the spread of infectious diseases in subways using these two parameters to simulate the increase or decrease in the number of people in the subway when a train stops. In the formula, S t is the number of susceptible (S) individuals in the subway at time t . E t is the number of exposed (E) individuals in the subway at time t . I t is the number of infected (I) individuals in the subway at time t . A t is the number of asymptomatic infected (A) individuals in the subway at time t . r is the effective number of infected (I) and asymptomatic infected (A) individuals who encounter susceptible (S) individuals (the average number of carriers). β 1 is the probability of susceptible (S) individuals being infected after contact with infected (I) individuals. β 2 is the probability of susceptible (S) individuals being infected after contact with asymptomatic infected (A) individuals. ∑ g t is the total number of people alighting at all stops. The specific formula of the subway infectious disease transmission model is as follows: The improved SEIR model added asymptomatic infected (A) individuals to the traditional model as the source of infection. This ensured its suitability for infectious diseases with a silent transmission such as the influenza of a virus and COVID-19. 
We assumed that the initial total population was N without considering the migration status of the population and increases or decreases in births and deaths. The formula for the increase in the number of exposed (E) individuals over a period of time is as follows: The total number of people in the SEIA model was calculated as N = S + E + I + A . Susceptible (S) individuals were transformed into exposed (E) individuals after contact with infected (I) individuals or asymptomatic infected (A) individuals. The number of susceptible (S) individuals changed in different ranges because trains constantly stop at stations and passengers constantly enter and leave subways. Correspondingly, the number of exposed (E) individuals increased as the number of subway stops increased. The number of exposed (E) individuals reached maximum value when the train arrived at the final stop. The model diagram of the improved SEIR model is depicted in Figure 3 . Not all susceptible (S) individuals are directly exposed to infection within the contact range of infected (I) or asymptomatic infected (A) individuals. This may expose certain susceptible (S) individuals to the range of the virus transmission. It is not guaranteed that susceptible (S) individuals exposed within the range of virus transmission will contract the virus; rather, they have an infection probability of β. Susceptible (S) individuals who encounter infected (I) individuals may be infected with a virus at an infection rate of r × β 1 . Susceptible (S) individuals who encounter asymptomatic infected (A) individuals may be infected with the virus at an infection rate of r × β 2 and can transform into exposed (E) individuals. A certain period of incubation is required before determining whether individuals have been infected with a disease and for symptoms to appear. The exposed (E) category only indicates the population that may be infected; it is not equivalent to those infected during a single ride or encounter with a subway. Certain susceptible (S) individuals transform into exposed (E) individuals after contact with infected (I) or asymptomatic infected (A) individuals with the probability of r × β 1 or r × β 2 . With each subway or stop, u t is added to the number of susceptible (S) individuals and g t is deducted from the number of susceptible (S) individuals. Infected (I) and asymptomatic infected (A) individuals decrease in proportion to g t ∑ g t with each stop. The process of virus transmission begins from the time infected (I) and asymptomatic infected (A) individuals enter the subway and ends when there are no infected (I) individuals in the subway. We used MATLAB to randomly iterate and generate the average passenger flow data of boarding and alighting. A scenario analysis can be employed to study the spread of infectious diseases by assigning different values for the effective contact number r . The analysis can compare differences in the number of exposed (E) individuals with the use of protective measures inside the subway or not and can judge the effectiveness of the prevention of infection. The degree of transmission of different infection groups can be studied according to the different values of the infection rate of infected (I) and asymptomatic infected (A) individuals. Subsequently, the degree of influence of the two groups of infected (I) and asymptomatic infected (A) individuals on exposed (E) individuals can be ascertained. 
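Because the SEIA difference equations are only described verbally above and the published formula is not reproduced here, the following sketch spells out one plausible per-stop update consistent with that description: new exposures proportional to r × β 1 × I and r × β 2 × A, boarding passengers u t added to S, alighting passengers g t removed from S, and infected and asymptomatic individuals removed in proportion to g t /Σg t . The function name seia_ride, the random placeholder passenger flows, and the initial counts of infected and asymptomatic riders are assumptions rather than the Line B data used in the study, which generated its passenger-flow inputs in MATLAB.

```python
import random

def seia_ride(S0, E0, I0, A0, u, g, r, beta1, beta2):
    # Per-stop update of the SEIA subway model described above. The exact
    # published difference equations are not shown in the text, so this
    # update rule is an assumption: exposures grow with r*beta1*I and
    # r*beta2*A, boarding passengers u[t] join S, alighting passengers g[t]
    # leave S, and I/A shrink in proportion to g[t] / sum(g).
    S, E, I, A = float(S0), float(E0), float(I0), float(A0)
    g_total = sum(g)
    history = []
    for t in range(len(u)):
        new_exposed = r * beta1 * I + r * beta2 * A   # S -> E at this stop
        new_exposed = min(new_exposed, S)
        S += u[t] - g[t] - new_exposed                # boarding / alighting
        E += new_exposed
        leave_frac = g[t] / g_total                   # I and A alight too
        I -= I * leave_frac
        A -= A * leave_frac
        history.append((round(S, 2), round(E, 2), round(I, 2), round(A, 2)))
    return history

# Illustrative scenario loosely mirroring the peak-period runs discussed
# below (10 stops, r = 9.5 without measures vs. r = 3.4 with measures);
# the passenger flows are random placeholders, not the Line B data.
random.seed(0)
u = [random.randint(20, 60) for _ in range(10)]
g = [random.randint(10, 40) for _ in range(10)]
for r in (9.5, 3.4):
    print(r, seia_ride(98, 0, 2, 2, u, g, r, beta1=0.2, beta2=0.2)[-1])
```

Running the same ride with r = 9.5 and r = 3.4 reproduces the qualitative contrast examined in the scenario analyses below: a smaller effective contact number yields a flatter exposed curve.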
This enables the identification of key factors in the spread of infectious diseases in subways, the simulation of trends in the spread of infectious diseases, and the exploration of disease transmission patterns. We used City Z in China as our case study for analysis. City Z has a population of over 10 million in the central region of China. City Z is a transportation hub with highways, railways, and aviation and information facilities. It has a transportation network composed of three modes of transportation: railways, highways, and aviation. An integrated urban public transportation system has also been formed within the city with rail and rapid transport as the backbone, conventional public transportation as the main body, and a slow traffic extension. Currently, there are seven rail transport systems in operation. We selected Line B as our research object because it was representative and had a large passenger flow. The comparison between the hourly passenger flow of Line B during daily periods and the hourly passenger flow during an epidemic period is illustrated in Figure 4 . Figure 4 illustrates that the hourly passenger flow of Line B in City Z significantly decreased to fewer than 2000 during an epidemic period of infectious diseases. However, the passenger flow Line B was >10,000 during a normal working day. It reached a peak passenger flow during peak weekdays, with a maximum of over 50,000 in Line B. The hourly passenger flow was relatively homogeneous at the weekend, but it still markedly increased during the peak period. We used the real passenger flow data of Line B subway in City Z as our original data. As not all newly infected (I) individuals use a subway on a particular day, the number of cases of infected (I) and asymptomatic infected (A) individuals on a certain day may be reduced. This reduced number was used in our model to simulate the spread of infected (I) individuals in the subway. In different scenarios, that is, where protective measures are and are not administered, a change in the number of exposed (E) individuals can reflect whether the protective measures are effective. We were able to ascertain the effect of protective measures on the scale of exposed infections from the simulation results during the distribution of infectious diseases in a subway. We assumed that infected individuals encountered 10 stations accessed by subways during the peak period. The effective contact number r was set to 9.5 under the scenario of administering no measures ( 18 ). The infection rate of infected individuals was β 1 = 0.2. Similarly, the infection rate of asymptomatic infected (A) individuals was β 2 = 0.2. The results are depicted in Figure 5 . Figure 5 reveals that the number of susceptible (S) individuals in the subway gradually increased during the peak period with continuous stops in the subway system. The change in the number of susceptible individuals revealed a fluctuating upward trend, increasing from an initial 98 to 221 at the last stop. The curve of exposed (E) individuals gradually rose with an increase in stops, which indicated that the number of exposed individuals was related to the number of stops of the subway. There were 7.04 exposed individuals after the subway passed through one station; the number increased to 27.27 when it reached the fifth station. The number of susceptible dramatically changes in 6 to 7 stations. 
Subsequently, the rate of the increase in the number of exposed individuals becomes more gently until the tenth station, when it increased to 42.63. The reason may be that there are many people boarding the train at the seven stations. We set the number of people who get on and off at each stop in the model, which is also a feature of our model. Our data are a random number that are randomly generated based on real statistics. The sixth station has a large number of people, and it may be that the sixth station is a larger station or transfer station. This phenomenon is also common in life. We assumed that infected (I) individuals would use the subway during the peak period. The effective contact number r was set to 3.4 under the scenario of implementing preventative measures ( 18 ). The infection rate of infected individuals was β 1 = 0.2. Similarly, the infection rate of asymptomatic infected (A) individuals was β 2 = 0.2. The results are depicted in Figure 6 . Figure 6 reveals that the growth in the number of susceptible individuals presented a fluctuating upward trend. The number of susceptible individuals reached 253.02 at the tenth station. The increasing trend of the number of exposed individuals with protective measures tended to be more gently than with the situation without measures. The number of exposed individuals was only 2.58 when stopping at the first station, but it increased to 16.47 after passing ten stations. It was evident that the increase in the number of susceptible individuals in the scenes with protective measures was greater than that without protective measures at each station when comparing the scenes with and without protective measures. The number of susceptible individuals slowly decreased after administering protective measures. The rate of transition from susceptible to infected individuals slowed. The number of exposed individuals decreased by 32.3 when preventative measures were implemented, presenting a reduction of 158.83% from the perspective of an increase or a decrease in the number of infected individuals and changes in their percentage. Thus, administering relevant protective measures is effective when using a subway. It is necessary to use certain protective measures in closed places such as subways, both for subway operations and personal travel. As a key parameter in the COVID-19 infection model, the infection rate β affected the number of exposed individuals. We analyzed the number of changes of infected and asymptomatic infected individuals under different infection rates by adjusting the transmission rate of symptomatic and asymptomatic infections β 1 and β 2 . We determined the extents to which the two types of infected people had an influence on the trends in the spread of infectious diseases. We used the average of the hourly passenger flow during a peak period of a certain day as the raw data and incorporated it into the propagation model. We assumed that infected individuals used the subway through 10 stations in the peak period for the convenience of comparison. The effective contact number was r = 3.4. We used two types of infection rates and compared them with two scenarios. The simulation results of the model experiments for the scenarios are presented in Figures 7 , 8 . Figure 7 illustrates Scenario 1, where β 1 value was 0.2 and β 2 values were 0.2 and 0.5, respectively. Figure 8 depicts Scenario 2, where β 1 value was 0.5 and β 2 values were 0.2 and 0.5, respectively. 
In Scenario 1, the transmission rate of infected individuals was β 1 = 0.2 and the β 2 infection rates of asymptomatic infected individuals were 0.2 and 0.5, respectively. The simulation results for the different infection rates during the peak period are depicted in Figure 7. The results in Figure 7 revealed that the number of susceptible individuals during the peak period gradually decreased with an increase in the β 2 value. The number of susceptible individuals increased as the trains continued to stop at stations and new passengers entered the subway. The change in the number of susceptible individuals did not differ substantially between the two β 2 values, indicating that the different values in this scenario had little effect on susceptible individuals. The curve of exposed individuals revealed a steady upward trend whose rate of increase slowed after the seventh station; nonetheless, the number continued to grow. The number of exposed individuals increased from 2.58 at the first station to 16.45 at the tenth station when the β 2 value was 0.2, whereas it increased from 5.36 at the first station to 33.13 at the tenth station when the β 2 value was 0.5. According to the different values of β 2 , the difference in the number of exposed individuals was 16.68. In Scenario 2, the transmission rate of infected individuals was β 1 = 0.5 and the β 2 infection rates of asymptomatic infected individuals were 0.2 and 0.5, respectively. The simulation results for the different infection rates during the peak period are illustrated in Figure 8. The number of susceptible individuals decreased with an increase in the β 2 value in Scenario 2. In this scenario, the β 1 transmission rate of infected individuals was already at its higher value, so when the β 2 value for asymptomatic infected individuals also took its highest value, the corresponding curve of the number of exposed individuals was the steepest of the two scenarios. The number of exposed individuals reached 22.69 at the tenth station of the subway when the β 2 value was 0.2 and 39.09 when the β 2 value was 0.5. According to the different values of β 2 , the difference in the number of exposed individuals was 16.4. We then compared the different values of the β 1 infection rate of infected individuals across the two scenarios. When the β 2 infection rate of asymptomatic infected individuals was 0.2, the numbers of exposed individuals were 16.45 and 22.69, respectively, according to the different values of β 1 . When the β 2 infection rate of asymptomatic infected individuals was 0.5, the numbers of exposed individuals were 33.13 and 39.09, respectively, according to the different values of β 1 . From a horizontal comparison of the β 2 values within Scenarios 1 and 2, the difference in the number of exposed individuals was >16 in both cases. When comparing the β 1 infection rate of infected individuals with the β 2 infection rate of asymptomatic infected individuals, we observed that the β 2 values had a greater impact on exposed individuals. Infected individuals may consciously avoid travel and self-test their health at home when they experience symptoms such as fevers and coughs. Infected individuals often choose self-driving, walking, or well-ventilated public transportation when travel is essential. Certain passengers who use the subway may also consciously reduce their contact with other passengers during the subway ride.
It is difficult for asymptomatic infected individuals to ascertain whether they are infected as they do not present clinical symptoms. Asymptomatic infected individuals may maintain normal social activities and may not consciously maintain a social distance or reduce activities in crowded places. The transmission caused by asymptomatic infected individuals in subways is more covert, causing difficulties for subways and leading to an accelerated spread of infectious diseases. After a pandemic, the aim of prevention and control should shift to exploring trends in the spread of infectious diseases for daily epidemic prevention and control. The influence of different factors on the trend of an epidemic can be identified by exploring the mechanisms of the transmission of infectious diseases. Cost-effective dynamic prevention and control measures can be then administered based on these results. The patterns of disease transmission must be studied to ascertain the transmission process of infectious diseases in subways. In this study, we first determined the number of effective contacts, the infection rate of infected individuals. We also added asymptomatic infected individuals to the SEIR model together with infected individuals as the source of infection in the transmission process of infectious diseases in subways. We constructed an SEIA infectious disease transmission model based on the classic SEIR model. The SEIA model considered asymptomatic infected individuals and the uniqueness of subway operating sites. We added changes in the number of people boarding and alighting the subway to the process of the spread of an infection using subway passenger flow characteristics. The model proposed in this study is suitable for the study of the spread of infectious diseases in subways. It could also be applied to other transportation systems. The accuracy of the model in future research could be improved by adding other factors such as the historical passenger flow of the route stations and if the stopping stations are in an epidemic area. | Other | biomedical | en | 0.999996 |
PMC11697591 | A significant number of prevalent human diseases are linked to climate fluctuations, and warming trends in recent decades have led to increased morbidity and mortality from diseases such as CVDs in many parts of the world ( 1–5 ). A meta-analysis of studies has demonstrated that CVD-related mortality increases with increasing ambient temperature, and that the risk of death from stroke increases by 3.8 per cent and the risk of death from CAD by 2.8 per cent for every 1°C rise in ambient temperature, and has demonstrated that the risk of CVD varies geographically and is affected by a number of underlying climatic conditions ( 6 ). To date, a large amount of literature has confirmed a strong correlation between high temperatures and increased mortality from CVD ( 7–10 ). However, most of these studies only focused on the impact of temperature as an influencing factor on the human body, while humidity was controlled as a confounding factor ( 11 ). With global warming, the earth’s climate is becoming warmer and wetter, and focusing only on temperature or humidity can no longer better quantify the impact of climate change on CVD. The complexity of the impact of climate on disease makes it challenging to study the relationship of a single factor in isolation, and because there is often a joint effect between the factors, a single-factor analysis cannot accurately reflect the real climate situation. Physiologically, it has been confirmed that high humidity at high temperatures can prevent the cooling effect of the cooling system. In a hot and humid environment, high humidity reduces the body’s own cooling ability ( 12–15 ), which can lead to an increase in the body’s core temperature and in turn put a strain on the cardiovascular system ( 16 ). On this basis, some scholars have begun to suggest that there may be a combined effect between temperature and humidity, and that this effect may exacerbate the damage to the cardiovascular system caused by high temperatures, leading to an increased risk of death ( 17 , 18 ). Of course, some scholars believe that high humidity in hot conditions may be a protective factor ( 19 , 20 ). To date, these conclusions are inconsistent, highlighting the need for a systematic investigation of the joint effects of humidity and temperature on CVD mortality under sweltering conditions. The effects of damp heat in various regions are likely to vary due to weather conditions, air pollution, socioeconomic status and demographic characteristics. Our study aims to use a dataset from Huizhou City, Guangdong Province, to investigate the combined effects of relative humidity and high temperature on CVD mortality, and to generalize the results to subtropical monsoon humid climate zones, especially those that experience hot and humid weather year-round. This will enable us to protect people at risk of cardiovascular disease and reduce their exposure to risk in advance of hot and humid weather, so as to prevent CVD mortality. This study was conducted in Huizhou City, Guangdong Province, which is located between 22°24′ and 23°57′ north latitude and 113°51′ and 115°28′ east longitude, in the south of China, with a population of about 6,042,900 people. The region falls within a typical subtropical monsoon humid climate zone, with a mean annual precipitation of 1,770 mm mainly from May to September and a mean annual temperature of 22°C, with the highest temperature in summer often reaching over 30°C. 
The mortality data of permanent residents in Huizhou from 2015 to 2021 were retrieved from the death information registration and management department of Huizhou City. The mortality data encompassed fundamental individual information, such as gender, age, time of death, and cause of death. In accordance with the International Classification of Diseases, 10th revision (ICD-10, coding: I00-I99), the mortality data for CVDs were extracted, and on this basis, CAD and stroke deaths were further screened out. All of the above data were aggregated and analyzed on a daily basis. The meteorological data for the same period were obtained from the Huizhou Meteorological Information Centre and included the daily mean temperature (°C), daily maximum temperature (°C), daily mean wind speed (m/s), and daily mean relative humidity (%), among other variables. The temperature-humidity index (THI) has been employed extensively in China since the advent of the 21st century, primarily as an indicator of human comfort. While research has been conducted on the impact of temperature and humidity on CVDs, there have been few attempts to assess these effects using a recognized comprehensive index. As a well-established and widely utilized index, the THI is particularly suited to this study. Its calculation formula is given in equation 1 ( 21 ), where THI is the temperature-humidity index, T is the daily mean temperature, and RH is the relative humidity. When THI ≥ 75, conditions are defined as sweltering. Calculations and statistics are based on this formula and standard ( Supplementary Table S1 ). The annual distribution of THI was obtained by analyzing the number of CVD deaths and the meteorological data in Huizhou from 2015 to 2021, and it was found that THI ≥ 75 occurred mainly from May to September each year. Therefore, the scope of the study was narrowed to May to September as the focused analysis period. This study describes and analyzes the data on deaths from CVDs and the meteorological data for residents of Huizhou City from 2015 to 2021, calculating the mean, variance, minimum, quartiles, maximum, and other descriptive statistics for each indicator. The generalized additive model (GAM) is suitable for analyzing complex nonlinear relationships between a dependent variable and several explanatory variables and is widely used in epidemiology and environmental health. The explanatory variables can be fitted using various smoothing functions to represent the degree of influence of each explanatory variable on the dependent variable. Since the effect of changes in THI on the risk of CVD mortality is not limited to the observed time period and may also exhibit a certain lag, the distributed lag non-linear model (DLNM) proposed by Gasparrini was introduced to model the relationship between exposure events and a series of future outcomes ( 22 ). Therefore, this study used a Poisson-distribution GAM combined with a DLNM to assess the association between THI and the risk of CVD mortality in residents. Before establishing the model, in order to avoid collinearity among the factors in the model, the Spearman correlation coefficients between the meteorological factors were tested ( 23 ). If the correlation between two factors is strong (| r | > 0.8) ( 24 ), the two variables are highly collinear and should not be included in the same model. According to the results of the Spearman correlation analysis, the control variables in this study were set as wind speed, the long-term time trend, the day-of-the-week effect, and the holiday effect.
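Equation 1 itself does not survive in the extracted text above, so the sketch below uses one widely cited form of the temperature-humidity index, THI = 1.8T + 32 − 0.55(1 − RH)(1.8T − 26), with T in °C and RH expressed as a fraction; this specific form is an assumption, and if the study used a different variant only the function body would change. The sweltering threshold of THI ≥ 75 follows the definition given above.

```python
def thi(t_mean_c, rh_percent):
    # Temperature-humidity index. Equation 1 is not reproduced in the text,
    # so this uses one widely cited form of the index (an assumption):
    # THI = 1.8*T + 32 - 0.55*(1 - RH)*(1.8*T - 26), T in deg C, RH a fraction.
    rh = rh_percent / 100.0
    return 1.8 * t_mean_c + 32 - 0.55 * (1 - rh) * (1.8 * t_mean_c - 26)

def is_sweltering(t_mean_c, rh_percent, threshold=75.0):
    # Sweltering day as defined in the study: THI >= 75.
    return thi(t_mean_c, rh_percent) >= threshold

# Example: a 30 deg C day at 80% relative humidity in the May-September window.
print(round(thi(30, 80), 1), is_sweltering(30, 80))
```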
The GAM formula is given in equation 2, where Y is the number of CVD deaths on day t; E(Y t ) is the expected value of the number of CVD deaths on day t; lag is the lag time; s is a cubic spline function; df is the degrees-of-freedom parameter; α is the intercept; THI t-lag is the THI lagged by t days; WS t is the mean wind speed on day t; time is the time variable, with 7 degrees of freedom per year selected to control for long-term temporal trends; DOW is the day of the week; and Holiday denotes holidays as a confounding factor, added as a dummy variable. The regression coefficient β and its standard error (SE) were estimated in accordance with equation 2, and the relative risk (RR) and its 95% confidence interval (95% CI) were calculated ( 25 ), as detailed in equations 3 and 4. Based on the above effects, the DLNM was constructed to predict the RR of CVD deaths in residents under different THI values. First, a cross-basis matrix was generated for the primary research factor THI, combining the exposure-response relationship with an additional lag-time dimension, that is, combining the prediction function and the lag-effect function into a two-dimensional matrix. The lag dimension was set to 7 days ( 26 , 27 ), and the model framework was as follows. In the above formula, cb is the cross-basis matrix, where the three internal knots are placed at the 10th, 75th, and 90th percentiles of the temperature distribution, and polynomial functions are used to construct the lagged effects, with the maximum number of lagged days set to 7 d. To test the sensitivity of the model and the effect of THI in this study, the following sensitivity analyses were performed to demonstrate the robustness of our model formulation: (1) changing the time degrees of freedom to 7*7; (2) including the CVD, stroke, and CAD death data in the model separately. The results calculated under different degrees of freedom were subjected to a significance t-test with α = 0.05 against the data from the main model; p < 0.05 indicates a statistical difference. All statistical analyses in this study were performed using R 4.4.1 software, and the mgcv, dlnm, and ggplot2 packages were used to assess the impact of sweltering on the number of deaths from CVD and the two core disease types in different genders, as well as the cumulative lag effect, and for data visualization. Statistical tests were two-sided probability tests, with a test standard of α = 0.05. All results are expressed as RR and 95% CI, and a p value of <0.05 was considered statistically significant. From May to September of 2015–2021, 19,525 people died from cardiovascular and cerebrovascular diseases in Huizhou (male: 10,144; female: 9,381). There were 7,521 total deaths from stroke (male: 3,893; female: 3,628) and 7,769 total deaths from coronary heart disease (male: 4,155; female: 3,614). The specific statistical characteristics of the number of CVD deaths and the meteorological factors in Huizhou City are shown in Table 1. According to our preliminary statistics on the number of days with sweltering conditions and the corresponding number of deaths, there were 1,012 sweltering days during 2015–2021, with an annual mean of 144.6 d. The mean daily number of CVD deaths during sweltering conditions was 18.5, of which 9.5 were males and 8.8 were females; this is higher than the mean daily number of deaths during non-sweltering conditions. The cumulative lag effect of sweltering on CVD mortality from May to September, 2015 to 2021, is shown in Table 2.
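Equations 3 and 4, referenced in the statistical methods above, are not reproduced in the text, but the description (RR and its 95% CI computed from the regression coefficient β and its standard error) corresponds to the conventional back-transformation RR = exp(β) with CI = exp(β ± 1.96·SE). The sketch below illustrates only this back-transformation step; the numeric inputs are assumptions chosen for demonstration, and the actual model fitting in the study was done in R with the mgcv and dlnm packages.

```python
import math

def rr_with_ci(beta, se, z=1.96):
    # Relative risk and 95% CI from a fitted log-linear coefficient.
    # Equations 3-4 are not reproduced in the text; this is the conventional
    # form RR = exp(beta), CI = exp(beta +/- 1.96 * SE), which matches the
    # RR / 95% CI values reported in the results.
    rr = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return rr, (lower, upper)

# Example with assumed values: beta = 0.030 and SE = 0.008 give an RR of
# about 1.03, i.e. a ~3% increase in mortality risk, which is the order of
# magnitude reported for the overall population.
rr, (lo, hi) = rr_with_ci(0.030, 0.008)
print(round(rr, 3), round(lo, 3), round(hi, 3))
```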
In terms of the overall effect, the cumulative lag effect of Sweltering on CVDs mortality is more significant, and the effect on women is more significant than that on men. Second, in a separate analysis of the data on deaths from coronary heart disease ( Tables 3 , 4 ), the risk of death from coronary heart disease increased with the cumulative sweltering effect for both men and women. The effect of perceived sweltering on coronary heart disease in men peaked at lag 1, with a 2.8% increase in mortality (RR, 1.028; 95% CI [1.009–1.048]), while in women the peak was reached on the cumulative lag day 2, with an increase in mortality of 3.5% (RR, 1.035; 95% CI [1.015–1.054]). The effect of perceived sweltering on the risk of death from stroke also had a cumulative effect. The effect of perceived sweltering on stroke in the total population peaked on day 2 of the lag, with an increase in mortality rate of 4.6%. Among men, the effect of perceived sweltering on stroke peaked on day 3 of the lag, with an increase in mortality rate of 5.4% (RR, 1.054; 95% CI [1.029–1.079]); the effect of feeling sweltering on stroke in women peaked on the second day of the lag, with an increase in mortality of 4.6% (RR, 1.046; 95% CI [1.023–1.069]). This may indicate that the effect of sweltering on stroke in men may be more serious and long-lasting. At the same time, by comparing Tables 3 , 4 , it can be found that the lagged effect of stroke mortality is longer and more severe than that of coronary heart disease mortality in the general population. There existed an evident nonlinear connection between THI and RR . When THI equaled 74.2 (corresponding to the minimum number of deaths), the average RR was the lowest, and subsequently, RR rose along with THI. Once THI exceeded a specific range, RR increased rapidly as THI increased. Furthermore, the impact of sweltering (THI > 75) on the human body possesses a cumulative lag effect. With the accumulation of the sweltering effect, the risk of CVDs death also accumulates, and the death risk reaches the peak on the 2nd lag day. As shown in Supplementary Table S2 , after adjusting for the degree of freedom of the long-term time trend, the impact of THI on CVD and mortality rates of the two core diseases did not change significantly, and the result of the t-test was p > 0.05, indicating that there was no significant difference between the changes in the data and that the model used in this study was reliable. This study first used the GAM to analyze the impact and lag effect of sweltering on CVD and mortality rates of two core diseases in different gender groups. It was found that the risk of CVD mortality in the total population increased by 3.0% under sweltering; and the RR showed a trend of first increasing and then decreasing with the increase in the number of lag days. In terms of cardiovascular disease, women showed more sensitivity; in terms of cerebrovascular disease, men showed more sensitivity. Then, DLNM was further used to predict the impact of different sweltering indices on the population of CVD mortality and their lag effects. These findings highlight the need to strengthen the prevention and treatment of cardiovascular and cerebrovascular diseases when sweltering weather occurs. At present, many literatures have confirmed various mechanisms to explain the increase in body temperature caused by the imbalance of the body caused by hot weather, which in turn leads to an increased risk of death from cardiovascular and cerebrovascular diseases ( 6 ). 
In a hot and humid environment, the body’s heat dissipation is limited ( 11–13 ), which is more likely to lead to an increase in body temperature after the body temperature is out of balance. The increase in body temperature after the body temperature is out of balance will ultimately lead to vascular damage and trigger the coagulation/fibrinolysis pathway ( 28 , 29 ). These physiological changes may lead to microvascular thrombosis or excessive bleeding, resulting in an increased risk of ischemic stroke and heart disease. In addition, high temperatures can lead to the destabilization of blood vessel plaques ( 30 ) and accelerate the progression of atherosclerosis, increasing the risk of acute coronary syndrome ( 31–34 ). Previous studies have also observed a strong correlation between high temperatures and mortality from CVDs. A meta-analysis of 266 studies showed that for every 1°C above the reference temperature, the risk of mortality from CVDs increased by 2.1%, and the risk of CAD increased by 2.8% ( 19 ), a retrospective study by Luo Q found that high temperatures increased CVD mortality by 3% ( 10 ). Increased body temperature increases the risk of cardiovascular dysfunction ( 35 ); Reduces coronary blood flow ( 36 ), High body temperature can also cause heart muscle damage shortly after exposure to heat ( 37 ). However, for women, the risk of death from coronary heart disease increases significantly with the cumulative sweltering effect. The research results of Zhao et al. also show that in extreme heat, coronary heart disease is more sensitive in women than in men ( 38 ). In addition, this study also independently proves the strong correlation between sweltering and stroke and CAD mortality. Similar results have also been observed in recent studies. In the study by Luo et al., every increase of 1°C above the reference temperature increased the cerebrovascular mortality rate by 2% ( 10 ), in this meta-analysis, it was observed that for every 1°C increase in temperature above the reference temperature, the risk of stroke increased by 3.8% ( 19 ), high temperatures have been shown to be a risk factor for ischaemic stroke (IS) ( 39 ), the higher the temperature of the brain, the greater the extent of the cerebral infarction ( 40 ). A retrospective article on animal models of high-temperature-induced cerebral ischaemia explains the specific molecular mechanisms of high-temperature-induced cerebral ischaemia: for example (1) more extensive disruption of the blood–brain barrier ( 23 , 24 ); (2) The number of potentially damaging ischaemic depolarizations in the ischaemic penumbra increases ( 41–43 ). Our results also show that the impact of sweltering on stroke is more significant than CAD, which is consistent with the research results of Liu et al. ( 19 ). However, there are differences with the research results of Luo et al. ( 10 , 44 ). The blood–brain barrier is very sensitive to temperature changes in the event of cerebral ischaemia, and high temperatures can lead to widespread damage to the blood–brain barrier ( 45 , 46 ), and after the blood–brain barrier is damaged, the accumulation of water in the brain and changes in ion homeostasis can aggravate heat injury ( 47 ). In addition, the loss of the blood–brain barrier leads to an imbalance in the immune system of the central nervous system, and the associated inflammatory response can further aggravate the deterioration of the stroke ( 48 ). 
There is a cumulative lag effect on the mortality rate of CVDs due to sweltering conditions. The effect of sweltering conditions on CVDs in women is greater than that in men. Mesdaghinia et al. showed that the short-term effect of heat exposure on the risk of CVDs in men was 1.1% (RR, 1.011; 95% CI [1.009–1.013]), and 1.4% (RR, 1.014; 95% CI [1.011–1.017]) for women 38. The cumulative lag effect of sweltering is more pronounced in women than in men for CVDs. Finally, the images in this study are different from the exposure-response curve images in the literature related to the relationship between high temperatures alone and CVDs ( 49 ). The images in this study are divided into two sections. In the first half of the images, as the THI increases, the risk of CVDs death increases slowly. Considering that when the ambient temperature is not too high, the body can still dissipate heat through heat radiation, even if the high humidity environment hinders the discharge of sweat, the body is not prone to body temperature imbalance. In the second half of the graph, as the THI increases, the body finds it difficult to cool down through heat radiation in a high-temperature environment. The body’s temperature is mostly cooled down by sweating, but in a high-humidity environment, the excretion of sweat is significantly hindered, which greatly reduces the body’s cooling efficiency in the same high-humidity environment, increasing the risk of elevated body temperature and, in turn, increasing the risk of CVDs ( 50 ), This result is consistent with the effect of wet bulb temperature on mortality. As the wet bulb temperature increases, the risk of human death also increases. When the wet bulb temperature reaches 35°C, the human cooling mechanism fails ( 11–13 ). As global temperature continues to rise, the impact of humidity and heat on CVD becomes stronger and stronger ( 9 ). How to effectively prevent and control CVD, especially under what specific weather conditions need to be strengthened, is an urgent issue to be addressed nowadays. Our study site is located in Huizhou City, Guangdong Province, China, which has a humid subtropical monsoon climate. Studying the effect of sweltering conditions on CVD mortality in this location is an important reference for research in subtropical monsoon humid climate zones. There are still several constraints in our investigation. Initially, our primary emphasis was on examining the correlation between sweltering conditions and cardiovascular disease mortality specifically in Huizhou City. We did not include individuals residing in different climatic regions. Furthermore, in our investigation, we employed the ambient temperature at the outdoor detection point as a substitute for personal exposure. However, this simplified approach may result in inaccuracies in measuring exposure. It is important to acknowledge and address these limitations in future studies in order to better elucidate the correlation between meteorological conditions and CVDs. Sweltering conditions can increase the risk of death from CVD, and the greater the THI, the more pronounced the increase in mortality, and beyond a certain range, the mortality rate increases significantly. There was also a gender difference in this effect, with the effect being more significant in women than in men. In addition, there is a cumulative lag effect of sweltering on CVD mortality, which generally peaks after 1–3 days. In addition, the lag effect is longer and deeper for stroke deaths than for CAD deaths. 
Studying the effects of sweltering on CVDs has important public health and clinical implications for the prevention of CVD deaths. | Study | biomedical | en | 0.999998 |
PMC11697592 | Stiff person spectrum disorders (SPSDs) are a rare group of neuroimmunological disorders characterized by progressive rigidity and triggered painful spasms of the limb muscles. Despite the first description by Moersch and Woltman in 1956 of the formerly coined “stiff man syndrome” ( 1 ) or as a gender-neutral term of “stiff person syndrome (SPS),” ( 2 ) this condition has a clinical spectrum that includes not only classical SPS but also other SPS variants, such as progressive encephalomyelitis with rigidity and myoclonus (PERM) ( 3 ). Classical SPS is the predominant clinical form and presents as an insidious onset with rigidity and stiffness of the trunk muscles, which advance to joint deformities, impaired posturing, and abnormal gait ( 1 , 3 ). Patients may also develop painful generalized muscle spasms triggered by unexpected stimuli and may be associated with other autoimmune disorders ( 3 , 4 ). The clinical features of SPS variants include focal or segmental SPS (“stiff limb syndrome”), jerky SPS, SPS with epilepsy, SPS with dystonia, cerebellar, and paraneoplastic variants ( 3 – 5 ). In addition to axial and limb muscle stiffness and diffuse myoclonus, patients with PERM (“SPS-plus syndrome”) exhibit relapsing–remitting brain stem symptoms, breathing issues, and prominent autonomic dysfunction ( 6 ). Despite significant advances in the treatment of SPSDs, the prognosis remains unpredictable, with an inadequate response in many patients, leading to severe disability and sudden death ( 5 , 7 ). Moreover, most patients receiving standard-of-care medications may require progressively higher doses, leading to intolerable adverse events ( 5 ), among other limitations of pharmacological interventions discussed later. Therefore, there is a need to identify innovative therapies in which we describe the potential use of extracorporeal photopheresis (ECP) as a rational approach for patients with SPSDs, specifically classical SPS. Of note, there are no case reports, patient cohorts, or clinical trials have been reported on the use of ECP in SPS yet. Accordingly, this study aims to propose ECP as a potential treatment for SPS by analyzing the current evidence supporting its clinical application. SPSDs are associated with high titers of autoantibodies to different antigens of inhibitory synapses, generating low level of synthesis and release of γ-aminobutyric acid (GABA) on presynaptic or postsynaptic neuronal junctions within the central nervous system (CNS), resulting in impaired functioning ( 3 , 8 ). Glutamic acid decarboxylase (GAD), a cytoplasmic enzyme with two isoforms (GAD67 and GAD65) that transforms glutamate into GABA, has been widely recognized as a primary target identified in classical SPS, predominately anti-GAD65 antibodies ( 3 , 8 ). However, other autoantibodies have also been reported, and various correlations with SPSD variants have been established, including antibodies against GABA receptor-associated protein and dipeptidyl-peptidase-like protein-6 (DPPX) in classical SPS, amphiphysin and gephyrin in paraneoplastic variants, and glycine receptor associated with PERM ( 3 , 9 ). The classical SPS etiopathophysiology has been explained by the B cell-mediated inhibition of GABAergic neurons and their synapses, whereas GAD65-specific T cells accumulated in the CNS could drive the intrathecal GAD65 IgG production ( 3 , 10 ). T cell-mediated cytotoxicity has also been reported in SPS, as GAD65-specific T cells can initiate cytotoxic immune responses ( 11 ). 
Despite evidence suggesting that GAD65-specific T cells are likely to be scarce and mainly confined to the naïve repertoire in blood ( 10 ), there is a systemic and oligoclonal immune response mediated by stable B cell clones ( 12 ) leading to serum titers that are 50-fold higher than cerebrospinal fluid (CSF) titers ( 4 ). Interestingly, the serum and CSF anti-GAD antibodies first reported by Solimena et al. in a patient with SPS, diabetes mellitus, and epilepsy ( 13 ) were not consistently correlated with the clinical fluctuations of the disease ( 4 , 11 ). These autoantibodies are directed to GAD65 intracellular antigens and have been postulated to interact with peptide fragments during GABA exocytosis on neuronal surfaces, exerting a change in the synaptic transmission by blocking either GAD function or synthesis ( 14 ). GAD65-specific memory T cells could enter the CNS and mount effector responses against GAD65-expressing neurons, including infiltrating CD8 + T cells ( 11 ) detected in the spinal cord of deceased patients with SPS, along with neuronal loss and axonal swelling ( 15 ). SPS treatment includes drugs that increase the GABAergic tone in combination with immunomodulating or immunosuppressant agents ( 4 , 5 ). At the onset of SPS symptoms or appropriate diagnosis, diazepam or other benzodiazepines (GABA agonists) are commonly used as the cornerstone of symptomatic therapies. However, other drugs, including muscle relaxants, botulinum toxin injections, and centrally acting agents, are also used ( 11 ). SPS immunotherapies are usually the first-line treatment and include corticosteroids, therapeutic plasma exchange, high-dose intravenous immunoglobulins (IVIg), and subcutaneous immunoglobulins (SCIg) ( 11 ). Anti-B cell therapies have recently been proposed as a rational approach in second-line therapies, along with mycophenolate mofetil, azathioprine, or a combination of therapies ( 4 , 5 , 11 ). Treatment with autologous anti-CD19 chimeric antigen receptor (CAR) T cells has also been successfully reported in a patient with refractory SPS ( 16 ). Third-line therapies include cyclophosphamide or a combination of therapies (e.g., IVIg and rituximab or mycophenolate mofetil) ( 11 ). Autologous non-myeloablative hematopoietic stem cell transplantation (HSCT) in disabled patients with SPS has also been reported, despite its variable beneficial effects (fourth-line therapies) ( 11 , 17 ). Commonly, SPS pharmacological treatment is combined with nonpharmacological interventions (e.g., selective physical therapy, deep tissue massage techniques, heat therapy, osteopathic and chiropractic manipulation, and acupuncture) in a multifaceted approach ( 11 ). Nevertheless, current pharmacological interventions lead to heterogeneous clinical responses and pose various limitations ( Table 1 ), which support exploring further strategies, such as ECP, that might be added to the SPS therapeutic armamentarium. ECP is a leukapheresis-based immunotherapy in which autologous leukocytes are exposed to a photosensitizing agent and ultraviolet-A (UVA) irradiation before being reinfused. The photosensitizing agent 8-methoxypsoralen (8-MOP) conjugates with the DNA of leukocytes upon UVA photoactivation, resulting in the inhibition of DNA synthesis and cell division and the induction of apoptosis, generating a cascade of events ( 18 ). 
It has been approved for the palliative treatment of cutaneous T cell lymphoma, and many other indications have been successfully explored, including graft-versus-host disease, rejection of solid organ transplantation, and a few autoimmune diseases ( 18 ). During a regular ECP procedure, nearly 5%–10% of the total blood-circulating mononuclear cells are drawn and exposed to 8-MOP and UVA, and the susceptibility to ECP-induced apoptosis varies from cell to cell ( 18 , 19 ). For instance, B and T cells are highly susceptible to 8-MOP/UVA exposure, whereas monocytes and regulatory T cells (Tregs) are more resistant to ECP ( 18 ). ECP exerts “direct effects,” including apoptosis of treated leukocytes, followed by phagocytosis, which trigger cascades of downstream “indirect effects.” ( 20 ) Many cell interactions initiate a cascade of immunological changes, differentiation of monocytes into dendritic cells (DCs), and successive presentation of antigens ( 18 ). ECP-treated cells also recruit other modulators, such as phagocytes, via soluble and membrane-bound “find me” signals ( 21 ). The “indirect effects” of ECP include the eradication of (pathogenic) clonal cells, a shift in antigen-presenting cell (APC) populations, changes in cytokine secretion, and modulation of Tregs and regulatory B cells (Bregs) ( 20 , 22 ). Although the CNS has been considered an immunoprivileged site, current evidence shows the effective recruitment of immune cells across the blood–brain barrier (BBB) into perivascular and parenchymal spaces ( 23 ). T cell responses targeting CNS antigens are initiated in secondary lymphoid organs, and not in the CNS ( 10 ). In fact, activated T cells may penetrate the BBB, regardless of their specificity, and intrathecally are retained those T cells which encounter their cognate antigen ( 24 ). In this regard, Skorstad et al. indicated that GAD65-specific T cells may first be activated in the periphery and later accumulate in the CNS, including proliferation and promotion of B cell differentiation into GAD65 IgG-producing plasma cells within the intrathecal compartment of patients with SPS ( 10 ). Compared with serum anti-GAD65 antibodies, the CSF antibodies of patients with SPS exhibit a 10-fold higher binding avidity, indicating intrathecal synthesis by clonally restricted GAD65-specific B cells driven by local antigens within the confines of the BBB ( 4 , 10 ). Additionally, DCs involved in both primary and secondary immune responses can migrate not only into the perivascular space under degeneration and neuroinflammation ( 23 ), but also into the CSF-drained spaces of the CNS, even in the absence of neuroinflammation ( 25 , 26 ). Furthermore, DCs can traffic to peripheral lymphoid organs (e.g., cervical lymph nodes) and present CNS antigens to T cells in the periphery ( 26 ). Therefore, although the BBB may diminish the effects of ECP, the periphery–CNS trafficking of immune cells and anti-GAD65 antibody production can justify its investigational use in preclinical models and, eventually, in clinical trials. Unlike standard immunosuppressive therapies, ECP does not cause general immunosuppression; instead, it appears to exert complex specific effects ( 27 ) across different immune pathways ( 22 ). Analyzing the various immune specificities in the variations of the clinical phenotypes of SPSDs, we herein describe some potential mechanisms and caveats of ECP to be considered in the context of classical SPS. 
Previous clinical experience with ECP has been documented in other immune-mediated CNS disorders, such as MS, in which a few case reports and small clinical trials verified the safety of ECP, but the results were inconclusive in terms of efficacy ( 39 , 40 ). For instance, Besnier et al. reported that ECP transiently modified the course of severe secondary chronic progressive MS with a rebound after treatment discontinuation ( 41 ), and Cavaletti et al. reported evidence of adequate efficacy in a subgroup of patients with MS not responsive to or ineligible for standard immunomodulating treatments ( 42 ). Regarding the use of photopheresis in patients with classical SPS, our group has proposed to execute the termed OPTION study, a pilot open-label trial using ECP as an add-on investigational intervention comprised of one ECP cycle (two consecutive days) every other week for three months, followed by one ECP cycle every month for additional three months. This trial will evaluate safety outcomes as the primary endpoints, but the efficacy will be preliminarily assessed through changes in the Distribution of Stiffness Index (DSI) and Heightened Sensitivity Score (HSS) ( 43 ). Figures 1A, B summarize the main etiopathophysiological CNS events and postulated mechanisms of ECP in SPS, respectively. With the aforementioned pieces of evidence, being a well-tolerated and safe procedure with long-term effects in approved indications, ECP might overcome various gaps faced with current SPS treatments, which commonly provide a shorter duration of clinical improvement or variable beneficial effects ( 5 , 7 , 16 , 17 ). For instance, instead of the therapeutic approach of controlling disease symptoms (e.g., benzodiazepines and muscle relaxants), targeting some of the critical cells involved in the etiopathophysiology (e.g., anti-B cell therapies) or even “rebooting” the immune system (autologous HSCT), ECP possesses established immunologic effects that, in combination with those treatments, may gradually modulate the dysregulated immune response observed in SPS. Although the exact mechanism of action of ECP remains unclear and requires further studies in SPS, its wide-ranging immunomodulatory effects may be beneficial in this disabling disorder. By exploring the effect of ECP in preclinical models and formal clinical trials, this approach may also foster its use in SPS and potentially in other neuroimmunological diseases. | Review | biomedical | en | 0.999995 |
PMC11697594 | Tobacco is one of the most important cash crops in China, and according to the World Health Organization (WHO), China is also the world’s largest producer and consumer of tobacco ( https://www.who.int/china/health-topics/tobacco ). The cultivation area of tobacco in China had reached 1,000,520 ha by the end of 2022 (National Bureau of Statistics data, https://data.stats.gov.cn/ ). Meanwhile, India and Brazil, ranking second and third respectively, had cultivation areas of 450,000 ha (India Brand Equity Foundation, https://www.ibef.org ) and 261,740 ha (Associação Brasileira dos Produtores de Tabaco, https://afubra.com.br ). Therefore, improving the yield and quality of tobacco can create enormous economic benefits in China. Fertilizer is the material basis for increasing tobacco yield, and the rational application of fertilizer is an important measure for improving tobacco yield and quality . Modern agriculture is characterized by high input, high yield, and high efficiency , and high crop yields can be achieved with limited arable land and minimal manpower. Continuous and uncontrolled application of fertilizer has become a basic means to increase yield . However, excessive fertilizer use damages land resources, wastes fertilizer, and causes a chemical imbalance in tobacco leaves . The purpose of rational fertilization is not only to increase tobacco yield but also to improve the quality of tobacco leaves. Organic fertilizer contains a large amount of organic material derived from organic waste, such as animal and plant remain after composting. Compared with inorganic fertilizer, organic fertilizer contains more trace elements and has the ability to regulate the soil structure and improve soil water conservation, fertility, and permeability , thereby promoting enzyme and microbial activity in soil. Moreover, the long-term application of organic fertilizer causes less damage to the environment compared with that of inorganic fertilizer . However, the application of organic fertilizers alone leads to a series of problems, such as slower fertilizer release, and in certain regions, higher costs compared to inorganic fertilizer . It also affects the normal growth and nutrient accumulation of tobacco plants. Therefore, the use of organic-inorganic fertilizer has shown a significant development trend . Studies have shown that organic-inorganic fertilizer can combine the advantages of organic and inorganic fertilizers thereby improving the yield and quality of tobacco . The correlation between yield increase and quality improvement in tobacco has not been fully established, which may be influenced by numerous factors, including the fertilizer type, chemical composition, and tobacco variety. Gaining a deeper understanding of the relationship between yield increase and quality improvement under the application of organic-inorganic fertilizer is conducive to the continuous and stable increase in the yield, quality, and economic benefits of tobacco crops. The organic-nitrogen ratio in organic-inorganic fertilizer significantly impacts the fertilization effects on tobacco. A nutrient release rate corresponding to a 25% organic-nitrogen ratio in the fertilizer aligns with the tobacco plant’s growth and development requirements. This alignment is beneficial for enhancing the agronomic indicators, yield, and quality of tobacco, as well as for coordinating the chemical composition within tobacco leaves. 
However, if the organic nitrogen ratio is too high, the organic-inorganic fertilizer may negatively affect the yield and quality of tobacco. The variety of tobacco is also an important factor that affects the yield of tobacco, and some varieties may show higher yield potential in specific environments because of their genetic characteristics . Additionally, the variety also affects the chemical composition of tobacco, such as nicotine content, total nitrogen content, reducing sugar, K content, among others , which directly determine the quality and taste of tobacco . China has a vast territory, and different planting areas plant different varieties according to their climate and environmental conditions . Therefore, establishing the quantitative relationship between varieties and changes in tobacco yield and chemical quality after the application of organic-inorganic fertilizer will help tobacco farmers choose the correct organic-inorganic fertilizer to suit their needs. Several basic field experiments have been carried out in different tobacco planting areas in China to study the different effects of organic-inorganic fertilizer on tobacco. The experiment results of Li et al. reported that the use of organic-inorganic fertilizer increased the yield (11.4%), output value (18.3%), and high-grade tobacco rate (11%) of the Y97 variety compared with the application of inorganic fertilizer alone, and the authors suggested that the application of organic-inorganic fertilizer with organic-nitrogen ratio of 25-50% was more conducive to the growth and development of tobacco and improved the yield and quality. Ma et al. found that the application of organic-inorganic fertilizer significantly increased the total sugar content (27.86%), the reducing sugar content (23.00%), sugar-to-nicotine ratio (72.60%), nitrogen-nicotine ratio (22.66%), and K content (6.21%) in tobacco leaves but reduced the total nitrogen content (5.29%) and nicotine (-27.21%), thus leading to a more balanced chemical composition. However, owing to the great differences in climatic conditions, soil physicochemical properties and field management measures in different regions, the experimental results obtained from different studies are inconsistent. To achieve production goals, a suitable organic-inorganic fertilizer ratio scheme should be formulated according to the chemical composition requirements, tobacco varieties, and other factors before planting and fertilization. Therefore, our study performed a meta-analysis of 169 peer-reviewed studies to (1) identify the specific effects of organic-inorganic fertilizer on tobacco yield and chemical components in tobacco leaves; (2) determine how different tobacco varieties and fertilizer components alter the effects of organic-inorganic fertilizer on tobacco; and (3) reveal the impact of applying organic-inorganic fertilizer on the balance between tobacco yield and quality. This study provides a scientific theoretical reference for improving the fertilization regime of tobacco. We searched relevant articles published between 1990-2023 from the China National Knowledge Infrastructure and Web of Science. The search keyword included “flue-cured tobacco” or “tobacco” and “organic fertilizer” or “inorganic fertilizer” and “yield” or “quality” or “chemical composition”. 
According to the data requirements of the meta-analysis and the purpose of this study, articles were screened using the following criteria: (1) tests in the article should include the application of inorganic fertilizer alone and organic-inorganic fertilizer for comparison; (2) the test materials and environmental background of the test sites should be described (the test sites are located in China); (3) the test results should include the mean and standard deviation of indicators, as these parameters are essential for meta-analysis; (4) the fertilizer treatment section should include the organic-nitrogen ratio; and (5) only one article from the same study can be selected. After screening and evaluation, 169 articles were finally obtained for follow-up analysis. For each study, we extracted the mean, standard deviation and sample size of tobacco yield, high-grade tobacco rate, output value, total nitrogen content, nicotine content, reducing sugar content, K content and Cl content (the chemical compositions come from the middle leaves of tobacco). These tobacco indicators are the primary subjects of study in Chinese research, reflecting the key aspects of tobacco yield and quality. Means and standard deviations were extracted directly from the articles’ tables; Origin 2023 was used to extract them from figures; and if only the mean was provided, the standard deviation was calculated from the other parameters reported. The following relevant information was collected for analysis: climate conditions (planting site, average annual precipitation, average annual temperature, and average annual sunshine), soil conditions (pH, organic matter content, available nitrogen content, available phosphorus content, and available potassium content), field management measures (planting density, type of organic fertilizer, and organic-nitrogen ratio in mixed fertilizer), and tobacco varieties (K326, Y85, Y87, Y97, and others). A total of 632 sets of observations were selected from drawings and graphs in the 169 articles . Rosenthal’s fail-safe number was calculated to test publication bias in the studies; if its coefficient was >5n + 10 (n is the sample size), then the variable had no publication bias ( Supplementary Table S1 ). The location distribution of each experiment in the meta-analysis is shown in Figure 1 . Meta-analysis is a quantitative analysis method that summarizes the results of several relatively independent, similar studies and draws conclusions . To better study the effects of organic-inorganic fertilizer on the yield and chemical composition of tobacco and determine the different influences of other factors on the fertilizer’s effects, we performed a meta-analysis of the data in the database and used the log response ratio (lnRR) as the statistical effect value indicator . Individual lnRRs for each observation were calculated using Equation 1, where Ye and Yc represent the mean values of the treatment and control groups, respectively. Owing to the large spatiotemporal span of the data in this study and the great differences in planting methods, climate conditions and soil physicochemical properties in different regions, a random-effects model (REM) was selected for calculation. The meta-analysis weighted the log response ratio of each observation to obtain the variance (V), weighted factor (Wi), weighted log response ratio (lnRR++), and standard deviation of the weighted log response ratio (SD).
They can be calculated using Equations 2–5, where Ne and Nc represent the sample sizes of the treatment and control groups, respectively; SDe and SDc represent the standard deviations of the treatment and control groups, respectively; i represents the i-th treatment and k represents the number of observations; and τ² represents the between-study variance arising from differences among studies. Positive values of lnRR ++ indicated that the variable increased after the application of organic-inorganic fertilizer, and vice versa. Equation (6) was used to calculate the 95% confidence interval (CI) of lnRR ++ . If the 95% CI did not contain 0, then the application of organic-inorganic fertilizer had a significant impact on that indicator . To perceive the rate of change more clearly, lnRR ++ and its 95% CI were transformed back to the percentage change, as shown in Equation 7 . Data processing and statistical analysis for the meta-analysis were performed using R version 4.3.1 with the “metafor” package . Random forest analysis was carried out using the “rfPermute” package in R software, and all images were drawn using the “ggplot2” package in R software. Correlation analysis was performed to examine the pairwise relationships between the lnRR ++ values of the indicators. Optimal model regression analysis was performed to explain the influence of fertilizer composition on the effect of organic-inorganic fertilizer and the relationship between yield and quality of tobacco. The omnibus test (Qm-test) was used to compare the response of indicators to the application of organic-inorganic fertilizer among different subgroups. If the p-value of Qm was < 0.05, it suggested a significant effect of this factor on the overall effect ( Supplementary Table S2 ). After performing an overall analysis of the 632 sets of data from all 169 studies, we found that the application of organic-inorganic fertilizer significantly increased the yield of tobacco leaves (3.4%), output value (10.1%), high-grade tobacco rate (10.3%), K content (3.76%), and reducing sugar content (5.5%) and significantly decreased the nicotine content (-5.6%) compared with inorganic fertilizer alone . However, significant changes were not observed in the Cl or total nitrogen content in tobacco leaves. From the network correlation analysis of indicator lnRR ++ , the output value (R=0.796, p<0.01), high-grade tobacco rate (R=0.234, p<0.01), total nitrogen content (R=0.177, p<0.01) and K content (R=0.168, p<0.01) in tobacco leaves were strongly positively correlated with yield. Notably, total nitrogen content and reducing sugar content had a significant negative correlation (R=-0.214, p<0.01). We collected the organic-nitrogen ratio and the amount of total nitrogen in fertilizers used in different experiments and performed regression analysis of the optimal model with the indicators' lnRR. As shown in Supplementary Figure S1 , within the range of total nitrogen collected in the study (0-116 kg/hm 2 ), the high-grade tobacco rate (p=0.013) and reducing sugar content (p=0.030) in tobacco leaves increased regardless of the total nitrogen, whereas the nicotine content (p=0.087) decreased. The tobacco yield (p<0.001) and output value (p<0.001) increased only when the amount of total nitrogen exceeded 30 kg/hm 2 . In particular, when the amount of total nitrogen was 50-60 kg/hm 2 , the application of organic-inorganic fertilizer effectively improved the yield, output value and high-grade tobacco rate of tobacco, reduced the nicotine content and increased the content of some chemical components.
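For reference, the weighted log response ratio aggregation described in the Methods above (Equations 1–7) follows standard meta-analytic formulas. The sketch below is only an illustration of those formulas in NumPy, not the study's actual workflow (the analysis was run with the R "metafor" package), and it assumes the between-study variance τ² has already been estimated elsewhere.

```python
import numpy as np

def pooled_lnRR(ye, yc, sd_e, sd_c, n_e, n_c, tau2=0.0):
    """Random-effects pooling of log response ratios (standard Hedges-type formulas).

    ye, yc     : means of the treatment (organic-inorganic) and control (inorganic) groups
    sd_e, sd_c : standard deviations of the two groups
    n_e, n_c   : sample sizes of the two groups
    tau2       : between-study variance (assumed to be estimated beforehand)
    """
    ye, yc = np.asarray(ye, float), np.asarray(yc, float)
    sd_e, sd_c = np.asarray(sd_e, float), np.asarray(sd_c, float)
    n_e, n_c = np.asarray(n_e, float), np.asarray(n_c, float)

    lnrr = np.log(ye / yc)                                   # individual effect sizes (Eq. 1)
    v = sd_e**2 / (n_e * ye**2) + sd_c**2 / (n_c * yc**2)    # within-study variance of lnRR
    w = 1.0 / (v + tau2)                                     # random-effects weights
    lnrr_pp = np.sum(w * lnrr) / np.sum(w)                   # weighted mean effect, lnRR++
    se_pp = np.sqrt(1.0 / np.sum(w))                         # SD of the weighted mean
    ci = (lnrr_pp - 1.96 * se_pp, lnrr_pp + 1.96 * se_pp)    # 95% confidence interval
    pct_change = (np.exp(lnrr_pp) - 1.0) * 100.0             # back-transform to percentage change
    return lnrr_pp, ci, pct_change
```

If the 95% CI excludes 0 (equivalently, the back-transformed percentage-change interval excludes 0%), the fertilizer effect on that indicator is taken as significant, matching the decision rule stated in the Methods.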
For the yield (p<0.001) and output value (p<0.001) of tobacco, within the range of organic-nitrogen ratio collected in the study (7-100%), their lnRR decreased as the organic-nitrogen ratio increased compared with inorganic nitrogen application alone, and the change in Cl content (p=0.003) was similar to that of yield . Notably, the tobacco yield decreased when the organic-nitrogen ratio exceeded 50%. Regarding the chemical indicators in tobacco leaves, we found that nicotine content (p=0.043) and total nitrogen content (p=0.023) decreased as the organic-nitrogen ratio increased. The reducing sugar (p=0.015) and K content (p=0.089) increased regardless of the organic-nitrogen ratio. When the organic-nitrogen ratio in fertilizer was in the range of 50-60%, the reducing sugar and K content showed the greatest increase, and the nicotine content also decreased significantly after fertilization. We analyzed the significance of the influence of various factors on the application of organic-inorganic fertilizer, including climatic conditions (planting site, annual average precipitation, annual average temperature, and annual average sunshine), soil conditions (pH, organic matter content, available nitrogen content, available phosphorus content, and available potassium content), planting density and tobacco varieties . Contrary to our initial expectations, climatic factors had a relatively low impact on the effectiveness of organic-inorganic fertilizer; they were not the primary determinants. In contrast, soil factors showed a more pronounced influence on the application of organic-inorganic fertilizer, with significant differences observed in tobacco yield, high-grade tobacco rate, nicotine content, and total nitrogen content under varying soil conditions. Notably, among the various factors assessed, planting density and tobacco varieties exerted the most significant influence on the application effects of organic-inorganic fertilizer. Four main tobacco varieties (K326, Y85, Y87 and Y97) that have been frequently studied and cultivated in China were selected and analyzed in this study. The yield and quality of K326 tobacco showed a weak response to organic-inorganic fertilizer; only the K content was significantly increased (6.67%, p<0.001). Y85 and Y87 were closely related; their yield (5.59%, p<0.001; 5.82%, p<0.001), high-grade tobacco rate (14.92%, p<0.001; 11.04%, p<0.001), output value (11.94%, p<0.001; 10.78%, p<0.001) and reducing sugar content (4.25%, p<0.05; 5.82%, p<0.001) were significantly increased after applying organic-inorganic fertilizer. For Y97 tobacco, yield (4.03%, p<0.05), high-grade tobacco rate (5.94%, p<0.001), output value (5.71%, p<0.01), K content (5.46%, p<0.05) and total nitrogen content (6.06%, p<0.01) were all significantly increased after applying organic-inorganic fertilizer . Regression curves were constructed for the organic-nitrogen ratio and the indicators' lnRR for the four main varieties. For K326, the results showed that organic-inorganic fertilizer with an organic-nitrogen ratio below 50% slightly increased the yield (p<0.001), output value (p<0.001), and the rate of high-grade tobacco leaves (p=0.013) of K326 tobacco. When the organic-nitrogen ratio was approximately 50%, the K content increased significantly . For Y85, yield (p<0.001) and output value (p<0.001) were increased when the organic-nitrogen ratio was less than 80%. Moreover, nicotine content (p<0.001) and total nitrogen content (p<0.001) in Y85 tobacco were significantly affected by the organic-nitrogen ratio.
The higher the organic-nitrogen ratio, the greater the reduction in nicotine and total nitrogen . As shown in Figure 6 , among the four varieties, K326 and Y87 showed strong correlations with each indicator while Y85 showed the weakest correlations. The output value of all varieties was positively correlated with the yield (p<0.01) and high-grade tobacco rate (p<0.01), and the total nitrogen content was positively correlated with the nicotine content (p<0.01). Reducing sugar content was positively correlated with yield (R=0.166, p<0.05), output value (R=0.095, p<0.05), and high-grade tobacco rate (R=0.071, p<0.05) in K326 tobacco. The yield of Y85 was positively correlated with the K content (R=0.319, p<0.05), and the K content was positively correlated with the reducing sugar content (R=0.333, p<0.05). In Y87, an increase in the K content significantly increased the high-grade tobacco rate (R=0.309, p<0.01). Figure 7 and Supplementary Figure S3 show the regression relationship between yield and the other indicators of the four varieties after applying organic-inorganic fertilizer. With an increase in yield, the high-grade tobacco rate (p<0.001, p=0.038, p=0.005, p=0.047) and output value (p<0.001, p<0.001, p<0.001, p<0.001) of the four varieties increased. Organic-inorganic fertilizer application can simultaneously improve yield and quality. However, except for K326, the curve between the high-grade tobacco rate and yield of the other three varieties is similar to a parabola, which means that the increase in the high-grade tobacco rate began to decline once the yield increased beyond a certain extent, and the effect even became negative. In Y85, K content (p=0.010) and reducing sugar content (p=0.045) increased while the total nitrogen (p=0.045) and nicotine (p<0.001) content significantly decreased as the yield increased . We found that the yield, output value and high-grade tobacco rate of tobacco were significantly increased after applying organic-inorganic fertilizer, and the increase in yield also contributed to the increase in output value and high-grade tobacco rate . Tobacco has strict nitrogen requirements at different growth stages . In the early growth stage, sufficient nitrogen is required to maintain the full growth of tobacco until flowering, and a high nitrogen supply is not subsequently required. The application of inorganic fertilizer alone provided the nutrients required by tobacco in the early stage. Compared with inorganic fertilizer, the release of nutrients from organic fertilizer is slower . Therefore, the application of organic-inorganic fertilizer can provide tobacco with an appropriate nutrient supply during the entire growth and development period, thereby ensuring the nutrient absorption and growth of tobacco. Moreover, organic fertilizer can improve the plant root environment, enrich the types of soil microorganisms, enhance the activity of soil extracellular enzymes , and improve the content of nutrient elements in the soil, which contributes to an increase in the tobacco yield and high-grade tobacco rate. However, compared with the increase in output value and high-grade tobacco rate, the increase in yield was small, indicating that the main effect of the application of organic-inorganic fertilizer on tobacco was not to increase yield but to improve the quality of tobacco . Superior quality tobacco exhibits a balance and coordination between carbohydrates and nitrogen compounds .
Compared with inorganic fertilizer alone, the application of organic-inorganic fertilizer significantly increased the reducing sugar content in tobacco . Organic-inorganic fertilizer can improve soil invertase activity , and the invertase in the rhizosphere soil of tobacco can split disaccharides, increase the available carbon content in the soil, and promote the absorption and use of carbon by tobacco to synthesize carbohydrates. Meanwhile, the application of organic fertilizer brought a large amount of humic acid to the soil . Humic acid can affect the physiological metabolism of tobacco by promoting root growth and nutrient absorption, thereby increasing the accumulation of reducing sugars . Organic-inorganic fertilizer also reduced the nicotine content in tobacco and increased the sugar-to-nicotine ratio, which shifted the sugar-to-nicotine ratio into the high-quality range. The application of organic-inorganic fertilizer improved the situation of excessive N supply from inorganic fertilizer and slow N supply from organic fertilizer, making the N supply from fertilizer more stable and lasting. Nicotine and total nitrogen were positively correlated in the tobacco plants ; therefore, adjusting the nitrogen supply was also conducive to decreasing the nicotine content . In general, organic-inorganic fertilizer has been found to harmonize the carbon-nitrogen relationship in tobacco and enhance tobacco quality. As mentioned above, the absorption of organic and inorganic fertilizers by tobacco is related to the different growth periods. Therefore, if the organic-nitrogen ratio in organic-inorganic fertilizer is unbalanced, the content of available nitrogen in the soil during the early stages will be lower and the yield will be reduced . We found that organic-inorganic fertilizer with a low organic-nitrogen ratio could increase tobacco yield, and when the organic-nitrogen ratio exceeded 50%, organic-inorganic fertilizer reduced tobacco yield . The organic-nitrogen ratio can affect tobacco yield by controlling the synthesis and degradation of chlorophyll in leaves. Organic-inorganic fertilizer with 15%-30% organic nitrogen increases the chlorophyll content of tobacco leaves in the early growth stage and ensures the normal degradation of chlorophyll in the later stage, allowing normal yellowing at maturation, which enhances the photosynthetic rate and promotes the accumulation of tobacco dry matter, thereby helping to increase the yield of tobacco leaves. In contrast, when organic nitrogen exceeds 45%, the chlorophyll cannot be degraded normally at maturity, thus causing late ripening of tobacco and reducing the yield . In Supplementary Figure S1 , the yield and output value increased only when the amount of total nitrogen exceeded 30 kg/hm 2 , which also supports this explanation. When the ratio of organic nitrogen was within the range of 50-60% and the amount of total nitrogen was controlled within 50-60 kg/hm 2 , organic-inorganic fertilizer had the best effect on coordinating the chemical composition . At this ratio, organic-inorganic fertilizer can enhance C metabolism during the sugar accumulation period and effectively regulate the N metabolism of tobacco , which is conducive to the coordinated development of carbon and nitrogen and the improvement of tobacco quality. Liu et al. also found that when the organic nitrogen substitution ratio was 50%, the abundance of beneficial bacteria in soil could be maximized.
These beneficial bacteria not only participate in the decomposition of soil organic matter and accelerate the soil nitrogen cycle , but also directly carry out biological nitrogen fixation, carbon fixation, oxygen enrichment and other physiological activities that improve soil fertility . Beneficial bacteria help tobacco roots to better absorb and utilize nutrients and coordinate the chemical composition of tobacco leaves. In summary, we believe that the ratios of organic nitrogen suitable for increasing yield and for coordinating the chemical composition of tobacco are different. Only a small ratio of organic nitrogen can achieve a good yield-increasing effect, whereas a higher ratio of organic nitrogen is required to improve the chemical composition. This finding may play a guiding role in the scientific fertilization of tobacco and in improving its economic benefits. Different tobacco varieties responded differently to the application of organic-inorganic fertilizer. The effect was best in the Y85 and Y87 varieties, followed by Y97, and was least effective in K326 . This variation is mainly attributed to differences in growth habits, genetic characteristics, root distribution, and nutrient requirements. K326 is widely planted in various regions of China . Observations of the microstructure of the tobacco leaf tissues and stomata showed that K326 had a balanced leaf thickness, tissue, and stomatal structure and good ecological adaptability, making it suitable for planting in most environments. Our study suggests that K326 can absorb sufficient nutrients for normal growth and development even with less effective fertilizers because of its good adaptation and resilience . Therefore, the yield and most chemical components of this variety did not change significantly after the application of organic-inorganic fertilizer, and only the K content in tobacco leaves increased significantly. Organic-inorganic fertilizer can promote the synthesis of more K in tobacco. The K content of K326 was approximately 2%, higher than that of the other varieties, so more nutrients related to K synthesis need to be absorbed during growth and development . Consequently, the K content of K326 increased significantly after the application of organic-inorganic fertilizer, with an organic-nitrogen ratio of approximately 50% being particularly beneficial for enhancing the K content in K326 . The yield and output value of Y85 and Y87 responded positively to organic-inorganic fertilizer . Chen et al. (2024) found that the leaf growth and dry matter accumulation rates of Y87 and Y85 were slower than those of K326 and other varieties. Thus, the application of organic-inorganic fertilizer is particularly beneficial for accelerating yield formation in these varieties. Compared with Y87, the total nitrogen content and nicotine content in Y85 were more affected by organic-inorganic fertilizer . Total nitrogen and nicotine contents decreased significantly and continued to decline with an increasing proportion of organic nitrogen. Nicotine in cured tobacco leaves can neutralize the acidic substances produced by the burning of carbohydrates, which is conducive to the formation of a good taste , but when its content exceeds a certain range, it has a negative impact on the taste . The nicotine content in Y85 tobacco leaves was higher than that of the other varieties .
The application of organic-inorganic fertilizer can reduce the nicotine content to a certain extent, which is helpful for improving the sensory quality of this variety. In conclusion, the application of organic-inorganic fertilizer significantly increased the K content of K326, improved the yield and quality of Y87 and Y85, effectively reduced the total nitrogen content and nicotine content of Y85 tobacco leaves, and better coordinated the chemical composition of Y85. Due to the limited nutrients within the crop, increasing crop yield and improving crop quality are two aspects that are closely related but contradictory. The high-grade tobacco rate is the ratio of tobacco leaves rated as superior relative to the total number of tobacco leaves. This parameter comprehensively considers the agronomic indicators, chemical quality, and sensory quality of tobacco, and its level directly reflects the quality of the tobacco. We found that after the application of organic-inorganic fertilizer, the yield and high-grade tobacco rate of the four tobacco varieties increased to different degrees . However, the high-grade tobacco rate does not continue to increase with an increase in yield, and when the yield increase reaches a certain value, the increase in the high-grade tobacco rate begins to decline. When the yield is too high, plants use more energy and nutrients for the growth of stems and leaves, thus neglecting the formation of quality-related substances in tobacco . Wang et al. also found that the relationship between tobacco yield and quality was similar to a parabola. When the yield ranged from 2040 kg/hm 2 to 2775 kg/hm 2 , the quality of tobacco increased; however, after the yield exceeded 2775 kg/hm 2 , the quality of tobacco leaves began to deteriorate. The K and reducing sugar content in Y85 were positively correlated with the yield , and they both increased with an increase in yield . This is consistent with our expectations, because the essence of the increase in tobacco yield is that the leaves become larger and thicker and the leaf area increases, which is more conducive to photosynthesis. Photosynthesis is the main method of accumulating carbohydrates in plants, and an increase in photosynthesis increases the reducing sugar content . K participates in leaf and stomatal movement and is an important cation that promotes the synthesis and transportation of photosynthetic and assimilation products. K deficiency will enhance stomatal and mesophyll resistance and reduce the absorption of CO 2 at the leaf surface . Therefore, an increase in yield must be accompanied by an increase in photosynthesis, K content and reducing sugar content in Y85 tobacco leaves. Bilalis et al. found that plants need more nutrients to be transported to the leaves when the tobacco yield increases, which leads to the redistribution of some nutrients and the removal of some alkaloid substances from the roots. Total nitrogen and nicotine in the tobacco leaves may be reduced or transferred to other parts of the tobacco plant. This is consistent with the results of the present study, in which we found that an increase in yield led to a significant decrease in total nitrogen in Y85 tobacco leaves and a decrease in nicotine content . In conclusion, organic-inorganic fertilizer has the potential to coordinate the distribution of nutrients within tobacco plants and simultaneously improve yield and quality. However, further research is needed to achieve this goal.
Our meta-analysis results showed that although the application of organic-inorganic fertilizer improved the yield of tobacco, the main effect was to improve the balance of the chemical composition and improve the quality of tobacco. Second, by analyzing the effects of organic-inorganic fertilizer components on the application effect, we concluded that organic-inorganic fertilizer with a low ratio of organic nitrogen (15–30%) was more beneficial for increasing tobacco yield while fertilizer with a medium and high ratio of organic nitrogen (50–60%) had a better effect on improving tobacco chemical quality. Application of organic-inorganic fertilizer had the best effect on Y85 and Y87 and improved the yield and quality, and it also effectively reduced the total nitrogen and nicotine content of Y85 tobacco leaves. It had the worst effect on K326, which only showed an increase in the K content. This study also concluded that organic-inorganic fertilizer simultaneously increased the yield and high-grade tobacco rate of the four main varieties under certain conditions. Moreover, organic-inorganic fertilizer also increased the reducing sugar and K content, reduced the nicotine content in Y85 while increasing the yield. | Other | other | en | 0.999997 |
PMC11697595 | Cerebral apoplexy, or stroke, is the third leading cause of disability in adults and the second leading cause of death globally ( 1 , 2 ). Post-stroke spasticity (PSS) is a form of increased muscle tone in which pathological changes in the upper motor neurons lead to impaired sensory and motor control. It is a motor disorder characterized by a velocity-dependent increase in tonic stretch reflexes with tendon hyperreflexia, resulting in abnormal postures and movement patterns in stroke patients. It is a major contributing factor to high post-stroke disability rates ( 3–5 ). Studies have found the treatment cost to be higher in stroke patients with PSS than in those without PSS ( 6 ). The pathogenesis of PSS is complex, and various researchers have proposed different ideas and definitions ( 7–9 ). At present, modern medicine has made great progress in the treatment of PSS, including botulinum toxin injections, intrathecal baclofen pumps, etc., and “early detection, early treatment” has become a general consensus for the treatment of PSS in the clinic ( 10 , 11 ). This study was designed from the perspective of prevention. Through the investigation of relevant samples, the study aims to understand the incidence of spasticity after stroke, screen the relevant risk factors of spasticity and construct a risk prediction model, which will further provide a reliable theoretical basis for exploring early rehabilitation therapies, reducing the incidence of spasticity and lessening its severity. Therefore, this study investigated relevant samples to determine the incidence of PSS and screened the relevant risk factors of spasticity to provide an additional reliable theoretical basis for the effective prevention of PSS in clinical practice. This is a retrospective study. A total of 436 stroke patients who visited the Neurology Department of the Third Affiliated Clinical Hospital of Changchun University of Chinese Medicine from June 2020 to November 2020 were selected as study subjects; 257 patients were included in the final analysis and divided into 101 cases with spasticity and 156 cases without spasticity, depending on whether the individual patient experienced spasticity in the 6 months after the stroke (a patient was considered to have spasticity if any muscle scored 1 or more on the Modified Ashworth Scale) . From the electronic database of medical records, the investigators recorded information such as the age and gender of the study subjects, their medical history in terms of smoking, drinking, hypertension, diabetes, and hyperlipidemia, and their dominant hand (left or right), and also observed and recorded the side of the cerebral hemorrhage or infarction (left or right) and the site of the cerebral hemorrhage or infarction (frontal lobe, parietal lobe, temporal lobe, occipital lobe, thalamus, hippocampus, basal ganglia, cerebellum, midbrain, etc.). The volume of the cerebral hemorrhage or infarction was derived from MRI examinations, and FLAIR images were used to measure the size of the focal area. Post-processing software was applied to the relevant sequences to outline the contours of the cerebral hemorrhage or infarction at each layer in order to automatically calculate the area; these areas were then added layer by layer and finally multiplied by the layer thickness and inter-layer spacing to obtain the cerebral hemorrhage or infarction volume.
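As a minimal sketch of the slice-wise volume calculation just described: the outlined areas are summed across layers and multiplied by the layer thickness plus the inter-layer spacing. The function and example numbers below are illustrative only and are not taken from the study's post-processing software.

```python
def lesion_volume(slice_areas_mm2, slice_thickness_mm, interslice_gap_mm):
    """Approximate hemorrhage/infarct volume from per-slice outlined areas,
    following the description above: areas are added layer by layer and then
    multiplied by the layer thickness plus the inter-layer spacing.
    The exact arithmetic used by the study's software is assumed, not stated."""
    total_area = sum(slice_areas_mm2)
    return total_area * (slice_thickness_mm + interslice_gap_mm)  # volume in mm^3

# hypothetical example: five FLAIR slices, 5 mm thickness, 1 mm inter-slice gap
volume_mm3 = lesion_volume([120.0, 310.5, 450.2, 298.7, 95.3], 5.0, 1.0)
```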
Neurological deficit scores (NIHSS scores) and modified Ashworth scores assessed on admission were collected from the patients’ electronic medical records. The data were summarized in a database of observation tables using Excel, then validated and checked for errors before statistical analyses were performed. Diagnoses were based on the diagnostic criteria for cerebral infarction and cerebral hemorrhage in the Chinese Guidelines for Diagnosis and Treatment of Acute Ischemic Stroke 2018 ( 12 ) and the Chinese Guidelines for Diagnosis and Treatment of Subarachnoid Hemorrhage 2019 ( 13 ), and all cases were confirmed by cranial CT or MRI. The modified Ashworth method was used to evaluate the severity of limb spasticity. A final diagnosis was then carried out to confirm whether there was limb spasticity. The inclusion criteria were as follows: (1) met the diagnostic criteria for cerebral hemorrhage or infarction; (2) course of disease between 2 weeks and 6 months; (3) age ≥ 18; (4) clear consciousness and stable vital signs; (5) patients or family members gave informed consent and participated voluntarily. The exclusion criteria were as follows: (1) patients with comorbidities that could cause limb spasticity; (2) individuals with severe primary diseases such as impairment of the liver, kidney, hematopoietic system, or endocrine system; (3) individuals with severe cognitive impairment or mental illness; (4) pregnant and lactating women; (5) severely incomplete medical records. The sample size for this trial was estimated by calculating the required sample size based on logistic regression analysis in Medical Statistics, Second Edition ( 14 ), which requires a minimum sample size of more than 10 times the number of independent variables so as to reflect more realistically the relationship between the independent variables and the dependent variable: sample size = number of independent variables × 10. In this formula, the number of independent variables is 20; therefore, it was calculated that at least 200 cases were required for this survey. The total number of cases finally collected in this study was 257, which fulfils the requirement of the study design. Statistical analyses of the data were performed using SPSS 27.0. Measurement data with a normal distribution were presented as x̄ ± s, while data with a non-normal distribution were presented as M (IQR), and the Mann–Whitney U test was used to compare between the groups. Count data were represented by n(%), and the chi-square test or Fisher’s exact test was used for comparisons between the groups. Logistic regression was used to identify the factors that affect PSS. Differences were considered statistically significant at p < 0.05. As seen in Table 1 , comparisons between the groups showed the differences in involvement of the basal ganglia, cerebral hemorrhage or infarction volume, and NIHSS scores to be statistically significant ( p < 0.05), an indication that these may be factors that affect spasticity. However, the impact of confounding factors for spasticity was not accounted for. Therefore, a multivariate regression analysis that took into account the effects of confounding factors was carried out to identify independent factors that affect spasticity. As seen in Table 2 , results from the multivariate regression analysis showed that basal ganglia as the cerebral hemorrhage or infarction site, cerebral hemorrhage or infarction volume and NIHSS scores are independent influencing factors and independent risk factors for spasticity ( p < 0.05) .
Specifically, spasticity is more likely to occur when the cerebral hemorrhage or infarction site is the basal ganglia; the larger the area of cerebral hemorrhage or infarction, the more likely it is to lead to spasticity; and a higher NIHSS score indicates a higher probability of spasticity. All other indicators are not independent influencing factors for spasticity. A risk prediction model for spasticity in stroke patients was derived from the multivariate logistic regression analysis: Logit (P) = 1.595 * Basal ganglia + 0.084 * infarct volume + 0.208 * NIHSS scores – 2.092. The Hosmer-Lemeshow test results in Table 3 are χ2 = 13.828 and p = 0.086, which means that there is no significant difference between the predicted values and the actual values in the Hosmer-Lemeshow test. An evaluation of the goodness of fit using the ROC curve showed AUC (95% CI) = 0.786 (0.730–0.843), an indication of a high degree of model fit . Spasticity is a common post-stroke complication, and approximately one-third of stroke patients will experience spasticity within 3 months of the onset of stroke ( 15–17 ). Spasticity is harmful to stroke patients, requiring them to undergo long-term rehabilitation and causing a series of physical and psychological problems that seriously affect their motor function and activities of daily living ( 18 , 19 ). Studies have shown that patients with PSS often suffer from psychological problems such as depression and anxiety, and cognitive impairment such as memory loss and poor concentration ( 20–24 ). Post-spasm pain can also lead to sleep disorders. All of these seriously affect the patient’s quality of life. In recent years, as modern medical science and technology have developed, more clinical treatments for PSS have emerged, such as oral antispasmodics, botulinum toxin injections, physiotherapy, antispastic positioning, as well as acupuncture, moxibustion and traditional Chinese medicine ( 25–31 ). Currently, the most rapid and effective western medical treatments for spasticity are oral anti-spasmodic drugs and local injection of botulinum toxin ( 32 , 33 ). Better clinical outcomes have also been achieved in the early stage of PSS through antispastic positioning ( 25 ). Acupuncture and moxibustion, massage (tuina) and traditional Chinese medicine have the advantages of simplicity and speed, and have achieved remarkable efficacy in the clinical treatment of PSS ( 34 ). If treatment is initiated too late and abnormal movement patterns and postures have already developed, the only remaining option is surgery, which is effective but has the disadvantages of high difficulty and high treatment costs ( 35 , 36 ). Therefore, early detection of spasticity and effective, rapid treatment are currently the focus of the clinical management of PSS, which not only reduces related complications, but also shortens the treatment cycle and reduces the burden on the patient’s family. Clarifying the risk factors of PSS can help to detect and treat the functional disorders caused by PSS at an earlier stage, improve the rehabilitation efficacy of the patients, and enhance their ability to return to their families and society. There are many risk factors for PSS ( 37–39 ). The NIHSS score is an important indicator for assessing post-stroke neurological damage. A higher NIHSS score means a more severe decline in the patient’s neurological functions ( 40 , 41 ). Studies have shown that PSS patients have relatively higher NIHSS scores ( 15 ).
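Returning to the prediction model reported above, the logit can be converted into a predicted probability of spasticity with the standard inverse-logit (sigmoid) transform. The sketch below simply applies the reported coefficients; the input values in the example are hypothetical placeholders, and the units assumed for lesion volume follow whatever units were used when the model was fitted (not stated explicitly here).

```python
import math

def spasticity_probability(basal_ganglia, lesion_volume, nihss):
    """Predicted probability of post-stroke spasticity from the reported model:
    Logit(P) = 1.595*basal_ganglia + 0.084*lesion_volume + 0.208*NIHSS - 2.092,
    where basal_ganglia is coded 1 if the basal ganglia are involved, else 0."""
    logit_p = 1.595 * basal_ganglia + 0.084 * lesion_volume + 0.208 * nihss - 2.092
    return 1.0 / (1.0 + math.exp(-logit_p))  # inverse-logit (sigmoid)

# hypothetical patient: basal ganglia involvement, lesion volume 10, NIHSS score 8
p = spasticity_probability(basal_ganglia=1, lesion_volume=10, nihss=8)
```

A probability computed this way is only as informative as the model's calibration and discrimination, which the authors assessed with the Hosmer-Lemeshow test and the ROC curve reported above.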
This study found the NIHSS score to be significantly higher in the group with PSS when compared to the group with no PSS. There was a significant correlation between the incidence of PSS and the NIHSS score ( p < 0.05). In the multivariate analysis, the NIHSS score was an independent risk factor for PSS (OR = 1.515), which is consistent with the findings of Ryu et al. ( 42 ). Relevant studies have also found ( 42 ) the NIHSS score to be a significant predictor of the occurrence of PSS. The basal ganglia are a group of nerve nuclei located deep in the brain, comprising the striatum, caudate nucleus, and globus pallidus. These sub-components play key roles in motor, emotional, and cognitive functions, as well as in attention. Damage to the basal ganglia can lead to symptoms such as muscle tone disorders and spasticity ( 43 , 44 ). Studies have demonstrated a close relationship between the site of brain injury and the occurrence of spasticity ( 45 ). This study found the proportion of basal ganglia injury to be higher in patients with PSS than in those without PSS. There was a significant correlation between the incidence of PSS and basal ganglia injury ( p < 0.05). Based on the relevant multivariate analysis, basal ganglia injury was an independent influencing factor for the incidence of PSS (OR = 6.693). Studies worldwide have also confirmed ( 46 , 47 ) that patients with basal ganglia injury have the highest risk of PSS. The size of the cerebral hemorrhage or infarction also correlates with the occurrence of PSS, and this study showed that there was a significant difference in the size of the cerebral hemorrhage or infarction between patients with spasticity after stroke and those without spasticity ( p < 0.05), suggesting that a large cerebral hemorrhage or infarction may be one of the factors influencing the development of limb spasticity after stroke. Related studies have shown that patients with less spasticity after stroke have smaller areas of cerebral hemorrhage or infarction, while the opposite is true for patients with severe spasticity ( 47 , 48 ). Some studies have found that the incidence of spasticity is higher in hemorrhagic strokes than in ischemic strokes, which may be related to the fact that hemorrhagic strokes have a higher degree of disability ( 49 ). Hemorrhagic stroke and ischemic stroke have very different pathological mechanisms. In addition to early local cerebral hemorrhage, hemorrhagic stroke is accompanied by a variety of pathological changes in the brain tissue in the hemorrhage area, such as ischemia, hypoxia, inflammatory response, neuronal degeneration, necrosis and apoptosis ( 50 ). Hemorrhagic stroke and ischemic stroke differ in the degree and extent of damage to the central nervous system; the onset of spasticity in different stroke types was not further analyzed in this study, and the effects of cerebral hemorrhage and cerebral ischemia on spasticity will be specifically analyzed in future studies. Other researchers have found that stroke patients with a history of previous stroke, that is, patients for whom the stroke is not the first, have a higher proportion of spasticity, which is related to the aggravation of brain tissue damage after a second stroke; further neurological damage leads to an increased likelihood of spasticity ( 51 ). It is now generally accepted that the incidence of spasticity is relatively low in the acute phase of stroke, and that the incidence gradually increases as the stroke course progresses and lengthens ( 52 ).
Therefore, in this study, patients in the recovery period after stroke were selected as the research subjects to further clarify the importance of prevention of spasticity after stroke and early rehabilitation intervention on the recovery of neurological function after stroke. This study retrospectively analyzed data on the patient’s conditions and used multivariate logistic regression to identify factors that may influence spasticity, including the involvement of basal ganglia, cerebral hemorrhage or infarction volume, and NIHSS scores. Independent influencing factors for spasticity ( p < 0.05) include basal ganglia as the cerebral hemorrhage or infarction site, cerebral hemorrhage or infarction volume, and NIHSS scores. Due to the limited time and funding, this study has shortcomings in areas like study design and methods, and the results cannot comprehensively encompass all risk factors for PSS. Furthermore, this study adopts a retrospective approach, and is not able to dynamically track and observe the development of spasticity. In selecting the sample, only patients from a single center were selected for the study, so the generalization of the results to patients in other geographical regions should be approached with caution. In future research, a multicenter study with a larger sample size and longer follow-up duration will be conducted for a more comprehensive and in-depth investigation on the risk factors of PSS. A more scientific study plan will be adopted to provide a better scientific basis with regards to the prevention of PSS. Independent risk factors for Post-stroke spasticity include basal ganglia as the cerebral hemorrhage or infarction site, cerebral hemorrhage or infarction volume and NIHSS scores. | Study | biomedical | en | 0.999998 |
PMC11697598 | Among severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) variants, delta (B.1.617.2) and omicron (B.1.1.529) variant viruses caused a worldwide pandemic due to increased transmissibility compared to that of the Wuhan virus ( 1 ). It is urgent to elucidate the molecular mechanisms governing the onset and progression of these variants to develop effective strategies aimed at reducing recurrence rates and improving therapeutic potency. Metabolomics has the potential to enhance our understanding of host−pathogen interactions in infectious diseases. In particular, metabolomics has been widely applied for biomarker discovery and to investigate the immunometabolic response of individuals infected with various viruses, including more recently SARS-CoV-2 ( 2 – 7 ). Notably, targeting specific metabolic pathways that are crucial for viral replication can potentially disrupt virus growth and reduce infection severity ( 8 ). Depletion of GSH due to viral infection leads to disruption of the redox balance in the lungs and results in tissue damage ( 9 ). High kynurenine/tryptophan ratios were observed in the plasma of patients with moderate and severe COVID-19 ( 10 ). L-arginine is metabolized to L-ornithine in the urea cycle by arginase and serves as a substrate for the production of nitric oxide (NO) by nitric oxide synthase (NOS), a signaling molecule involved in inflammatory responses ( 11 ). Interestingly, arginine administration and modulation of nitric oxide (NO) production have emerged as promising therapeutic strategies with high potency in the management of patients with severe coronavirus disease 2019 (COVID-19) ( 12 , 13 ), but molecular mechanistic studies regarding arginine metabolism are still limited. Therefore, deeper metabolic pathway studies based on metabolomics are crucial to fully understand the arginine-NO metabolic pathway and its implications for therapeutic interventions in patients with severe COVID-19. The golden Syrian hamster model is valuable for studying pulmonary pathology during COVID-19 due to the high genetic similarity of these hamsters to humans ( 14 , 15 ). Recent years have seen continuous efforts to explore the pathophysiology of SARS-CoV-2 infection using various animal models such as hamsters, minks, and ferrets infected with the Wuhan virus, shedding light on changes including TCA cycle, purine metabolism, pentose phosphate pathway, kynurenine pathway and triacylglycerol accumulation ( 16 – 18 ). Moreover, multi-omics studies have elucidated the underlying mechanism involved in SARS-CoV-2 pathophysiology such as a shift toward enhanced glycolysis ( 19 ) and significant phospholipid metabolic alterations ( 20 ). However, metabolomic studies focusing on pulmonary pathophysiology in preclinical models of SARS-CoV-2 infection are still lacking. Current research on delta and omicron variant infections focuses on transcriptional changes in inflammatory mediators and specific genes but lacks a comprehensive view of systemic metabolic alterations in host-pathogen interactions ( 21 ). Here, we performed molecular profiling through metabolomic and transcriptomic analysis to acquire a comprehensive understanding of the systemic effects and metabolic alterations induced by SARS-CoV-2 variants, including delta and omicron. Overall, our study provided insights into how delta and omicron viruses manipulate host’s lung metabolism. 
We performed metabolomic profiling and integrated transcriptomic analysis, offering valuable insights into potential therapeutic targets for the treatment of SARS-CoV-2 delta and omicron variant infections in hamsters. Golden Syrian hamsters (6 weeks old, male) were purchased from Central Laboratory Animal Inc. (Seoul, South Korea). Our study examined male hamsters because male animals exhibited less variability in phenotype. The animals were maintained under a 12 h light/dark cycle and fed a standard diet and water ad libitum. The hamsters were divided into three groups (n=5/group): the negative control, delta variant and omicron variant groups. The hamsters were anesthetized, and thereafter, the infection was established by intranasal administration of 20 μL (10^5.0 TCID50/ml) of SARS-CoV-2 delta variant (B.1.617.2) or SARS-CoV-2 omicron variant (B.1.1.529). The body weights of all infected hamsters were monitored daily until sacrifice. Five hamsters from each group were sacrificed at 0, 4, and 7 days post-infection (dpi), and the lungs were collected to assess the metabolic changes following viral infection . Lung samples were divided for metabolic profiling, transcriptomic analysis, and H&E staining and were stored at -80°C until use. This study adhered to the guidelines of Jeonbuk National University and was approved by the Institutional Animal Care and Use Committee , and the experimental protocols requiring biosafety were approved by the Institutional Biosafety Committee of Jeonbuk National University . All animal experiments were carried out at the Animal Use Biosafety Level-3 (ABL-3) facility at the Korea Zoonosis Research Institute, which is certified by the Korea Disease Control and Prevention Agency of the Ministry of Health and Welfare (certification number: KCDC-16-3-06). To measure the viral loads of SARS-CoV-2 in lung tissue samples, quantitative real-time PCR was performed to detect the N gene of SARS-CoV-2 using TaqMan Fast Virus 1-Step Master Mix (Thermo Fisher Scientific, MA, USA) as previously described ( 22 , 23 ). One gram of tissue sample from each hamster was placed into soft tissue homogenizing CK14 tubes (Precellys, Bertin Technologies) prefilled with ceramic beads and DMEM and then homogenized using a Bead Blaster 24 (Benchmark Scientific, NJ, USA). Viral RNA was extracted from the homogenized tissues using a QIAamp Viral RNA Mini Kit (Qiagen) according to the manufacturer’s protocol. Real-time PCR was conducted using a CFX96 Touch Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA). All animals were euthanized using an intraperitoneal injection of xylazine and succinylcholine at the end of the experiment. At necropsy, gross lesions in the lung were examined, and then the lung tissues were collected and fixed in 4% neutral-buffered formalin for 1 week. Tissues embedded in paraffin blocks were sectioned at a thickness of 4 μm and then mounted onto glass slides. The slides were deparaffinized in xylene, rehydrated through a graded ethanol series from 100% ethanol to distilled water and then stained with hematoxylin and eosin. All tissue samples were assessed by a blinded veterinary anatomic pathologist. To extract metabolites from lung tissue, 100 mg of lung sample was weighed and mixed with 600 μL of methanol/water (1:1, v/v) in a 1.5 mL Eppendorf (EP) tube containing zirconium oxide beads. The mixed sample was homogenized at 5,000 rpm twice using a Precellys 24 tissue grinder (Bertin Technologies, France) and centrifuged after homogenization.
After adding 600 μL of chloroform, the sample was vortexed for 1 min and incubated at 4°C for 10 min. The mixture was centrifuged at 12,700 rpm for 20 min at 4°C. For the extraction of serum metabolites, 50 μL of each serum sample was mixed with 550 μL of a chloroform/methanol mixture (2:1, v/v) and vortexed for 1 min. Next, 100 μL of water was mixed with the lung and serum samples, respectively, and incubated at 4°C for 10 min. The mixture was centrifuged at 12,700 rpm for 20 min at 4°C. Then, 150 μL of the upper aqueous supernatant from lung tissue and 50 μL of supernatant from the serum were transferred into a new 1.5 mL tube and dried using a speed vac evaporator. The dried lung and serum extracts were redissolved in 200 μL of an acetonitrile/water mixture (75:25, v/v) containing internal standards (0.1 μg/ml betaine-d11, 10 μg/ml glutamate-13C5, 5 μg/ml leucine-13C6, 2 μg/ml phenylalanine-13C6, 10 μg/ml succinate-13C4, 10 μg/ml taurine-13C2, and 10 μg/ml uridine-13C9,15N2). Liquid chromatography (LC)-electrospray ionization (ESI)-mass spectrometry (MS) analyses for metabolomics of lung tissue extracts were performed on a TripleTOF™ 5600 MS/MS system (AB Sciex, Canada) combined with a UPLC system (Waters, USA). LC separations were carried out on a ZIC-HILIC column (2.1 mm × 100 mm, 3.5 μm; SeQuant, Germany). The column temperature and flow rate were set to 35°C and 0.4 mL/min, respectively. The mobile phases used were 10 mM ammonium acetate and 0.1% formic acid in water/acetonitrile (10:90, v/v) (A) and water/acetonitrile (50:50, v/v) (B). The linear gradient program was as follows: 1% B from 0 to 2 min, 1–55% B from 2 to 8 min, 55–99% B from 8 to 9 min, 99% B from 9 to 11 min, 99–1% B from 11 to 11.1 min, and 1% B from 11.1 to 15 min. The injection volume of the sample was 2 µL for both positive and negative ionization polarity modes. Quality control (QC) samples, which were pooled identical aliquots of the samples, were analyzed regularly throughout the run to ensure data reproducibility. The spectral data were analyzed by MarkerView™ (AB Sciex, Canada), which was used to find peaks, perform peak alignment, and generate peak tables of m/z and retention times (min). The data were normalized using the total area of the spectra. To identify reliable peaks and remove instrumental bias, peaks with coefficients of variation below 20 in QC samples were selected. Metabolites were identified by comparing the experimental data against an in-house library and the online database MS-DIAL. Total RNA from lung tissues was isolated and prepared using the TRIzol cell RNA extraction protocol. The libraries were prepared for 151 bp paired-end sequencing using a TruSeq Stranded mRNA Sample Preparation Kit (Illumina, CA, USA). Briefly, mRNA molecules were purified from 1 μg of total RNA using oligo(dT) magnetic beads and then fragmented. The fragmented mRNAs were converted into single-stranded cDNAs through random hexamer priming. Using this single-stranded cDNA as a template for second-strand synthesis, double-stranded cDNA was prepared. After the sequential processes of end repair, A-tailing and adapter ligation, the cDNA libraries were amplified by polymerase chain reaction (PCR). The quality of these cDNA libraries was evaluated with an Agilent 2100 Bioanalyzer (Agilent, CA, USA). The libraries were quantified with a KAPA library quantification kit (Kapa Biosystems, MA, USA) according to the manufacturer’s library quantification protocol.
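Before the sequencing workflow continues below, the metabolomics peak-table processing described above (total-area normalization followed by retention of features whose coefficient of variation across the QC injections is below 20, interpreted here as 20%) can be sketched as follows. The table layout and sample names are hypothetical, and this is only an illustration of the filtering logic, not the MarkerView workflow itself.

```python
import pandas as pd

def normalize_and_filter(peak_table: pd.DataFrame, qc_samples: list, cv_cutoff: float = 20.0) -> pd.DataFrame:
    """peak_table: rows = features (m/z, retention time), columns = samples, values = peak areas.
    1) Normalize each sample (column) by its total spectral area.
    2) Keep features whose coefficient of variation (%) across the QC injections is below cv_cutoff."""
    normalized = peak_table.div(peak_table.sum(axis=0), axis=1)   # total-area normalization per sample
    qc = normalized[qc_samples]
    cv_percent = qc.std(axis=1) / qc.mean(axis=1) * 100.0         # %CV of each feature in the QC samples
    return normalized.loc[cv_percent < cv_cutoff]
```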
Following cluster amplification of denatured templates, paired-end sequencing (2×151 bp) was performed on an Illumina NovaSeq 6000 (Illumina, CA, USA). Adapter sequences and read ends with a Phred quality score below 20 were trimmed, and reads shorter than 50 bp were simultaneously removed using cutadapt v.2.8 ( 24 ). Filtered reads were mapped to the reference genome of the species using the aligner STAR v.2.7.1a ( 25 ) following the ENCODE standard options (refer to "Alignment" of the "Help" section in the html report), with the "--quantMode TranscriptomeSAM" option for estimation of transcriptome expression levels. Gene expression was estimated with RSEM v.1.3.1 ( 26 ), taking into account the read orientation specified by the library protocol via the "--strandedness" option. To improve the accuracy of the measurement, the "--estimate-rspd" option was applied. All other options were set to default values. To normalize for sequencing depth among samples, FPKM and TPM values were calculated. Based on the estimated read counts from the previous step, differentially expressed genes (DEGs) were identified using the R package TCC v.1.26.0 ( 27 ). The TCC package applies robust normalization strategies to compare tag count data. Normalization factors were calculated using the iterative DESeq2 ( 28 ) method. The q-value was calculated from the p-value using the p.adjust function in R with default parameter settings. DEGs were identified using a q-value threshold of less than 0.05 to correct for errors caused by multiple testing ( 29 ). We constructed a network based on correlation coefficients among the metabolites, transcripts, and cytokines using Cytoscape v.3.10.1 ( https://cytoscape.org ). In the network graph, the metabolites and transcripts within the three selected metabolic pathways and the significantly altered cytokines within the SARS-CoV-2 variant group are represented as nodes. The thickness of the lines connecting the nodes was determined by the Pearson correlation coefficient values. SIMCA-P+ v.16.0 (Umetrics, Sweden) was used to conduct multivariate analysis. All metabolite levels were scaled to unit variance prior to principal component analysis (PCA). PCA was applied to provide an overview of the metabolomic data. All results were analyzed using the Statistical Package for the Social Sciences software, v.28.0 (SPSS Inc., USA) and plotted using GraphPad Prism, v.8 (GraphPad Software, Inc., USA). Statistical significance was assessed using one-way ANOVA with Tukey's multiple comparisons post hoc test. After performing robust scaling on the metabolomics and transcriptomics data using Google Colab ( colab.research.google.com ), Pearson's correlation analysis was conducted on the scaled data. Pathway analysis was performed with the MetaboAnalyst computational platform ( www.metaboanalyst.ca ) ( 30 ). To elucidate the immune response and pathogenic molecular mechanisms of SARS-CoV-2 variants, we used the hamster model of delta and omicron variant infection. After intranasal infection with the variants, the body weight of the hamsters was measured daily. In comparison to the non-infected control group, the groups infected with the delta and omicron variants showed significant weight loss, indicating a successful viral infection in the hamster model according to clinical signs. 
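In outline, the statistical workflow described above can be illustrated with the following base-R sketch: unit-variance scaling followed by PCA, robust (median/IQR) scaling, and a Pearson-correlation edge table of the kind that could be imported into a network tool such as Cytoscape. The variable names, the simulated data and the |r| ≥ 0.7 cutoff are illustrative assumptions, not values taken from the study.

# Hypothetical input: 'omics' is a samples-x-features matrix combining
# metabolite levels, transcript abundances and cytokine measurements.
set.seed(7)
omics <- matrix(rnorm(15 * 30), nrow = 15,
                dimnames = list(paste0("hamster", 1:15), paste0("feat", 1:30)))

# (1) PCA on unit-variance scaled data (analogous to the scaling applied before PCA)
pca <- prcomp(omics, center = TRUE, scale. = TRUE)
summary(pca)$importance[, 1:3]          # variance explained by PC1-PC3

# (2) Robust scaling (median / IQR), analogous to the scaling applied
#     before the correlation analysis
robust_scale <- function(x) (x - median(x)) / IQR(x)
omics_rs <- apply(omics, 2, robust_scale)

# (3) Pearson correlation matrix and an edge list for a network tool;
#     |r| >= 0.7 is an arbitrary illustrative cutoff
r <- cor(omics_rs, method = "pearson")
edges <- which(abs(r) >= 0.7 & upper.tri(r), arr.ind = TRUE)
edge_table <- data.frame(source = rownames(r)[edges[, 1]],
                         target = colnames(r)[edges[, 2]],
                         pearson_r = r[edges])
head(edge_table)
# write.csv(edge_table, "edges_for_network.csv", row.names = FALSE)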
Specifically, the delta variant group demonstrated a more pronounced reduction in body weight than the omicron variant group, indicating a heightened severity of viral infection in the delta group. In addition, SARS-CoV-2 viral RNA copy numbers in lung tissue from both the delta and omicron variant groups showed significant increases at 4 and 7 dpi compared with those of the control group. However, no statistically significant difference in viral load was observed between the two variants. Next, histological analysis of lung tissue was performed to evaluate pulmonary lesions. Histopathological changes such as perivascular inflammatory cell infiltration, pneumocyte hyperplasia, alveolar hemorrhages, and septal thickening were observed in the hamsters challenged with the delta or omicron variant at 4 dpi and 7 dpi. These findings indicate that SARS-CoV-2 variant viruses infect hamster lung tissues, with the delta variant causing more pronounced inflammatory pathology than the omicron variant. To investigate host-pathogen interactions and changes in host metabolism after infection with the delta and omicron variants, LC/MS-based metabolic profiling was conducted on lung tissue, a key target organ in SARS-CoV-2 pathology. A total of 5,427 and 3,110 peak features were detected in positive and negative ion modes, respectively. Tightly clustered quality control (QC) samples in the principal component analysis (PCA) score plots indicated good analytical reproducibility during the LC/MS experiment. Regarding metabolic pattern recognition after infection with the delta variant, the PCA score plots showed distinct separation between pre- and post-infection samples in both positive and negative ion modes, while lung tissue samples derived from hamsters infected with omicron were slightly separated between pre- and post-infection in the PCA score plots. On the other hand, no significant differences were observed in the PCA score plots between pre- and post-infection time points in the control group. These results suggest that SARS-CoV-2 variants can modulate lung metabolism, with the delta variant exhibiting a greater impact on lung metabolic reprogramming than the omicron variant. A heat map was generated to visualize the changes in the levels of the 88 identified metabolites in the lung tissues of hamsters infected with the delta and omicron variants of SARS-CoV-2. In both the delta and omicron groups, we observed significant elevation of the levels of several amino acids and related metabolites, including arginine, phenylalanine, asparagine, histidine, tryptophan, cystine, lysine, ornithine, serine, threonine and S-adenosyl-L-methionine (SAM), after variant infection. On the other hand, there were lower levels of taurine, allantoin and 1-methyladenosine after delta and omicron infection. Interestingly, the levels of S-adenosyl-L-homocysteine (SAH), cholic acid, glycochenodeoxycholic acid, malate, 3-hydroxy-3-methylglutaric acid, and kynurenine and the ratio of kynurenine to tryptophan were markedly increased only after delta infection. Next, to identify key metabolic pathways affected by SARS-CoV-2 variant infection at each distinct symptomatic phase (e.g., 7 dpi for delta and 4 dpi for omicron) ( 31 ), metabolic pathway analysis was performed based on the differentially regulated metabolites specific to those time points. The results of the metabolic pathway analysis revealed distinct changes specific to each variant group. Arginine biosynthesis and taurine and hypotaurine metabolism were important metabolic pathways for both the delta and omicron variants. 
In the delta variant group, tryptophan metabolism and glutathione (GSH) metabolism were identified as key metabolic pathways. Conversely, in the omicron variant group, arginine and proline metabolism, as well as histidine metabolism, played significant roles following infection. These results demonstrated distinct metabolic changes occurring in the lung tissue of hamsters as a direct consequence of infection with the SARS-CoV-2 variants. Based on the comprehensive examination of the heat map and pathway analysis, notable metabolic alterations were observed in three pathways: arginine biosynthesis, GSH metabolism, and tryptophan metabolism. The levels of most metabolites involved in arginine biosynthesis showed an increasing trend in both the delta and omicron groups compared with those pre-infection. In particular, significant accumulation of arginine and ornithine was observed after delta and omicron infection. In GSH metabolism, a remarkable increase in cystine and a decrease in GSH levels were observed in the delta group at 7 dpi compared with those at 0 dpi. The levels of taurine were lower after delta and omicron infection than before infection. Within tryptophan metabolism, a significant increase in kynurenine levels was observed at 4 and 7 dpi, while tryptophan levels decreased specifically in the delta group compared with baseline, indicating that tryptophan was being converted to kynurenine. To investigate systemic metabolic changes in response to coronavirus variant infections, we also examined alterations in these three pathways in the serum. Increased levels of citrulline and ornithine were observed in the serum, mirroring the trends identified in lung tissue for both the delta and omicron groups. Arginine levels showed an increasing trend in the serum of the delta group, while a contrasting decrease was noted in the omicron group. Additionally, a reduction in aspartate was observed in the serum. In the context of glutathione metabolism, a significant reduction in cystine levels was observed in the serum, in contrast to the lung tissue. Additionally, there was an increase in both glutamine and GSSG levels in the two variant groups. In tryptophan metabolism, we observed increased kynurenine levels in both variant groups, mirroring the findings in lung tissue. The delta group exhibited a reduction in tryptophan, whereas the omicron group exhibited an increase in tryptophan. Furthermore, a decline in kynurenic acid was also observed in the serum. Supplementary Figure S2 visually represents the individual trends of metabolite levels in the three specific metabolic pathways between pre- and post-infection in lung tissue and serum in both the delta and omicron groups. We examined the correlations between the metabolic profiles of lung tissue and serum in the delta group at 7 dpi and the omicron group at 4 dpi, which exhibited distinct metabolic changes after infection. In the delta group, predominantly positive correlations were observed among various metabolites. Notably, arginine in lung tissue was positively correlated with arginine, glutamine and kynurenine in serum. Cystine in lung tissue was positively correlated with arginine, citrulline, proline, cystine and SAM in serum. Lung kynurenine also showed a positive correlation with serum citrulline, proline and cystine. Conversely, in the omicron group, predominantly negative correlations were observed among various metabolites. 
Particularly, proline in lung tissue showed a significant negative correlation with ornithine, GSSG and SAM in serum. These findings underscore that the delta and omicron variants induce different metabolic alterations in the host's lung tissue and serum following infection, and imply that coronavirus infection not only affects the pulmonary tissue but also has systemic effects throughout the body. Next, an RNA-Seq analysis was conducted to investigate the transcriptional alterations in genes linked to each of the three identified metabolic pathways, as outlined in the Kyoto Encyclopedia of Genes and Genomes (KEGG). The genes associated with the three metabolic pathways showed mostly similar trends of transcriptional change for both the delta and omicron variants, although the magnitude of the significant changes was much larger in the delta group than in the omicron group. In arginine biosynthesis, the levels of Ass1 were significantly increased at 7 dpi compared with those at 0 dpi in the delta group but not in the omicron group. The transcription level of most genes involved in GSH metabolism, including Gpx1, Ggt1, Gsr, Pgd, Anpep and Lap3, was significantly higher post-infection than pre-infection in the delta group but not in the omicron group. In tryptophan metabolism, increased transcription of genes related to kynurenine synthesis, including Tdo2 and Ido1, was observed, while the levels of Cyp1a1 were significantly lower at 7 dpi than at 0 dpi in the delta group. Based on the metabolomic and transcriptomic analyses, we were able to identify altered metabolic pathways in response to the SARS-CoV-2 variants. By combining the two sets of analyses, the modified metabolic pathways could be depicted in a single figure. Upregulation of arginine biosynthesis and the urea cycle was observed with both the delta and omicron variants. An examination of the integrated metabolic pathway for GSH metabolism revealed distinct alterations in the context of the delta variant, wherein the synthesis of GSH was found to be suppressed, concomitant with an augmented production of cystine. In the context of the metabolic pathway related to tryptophan metabolism, enhancement of the synthesis of kynurenine was observed with only the delta variant. These findings demonstrate that SARS-CoV-2 variants induce alterations in the metabolic pathways of hamster lung tissue. Specifically, the delta variant had a stronger impact on the lung metabolism of hamsters upon infection than the omicron variant. The levels of cytokines, including IL-6, IL-1β, IL-10, IFN-γ, tumor necrosis factor-α (TNF-α), and colony-stimulating factor (CSF), gradually increase with the severity of COVID-19 and play a crucial role in the immune response to SARS-CoV-2 infection ( 10 , 32 ). Thus, the mRNA levels of cytokines were examined to gain insights into their role in the immune response to the SARS-CoV-2 delta and omicron variants. Most cytokine levels within the lung tissue were elevated after infection with the delta and omicron variants, consistent with previous studies. In particular, we observed an increase in the levels of cytokines known to contribute to cytokine storms, such as IL-1β, IL-6, IL-12A, IL-12B, IFN-γ, and TNF-α, as well as various chemokines and CSFs, upon infection with the COVID-19 variants. Moreover, these alterations were more notable in the delta group than in the omicron group. 
Next, a correlation analysis was conducted to explore the associations between the metabolites and genes involved in the infection-altered metabolic pathways and all cytokines that changed after infection. To visualize and interpret the modulation of metabolic pathways in relation to metabolic and transcriptional changes in the immune response, we created integrated metabolic network diagrams based on the correlation analysis of cytokines, metabolites, and genes for both the delta and omicron variants. For the delta variant, the network showed predominantly strong positive correlations among cytokines, metabolites, and transcripts. In particular, arginine exhibited positive correlations with major proinflammatory cytokines, such as CCL4 and CCL5, and proline and GSH were positively correlated with IL-12B. Additionally, oxidized glutathione (GSSG) and SAM were positively correlated with CCL8, while taurine exhibited a negative correlation with CXCL17. Regarding the transcriptome profiles, strong positive correlations were observed between genes and cytokines, mirroring the correlations between metabolites and cytokines. Genes related to arginine biosynthesis, such as Got1l1, Otc, and Asl, exhibited positive correlations with most cytokines. Notably, Asl showed a significant positive correlation with cytokines from the TNF and transforming growth factor-beta (TGF-β) families, while Arg1 and Otc showed negative and positive correlations with IL-12B, respectively. In GSH metabolism, Gpx1 and Pgd exhibited a negative correlation with IL-1β, while Anpep showed positive correlations with TNFAIP8L2, TNFSF12, TNFSF13b, and TGF-β1. Additionally, Lap3 was positively correlated with CCL12 and IL-18bp, and Gstt3 exhibited a positive correlation with CCL5. In tryptophan metabolism, Ido1 and Kynu showed positive correlations with CXCL10, TNFSF12, and TGF-β1, while Tdo2, Inmt, and Aldh7a1 exhibited positive correlations with IL-12B. For the omicron variant, the network primarily showed negative correlations. Specifically, all metabolites exhibited negative correlations with CCL5, CCL8, TNFAIP8L2, and CSF1, but showed positive correlations with XCL1. No significant correlations were observed between genes related to arginine biosynthesis and cytokines for the omicron variant. In glutathione metabolism, Ggt1 and Pgd were negatively correlated with TNFSF10, while Gpx1 exhibited a positive correlation with TNFAIP8L2. Additionally, Anpep showed negative correlations with CCL5, CCL8, and CSF1. In tryptophan metabolism, Ido1 and Kynu showed negative and positive correlations with TNFSF10, respectively, while Cyp1a1 exhibited negative correlations with CCL5 and CSF1. These results indicate that SARS-CoV-2 variant infection triggers an inflammatory response associated with arginine biosynthesis, glutathione metabolism and tryptophan metabolism in the lungs of hamsters by modulating metabolite and transcript levels, and that the delta and omicron variants elicit distinct inflammatory responses in hamster lung tissue, as evidenced by their different correlations with cytokines. In this study, we investigated the comprehensive molecular mechanisms in hamster lung tissue infected with the delta and omicron SARS-CoV-2 variants by integrating metabolomics and transcriptomics. Following viral infection, arginine biosynthesis, GSH metabolism, and tryptophan metabolism were concurrently regulated at both the metabolic and genetic levels in lung tissue. 
Importantly, these metabolic pathways were notably associated with the production of inflammatory cytokines. Interestingly, the delta variant had a stronger impact on lung metabolism and inflammatory responses than the omicron variant, according to the metabolic profile patterns, the levels of metabolites and genes, and the changes in cytokine levels ( Supplementary Table S3 ). Additionally, these metabolic alterations were reflected in the serum, emphasizing the systemic impact of the virus on various metabolic processes. Viruses can influence host metabolic processes and induce physiological dysfunction ( 33 ). Understanding the pathophysiology of SARS-CoV-2 through the elucidation of molecular mechanisms via metabolomics and transcriptomics, as well as exploring metabolic interventions as novel therapeutic strategies, may contribute to the prevention and treatment of COVID-19. Hence, this study can provide potential molecular targets for therapeutic exploration in the quest for new drugs targeting the host pulmonary immune response following infection with the delta and omicron variants. In this study, an increase in arginine synthesis was observed with both the delta and omicron variant viruses. Arginine serves as a substrate for the generation of nitric oxide (NO), which is a signaling molecule in inflammatory responses. Previous studies reported decreased levels of arginine and a dysregulated urea cycle in plasma from patients with severe COVID-19 ( 34 , 35 ). Within the urea cycle, arginine is converted to ornithine and then recycled back to arginine via the enzymes Otc, Ass1, and Asl ( 36 ). Therefore, the increase in arginine levels can be derived from ornithine, as indicated by the upregulation of enzymes such as Arg1, Ass1, Otc, and Asl within the urea cycle. The alterations in arginine biosynthesis could be attributed to the urea cycle acting to reduce the elevated NO levels induced by the inflammation triggered by infection. Interestingly, some studies have suggested that arginine supplementation therapy in COVID-19 patients could improve immune function and reduce inflammation ( 34 , 37 – 39 ). Additionally, targeting arginine depletion by regulating arginine biosynthesis enzymes, with the aim of inhibiting viral replication, may present a potential therapeutic strategy for the treatment of COVID-19 patients ( 36 ). Previously, a decrease in the levels of GSH along with an increase in the levels of GSSG was observed after coronavirus infection ( 40 ), indicating enhanced intracellular free radical generation and increased oxidative stress. Lung tissue functions as a reservoir for cellular thiols, primarily in the form of GSH. Viral infections deplete GSH and disrupt the redox balance in lung tissue, inducing cellular stress and lung damage ( 9 ). In patients experiencing hypoxemia due to SARS-CoV-2 infection, a reduction in serum cysteine has been reported, consistent with our findings ( 41 ). Furthermore, we observed a significant increase in the levels of cystine, Ggt1 and Lap3. When GSH levels within the lung tissue are maintained, cystine from outside the cells enters and undergoes reduction to cysteine inside the cells ( 41 , 42 ). The decrease in GSH levels due to viral infection is anticipated to result from a reduction in the serum cysteine levels required for GSH synthesis and the inhibition of the conversion of cystine to cysteine in lung tissue, leading to the accumulation of cystine. 
Thus, this study suggested that the alteration in GSH metabolism during SARS-CoV-2 variant infection can serve as an indicator of how the coronavirus affects oxidative stress and contributes to lung damage. In tryptophan metabolism, kynurenine, primarily known as an inflammatory marker, was significant enriched, along with a notable decrease in tryptophan levels after delta variant infection. Additionally, increased expression of genes such as tryptophan 2,3-dioxygenase 2 (Tdo2) and indoleamine 2,3-dioxygenase 1 (Ido1) was observed, indicating the enhancement of kynurenine synthesis after delta infection. Previous studies reported that kynurenine and tryptophan are associated with COVID-19 severity ( 35 , 43 , 44 ). Furthermore, Kynu and Ido1, which are involved in tryptophan metabolism, are upregulated during coronavirus infection. In particular, the reduction in tryptophan levels due to the action of Ido1 has long-term immunosuppressive effects ( 45 ). Consequently, our findings suggest that the enhancement of kynurenine synthesis represents a distinct inflammatory response in the lung tissue following infection with the delta variant. Interestingly, our findings are consistent with those of a previous human study. Li et al. found significant up-regulation in arginine metabolism and the urea cycle as well as tryptophan metabolism in plasma samples obtained from omicron patients compared to healthy controls ( 46 ). Notably, disruption of the urea cycle was observed, with a significant increase in ornithine cycle-related metabolites such as N2-acetyl-L-ornithine and asparagine, which were associated with cytokine storm. Additionally, these findings suggested that homoarginine and ornithine play a role in liver detoxification ( 35 ). Therefore, we suggested the potential for clinical application of SARS-CoV-2 research using the hamster model. The networks of cytokines and metabolic pathways suggested the presence of an inflammatory response and immune activation due to delta and omicron infection. Numerous studies have reported increased levels of inflammatory cytokines in COVID-19 patients, which supports our findings ( 32 ). Coronaviruses infect the respiratory tract and trigger a cytokine storm characterized by the production of inflammatory cytokines such as IL-1, IL-6, IL-8, IL-12, TNF-α, and other chemokines. This excessive release of inflammatory cytokines causes a rapid increase in cytokine levels in the bloodstream, leading to systemic inflammation. As a result, it can cause not only lung damage but also multiorgan failure, which is closely related to the severity of the disease. In patients with severe COVID-19, a high correlation was observed between circulating inflammatory cytokines, such as IL-6, CXCL10 (IP-10), and CSF1 (M-CSF), and arginine metabolism as well as tryptophan metabolism ( 43 ). Arginine is closely associated with inflammatory responses due to its essential role in T-cell activation, regulating both innate and adaptive immunity ( 47 ). These results reveal a strong correlation between TNF family cytokines and transcripts related to GSH metabolism, suggesting a potential link between the release of inflammatory cytokines and oxidative stress. The release of these inflammatory cytokines can potentially induce damage to lung tissue ( 48 ). Tryptophan metabolism is known to have the strongest correlation with IL-6 ( 43 , 49 ). Additionally, TNF-α, IL-6, and IL-1β induce elevated Ido1 expression in the context of immunosuppression in lung cancer progression ( 50 ). 
In conclusion, this study can be considered a notable advance as it included a comprehensive approach involving metabolic and transcriptomic profiling in animal models, which is relatively unexplored in the context of SARS-CoV-2 and its variants. We suggest that arginine biosynthesis, GSH metabolism and tryptophan metabolism are key metabolic pathways, shedding light on their relationship with the pulmonary immune response to both the delta and omicron infections. Furthermore, these pathways could be potential targets for therapeutic interventions aimed at mitigating the impact of these two SARS-CoV-2 variants. Overall, this study demonstrates that metabolic profiling with transcriptomic profiling is a valuable tool for exploring the immunometabolic responses associated with infectious diseases. | Study | biomedical | en | 0.999996 |
During the last decade, Bayesian geostatistical models have increasingly been used to determine spatio-temporal patterns of malaria risk, capture the effects of control interventions, and identify environmental and socioeconomic factors that are related to changes in the distribution of malaria risk. In most low- and middle-income countries, the data used to fit geostatistical models are mainly collected by national household surveys such as the Demographic and Health Surveys (DHS) and the Malaria Indicator Survey (MIS). A two-stage sampling design is used to select survey clusters and households within clusters. The clusters typically include around 25 households and are geo-referenced according to their centroids. However, to ensure confidentiality of the health status of the enrolled individuals, the longitude and latitude of the cluster centroids are randomly jittered (displaced) from their original positions within a radius of 0 to 10 km according to the type of location (rural/urban). Some studies have either assessed or mitigated the influence of imprecise geographical locations on model fit. In particular, studies on jittering DHS data have investigated the impact of spatial displacement on the estimates of the effects of distance-based covariates, such as proximity to health services, or areal covariates, such as poverty measures defined in areas around a cluster location. These studies have been conducted in the field of HIV infection, using simulated and real data to assess the potential effects of location shift on model parameter estimates. However, within the Bayesian geostatistical modelling framework, studies assessing the effects of cluster displacement on pixel-level predictions of disease risk, such as malaria, and on the estimates of covariate effects, for example climatic factors or control interventions, are rather lacking. The fourth DHS in Cameroon was combined with the Multiple Indicators Cluster Survey (MICS) in 2011 and carried out between January and August, a period which unfortunately did not overlap with the high malaria transmission season. In the same year, the National Malaria Control Program (NMCP), the National Institute of Statistics (NIS) and other partners conducted a MIS from September to November, within the high malaria transmission season, on a subset of clusters previously surveyed by the DHS. The geographical coordinates of the DHS clusters involved in the MIS were registered without any alteration. Our study assessed the influence of jittering of cluster locations on geostatistical model-based malaria risk estimates at high spatial resolution and on the estimates of control intervention effects. A large simulation study using jittered locations was carried out based on the MIS cluster locations and the random displacement procedure of the DHS. Bayesian geostatistical models were applied to the simulated data and the results were compared with those from the non-jittered data. Cameroon, a country in Central Africa, has a population of around 24 million inhabitants with an annual population growth of 2.5 % within a territory of 475,650 km². Fifty-one percent of the population lives in urban areas. In 2017, the gross domestic product growth rate was 3.1 % and the most recent estimate of the human development index, from 2014, was 0.518. 
The country is spanned by different ecological environments with various lengths of malaria transmission, namely: the dry Sahelian in the Far North region and Sudano-Guinean in the North region (4–6 months), the highlands of Adamawa region and West (7–12 months); the equatorial forests in Centre, East and South regions; the Atlantic coastal in Littoral, South-West and part of South regions where malaria transmission is perennial (12 months) . The Cameroon Malaria Indicator Survey (MIS) of 2011 was nationally representative and funded by the Global fund to fight AIDS, Tuberculosis and Malaria with the aim to collect malaria indicators additional to those in DHS and to compare the overall malaria parasite prevalence obtained by the MIS and DHS data . The MIS was conducted in 257 clusters randomly selected out of the 580 clusters of the Cameroon DHS 2011 and involved 6040 households and 4939 children aged between 6 and 59 months . Rapid Diagnostic Tests (First Malaria Response Antigen) were used for malaria screening of children with the approval of adults in charge . Apart from the malaria parasite data, the survey collected information on malaria interventions and socio-economic status proxies. Fig. 1 : Observed malaria parasite risk in children under 5 years at 257 MIS locations. Fig. 1 Data on malaria interventions was processed to create the following intervention coverage indicators as proposed by the Global Malaria Action Plan and Roll Back Malaria monitoring and evaluation group: (a) proportion of children in the households who slept under an insecticide treated-net (ITN) the night before the survey, (b) proportion of households in the cluster with at least one ITN, (c) proportion of households in the cluster with one ITN per two persons, (d) proportion of population with access to an ITN in their household. Adherence to the health system was calculated by the proportion of children with fever who sought treatment at hospital, tested and treated with the recommended anti-malaria drugs (Artemisinin-based combination therapy) during the last two weeks . The analysis included the education level of women of reproductive age and the household welfare index as socio-economic proxies. The education level was categorized into three levels (primary, secondary and university). The household asset index was available in the database and it classified households into the poorest, poor, middle, rich and richest categories. The area type (urban or rural) was extracted from the MIS data. One hundred datasets were generated from the original MIS data, each with randomly jittered cluster locations from the MIS coordinates according to the jittering algorithm used by the DHS program. In particular, clusters in urban areas were randomly displaced within a radius of 2 km; whilst 99 % of those in rural areas were shifted within a radius of 5 km from their original locations. The remaining 1 % of rural clusters were displaced up to a radius of 10 km, as these clusters remained sparsely populated . The simulated data differed from the MIS data in the cluster coordinates. The prevalence, intervention and socio-economic information were maintained the same as at the original locations. Environmental and climate proxies were obtained from satellite sources ( Table A.1 in the Appendix). Day and night Land Surface Temperature (LSTD, LSTN), Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI) and Rainfall estimates (RFE) were averaged over the year prior to the survey. 
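To make the displacement procedure explicit, the following sketch (R) implements the DHS-style jittering rule described above on planar coordinates expressed in kilometres: urban clusters are displaced up to 2 km, rural clusters up to 5 km, and a random 1 % of rural clusters up to 10 km. It is a simplified illustration with hypothetical coordinates; the actual DHS procedure operates on geographic coordinates and restricts displaced points to the original administrative area.

# Minimal sketch of DHS-style random displacement ("jittering") of cluster
# centroids. Coordinates are assumed to be in a projected system with units
# of kilometres; 'urban' is a logical vector flagging urban clusters.
jitter_clusters <- function(x, y, urban) {
  n <- length(x)
  max_r <- ifelse(urban, 2, 5)              # urban: up to 2 km, rural: up to 5 km
  rural_idx <- which(!urban)
  far <- rural_idx[sample.int(length(rural_idx),
                              ceiling(0.01 * length(rural_idx)))]
  max_r[far] <- 10                          # 1 % of rural clusters: up to 10 km
  angle <- runif(n, 0, 2 * pi)              # random direction
  dist  <- runif(n, 0, max_r)               # random distance within the radius
  data.frame(x_jit = x + dist * cos(angle),
             y_jit = y + dist * sin(angle),
             displacement_km = dist)
}

# One simulated dataset from hypothetical coordinates for 257 clusters
set.seed(2011)
clusters <- data.frame(x = runif(257, 0, 500), y = runif(257, 0, 500),
                       urban = runif(257) < 0.4)
sim1 <- jitter_clusters(clusters$x, clusters$y, clusters$urban)
summary(sim1$displacement_km)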
The covariates forest, savannah, cropland and distance to permanent water bodies (DWB) were retrieved or calculated from land cover satellite maps. The data were extracted at the MIS cluster locations and at the locations of the simulated datasets. Bayesian geostatistical binomial regression models were fitted to the malaria parasite data (MIS and simulated) aggregated at cluster locations (see Additional file 1). The models incorporated geostatistical variable selection to identify the most important climatic and environmental covariates, including their functional forms (i.e. continuous or categorical). The categorical covariates were derived by analyzing the relationship between malaria cases and the continuous climatic predictors. The cut-off points were validated using linear regressions. In particular, a categorical indicator was created for each climatic predictor, taking the values 0, 1 and 2, corresponding to the exclusion of the predictor from the model or its inclusion in continuous or categorical form, respectively (see Additional file 2). It was assumed that the indicator arose from a multinomial distribution with probabilities defining the variable-specific exclusion/inclusion probabilities (in continuous/categorical form) in the model. A threshold of 50 % was adopted for the probability of inclusion (i.e. posterior inclusion probability) into the predictive geostatistical model. The predictive performance of the models obtained from each simulated dataset was evaluated using the log predictive score, comparing model-based predictions at the MIS locations with the observed MIS survey data. Bayesian kriging, as described in Additional file 1, was applied separately to the observed data as well as to the simulated data with the best and least predictive performance (i.e. maximum and minimum log predictive score, respectively), namely Model 1a (MIS data), Model 1b (simulated data with the best predictive performance) and Model 1c (simulated data with the worst predictive performance). For each of the above models, a gridded surface of malaria parasite risk was estimated over 117,192 cells/pixels of 2 × 2 km² spatial resolution covering the country. To assess the effect of jittering on individual-level covariates such as the ITN coverage indicators, geostatistical Bernoulli models were fitted to individual-level data obtained from the observed MIS and the simulated data. As described above, three models were fitted, i.e. Model 2a (applied to the MIS data) and Models 2b and 2c (applied to the simulated data with the best and worst predictive ability, respectively). We implemented geostatistical variable selection to identify the most important ITN indicators. The individual-level models were adjusted for the confounding effects of the climatic predictors selected by the corresponding cluster-level model and for the socio-economic proxies. Due to high correlation among the ITN indicators, only one indicator was allowed into the model. Covariates were considered statistically important when the corresponding 95 % Bayesian credible interval (BCI) of the odds ratio did not include 1, which is equivalent to the BCI of the regression coefficient not including 0. Computation was performed on a dual-processor workstation (2 × 2.6 GHz, 128 GB RAM). OpenBUGS version 3.2.3 (Imperial College and Medical Research Council, London, UK) was used for Bayesian model fitting and prediction. Data management and analysis were carried out in R statistical software. 
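In outline, and with notation chosen here purely for illustration (the authors' full specification is in Additional file 1), the cluster-level model described above can be written as a binomial likelihood with a logit link, spatially structured random effects from a Gaussian process (an exponential correlation function is assumed in this sketch), and a multinomial inclusion indicator for each climatic predictor:

\[
\begin{aligned}
Y_i \mid p_i &\sim \operatorname{Binomial}(n_i, p_i),\\
\operatorname{logit}(p_i) &= \beta_0 + \textstyle\sum_k \beta_k X_{ik} + \phi_i,\\
(\phi_1,\dots,\phi_n)^{\top} &\sim \mathcal{N}\!\left(\mathbf{0}, \sigma^2 R\right), \qquad R_{ij} = \exp(-\rho\, d_{ij}),\\
I_k &\sim \operatorname{Multinomial}\bigl(1;\ \pi_{k0}, \pi_{k1}, \pi_{k2}\bigr),
\end{aligned}
\]

where Y_i is the number of children testing positive out of n_i examined at cluster i, d_ij is the distance between clusters i and j, and I_k indicates whether predictor k is excluded (0) or enters the model in continuous (1) or categorical (2) form; a predictor is retained in the final predictive model when its posterior inclusion probability exceeds the 50 % threshold.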
Convergence was assessed by the Geweke statistic, visual inspection of the traceplots and achieved in less than 200,000 iterations . Maps were drawn in ArcGIS version 10.2.1 ( http://www.esri.com/ ) . The overall malaria prevalence estimated by the MIS data was 33 %. In the rural areas, 43 % of children were tested positive, meanwhile this proportion was 19 % in urban areas. The most affected areas were located in the North, East and South regions of Cameroon with a malaria risk of 57.2 %, 56.5 % and 50.9 %, respectively. The proportion of mothers that had attended university was 6.7 % and those without any education were 23.3 %. The proportion of households with at least one ITN was 46 %. Only 9 % of the population had access to an ITN in their household and 12 % of children with fever who sought treatment at hospital received a recommended Artemisinin-based combination therapy (ACT) during the last two weeks. Sixty-eight percent of households were most poor or poor. Fig. 2 displays the distribution of the distances between the original and shifted locations across the 100 simulated datasets depending of the area type (rural, urban). As expected by the DHS jittering algorithm, the median distance between the true and their jittered locations was larger in rural than urban clusters . Fig. 2 : Distribution of the distances (km) between the original and shifted locations across the 100 simulated datasets according to urban and rural cluster type. Fig. 2 Geostatistical analysis of MIS data The geostatistical variable selection performed at the cluster level model (Model 1a) fitted on the original MIS data identified NDVI and altitude (in continuous form), EVI and DWB (in categorical form) and the presence of forest (binary) as the most important predictors of parasitaemia risk ( Table 1 ). Estimates of the final geostatistical model ( Table 2 ) indicated that the malaria parasite risk was positively associated with NDVI, EVI, and presence of forest, and negatively associated with altitude. The individual level model (Model 2a) fitted to the original MIS data selected the proportion of households with 1 ITN per 2 persons as the most important predictor ( Table 1 ). The association of this predictor with the parasitaemia risk was negative as shown in the final geostatistical model ( Table 2 ). Table 1 Posterior inclusion probabilities (%) of the climatic predictors and intervention coverage indicators based on the geostatistical variable selection applied to the three datasets i) observed MIS (cluster-level Model 1a, individual-level Model 2a) ii) simulated data with the best predictive ability (Model 1b, Model 2b) and iii) simulated data with worst predictive ability (Model 1c, Model 2c). Inclusion probabilities of the selected predictors are in bold. 
Table 1 Model Predictor MIS data Simulated data with best predictive ability Simulated data with worst predictive ability Excluded Continuous Categorical Excluded Continuous Categorical Excluded Continuous Categorical Model 1a, 1b, 1c: Cluster level RFE 36 18 46 56 18 26 26 10 64 NDVI ⁎ 12 83 5 2 98 0 20 58 22 LSTD 55 23 22 59 19 22 60 17 23 EVI ⁎ 16 17 67 0 0 100 12 42 46 DWB ⁎ 23 25 52 33 26 41 10 14 76 Altitude ⁎ 1 96 3 3 79 18 2 98 0 Forest ⁎ 34 – 66 41 0 59 36 0 64 Savannah 69 – 31 68 0 32 74 0 26 Cropland 72 – 28 81 0 19 75 0 25 LSTN 54 29 17 46 54 0 24 10 66 Model 2a,2b,2c: Individual level % of population access to an ITN in their household 84 16 – 87 13 – 82 18 – % of households with at least one ITN 100 0 – 84 16 – 98 2 – % of households with one ITN per two persons ⁎ 47 53 – 42 58 – 77 23 – % of children slept under ITN previous night ⁎ 69 31 – 87 13 – 43 57 – % of children with fever who received recommended anti-malaria drugs (ACT) 73 27 – 76 24 – 70 30 – ⁎ : the climatic or intervention indicator is selected. Table 2 Estimates (posterior median and 95 % BCI) of the geostatistical model parameters based on the cluster level (Models 1a, 1b, 1c) and the individual level models (Models 2a, 2b, 2c). Table 2 Factor MIS data Simulated data with the best predictive ability Simulated data with the worst predictive ability Model 1a Model 2a Model 1b Model 2b Model 1c Model 2c OR (95 % BCI) OR (95 % BCI) OR (95 % BCI) OR (95 % BCI) OR (95 % BCI) OR (95 % BCI) 0–30 mm 1 1 RFE 30–60 mm 0.83(0.24; 2.49) 2.54(0.86; 7.97) >60 mm 0.36(0.08; 1.65) 1.07(0.30; 3.98) NDVI 1.55 (1.12; 2.12) 1.33(0.97; 1.82) 1.63(1.18; 2.29) 1.31(0.96; 1.82) 1.6 (1.23; 2.09) 1.32(1.03; 1.70) EVI <0.21 1 1 1 1 0.21–0.38 1.90 (1.03; 3.51) 1.38(0.82; 2.33) 1.95(1.03; 3.67) 1.33(0.79; 2.22) >0.38 1.25 (0.51; 3.02) 0.92(0.41; 2.1) 1.24(0.49; 3.08) 0.9(0.40; 1.98) DWB <70 m 1 1 1 1 ≥ 70 m 1.82 (1.005; 3.45) 1.60(0.90; 2.86) 1.98(1.09; 3.95) 1.74(0.98; 3.17) Altitude 0.39 (0.26; 0.57) 0.37(0.25; 0.53) 0.53(0.3; 0.91) 0.42(0.25; 0.72) 0.38(0.24; 0.6) 0.35(0.23; 0.54) Forest No 1 1 1 1 1 1 Yes 1.55 (1.002; 2.39) 1.17(0.77; 1.79) 1.49(0.95; 2.34) 1.18(0.77; 1.81) 1.46(0.93; 2.28) 1.06(0.68; 1.62) LSTN_continuous 1.37(0.83; 2.29) 1.19(0.76; 1.89) 0–14 1 1 LSTN_categrical 14–18 1.82(0.29; 18.01) 2.06(0.63; 14.04) >18 1.45(0.21; 16.22) 1.87(0.53; 13.42) Gender Female 1 1 1 Male 0.99(0.86; 1.15) 0.99(0.86; 1.15) 1(0.87; 1.15) Area type Rural 1 1 1 Urban 0.55(0.38; 0.80) 0.54(0.38; 0.78) 0.56(0.39; 0.81) Wealth Index Most poor 1 1 1 Very poor 0.60(0.46; 0.76) 0.6(0.47; 0.76) 0.6(0.47; 0.76) Poor 0.66(0.49; 0.89) 0.66(0.48; 0.88) 0.65(0.48; 0.87) Less poor 0.46(0.32; 0.66) 0.45(0.31; 0.65) 0.46(0.32; 0.66) Least poor 0.39(0.25; 0.61) 0.39(0.25; 0.61) 0.38(0.25; 0.60 Education level of mothers No education 1 1 1 Primary 1.15(0.92; 1.43) 1.14(0.91; 1.42) 1.15(0.92; 1.44) Secondary 0.92(0.70; 1.22) 0.92(0.7; 1.21) 0.93(0.7; 1.22) University 1.03(0.57; 1.84) 1.03(0.56; 1.83) 0.98(0.53; 1.73) Age 0–1+ 1 1 1 1–2 1.31(0.96; 1.77) 1.32(0.97; 1.79) 1.34(0.99; 1.81) 2–3 2.29(1.70; 3.10) 2.30(1.71; 3.10) 2.32(1.73; 3.12) 3–4 2.57(1.90; 3.48) 2.59(1.92; 3.49) 2.60(1.93; 3.51) >4 3.49(2.62; 4.65) 3.38(2.51; 4.54) 3.40(2.54; 4.57) % households with 1 ITN per 2 persons 0.16(0.05; 0.47) 0.14(0.05; 0.44) % of children with fever in the last two weeks who received ACT 0.35(0.18; 0.66) Spatial parameters Posterior median Posterior median Posterior median Posterior median Posterior median Posterior median (95 % BCI) (95 % BCI) (95 % BCI) (95 % BCI) 
(95 % BCI) (95 % BCI) Spatial variance 1.81 (1.24; 2.92) 1.62(1.10; 2.76) 1.88(1.22; 3.6) 1.64(1.17; 2.5) 1.87(1.29; 3.13) 1.64(1.1; 2.8) Range (km) 1 154.8 (89.50; 292.96) 188.09(100.35; 353.63) 111.43(62.26; 217.13) 214.98(120.37; 487) 143.29(75.72; 301.36) 158.87(94.18; 321.1) 1: Smallest distance that spatial correlation is <5 %. Geostatistical variable selection applied to each of the simulated data identified 26 sets of climatic and environmental predictors that were included in the selected model ( Table A.2 in Appendix). Two simulated models among one hundred had the highest posterior inclusion probabilities equal to 19 % and 18 %. Both models included NDVI and altitude (continuous), EVI (categorical) and forest presence. Furthermore, the DWB was included in the second most frequent model (with inclusion probability of 18 %). In accordance with the original (un-jittered MIS data), all simulated data identified the altitude (in continuous form) as an important predictor, while the LSTD, savannah and cropland land use types were excluded from all the selected models ( Table 3 ). LSTN was rarely included in the set of important predictors. Table 3 Relative frequencies of the climatic predictors and their functional forms identified by the geostatistical variable selection across the 100 simulated data. The predictors selected by the original data are in bold. Table 3 Climatic predictors Functional form RFE NDVI LSTD EVI DWB Altitude Forest Savannah Cropland LSTN Continuous 0 % 95 % 0 % 24 % 0 % 100 % 0 % 0 % 0 % 1 % Categorical 25 % 1 % 0 % 51 % 64 % 0 % 90 % 0 % 0 % 3 % Excluded 75 % 4 % 100 % 25 % 36 % 0 % 10 % 100 % 100 % 96 % The estimates of the effects of climatic predictors based on the selected models were overlapping between the simulated datasets . Altitude (in continuous form) was always statistically important and negatively associated with the parasitaemia risk. NDVI was positively associated and statistically important for the malaria parasite risk in most simulated data. Malaria parasite risk had a positive and most often statistically important relationship with the presence of forest, EVI and DWB. On the other hand, the importance of RFE, LSTN or LSTD on parasitaemia risk varied with the data. Fig. 3 Effects (posterior median, 95 % BCI) of the categorical covariates estimated by the selected geostatistical model for each simulated data (1−100) ordered according to the logarithmic predictive score values and of the data 0 corresponding to the observed data. Fig. 3 Fig. 4 Effects (posterior median, 95 % BCI) of continuous covariates estimated by the selected geostatistical model for each simulated data ordered according to the logarithm predictive score values of the models. Fig. 4 Table 1 presents posterior inclusion probabilities of the selected models based on the simulated data with the best (Model 1b) and worst predictive performance (Model 1c) and on the observed MIS data (Model 1a). The difference between Model 1b and Model 1a was that the former included LSTN and excluded DWB. Model 1c included RFE, LSTN which were not in Model 1a and excluded EVI. Regarding the selection of intervention indicators from the individual-level model, the Model 2b with the best predictive performance among the simulated data gave similar results to the true model (Model 2a). The model with the worse performance among simulated data (Model 2c) was not able to capture the statistically important effect of the malaria intervention indicator (i.e. 
proportion of households with 1 ITN per 2 persons and the proportion of children who slept under an ITN the previous night). The direction of the effects was estimated by the Bayesian geostatistical models ( Table 2 ). The global spatial patterns of disease risk in the East, North and Coastal parts of the country were well captured by the three cluster-level models. Maps drawn on the same scale clearly indicated similar geographic patterns predicted by the three models (Model 1a, Model 1b, and Model 1c), therefore the models with best and worst predictive performance were able to capture the disease risk distribution of the MIS dataset. However, the prediction uncertainties of Model 1b and Model 1c over the gridded surface were greater than the ones obtained from Model 1a . Fig. 5 : Malaria parasite risk estimates (median of predictive posterior distribution) among children less than 5 years, obtained from i) Model 1a (left), ii) Model 1b (center) and iii) Model 1c (right). Fig. 5 Fig. 6 : Predictive uncertainty (standard deviation of predictive posterior distribution) of estimated parasite risk among children less than 5 years, obtained from i) Model 1a (left), ii) Model 1b (center) and iii) Model 1c (right). Fig. 6 The spatial variance estimates and uncertainty obtained from the simulated data with the worst and best predictive abilities were close to the ones produced by the observed MIS data. The residual spatial correlation estimated by the different clusters and individual models remained high, indicating the presence of unmeasured spatially structured factors related to the geographic distribution of the parasitaemia risk. This study is the first to assess the effects of jittering of DHS/MIS cluster locations on the estimates of the geographical distribution of malaria risk and of the intervention effects obtained by Bayesian geostatistical modelling . A large number of jittered datasets were simulated from real data and geostatistical variable selection was applied to determine the impact of jittering on the model formulation. Different subsets of climatic factors in the simulated data were identified as important predictors of malaria risk. However, in 18 % of the datasets, the models included the same predictors with the fitted model obtained by the observed MIS data, while the model with the highest posterior inclusion probability (19 %) could not capture the statistically importance of the DWB predictor. NDVI and altitude were selected in more than 95 % of the simulated data and DWB was identified in 64 %. Furthermore, the jittering of cluster locations had an influence on the selected functional form of the climate predictors (continuous/categorical). These results showed that spatial displacement can influence the risk factor analysis and the estimation of the effects of malaria interventions on the disease risk. Similar findings have been reported for distance-based covariates in the study presented by Warren et al. . The direction of the relation between parasite risk, NDVI and altitude remained the same in all simulated data. In particular, the continuous form of NDVI was statistically important in most of simulations and as expected, the altitude was always negatively related and statistically important to the malaria parasite risk. Those associations were confirmed with the estimates obtained from the true dataset and findings from others studies . The jittering did not affect the direction of the relationship between malaria risk, NDVI and Altitude. 
However, the jittering had an effect on the uncertainty estimates of the covariate effects and therefore on their statistical importance. Warren et al. have also concluded that displacement of clusters led to an increase in the estimated uncertainty of the regression coefficient . In addition, Cressie et al. proved that in the presence of spatial location error, the prediction estimates and regression coefficients were influenced . In most simulations, the BCIs of the altitude and NDVI (continuous forms), EVI and DBW (categorical forms) were overlapping. The majority of datasets were able to capture the statistical importance of those covariates. Altitude and vegetation index change little in the space within the radius corresponding to the random displacement of cluster's coordinates, most likely due to small environmental gradient within each ecological zone of the country. Thus, locations inside the displacement buffer shared in most cases the same environmental conditions and therefore their parameter estimates were not affected by jittering . The simulated data with the highest predictive performance was the one with location configuration among the closest to the true data. The parameter estimates of this model were also similar to the fitted model on true data. According to the simulation, the proportion of households with 1 ITN per 2 persons or the proportion of children who slept under an ITN the previous night before the survey were identified as important predictors of the individual-level malaria risk model. Variable selection applied on the intervention coverage indicators revealed that jittering influenced their posterior inclusion probabilities into the model and therefore the inference about the effects of malaria interventions. This result could be due to the confounding effects of climatic predictors. All the wealth index categories were statistically important and negatively associated to the malaria parasite risk in the true model. The three individual level models showed that, posterior parameter estimates of socioeconomic factors were relatively stable, irrespective of the model. The socioeconomic factors were related to the individual risk rather than the malaria prevalence at the location level, and thus estimates of the socio-economic effects were not much influenced by the displacement of the clusters. Similarly, demographic factors also related to the individual were not affected by the jittering. The gender was not statistically important and a gradient of risk was noted in the age groups as already supported by other studies . The effect of the selected intervention indicator was statistically important and negatively associated to the parasitaemia risk regardless of the simulated dataset. Similar to the socioeconomic status and demographic factors, intervention effects were more likely to be higher at the individual and household than the community; therefore the changes of cluster locations did not influence the direction of the relationship between ITN coverage indicators and malaria parasite risk after adjusting for socioeconomic factors. The BCIs of the spatial correlation parameters of the true, best and worst models were overlapping and their spatial variances were not dramatically changed. Spatial range parameters depend on the cluster locations and were sensitive to the distance between locations. The individual level models overestimated spatial correlation especially for the model having the worst predictive ability. 
The change of cluster locations could lead to a misspecification of the spatial dependence structure of the disease risk . The geographical patterns obtained from the simulated data with the highest and lowest predictive performance were similar to the ones obtained from the true data. The relationship between malaria risk and the climatic factors was rather stable within the same ecological zone and therefore was not strongly influenced by the jittering of the locations. This result was expected since several studies showed a local interdependency between climatic factors within small buffer zones. . These results were based on the assumptions of a stationary and isotropic spatial process of the malaria risk. Violation of these assumptions may influence the results of geostatistical variable selection and therefore the impact of jittering on model specification. The major overall limitation of this study was that it focussed on a single dataset from a single country, examining a specific outcome within two quite specific modelling frameworks. However, Cameroon settings encompass most of the African settings which include Sahelian, Semi-Sahelian, Cold and Forest area. The changes observed on malaria risk estimates and on interventions predictors obtained from non-stationary geostatistical models could be potentially generalized in the other African contexts . Moderate spatial modifications in the geographical positions of the clusters surveyed might have little influence on the estimation of the spatial patterns of malaria risk in Cameroon, especially when the climatic and environmental conditions are similar within the radius of the random displacement of locations. Nevertheless, the jittering of cluster locations has an impact on the selection of climatic predictors used to estimate the disease risk at high geographical resolution and could affect the interpretation of the relationship between malaria parasite infection with environmental and climatic factors that support the disease transmission. According to the Cameroon law, the DHS, MICS and MIS are carried out by the National Institute of Statistics, and because of blood samples collection, the clearance of the national ethical committee on health was obtained before the field step of survey. During those surveys, the head of household or the person in charge of children must give their consents before answering of questionnaire and blood screening. This work was supported by the 10.13039/501100000781 European Research Council (ERC) IMCCA grant number 323180 and the Swiss National Foundation (SNF) program for Research on Global Issues for Development (R4D) project number IZ01Z0–147286 . PV had conceived, designed the study and contributed to the analysis. KC and RW contributed to the design, collect of the DHS and MIS data. SM had analysed the data and drafted the manuscript. PV, SM, KC and RW revised the manuscript and provided the intellectual content. All authors read and approved the final manuscript. Salomon G. Massoda Tonye: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Resources, Methodology, Investigation, Data curation, Conceptualization. Romain Wounang: Methodology, Data curation, Conceptualization. Celestin Kouambeng: Writing – review & editing, Validation, Resources, Data curation, Conceptualization. Penelope Vounatsou: Writing – review & editing, Supervision, Software, Resources, Project administration, Methodology, Funding acquisition, Data curation, Conceptualization. 
The authors declare that they have no competing interests. | Study | biomedical | en | 0.999999 |
Esophageal cancer is the eighth most common cancer worldwide and the sixth leading cause of cancer-related deaths. Statistics show that the 5-year survival rate is only 15–20 %, and the disease causes over 500,000 deaths annually. By 2040, the global incidence of esophageal cancer is expected to reach 987,723 new cases, with 914,304 deaths. Current treatment modalities include surgical resection, radiation therapy, chemotherapy, and palliative care. Surgical resection of esophageal tumor tissues usually necessitates removal of the larynx, impairing vocal function and potentially leading to postoperative complications. Radiation therapy and chemotherapy are widely used but lack selectivity for tumor cells and can result in severe side effects such as radiation pneumonitis, pleural effusion, and pericardial effusion [ , , ]. Consequently, there is an urgent need for novel therapeutic approaches aimed at improving survival rates and quality of life for esophageal cancer patients. In this context, photodynamic therapy (PDT) has emerged as a promising innovative treatment owing to its higher selectivity and fewer systemic adverse effects [ , , , , , , , , , , ]. However, the conventional use of implanted optical fibers for PDT is plagued by complex equipment, cumbersome procedures, and increased patient discomfort, highlighting the need for more portable and effective light delivery systems for esophageal cancer patients [ , , ]. Recently, some research groups have explored small devices for PDT, embedding light-emitting components within the body and powering them wirelessly to achieve effective internal illumination of living tissues [ , , , , , , , , ]. For esophageal cancer patients, tumor progression often leads to esophageal stenosis, causing eating difficulties [ , , ]. Esophageal stent placement is a widely used minimally invasive intervention that rapidly relieves dysphagia and obstruction symptoms and improves nutritional status, and it is widely employed in clinical treatment [ , , , ]. Therefore, the integration of a small PDT unit with a commonly used esophageal stricture-relieving stent may provide a suitable system for novel internal esophageal tumor treatment. Furthermore, the position of the PDT unit also needs to be dynamically adjusted according to the tumor's status in the esophagus to achieve efficient and precise treatment. Tumors are proliferative and metastatic, and during treatment, tumor progression can enlarge the lesion area or cause metastasis [ , , , , , , , ], weakening the therapeutic effect on regions distant from the light source. Soft robotics research has shown significant potential in the medical field, with flexible structures and excellent deformability suitable for operations in unknown and unstructured environments [ , , , ]. Soft actuators, which drive or control these systems, are a core component of soft robotics research. Among different driving mechanisms, pneumatic soft actuators are simple in structure, cost-effective, highly efficient, quick to respond, and environmentally friendly. Previously, our team used an electrochemical pneumatic soft actuator to perform in vivo surgery, e.g. inducing eye shape changes for the treatment of high myopia. Consequently, a pneumatic soft actuator for moving the PDT unit is well suited for integration into the system. 
Here, we propose a wireless, battery-free, multifunctional therapeutic system that integrates a PDT module and an electrochemical pneumatic soft actuator into an esophageal stricture-relieving stent. This system not only alleviates esophageal stenosis symptoms and rapidly improves swallowing difficulties but also achieves precise and targeted treatment of tumor cells. The system comprises an esophageal stent, two piezoelectric transducers, an electrochemical pneumatic soft actuator, a micro light-emitting diode (μ-LED), flexible circuits, and biocompatible packaging. The μ-LED serves as the treatment module, providing a light source for PDT to activate photosensitizers that produce cytotoxic reactive oxygen species (ROS). The electrochemical pneumatic soft actuator, consisting of an electrolysis chamber and a long soft silicone tube track, houses the treatment module inside the track, allowing unidirectional movement along the track, with the entire actuator spirally wound inside the esophageal stent. When the tumor grows or metastasizes, the actuator can move the μ-LED to the new tumor site. The piezoelectric transducers convert external ultrasound waves into electrical energy, powering the therapeutic and actuation processes. These two processes are independently controlled by two piezoelectric transducers, each selectively responding to different external ultrasound frequencies, allowing independent and non-interfering operation. This innovative therapeutic approach holds promise for providing more effective and personalized treatment options for esophageal cancer patients, offering new avenues and methods for clinical intervention. Fig. 1 illustrates the structure and operation of the wireless, battery-free, and movable PDT system designed for esophageal tumor treatment. The system is installed at the site of esophageal stenosis caused by tumor growth, as shown in Fig. 1 A. Fig. 1 B depicts the overall structure of the system, which integrates a commercial nitinol esophageal stent as the framework, a therapy module that provides the PDT light source, and an electrochemical pneumatic soft actuator that supplies moving force and guidance for the therapy module. The wireless unit comprises two high-sensitivity lead zirconate titanate (PZT) piezoelectric transducers: PZT 1 and PZT 2. These transducers harvest energy from ultrasound waves and convert it into electrical power for the respective modules. The therapy module, consisting of PZT 2 and a μ-LED, utilizes the energy collected by PZT 2 to illuminate the μ-LED, providing the light source for PDT. The electrochemical pneumatic soft actuator comprises PZT 1, a control circuit, interdigitated electrodes, an electrolysis chamber, ionic solution, a track tube, and a piston. Using the energy harvested by PZT 1, the sophisticated configuration of the interdigitated electrodes and ionic solution induces electrolysis within the sealed chamber, generating bubbles that cause piston displacement, thereby controllably adjusting the position of the therapy module along the actuator track. The actuator is helically wound and adhered to the inside of the esophageal stent using a biocompatible flexible polymer (polydimethylsiloxane, PDMS). The helical structure ensures maximum coverage of the track within the stent, maximizing the area reachable by the therapy module. The entire system weighs 4.8 g and the integrated stent measures 7.8 cm in length. This compact and lightweight design facilitates minimally invasive implantation. Fig. 
1 Overview of the esophageal stent for tumor treatment. (A) The stent installed in the affected area. (B) The structure of the stent. (C) Photograph showing various configurations of the actuator. (D) The sizes of PZT 1 and the therapy module, smaller than a grain of rice. The width and gap of the interdigitated electrodes are 80 μm, finer than a syringe needle. (E) The actuator can be twisted and bent, demonstrating its capability to integrate into the esophageal stent. (F) The process of implanting the stent into the cancerous area. (G) The treatment process for esophageal cancer using the movable PDT stent system. Fig. 1 Fig. 1 C shows the integrated design of the electrochemical actuator, which includes the control circuit, energy module, and interdigitated electrodes on a flexible circuit board. Fig. 1 D demonstrates that the volume of PZT 1 is smaller than a grain of rice, effectively minimizing the patient's sensation of a foreign object. The treatment module's compact size ensures frictionless movement within the actuator track. The interdigitated electrodes, fabricated using microfabrication techniques, have a width and gap of 80 μm, finer than a syringe needle. Fig. 1 E highlights the flexibility of the actuator's solution storage chamber, circuit, and track, which can be twisted many times without performance degradation, ensuring the maintenance of the helical structure within the stent. Fig. 1 F illustrates the implantation process of the system into the esophagus of an esophageal cancer patient. The system is folded and compressed into a sufficiently thin catheter, which is then entirely implanted into the cancerous region of the esophagus. Upon removal of the catheter, the stent rapidly expands and props open the esophagus. Fig. 1 G outlines the operating procedure of the system. Ultrasound at 1 MHz is directed at PZT 2, activating it and lighting the μ-LED. The 660 nm red light irradiates the tumor area, inducing the production of ROS in the tumor cells and leading to tumor cell apoptosis. After treating this area, 680 kHz ultrasound is directed at PZT 1 to activate the actuator, moving and positioning the therapy module at the next tumor site. The process is repeated, treating each tumor site sequentially until all affected areas are cleared, after which the entire system is removed. Fig. 2 shows the characterization of the electrochemical pneumatic soft actuator. Fig. 2 A illustrates the exploded view and assembly process of the actuator. Given the excellent biocompatibility and flexibility of silicone, tubes of various diameters and lengths were selected to construct the actuator's transparent solution reservoir and control channels. Essential electronic components, including PZT 1, were soldered onto a custom-designed flexible circuit board, which was then encapsulated with PDMS to ensure biocompatibility for in vivo applications. PZT 1, a cylindrical component with a diameter of 3 mm and a height of 1 mm, operates at a resonance frequency of 680 kHz. Fig. 2 Structure and performance of the electrochemical actuator. (A) Exploded view and fabrication process of the actuator. (B) Workflow of the actuator in treating esophageal cancer. (C) Schematic of the actuator's energy harvesting circuit. (D) Simulation of 680 kHz ultrasound propagation. (E) Short-circuit current and open-circuit voltage peaks of PZT 1 at distances of 1–30 mm from the ultrasound source. (F) Current–voltage characteristics of the electrochemical actuator within the voltage range of 0–3 V.
(G) Output power of the electrochemical actuator in different media after receiving ultrasound. (H) Volume of gas produced by the solution electrolysis. (I) Displacement of the therapy module caused by the actuator's operation. (J) Optical image of the therapy module displacement within 100 s. Fig. 2 To enhance electrochemical performance, we opted for interdigitated electrodes with smaller gaps and larger opposing surface areas. This design provides a stronger electric field for the electrolysis reaction: 2H₂O (liquid) → O₂ (gas) + 2H₂ (gas), thereby accelerating gas generation. Simultaneously, the gold electrodes, which exhibit stable chemical properties, ensure stable conditions for the electrolysis process. These interdigitated electrodes are sealed within a solution reservoir, printed on both sides of the circuit, and extend throughout the solution storage channel to guarantee complete immersion in the ionic solution. Following assembly, the ionic solution is injected into the storage tank, and a Vaseline piston is placed within the guide. To maintain sufficient conductivity, a 50 mmol/L NaOH solution was employed as the electrolyte . Sodium hydroxide (NaOH), a strong base, increases the concentration of OH⁻ ions, providing a higher availability of reactive ions at the electrodes. This facilitates current flow through the electrolyte, thus enhancing the actuator's electrochemical activity. Fig. 2 B shows the actuator's operation. A 680 kHz ultrasound source with a 70 % duty cycle activates PZT 1, which converts the ultrasound waves into electrical energy through the piezoelectric effect. The electrical signal is rectified and regulated before being transmitted to the interdigitated electrodes, inducing electrolysis in the sealed electrolysis chamber's ionic solution. This generates bubbles that push the piston in the track, positioning the therapy module near the tumor for precise treatment. If the tumor grows or metastasizes, reducing treatment effectiveness in distant areas, the actuator can reposition the therapy module to the new target location for efficient treatment. Fig. 2 C illustrates the circuit structure, where the rectifier bridge and capacitor provide a stable DC signal for the electrolysis of the ionic solution within the actuator. One Schottky diode in the rectifier bridge is replaced with a μ-LED, which serves as an indicator while maintaining rectification functionality. The dynamic process is shown in Movie S1. Fig. 2 D simulates the sound pressure propagation of 680 kHz ultrasound in water, which has acoustic properties similar to human tissue (Z_water = 1.48 MRayl, Z_tissue = 1.63 MRayl, average for human tissue). The simulation results indicate that ultrasound can easily transmit 5 cm through tissue with minimal attenuation, and the system can be remotely driven and collect stable ultrasound energy during operation. Fig. 2 E demonstrates the stable output of PZT 1 at different depths in various media, with the open-circuit voltage peak remaining around 10 V and the short-circuit current peak stabilizing around 15 mA, ensuring the device's reliable performance. The output of PZT 1 with a 70 % ultrasound duty cycle is shown in Fig. S1 (Supporting Information), and the current output with a 100 % duty cycle is shown in Fig. S2 (Supporting Information). Fig. 2 F shows the current-voltage characteristics of the actuator within a voltage range of 0–3 V. 
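Two quick order-of-magnitude checks can make the numbers quoted above more concrete: the stated acoustic impedances of water and tissue give the fraction of ultrasound intensity lost at a water–tissue interface, and Faraday's law gives a rough gas-production rate for the water-splitting reaction. This is only a sketch of the underlying physics, not the authors' own analysis; in particular, the 10 mA electrolysis current is an illustrative assumption, not a value reported in the text.

```python
# Back-of-the-envelope checks on the actuator physics described above.
# Only the acoustic impedances (1.48 / 1.63 MRayl) come from the text; the
# 10 mA electrolysis current is an assumed, illustrative value.

# 1) Normal-incidence intensity reflection at a water-tissue interface
z_water, z_tissue = 1.48e6, 1.63e6                       # acoustic impedance, Rayl
reflectance = ((z_tissue - z_water) / (z_tissue + z_water)) ** 2
print(f"reflected intensity: {reflectance * 100:.2f} %")  # ~0.23 %, so nearly all
# of the incident ultrasound intensity crosses the interface, consistent with
# the simulated low attenuation.

# 2) Faraday's-law estimate of gas production from 2 H2O -> 2 H2 + O2
FARADAY = 96485.0                                        # C per mol of electrons
current_a = 0.010                                        # A, assumed drive current
mol_gas_per_s = (current_a / FARADAY) * (3.0 / 4.0)      # 3 mol gas per 4 mol e-
molar_volume_ml = 24.5e3                                 # mL/mol near 25 degC, 1 atm
print(f"gas rate: {mol_gas_per_s * molar_volume_ml * 60:.2f} mL/min")
```

Dividing a gas rate of this order by the bore cross-section of the track gives piston speeds on the order of millimetres to centimetres per minute, the same order of magnitude as the displacements reported for the therapy module.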
The device starts operating at 1.0 V, with the interdigitated electrodes in the actuator promoting the electrolysis reaction. To ensure adequate conductivity, a 50 mmol/L NaOH solution is used as the electrolyte. Fig. 2 G presents the actuator's power output at a depth of 10 mm in different media, with a power output of 14.9 mW in gel, 14.2 mW in tissue, and 12.7 mW in water. Fig. 2 H shows the volume-time relationship of gas production at room temperature, indicating that the device can stably provide a gas source during operation. This dynamic gas generation process is further illustrated in Supplementary Movie S2. Optical images also show significant bubble generation in the solution reservoir after 100 s. Fig. 2 I illustrates the displacement-time curve of the therapy module driven by the actuator's continuous operation. The actuator can consistently provide power, enabling the therapy module to move forward uniformly by 20 mm within 800 s. Fig. 2 J demonstrates the actuator's propulsion capability, moving the therapy module forward by one turn (45 mm) within 100 s. The dynamic process is shown in Supplementary Movie S3. All the materials used in the electrochemical pneumatic soft actuator are biocompatible, and the stable performance of the interdigitated electrodes, along with the actuator's excellent actuation capability, underscores its feasibility for in vivo operation, making future clinical translation possible. Fig. 3 characterizes the structure and performance of the treatment module. Fig. 3 A shows the structure and assembly process of the treatment module. A flexible circuit board is folded and soldered to the electrodes on both sides of PZT 2, with the other side connected to a μ-LED. The device is placed in the track of the actuator, resting solely on the outer end of the piston. The μ-LED (0.6 × 0.35 × 0.20 mm) and PZT 2 (1 × 1 × 0.6 mm, resonant frequency fc = 1 MHz) are small in size, ensuring the assembled module is compact and lightweight, allowing for easy movement within the actuator's track. Fig. 3 B illustrates the working process of the treatment module. The μ-LED is pushed by the actuator near the tumor. PZT 2 receives 1 MHz ultrasound and converts the acoustic energy into electrical energy, lighting up the μ-LED. The 660 nm red light illuminates the tumor injected with photosensitizer, performing PDT on the tumor. Fig. 3 C shows the circuit structure of the treatment module, where PZT 2 collects 1 MHz ultrasound, converts it into electrical energy, and transmits it to the μ-LED. Fig. 3 Structure and performance of the treatment module. (A) Exploded view of the treatment module. (B) Workflow of the treatment module in esophageal cancer therapy. (C) Energy harvesting circuit diagram of the treatment module. (D) Simulation of 1 MHz ultrasound propagation. (E) Open-circuit voltage output of PZT 2 at a 60 % duty cycle and 3 mm distance from the ultrasound source. (F) Short-circuit current and open-circuit voltage peak values of PZT 2 at distances of 1–30 mm from the ultrasound source. (G) Light power output of the μ-LED in the treatment module after receiving ultrasound in different media. (H) (I) Output performance of PZT 1 and PZT 2 at 680 kHz and 1 MHz, respectively, demonstrating that the actuator and treatment module operate without mutual interference. Fig. 3 Fig. 
3 D presents a simulation of 1 MHz ultrasound propagation in water, indicating that ultrasound at this frequency can transmit with minimal attenuation through tissues, providing stable ultrasonic energy to the device. Fig. 3 E shows the real-time open-circuit voltage output of PZT 2 at a distance of 7 mm from the ultrasound source, with a peak value reaching 15 V, sufficient to light up the μ-LED. Fig. 3 F demonstrates that PZT 2 maintains stable output at various medium depths, with an open-circuit voltage peak around 15 V and a short-circuit current peak around 2.5 mA, ensuring stable operation at different depths. Fig. 3 G evaluates the light power emitted by the device in different media: 0.65 mW in gel, 0.58 mW in tissue, and 0.55 mW in water, providing a stable light source for PDT. The dynamic process of ultrasound activating the treatment module to emit light is shown in Supplementary Movie S4. Fig. S3 (Supporting Information) shows the transmittance of different wavelengths of light through the silicone track of the actuator. The transmittance of 660 nm light is 86 %, indicating that most of the light produced by the therapeutic module can be used for PDT. Additionally, Fig. S4 (Supporting Information) describes the penetration capability of 660 nm red light. As the thickness of the pork tissue increases (1, 2, 3, and 4 mm), the transmittance of red light decreases (4.8 %, 2.38 %, 1.3 %, and 0.8 %, respectively). When the thickness exceeds 5 mm, the transmittance drops to below 0.05 %. These results highlight the limited penetration ability of light through tissue, making it challenging for traditional external illumination to penetrate the tissue effectively. However, our wireless system overcomes this limitation, effectively delivering the PDT light dose to the target area. Fig. 3 H and I depict the output performance of PZT 1 and PZT 2 at different ultrasound frequencies. PZT 1 is more sensitive to 680 kHz ultrasound, while PZT 2 is more sensitive to 1 MHz ultrasound. The different frequency selectivity of the two PZTs effectively prevents mis-activation in their respective working zones. In terms of biological safety, we conducted further tests on the impact of the therapeutic module on local tissue. After operating continuously for 1 h, the muscle tissue containing the module showed a minimal temperature increase of less than 2 °C (25.2 °C to 27 °C), demonstrating that the system does not cause thermal damage to the local tissue. Fig. 4 illustrates the efficacy of the PDT treatment module in eradicating esophageal cancer cells, as well as the impact of light source distance on PDT effectiveness in vitro. Fig. 4 A illustrates the process of PDT conducted through the treatment module. The base layer comprises an ultrasound source with the treatment module positioned on ultrasound gel. When activated, the ultrasound triggers the μ-LED to illuminate, facilitating PDT. At the topmost layer, a cell plate containing the human esophageal squamous cell carcinoma (ESCC) cell line KYSE-150 (K150), pre-treated with the photosensitizer chlorin e6 (Ce6), is positioned for PDT. To assess whether the photosensitizer Ce6 exerts any effect on cell proliferation, we cultured K150 cells with varying concentrations of Ce6 for 24 h. Our CCK-8 assay results confirmed that Ce6, even at a concentration of 32 μM, does not inhibit the viability of K150 cells. However, when Ce6 was present in conjunction with the μ-LED, PDT was effectively achieved.
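Before turning to the dose–response results, it helps to translate the optical outputs quoted above into light-dose terms: the fluence delivered in a PDT session is simply the power density multiplied by the exposure time. The sketch below assumes the ~0.6 mW in-tissue output reported for the μ-LED, a 30-min exposure (the duration used in the cell and animal experiments reported below), and an illuminated area of 1 cm²; the spot area is an illustrative assumption, since it is not stated in the text.

```python
# Rough PDT light-dose (fluence) estimate from the figures quoted above.
# The 0.6 mW output and the 30-min exposure come from the text; the 1 cm^2
# illuminated area is an assumption made only for illustration.

optical_power_w = 0.6e-3            # W, u-LED output in tissue-like media
exposure_s = 30 * 60                # s, one 30-min PDT session
illuminated_area_cm2 = 1.0          # cm^2, assumed spot size

energy_j = optical_power_w * exposure_s
fluence_j_per_cm2 = energy_j / illuminated_area_cm2
print(f"energy: {energy_j:.2f} J, fluence: {fluence_j_per_cm2:.2f} J/cm^2")
```

Because the emitter sits directly against the target, essentially all of this dose reaches the photosensitizer rather than being attenuated by overlying tissue, which is the main advantage over external illumination.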
The efficacy of PDT was positively correlated with both the duration of illumination and the concentration of the photosensitizer. Specifically, at a Ce6 concentration of 16 μM and an illumination duration of 30 min, the cell viability of K150 was reduced to 23.66 %. Fig. 4 Significant cytotoxic impact of PDT on ESCC in vitro. (A) Schematic diagram of the PDT. (B) Cell viability of K150 under different concentrations of Ce6, n = 3. (C) The viability of K150 cells assessed using the cell counting kit-8 (CCK-8) assay across various Ce6 concentrations and exposure durations, n = 3. (D) The effect of PDT on the viability of K150 cells at varying treatment distances, n = 3. (E) WSI and microscopy imaging of Calcein-AM/PI staining in different treatment groups. Scale bars, 1000 μm for WSI and 50 μm for microscopy imaging. (F) Statistical analysis of the proportion of dead cells in different treatment groups, n = 3. Data are presented as mean ± SEM. Statistical analysis was conducted using ordinary one-way ANOVA with multiple comparisons, not significant (ns), P ≥ 0.05; ∗∗∗∗ P ≤ 0.0001. Fig. 4 To evaluate the impact of light source distance (LSD) on PDT efficacy, we further measured cell viability following treatment at varying distances. Results indicated a marked decrease in viability (to 9.63 %) when cells were positioned close to the light source. In contrast, when the LSD increased to 8 cm, the viability of K150 cells remained largely unaffected. Additionally, the Calcein-AM/PI staining method provided further insight into PDT-induced cytotoxicity in the K150 cell line. Whole slide imaging (WSI) of cell plates revealed a concentrated red fluorescence signal at the center of the light source, indicative of a higher cell death rate in this region. As the distance from the center increased, the red fluorescence signal diminished, while green fluorescence, marking viable cells, became more prominent. Quantitative analysis of fluorescence images confirmed that PDT induced cell death in over 88.94 % of cells, underscoring the effectiveness of PDT in close proximity to the light source. During PDT, the cytotoxic agent singlet oxygen, produced via a type II photochemical reaction, is considered the primary mediator of PDT's biological effects. To detect singlet oxygen generation, we employed 1,3-diphenylisobenzofuran (DPBF) as a specific probe. A decrease in DPBF's relative absorbance indicates an increased rate of photodegradation, reflecting elevated ROS production. We selected porphyrin as a standard photosensitizer and compared it with Ce6. Under identical PDT conditions, DPBF absorbance decreased significantly with Ce6, while the absorbance change was negligible with porphyrin, indicating that Ce6 continuously generates singlet oxygen over prolonged irradiation, whereas porphyrin produces only trace amounts. These results demonstrate that Ce6 has a higher singlet oxygen generation efficiency compared to porphyrin. Furthermore, UV–visible absorption spectra confirmed that Ce6 exhibits stronger absorption in the red-light spectrum than porphyrin, making it more suitable for red-light-activated PDT. To optimize light penetration depth, we selected a wavelength of 660 nm, as red light penetrates biological tissues more effectively than shorter wavelengths such as ultraviolet and blue–green visible light. This allows it to reach several millimeters or deeper within tissue, enhancing the treatment effect.
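The pork-tissue transmittance values quoted earlier (4.8 %, 2.38 %, 1.3 %, and 0.8 % at 1–4 mm) fall off roughly exponentially, so an effective attenuation coefficient can be extracted from them. The fit below is an illustrative check, not the authors' analysis; the measured value of below 0.05 % beyond 5 mm suggests that losses in thicker tissue are even steeper than this single-exponential model predicts.

```python
# Illustrative single-exponential fit to the 660 nm pork-tissue transmittance
# values quoted above; an order-of-magnitude check, not the authors' analysis.
import numpy as np

thickness_mm = np.array([1.0, 2.0, 3.0, 4.0])
transmittance = np.array([0.048, 0.0238, 0.013, 0.008])

# ln(T) = ln(T0) - mu_eff * d  (Beer-Lambert-like effective attenuation)
slope, intercept = np.polyfit(thickness_mm, np.log(transmittance), 1)
mu_eff = -slope                                    # effective attenuation, per mm
print(f"mu_eff ~ {mu_eff:.2f} /mm, 1/e penetration depth ~ {1 / mu_eff:.1f} mm")
# ~0.6 /mm, i.e. a 1/e depth of roughly 1.7 mm, which is why placing the light
# source directly at the tumor matters so much for the delivered dose.
```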
Furthermore, our implantable device effectively addresses the limitation of light penetration depth; by directly implanting the light source near the tumor, red light can reach areas inaccessible to external sources, thereby enhancing ROS generation. The efficiency of singlet oxygen generation is a critical factor in determining the effectiveness of PDT. To investigate the PDT yield, we employed the DPBF probe, and Fig. S11 reveals a notable reduction in the absorption intensity of DPBF's UV absorption spectrum after just 10 min of light exposure. This decrease provides strong evidence that our device generates a substantial yield of singlet oxygen during the PDT process. Furthermore, we utilized the DCFH-DA probe to quantitatively assess ROS levels following PDT. The data indicated a marked increase in the mean fluorescence intensity (MFI) of ROS in the PDT group compared to the control group, highlighting the significant production of reactive oxygen species as a result of the treatment. To investigate the subcellular structures primarily responsible for ROS generation, we performed MitoSOX staining to assess the expression levels of superoxide in the mitochondria. The experimental results showed that the PDT group exhibited a significant increase in red fluorescence intensity, reflecting an elevated production of mitochondrial superoxide. As shown in Fig. 5 D, the substantially higher green-to-red fluorescence ratio in the PDT group quantitatively supports this trend, indicating a significant increase in mitochondrial oxidative stress following PDT treatment. Further, we employed JC-1 fluorescence to detect mitochondrial membrane potential, shown in Fig. 5 E and F. The green/red fluorescence ratio in the PDT group was 2.28, whereas that of the other groups did not exceed 0.5, underscoring that PDT significantly promotes cancer cell apoptosis. Fig. 5 PDT significantly enhances ROS generation and apoptosis in ESCC cells. (A) Representative images of ROS levels detected by the DCFH-DA probe. Scale bars, 50 μm. (B) Statistical analysis of ROS in different treatment groups, n = 3. (C) Detection of mitochondrial superoxide expression by MitoSOX Red staining (red fluorescence indicates mitochondrial superoxide, green fluorescence marks the mitochondria). Scale bars, 50 μm. (D) Quantitative analysis of mitochondrial superoxide, n = 3. (E) Fluorescence images of JC-1 staining (red fluorescence indicates JC-1 aggregates, green fluorescence indicates JC-1 monomers). Scale bars, 50 μm. (F) The corresponding statistical analysis of JC-1, n = 3. (G) Apoptosis detection by flow cytometry using the Annexin V-FITC/PI kit in different groups. (H) The corresponding statistical analysis of apoptosis, n = 3. Data are presented as mean ± SEM. Statistical analysis was conducted using ordinary one-way ANOVA with multiple comparisons, not significant (ns), P ≥ 0.05; ∗∗∗∗ P ≤ 0.0001. Fig. 5 Additionally, flow cytometric analysis using an Annexin V/PI staining kit revealed that the PDT group exhibited the highest proportion of cells in both early and late stages of apoptosis. These results suggest that PDT induces significant oxidative stress and promotes apoptosis, effectively leading to the death of tumor cells. Fig. 6 shows the in vivo efficacy of the PDT treatment module. To establish a breast cancer mouse model, 100,000 4T1 cells were subcutaneously implanted into Balb/c mice. When the tumor volume reached about 50 mm³, the treatment module was implanted deep into the tumor tissue.
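Several steps in the in vivo experiments hinge on tumor volume (the ~50 mm³ treatment-start threshold and the growth curves that follow). The manuscript does not state how volume was computed from caliper measurements, so the snippet below simply illustrates the ellipsoid approximation V = length × width² / 2 that is commonly used for subcutaneous tumors; treat the formula and the example dimensions as assumptions, not as the authors' protocol.

```python
# Hypothetical tumor-volume bookkeeping for the 4T1 experiments described above.
# The caliper formula V = L * W^2 / 2 is a common convention, assumed here; the
# manuscript does not specify which formula the authors used.

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation widely used for subcutaneous tumors."""
    return length_mm * width_mm ** 2 / 2.0

# Example: a roughly 5.8 x 4.2 mm nodule is near the ~50 mm^3 threshold at
# which treatment was started.
print(f"{tumor_volume_mm3(5.8, 4.2):.0f} mm^3")
```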
Five hours after intra-tumoral injection of 0.5 mg/kg Ce6, a wireless ultrasound probe was used to drive the μ-LED emission and generate photodynamic effects for 30 min per day over a period of 8 days. During this process, there was no significant difference in body weight changes between the control and experimental groups, indicating the biocompatibility and safety of the treatment. Compared to the Ctrl group, the PDT group exhibited a significant trend of tumor suppression, demonstrating a strong inhibitory effect on tumor growth, while tumor volumes in the Ce6 and LED groups continued to grow rapidly. Fig. 6 In vivo anti-tumor effect of the PDT treatment module. (A) Body weight of mice receiving various treatments, n = 5. (B) Average tumor growth curves for all groups, n = 5. (C) Tumor images collected from each group. (D) Average tumor mass collected from each group, n = 5. (E) Histological analysis of tumor tissue sections via H&E staining and Ki-67 immunohistochemical staining. Scale bar, 100 μm. (F) Quantitative analysis of the Ki-67 immunohistochemical index, n = 5. All data are presented as mean ± SD, ∗ P < 0.05, ∗∗∗ P < 0.001, ∗∗∗∗ P < 0.0001. Statistical analysis was conducted using ordinary one-way ANOVA with multiple comparisons. Fig. 6 Eighteen days after the 4T1 cells were implanted, the subcutaneous tumors were excised to evaluate the efficacy of the treatment. The average tumor mass in the PDT group was likewise reduced, by 79.1 %, demonstrating the effectiveness of the therapy. In addition, we performed hematoxylin and eosin (H&E) staining and Ki-67 immunohistochemical staining to assess the histological changes in the tumors. The H&E staining showed a significant reduction in tumor cell density and damage to the tumor stromal structure in the PDT group, and the PDT group also showed the lowest Ki-67-positive area rate. These findings collectively suggest that the PDT achieved through the treatment module significantly suppresses tumor growth, underscoring its potential as a viable therapeutic strategy for cancer treatment. The morphology of the major organs indicates that our implantable treatment module has low side effects and high biocompatibility in vivo. Fig. 7 illustrates the biological safety of the integrated esophageal stent system. We co-cultured the device with mouse embryonic fibroblasts (NIH3T3) for 24, 48, and 72 h. Fluorescence microscopy imaging of Calcein-AM/PI staining showed that the stent could coexist with the cells for an extended period with minimal toxicity, as the proportion of apoptotic cells did not exceed 5 %. Fig. 7 Evaluation of the biosafety of the integrated esophageal stent system. (A) Fluorescence microscopy images of Calcein-AM/PI staining after co-culturing the integrated stent system with NIH3T3 cells for 24, 48, and 72 h. (B) Statistical analysis of the proportion of cell death at different time points, n = 3. Data are presented as mean ± SD. Unpaired t-test, not significant (ns), P ≥ 0.05. Fig. 7 In the experiments, the ultrasound intensity used was 407.42 mW/cm², which is below the FDA-approved threshold (peak intensity of 720 mW/cm²), to prevent damage to esophageal tissue from high-intensity ultrasound. Furthermore, histological analysis was conducted on the surrounding skin tissue and esophageal region to assess the safety of ultrasound exposure.
The results showed that, compared to the control group, there were no significant morphological changes in the skin tissue and corresponding esophageal region following ultrasound irradiation , indicating that our ultrasound parameters are biologically safe. In this study, we propose a novel esophageal stent integrated with a wireless, battery-free, and movable PDT unit, designed for flexible, precise, and real-time treatment of esophageal tumors. This system enables ultrasound-based wireless control of the PDT light source position, providing a novel approach for treating esophageal cancer patients. However, several challenges remain in translating this technology into clinical practice. The primary obstacle is the issue of light penetration and diffusion in heterogeneous tumor environments. One of the main challenges in applying PDT in clinical settings is the limited tissue penetration of light, particularly in deeper tumor tissues. Furthermore, in heterogeneous tumor environments, due to varying tissue densities and compositions, light diffusion may be uneven, potentially leading to suboptimal activation of photosensitizers and incomplete tumor treatment. To mitigate the limitations of light penetration, we utilized longer-wavelength red light, which is more tissue-penetrative and coincides with the absorption peak of Ce6. Additionally, we employed an implantable device to position the light source directly around the tumor. To ensure consistent treatment, we are considering the integration of photodetectors and fluorescence-based ROS sensors to monitor light distribution and ROS production. Concerns about potential off-target ROS effects in healthy tissues are valid. To address this, we leverage the selective accumulation of Ce6 in tumor tissues. Upon systemic or local administration, Ce6 preferentially concentrates in cancer cells due to metabolic changes and increased blood flow in the tumor microenvironment . Additionally, our system maximizes light delivery by positioning the μ-LED light source directly adjacent to the tumor, ensuring precise targeting of the treatment area. Furthermore, we are exploring the use of nanocarriers for the targeted delivery of photosensitizers, enabling controlled release and minimizing unintended damage to surrounding healthy tissues. Regarding regulatory approval and safety, the clinical application of this novel therapeutic system will require extensive regulatory approval processes, including demonstrating the biocompatibility and safety of all system components. The μ-LEDs, soft actuators, piezoelectric transducers, and all electronic and structural components must undergo rigorous biocompatibility testing to ensure they do not induce inflammation, toxicity, or immune responses in vivo. Furthermore, the long-term stability and potential degradation of biocompatible packaging must be evaluated to prevent any adverse effects such as material wear or breakdown. Finally, to enable widespread clinical adoption, the scalability of manufacturing this system must be addressed. The production of such a complex integrated system may face challenges related to cost, consistency, and reliability at large scales. It is crucial to establish standardized protocols for the assembly of the stent, μ-LEDs, actuators, and electronic components in a cost-effective and reproducible manner. Additionally, the integration of this system into existing clinical workflows must be considered to ensure ease of deployment and control. 
In conclusion, this system offers a novel strategy for esophageal cancer treatment. Overcoming the identified challenges and further optimizing its components will be essential for enabling its clinical translation, potentially providing a new, targeted therapeutic option for managing complex tumor environments. We have designed a wireless, battery-free, movable PDT unit for esophageal stents, enabling flexible and precise treatment of esophageal cancer. This system introduces a novel strategy that utilizes an electrochemical pneumatic soft actuator to move the PDT light source in real time to the vicinity of tumors, achieving precise and efficient treatment of esophageal tumors. The treatment module, characterized by its small size and wireless, battery-free operation, provides the light source for PDT and moves frictionlessly within the track of the actuator. The flexible actuator adapts to various deformations, allowing it to spiral and cover a large area inside the esophageal stent, thereby extending the reach of the PDT light source. The actuator and treatment module exhibit different frequency selectivity to external ultrasound, enabling independent and interference-free operation. In vitro cell experiments have demonstrated the effective killing of tumor cells by PDT, with better efficacy observed at closer distances to the tumor, thereby establishing the practicality of this scaffold system. This innovative treatment approach holds promise for providing more effective and personalized treatment options for esophageal cancer patients, offering new ideas and methods for clinical intervention. PZT was purchased from SCH Technology Co., Ltd. The 660 nm μ-LEDs were obtained from Shenzhen Ruikoo Optoelectronics Technology Co., Ltd. Customized PZT 2 (1 × 1 × 0.6 mm) was pre-cleaned with ethanol and deionized water. PZT 2 was soldered to a customized flexible micro-circuit board along with a 660 nm μ-LED. An optical image of PZT 2 is shown in Fig. S5 (Supporting Information), and scanning electron microscope (SEM) images are shown in Fig. S7 (Supporting Information). The circuit board layout is depicted in Fig. S8 (Supporting Information). A flexible copper-clad PI sheet (Cu/PI/Cu, 12/12.5/12 μm) served as the substrate. Patterned transmission and finger electrodes were formed on the substrate through exposure, development, and etching steps. Holes (300 μm in diameter) were drilled in the substrate, with copper-plated inner walls to ensure electrical connection between the top and bottom electrodes. A 75 nm gold layer was chemically deposited on the finger electrodes to prevent oxidation in the presence of NaOH solution. Key electronic components and power supply units, including capacitors, μ-LEDs, diodes, and PZT 1, were assembled on the substrate through soldering. A silicone tube 6 cm in length, with a 2.5 mm inner diameter and a 3.5 mm outer diameter, served as the actuator's solution reservoir. A silicone tube 22 cm in length, with a 1.5 mm inner diameter and a 2.5 mm outer diameter, served as the delivery track. Both tubes were compactly connected using PDMS (SYLGARD 184, Dow Corning, USA; base and curing agent mixed at a 10:1 mass ratio), which was applied to one end of the track, joined to the solution reservoir, and cured at 70 °C for 3 h. The electrodes were placed into the solution reservoir, and the other end was sealed with PDMS, which simultaneously encapsulated the external circuit and was cured at 70 °C for 3 h.
A 50 mmol L−1 NaOH solution (Macklin Biochemical Co., Ltd., Shanghai, China) was injected into the solution reservoir from the end of the track using a 30 cm long syringe, followed by injection of Vaseline at the bottom of the track to act as a piston and prevent leakage. The treatment module was inserted into the track and placed near the Vaseline piston. An optical image of PZT 1 is shown in Fig. S6 (Supporting Information), SEM images in Fig. S7 (Supporting Information), and the circuit board layout in Fig. S9 (Supporting Information). The structure and morphology of the materials were studied using SEM (GeminiSEM 300, Germany). The piezoelectric output of the wireless power unit was measured using an oscilloscope and a Keithley instrument. Finite element simulations of ultrasound were conducted using COMSOL. Voltage–current curves were measured using an electrochemical workstation (CHI600E). The volume of gas produced was determined from the height of the liquid column. Human esophageal squamous cell carcinoma KYSE-150 cells were cultured in 1640 medium supplemented with 10 % fetal bovine serum and 1 % penicillin/streptomycin. Cells were then incubated at 37 °C in a humidified atmosphere containing 5 % CO2. Using a 48-well plate, 10,000 cells per well were seeded and treated with 300 μL of culture medium containing a gradient of Ce6 concentrations (1, 2, 4, 6, 8, 16, and 32 μM). After 24 h of incubation, cell viability was assessed using the CCK-8 assay (ABclonal Technology, China). In a 48-well plate, 10,000 cells per well were seeded. After incubating for 5 h in serum-free medium with Ce6 at concentrations of 0, 1, 2, 4, 6, 8, 16, or 32 μM under 5 % CO2 at 37 °C, the medium was replaced with serum-containing complete medium. Cells treated with Ce6 were then exposed to the μ-LED (660 nm, 25 mW) for varying durations (0, 5, 10, 20, 30, and 40 min) and at different distances from the light source to the bottom of the cell plates (0, 1, 3, 5, and 8 cm). Cell viability was measured using the CCK-8 assay after 24 h of incubation. K150 cells (100,000 per well) were seeded in 3.5 cm dishes and subjected to the control, LED, Ce6, or PDT treatments. After 24 h, the cells were stained with 500 μL of Calcein-AM/PI working solution (Beyotime Biotechnology, China) and incubated at 37 °C in the dark for 30 min. Following staining, WSI was performed using a cell imaging multi-mode microplate reader (Agilent BioTek Cytation 5, USA). Additionally, K150 cells were also seeded in 48-well cell culture plates, treated and stained under the same conditions, and then imaged using a confocal fluorescence microscope (Nikon, Japan). Cells (100,000 per well) were seeded in 3.5 cm dishes. Cells from the control, LED, Ce6, and PDT groups were incubated with DCFH-DA staining solution (Beyotime Biotechnology, China) at 37 °C for 20 min. ROS levels were measured using a confocal microscope (Nikon, Japan) with excitation and emission wavelengths set at 488 and 525 nm, respectively. Cells (100,000 per well) were seeded in a 6-well plate and subjected to PDT for 1 h. The mitochondria were stained with 5 μM MitoSOX Red (Beyotime, China) at 37 °C for 20 min to label mitochondrial superoxide. Following staining, the cells were washed twice with PBS. Subsequently, the mitochondria were further labeled with 0.1 μM Mito-Tracker Green (Beyotime, China) at 37 °C for 20 min. After another round of washing, the samples were observed using a laser scanning confocal microscope.
Cells (150,000 per well) were seeded in 6-well plates and cultured for 48 h under control, LED, Ce6, or PDT conditions. Apoptosis levels were measured using the Annexin V-FITC/PI Apoptosis Kit (4abio tech, China). Cells (100,000 per well) were seeded in 3.5 cm dishes. The cells were divided into four groups: control, LED, Ce6, and PDT. After 24 h of cultivation, the mitochondrial membrane potential was measured with a JC-1 staining kit (Beyotime Biotechnology, China). Animal experiments were conducted in accordance with the Principles of Laboratory Animal Care (People's Republic of China), and all animal experiments were conducted with ethical approval. In this experiment, the 4T1 mouse breast cancer cell line was used for in vivo tumor-bearing treatment. First, 100,000 4T1 cells were subcutaneously injected into mice to establish the tumor model. Treatment was initiated when the tumor volume reached approximately 50 mm³. During the treatment period, the body weight and tumor volume of the mice were measured and recorded every two days. The treatment was administered daily for 8 consecutive days. After treatment, the mice were monitored, and tumor volume changes were recorded for an additional 10 days to complete the observation period. For the biocompatibility evaluation of the treatment formulations, NIH3T3 cells were cultured in DMEM supplemented with 10 % FBS and 1 % P/S for 24 h. Using a 96-well plate, 100,000 cells per well were seeded. Integrated scaffold slices were separately co-cultured with normal NIH3T3 cells for 24, 48, and 72 h, followed by Calcein-AM/PI staining to assess cell viability and death. Eight-week-old female SD rats underwent neck hair removal, followed by anesthesia. Gel was applied to the corresponding sites of the cervical and thoracic esophagus, and ultrasound treatment (at the specified power) was administered daily for 20 min per session over a period of seven days. Afterward, the esophagus and corresponding skin were harvested for histological analysis using H&E staining to evaluate structural changes. Ordinary one-way ANOVA with multiple comparisons or an unpaired t-test was used to perform statistical analysis in GraphPad Prism 9 (GraphPad Software, Inc., California, USA). Statistical significance was defined as ∗ P < 0.05, ∗∗ P < 0.01, ∗∗∗ P < 0.001, ∗∗∗∗ P < 0.0001. Data are presented as mean ± SD or mean ± SEM. Qian Han: Writing – original draft, Investigation. Pingjin Zou: Writing – original draft, Investigation. Xianhao Wei: Writing – original draft, Investigation. Junyang Chen: Investigation. Xiaojiao Li: Investigation. Li Quan: Investigation. Ranlin Wang: Investigation. Lili Xing: Supervision, Funding acquisition, Conceptualization. Xinyu Xue: Writing – review & editing, Supervision, Funding acquisition, Conceptualization. Yi Zhou: Writing – review & editing, Funding acquisition, Conceptualization. Meihua Chen: Writing – review & editing, Supervision, Funding acquisition, Conceptualization. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | Review | biomedical | en | 0.999997
PMC11697617 | Loneliness among older adults has garnered increased attention as shifting demographic trends and societal dynamics shape experiences of aging on a global scale. While research has been established to assess the relationship between loneliness and the aging process in the United States and various European countries [ 1 – 3 ], there is a lack of research that has sought to understand this relationship among older adults in Latin America. Loneliness has been traditionally understood as a subjective state that contrasts with the condition of physical isolation, implying an imbalance in the desired and achieved level of socio‑affective interaction . In an aging population, the loss of a spouse and diminishing social networks due to death of friends or reduced community integration are common triggers of loneliness . However, it is also critical to consider other factors that play a role in the subjective experience. Research from a Latin American study highlights urbanization of rural areas, retirement from careers, declining birth rates, departure of sons and daughters from the home, and the pervasive influence of technology that potentially isolates older adults in their homes as factors that can precipitate a state of loneliness. The study goes on to say that, while these may be associated with independence in some cultures, in a Latin American environment these may be considered risk factors that potentially alienate older members from a community . Frameworks from prior literature such as Disengagement Theory, which posits that adults intrinsically and responsively reduce the amount of social interactions they have as they age, help to give relevance to behavioral patterns that are seen with old age. The withdrawal from societal roles and interactions has widespread effects on levels of social connectivity that lead to exacerbated feelings of isolation . Personality traits such as poor social skills, shyness, and introversion can further predispose individuals to social isolation, complicating efforts to maintain robust social networks crucial for well‑being in later life. Conversely, higher levels of education and economic resources often serve as protective factors against loneliness, affording individuals greater opportunities for social engagement and support . The distinction between social isolation, defined as minimal contact with others, and loneliness, characterized by the subjective experience of lacking a meaningful social network, is crucial . Loneliness, whether emotional (lack of a close attachment) or social (absence of a broader social network), is increasingly recognized for its association with adverse health outcomes, including high risks of chronic diseases, depression, and decreased quality of life among older adults in Latin America . These implications underscore the urgent need for comprehensive strategies that not only connect older adults to community resources but also address their holistic well‑being through nutrition, physical activity, and spiritual support . The principles of loneliness from prior literature, outlined above, were used in the present study that sought to understand social and emotional loneliness (SEL) in San Vito de Coto Brus, Costa Rica. Trends in life expectancy for Costa Rica exceed other upper‑middle‑income countries (UMICs), including those in Latin America . 
A study that contrasted health risk factors between Costa Rica and the United States, a country with a life expectancy of 77 years, found that lower rates of smoking, obesity, hypertension, and single‑person households in Costa Rica could contribute to this characteristic, despite Costa Rica having a much lower GDP per capita than the United States. Yet, even with trends of higher life expectancy, it is important to recognize that healthy aging is multifactorial, and understanding how loneliness is implicated in this process is critical, as the burden of elderly populations in countries such as Costa Rica continues to increase. The prevalence of care dependency, or the degree of difficulty that people have performing regular activities of daily living, is about 10% for individuals over 65 years in Costa Rica, with the level of dependency only increasing with age. With about 12% of the population in Costa Rica being over the age of 65 years, this growing dependency perpetuates issues such as familial abandonment and involuntary placement in long‑term care facilities. Additionally, the presence of cultural norms and gender roles, particularly a patriarchal ideology prevalent in Costa Rican society, shapes how loneliness is perceived and experienced by older adults. Men, conditioned to be competitive and less emotionally expressive, may struggle more with loneliness due to societal expectations that hinder their ability to seek companionship when needed. While women tend to outlive men and experience higher rates of widowhood, which could manifest as greater feelings of loneliness and lower quality of life, as evidenced by a study of loneliness in China, India, and Latin American countries such as Cuba, Venezuela, and Mexico, there is counterevidence to show that women, traditionally educated in empathy and caretaking roles, may find it easier to seek and provide social support, potentially mitigating feelings of loneliness through communal ties and spiritual practices—especially in the strong Catholic communities that are characteristic of rural Costa Rica, including San Vito de Coto Brus. Understanding the intricate interplay of demographic, cultural, and psychological factors contributing to loneliness among older Latin American populations is essential for developing effective interventions and policies. Assessing the relationship in the context of a rural community of Costa Rica can help contribute to the framework of loneliness among aging populations in Latin American countries. The study was approved by the institutional review board (IRB) at the University of Maryland, Baltimore and by Costa Rica's Comité Ético Científico Fundación Instituto Costarricense de Investigación y Enseñanza en Nutrición y Salud. Informed consent was given and signed prior to conducting interviews. Interviews were only conducted after participants agreed to participate once they were informed of the details and goals of the present study. Data from these interviews were not shared outside of the research team for scientific purposes. A cross‑sectional study was conducted that sampled 63 adults aged 65 years or above in the canton of Coto Brus in Costa Rica. Convenience sampling was used to recruit participants, with local contacts in the town of San Vito used to connect with community groups and nursing homes over a data‑collection period that lasted 4 weeks. Investigators conducted face‑to‑face interviews in Spanish with the aid of local translators.
Two healthcare professionals from San Vito and San José served as translators throughout the duration of data collection. Translators were present at the time of the interviews and translated the script from English to Spanish in real time. The primary instruments used for the present study were a content‑validated version of the 11‑item De Jong Gierveld Loneliness Scale and sociodemographic questions that included age, sex assigned at birth, address, civil status, and level of education. The De Jong Gierveld Loneliness Scale was used for this study because the investigators were interested in differentiating feelings of missing an intimate relationship (emotional loneliness) from feelings of missing a wider social network (social loneliness). Prior literature has demonstrated that the primary instrument has been validated in two different studies: one that examined the reliability and validity of the 6‑item abbreviated instrument in France, Germany, the Netherlands, Russia, Bulgaria, Georgia, and Japan, and another that examined the reliability and validity of the 11‑item instrument in Peru and other Spanish‑speaking countries. It was critical to find evidence of the instrument employed in a variety of populations—especially among Latin American populations—to provide evidence of reliability and validity in populations with different languages, cultures, and values. Information on all study participants was collected anonymously, and each participant was given a de‑identification code to ensure the data remained anonymous. The standard scoring system available for the De Jong Gierveld Loneliness Scale was used to compute scores for participants in the study. For items 2, 3, 5, 6, 9, and 10, responses of "More or less" or "Yes" received a score of 1 for that item. Items 1, 4, 7, 8, and 11 were reverse‑scored, so a response of "No" received a score of 1 for that item. After summing the totals across the 11 items for each participant, overall social‑emotional loneliness (SEL) was categorized as not lonely (0‑2), moderately lonely (3‑8), severely lonely (9‑10), or extremely lonely (11). A total of 63 individuals aged 65 years or older were sampled in the canton of Coto Brus, Costa Rica. In addition to the De Jong Gierveld scale for loneliness, general demographic characteristics were collected for the sample ( Table 1 ). In an analysis of the breakdown of the averaged composite score of total loneliness (TL) (3.37), emotional loneliness (EL) comprised a much larger proportion (2.69; 79.82%) compared with social loneliness (SL; 0.68; 20.18%), indicating that the loss of one close attachment during the aging process can have a profound effect on the individual evaluation of loneliness ( Table 2 ). Most of the study participants ( n = 35) were classified as moderately lonely, with a score on the De Jong Gierveld Scale falling between 3 and 8. Among the remaining participants, most scored in the not lonely category ( n = 25), whereas a few scored in the severely lonely category ( n = 3). No participants scored in the extremely lonely category (score = 11 on the De Jong Gierveld Scale). As part of the preliminary analysis, the relationships between TL, EL, and SL were explored with each individual demographic factor.
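The scoring rule described above maps directly onto a small function: six items score 1 for a "Yes" or "More or less" response, the five remaining items are reverse-scored (1 for "No"), and the 0–11 total is binned into the four loneliness categories. A minimal sketch is given below; the item wording and the split of items into the emotional and social subscales follow the standard instrument and are not reproduced here.

```python
# Sketch of the De Jong Gierveld scoring rule described above (11-item scale).
# Items 2, 3, 5, 6, 9, 10 score 1 for "More or less"/"Yes"; items 1, 4, 7, 8, 11
# are reverse-scored (1 for "No"); totals are binned into four categories.

POSITIVE_KEYED = {2, 3, 5, 6, 9, 10}      # "More or less" or "Yes" -> 1
REVERSE_KEYED = {1, 4, 7, 8, 11}          # "No" -> 1

def dejong_score(responses):
    """responses maps item number (1-11) to 'Yes', 'More or less', or 'No'."""
    total = 0
    for item, answer in responses.items():
        if item in POSITIVE_KEYED and answer in ("Yes", "More or less"):
            total += 1
        elif item in REVERSE_KEYED and answer == "No":
            total += 1
    if total <= 2:
        category = "not lonely"
    elif total <= 8:
        category = "moderately lonely"
    elif total <= 10:
        category = "severely lonely"
    else:
        category = "extremely lonely"
    return total, category

# A participant who answers "No" to every item picks up the five reverse-keyed
# points and lands in the moderately lonely band: prints (5, 'moderately lonely').
print(dejong_score({i: "No" for i in range(1, 12)}))
```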
Given the age distribution of participants in the study, two distinct age categories (65–74 years and 75 years and above) were used for analysis as opposed to regression. While there appears to be a slight positive association of increasing age with the loneliness score on the De Jong Gierveld survey , especially among TL and EL, a more consistent distribution of individuals over the age of 65 years would be needed to draw any statistical conclusions. When sex assigned at birth was compared with loneliness, there also appears to be a positive association between male sex and loneliness score . While this association is supported in the literature, researchers ran into the issue of only sampling 21 males compared with 42 females. Once again, a more equal distribution of males to females in the sample would provide more information from which to draw any statistical conclusions. The next demographic factor, marital status, was divided into two categories: married and not married. The ‘not married’ category was composed of single ( soltero ), divorced ( divorciado ), widowed ( viudo ), separated ( separado ), and free union ( unión libre ) individuals. Participants who were married in the sample were only found to have slightly lower loneliness scores for TL, SL, and EL . And lastly, level of education provided the strongest association with lower loneliness levels. Prior literature has substantiated education as a protective factor in the development of loneliness, and the data in this preliminary analysis support this claim with evidence that level of education has stronger associations with lower loneliness levels compared with age, sex assigned at birth, and marital status . The study investigates loneliness among an elderly population in the canton of Coto Brus, Costa Rica, focusing on the distinction between loneliness associated with missing a life partner (EL) versus loneliness linked to having a smaller social network (SL). The findings suggest a potentially stronger association with EL, implying a potential need for targeted support strategies. However, the study acknowledges several methodological limitations, primarily stemming from convenience sampling and the possible bias toward individuals with larger social networks. One of the notable strengths of the study lies in the use of a validated instrument with demonstrated reliability and validity across multiple countries [ 3 , 18 ‑ 20 ]. The study’s methodology benefits from the simplicity and clarity of the survey instrument, making it easy to administer and score. This approach enhances the feasibility of conducting similar studies in other communities or expanding the sample size to draw more robust statistical conclusions. The reliance on convenience sampling poses significant limitations. By sampling from community groups and nursing homes, the study may inadvertently exclude elderly individuals who experience severe loneliness and therefore do not actively participate in community activities. This could skew results toward a population with stronger social support networks, potentially underestimating the prevalence and impact of loneliness among the broader elderly population in Coto Brus. Additionally, the community‑centric nature of Coto Brus, where as in many other rural regions in Costa Rica, family plays a pivotal role , raises questions about whether the survey adequately captures the nuances between support from family versus friends. 
This distinction could influence how loneliness is perceived and experienced by individuals in this cultural context, suggesting a need for tailored survey questions that reflect these dynamics more accurately. The findings underscore important implications for local community leaders in Coto Brus. While existing community outlets may provide everyday fellowship and support, additional targeted interventions are necessary to assist elderly community members coping with the loss of a close emotional attachment, such as a life partner or a significant friend. This could involve bolstering support services that specifically address bereavement and emotional resilience among the elderly. Community leaders could also work to bolster support for education, which the data showed to be one of the strongest protective factors against the development of loneliness. While this may not help the subset of older adults already experiencing loneliness, potentially due to the effects of lower educational attainment over the course of a lifetime, emphasizing the protective nature of education against loneliness, and against the comorbid mental health and chronic health conditions associated with it, should yield positive returns for future generations. This study points toward several avenues for future research. Firstly, expanding the sample size and moving beyond convenience sampling methods would enhance the generalizability of the findings. This would allow for more robust statistical analyses exploring potential associations between loneliness and demographic factors such as age, sex assigned at birth, and marital status. Furthermore, future studies should delve into the relationship between loneliness and its potential impact on comorbid mental and physical health conditions among the elderly population in Coto Brus. Understanding these interrelationships could inform comprehensive healthcare strategies that address both the emotional and physical well‑being of older adults. One way to analyze the relationship of loneliness with the comorbid mental and physical health conditions associated with older age would be to use the current EBAIS model of healthcare in Costa Rica. The establishment of the EBAIS model on a national scale in 1995 helped to revolutionize the delivery of healthcare throughout the country. The tenets of the EBAIS model, which include primary care, accountability, monitoring, and community involvement, have transformed what healthcare access can look like. Monitoring through the EBAIS model has also revealed an increase in the burden of noncommunicable diseases since 1990 as the proportion of individuals above 65 years old continues to grow. Given the association of loneliness in older adults with a variety of noncommunicable diseases, assessment of loneliness in rural regions of the country through local EBAIS clinics could be critical as we continue to build our understanding of the relationship between loneliness and aging in Costa Rica. Future endeavors that work to connect loneliness to its dynamic web of causes and comorbidities can help local community leaders better tailor support services to meet the diverse needs of their elderly population, while deepening our understanding of loneliness in aging populations, not just in Costa Rica but in other Latin American communities as well. | Study | biomedical | en | 0.999994
PMC11697618 | Harmful behaviors encompass a range of maladaptive behaviors that can be categorized as either self-oriented (e.g., non-suicidal self-injury or NSSI) or other-oriented (e.g., physical and verbal aggressive behaviors). Different harmful behaviors are often studied independently, even though research suggests that they frequently co-occur and share common underlying mechanisms . When both forms of harmful behaviors coincide, it is referred to as dual-harm . Temperament, defined as individual differences in emotional reactivity and regulation , may influence engagement in harmful behaviors . The present study focuses on the most visible or noticeable types of self- and other-oriented harmful behaviors, being non-suicidal self-injury (NSSI) and aggressive behavior, respectively, or a combination, termed dual-harm, and seeks to identify how temperamental traits differentiate harmful behaviors groups, i.e., no-harm, NSSI-only, aggression-only, and dual-harm (NSSI and aggressive behavior), from each other in a sample of emerging adults. Pairwise comparison of these four groups will provide insight into the unique and shared mechanisms underlying different types of harmful behaviors in emerging adults, which creates opportunities to improve models of psychopathology and inform the development of transdiagnostic strategies to reduce harmful behaviors rather than studying each type of behavior in isolation. NSSI refers to any socially or culturally unacceptable behavior inflicting direct and deliberate damage to an individual’s body tissue without suicidal intent , such as self-cutting and self-burning. Lifetime NSSI is estimated to occur in 18% of adolescents, 13% of emerging adults, and 5% of adults . The age of onset of NSSI is most often situated in mid-adolescence (i.e., 14–16 years old), with a second peak during emerging adulthood . Many individuals who start engaging in NSSI during adolescence also continue to self-injure during emerging adulthood . Across studies, women report slightly higher rates of NSSI than men, but the difference is usually smaller in community samples compared to clinical samples . Aggressive behaviors can be physical (e.g., hitting someone) or verbal (e.g., threatening or yelling at someone) and refer to other-oriented types of harm. Aggressive behavior is characterized by actions intended to cause physical, psychological, or social harm to others . Aggressive behaviors peak between the ages 20–30 years old . Research on sex differences in aggressive behaviors shows that men express more physically aggressive behavior than women, whereas no significant sex differences are found for verbally aggressive behavior . Finally, dual-harm refers to the co-occurrence of self- and other-oriented harmful behaviors. A systematic review study by Shafti et al. provided evidence that dual-harm may not be a distinct clinical entity. Instead, it may emerge from the interaction of intrapersonal and interpersonal risk factors that can also be linked to both self- and other-oriented harmful behaviors. As such, individuals engaging in either self- or other-oriented harmful behaviors may be considered at risk for exhibiting the other . It is difficult to draw clear conclusions on the nature and extent of dual-harmful behaviors, as studies vary in their operationalization of dual-harm. There is a considerable body of research defining dual-harm as the co-occurrence of homicide and suicide , but other harmful behaviors can also be the focus of dual-harm studies. Spaan et al. 
, for example, measured aggressive behaviors and violent behaviors as other-oriented harm, and NSSI and suicidality as self-oriented harm. Harford and colleagues measured other-oriented and self-oriented harm with five items and four items (e.g., “felt like wanting to die”), respectively. O’Donnell and colleagues found that the majority of the studies on combined self-oriented and other-oriented harmful behaviors report a prevalence rate of dual-harm equaling or exceeding 20%. Dual-harm seems more frequent in men than in women, but these findings should be interpreted with caution as only a handful of studies has looked at sex differences in dual-harm . Importantly, individuals reporting dual-harm have more severe psychopathology and are more likely to die before the age of 35 than individuals who engage in either self- or other-oriented harmful behavior . While prior work has focused on intrapersonal and interpersonal risk factors of self-, other-oriented, and dual-harmful behaviors , still little is known about the role of reactive and regulative temperamental traits in relation to the co-occurrence of harmful behaviors. Previous studies have mainly investigated the influence of temperamental dimensions on other- and self-oriented harmful behaviors separately, without taking dual-harm into account. Temperamental dimensions have been identified as psychological risk factors underlying both types of behaviors . Therefore, we aim to investigate how reactive and regulative temperament dimensions differentiate the full spectrum of harmful behaviors, including individuals not engaging in harmful behavior (i.e., the no-harm group), only engaging in self-oriented harmful behavior (i.e., the NSSI-only group), only engaging in other-oriented harmful behavior (i.e., the aggression-only group), and reporting both types of behaviors (i.e., the dual-harm group). Temperament refers to individual differences in (1) emotional reactivity and (2) self-regulation, which are relatively stable across situations and over time . Reactive temperament has been described in the revised-Reinforcement Sensitivity Theory (r-RST) of Gray and McNaughton . The r-RST comprises three systems: a Behavioral activation system (BAS), a fight-flight-freeze system (FFFS), and a Behavioral inhibition system (BIS). BAS reflects a general approach tendency connected to reward sensitivity, positive affect, and extraversion . BAS consists of four empirically validated dimensions: BAS-Reward interest, BAS-Goal-drive persistence, BAS-Reward reactivity, and BAS-Impulsivity. BAS-Reward interest and BAS-Goal-drive persistence both reflect reward desire and are associated with respectively exploration and drive, whereas BAS-Reward reactivity and BAS-Impulsivity are activated in reaction to rewarding stimuli and are associated with respectively responsiveness and non-planning. In addition, FFFS and BIS reflect a general avoidance tendency connected to punishment sensitivity, negative affect, and neuroticism . In this study, FFFS refers to the flight-freeze system, excluding the fight-component . The flight-freeze system is conceptualized as an avoidance system promoting fleeing or freezing behavior in reaction to a stimulus, depending on the perceived danger of the stimulus. The flight-freeze system is associated with fear and panic . 
BIS functions as a conflict detector and regulator of BAS reactivity and flight-freeze reactivity: A conflict within or between them is followed by an increase in arousal, motivating individuals to resolve the detected conflict. For example, when a person encounters a situation where fleeing may coincide with a desire to stay (flight-freeze system activation vs BAS), BIS detects this conflict and increases arousal to motivate resolution, such as evaluating the level of threat and opting for the safer option. BIS is linked to increased anxiety . In terms of sex differences, women tend to report more BAS-Goal-drive persistence, BAS-Reward reactivity, flight-freeze reactivity, and BIS compared to men, and men tend to report more BAS-Reward interest and BAS-Impulsivity compared to women . Regulative temperament, described in terms of Effortful control (EC), is defined as the top-down capacity to moderate the reactivity of BAS, the flight-freeze system, and BIS to elicit adaptive behavioral responses . EC generally consists of three components: (1) attentional control is the ability to voluntarily focus or shift attention when needed, (2) activation control involves the ability to act even when lacking motivation, and (3) inhibitory control is the ability to voluntarily inhibit behavior . EC is positively associated with conscientiousness . Studies report no significant sex differences in EC in adults . Several cross-sectional studies have investigated the association between temperament and NSSI. These studies revealed that higher BIS reactivity is related to NSSI engagement . The findings regarding the association between BAS and NSSI are contradictory , with some studies finding positive and other studies finding negative or no significant associations with NSSI. Up till now, studies on the relationship between NSSI and flight-freeze-reactivity do not exist. Additionally, low levels of EC have consistently been linked to more NSSI engagement , even more so in interaction with high BIS . When focusing on the relationship between temperament and aggressive behavior, cross-sectional findings systematically show that BIS is negatively associated with aggressive behavior, whereas BAS, especially BAS-Impulsivity, is positively associated with aggressive behavior . Until now, research on the relationship between aggressive behavior and flight-freeze-reactivity is lacking. Finally, EC is negatively related to aggressive behavior . To date, there are no studies focusing on the associations between temperament and engaging in dual-harm (NSSI and aggressive behavior combined). However, it is known that the borderline personality disorder is highly prevalent (70.7%) among individuals engaging in dual-harm compared to the prevalence (11.4%) in the general population . The prototypic temperamental profile of individuals with borderline personality disorder is characterized by high BIS, high BAS, and low EC . It is important to note that the aforementioned findings on NSSI, aggressive behaviors and dual-harm are mostly stemming from studies that (1) make use of instruments based on the original-RST instead of the newer and empirically supported r-RST , (2) focus on adolescents, students, or adult populations, and (3) consider the relation between temperamental dimensions among either self-oriented harmful behaviors or other-oriented harmful behaviors, but do not include both types of harmful behaviors in one study. 
By consequence, the similarities and differences considering temperamental traits in emerging adults who engage in no harmful behaviors, in either self- or other-oriented harmful behaviors, or those who engage in a combination of both harmful behaviors, remain unexplored. To address these gaps in the existing literature, the present study examines which temperamental traits can differentiate four different groups (i.e., a no-harm, NSSI-only, aggression-only, and a dual-harm group) in a pairwise manner, while controlling for age and sex, in a sample of emerging adults. We hypothesize that the likelihood of engaging in NSSI-only, compared to no-harm or aggression-only, will be positively associated with higher BIS . Additionally, we expect that individuals who engage in aggression-only will report lower BIS and higher BAS-Impulsivity than the group with no harmful behaviors or the individuals reporting NSSI-only . The hypothesis regarding the flight-freeze system is exploratory in nature. Considering dual-harm, the analyses are exploratory, but based on the strong link between dual-harm and borderline personality disorder , we assume that the odds of engaging in both NSSI and aggressive behavior, compared to no-harm, NSSI-only, or aggression-only, are positively associated with BIS and BAS and negatively with EC . The present study is part of a larger research project focusing on the relationship between self- and other-oriented harmful behaviors and temperament among emerging adults. In total, we collected data from 847 participants. Due to the fact that the present study focuses on NSSI and aggressive behaviors or a combination of both, we excluded participants who did not complete the questionnaires on NSSI and aggressive behavior. The remaining participants were 669 emerging adults aged 18–25 years old ( M age = 21.48; SD = 2.20), of whom 205 (30.64%) identified as men and 464 (69.36%) identified as women. As the data was collected in Belgium, 644 of the 669 participants had the Belgian nationality (96.26%), of which 8 reported a double nationality (1.20% of the total sample), and 25 participants reported a different nationality (3.74%). A snowball sampling technique was used to collect data from October 2021 until April 2022 during the COVID-19 pandemic. Invitations to participate in an anonymous web-based survey (i.e., informed consent form, sociodemographic items and eight questionnaires) were sent to social organizations (e.g., youth movement clubs, sports clubs, music societies, student societies) to distribute among their Dutch-speaking emerging adult members (18 to 26 years old). Only the questionnaires that are relevant for the present study are described below. The study was approved by the Social and Societal Ethics Committee of KU Leuven under file number G-2021–3870-R2(MAR). The sociodemographic variables that were included are sex (man/woman) and age (in years). Lifetime NSSI was assessed by means of a single dichotomous (yes/no) item ‘Have you ever engaged in self-injury without an intent to die?’. A definition of NSSI was offered to the participants to clarify that non-suicidal self-injury included harmful behaviors oriented towards the self, such as carving or cutting oneself, but without suicidal intent. The use of a single dichotomous item is common in NSSI research and often leads to a consistent estimation of lifetime NSSI prevalence . Aggressive behavior was operationalized by the ‘direct aggression’ scale of the Buss-Durkee Hostility Inventory-Dutch . 
The scale consists of 16 items (e.g., “When I really lose my temper, I am capable of slapping someone”) to be rated as true or false (α = .77). A total score ≥7.05 on the direct aggression scale indicates the presence of aggressive behavior . Utilizing lifetime NSSI and direct aggression, we constructed four groups: (1) those who reported neither NSSI nor aggressive behavior (no-harm group), (2) those who reported only NSSI (NSSI-only group), (3) those who reported only aggressive behavior (aggression-only group), and (4) those who reported both NSSI and aggressive behavior (dual-harm group). Reactive temperament (BAS, flight-freeze system, and BIS) was assessed by means of the Brief-Reinforcement Sensitivity Theory of Personality Questionnaire . The B-RST-PQ consists of 37 items which are rated on a 4-point Likert scale ranging from 1 (Not at all accurate) to 4 (Highly accurate). BAS is split in four BAS-subscales: BAS-Reward interest (5 items; e.g., “I regularly try new activities just to see if I enjoy them”; α = .79), BAS-Goal-drive persistence (5 items; e.g., “I am very persistent in achieving my goals”; α = .85), BAS-Reward reactivity (4 items; e.g., “I find myself reacting strongly to pleasurable things in life”; α = 68) and BAS-Impulsivity (5 items; e.g., “I find myself doing things on the spur of the moment”; α = .73). The flight-freeze system consists of 6 items (e.g., “Looking down from a great height makes me freeze.”; α = .67 in the present study). Finally, BIS consists of 12 items (e.g., “I am often preoccupied with unpleasant thoughts.”; α = .91 in the present study). Regulative temperament (EC) was assessed by means of the ‘Effortful control’ scale of the Adult Temperament Questionnaire Short Form . The ATQ-ECS consists of 19 items (e.g., “I often find it difficult to switch between different tasks” [reverse-coded]); α = .81 in the present study) to be rated on a 7-point Likert scale ranging from 1 (Not at all applicable) to 7 (Completely applicable). SPSS version 28 was used to analyze the data. A series of logistic regression models with two-sided significance tests were performed with the pairwise group comparisons as dependent variables, temperamental traits as independent variables, and age/sex as control variables. Odds ratios, 95% confidence intervals and Nagelkerke R 2 are reported. Odds ratios provide insight into the magnitude and direction of associations between predictor variables and the compared groups. An odds ratio higher than 1 indicates higher odds of belonging to the group under investigation compared to the reference group, whereas an odds ratio less than 1 suggests lower odds of belonging to the group under investigation compared to the reference group. Nagelkerke’s R 2 provides a measure of the model’s overall fit. Given the number of estimated logistic regression models, we conducted a Bonferroni correction to identify strong associations within each model, by dividing the significance level by the number of estimated models ( n = 6), resulting in a significance level of p < .008. The interactions between reactive temperament (BAS, flight-freeze system, and BIS) and regulative temperament (EC) were entered as a second block in each logistic regression model. As they were all found to be statistically non-significant, these were not presented in the manuscript. However, they are added in a supplementary table to the manuscript (Supplementary Materials 1). 
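As an illustration of the analysis plan, a minimal sketch of one of the six pairwise comparisons is given below using Python's statsmodels rather than the SPSS software actually used; the column names, the subset of predictors, and the helper function are assumptions for illustration only, and Nagelkerke's R² is not reproduced here.

# Illustrative sketch only (assumed column names; the authors used SPSS 28):
# one pairwise logistic regression, e.g. NSSI-only versus no-harm, with
# temperament scores as predictors and age/sex as control variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

ALPHA = 0.05 / 6  # Bonferroni correction over the six estimated models (p < .008)

def pairwise_logit(df, reference_group, target_group):
    """Odds of belonging to target_group rather than reference_group."""
    sub = df[df["group"].isin([reference_group, target_group])].copy()
    sub["y"] = (sub["group"] == target_group).astype(int)
    fit = smf.logit(
        "y ~ age + C(sex) + bas_impulsivity + bis + flight_freeze + ec",
        data=sub,
    ).fit(disp=False)
    out = pd.DataFrame({
        "odds_ratio": np.exp(fit.params),
        "ci_lower": np.exp(fit.conf_int()[0]),
        "ci_upper": np.exp(fit.conf_int()[1]),
        "p_value": fit.pvalues,
    })
    out["significant_after_bonferroni"] = out["p_value"] < ALPHA
    return out

# e.g. comparison of the NSSI-only group with the no-harm group:
# results = pairwise_logit(data, "no_harm", "nssi_only")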
To further explore the role of significant predictors identified in the results, we will conduct additional ANOVA analyses for each independent variables that emerges as a significant predictor of subgroup membership. Detailed results of these analyses are provided in the supplementary materials (Supplementary Materials 2). Lifetime NSSI was estimated at 32.88% ( n = 220) and aggressive behavior was estimated at 46.34% ( n = 310). Of all participants, 38.86% ( n = 260) reported no harm, 14.80% ( n = 99) reported NSSI-only, 28.25% ( n = 189) aggression-only, and 18.09% ( n = 121) reported dual-harm . Table 1 displays the results of the six logistic regression analyses. The pairwise comparisons are structured as a continuum, ranging from no-harm to single-harm (either self-oriented or other-oriented harm) and dual-harm. This continuum allows us to capture a more nuanced understanding of self-oriented, other-oriented, and dual-harmful behaviors in relation to temperamental traits. Nagelkerke’s R 2 indicates that the sociodemographic and temperament variables in the model comparisons explain between 16.2% and 37.4% of the variance in group membership. After Bonferroni correction, the odds of belonging to the NSSI-only group over the no-harm group (comparison 1 of Table 1 ) are positively related to BIS. Individuals who only engage in NSSI show higher levels of BIS reactivity (anxiety) compared to individuals who do not engage in either NSSI nor aggressive behaviors (see also Supplementary Materials 2, for an overview of the mean scores of BIS across subgroups). The odds of belonging to the aggression-only group over the no-harm group (comparison 2 of Table 1 ) are positively related to BAS-Impulsivity. This result implies that individuals who engage in aggressive behaviors tend to report higher BAS-Impulsivity (approach reward without planning) compared to individuals who engage in neither NSSI nor aggressive behavior (see also Supplementary Materials 2, for an overview of the mean scores of BAS-Impulsivity across subgroups). The odds of belonging to the dual-harm group over the no-harm group (comparison 3 of Table 1 ) are positively associated with BIS reactivity and BAS-Impulsivity. These results indicate that individuals who engage in both NSSI and aggressive behaviors have a tendency to report higher BIS (anxiety) and BAS-Impulsivity (approach reward without planning) compared to those who engage in neither harmful behavior. The odds of belonging to the aggression-only group over the NSSI-only group (comparison 4 of Table 1 ) are positively associated with BAS-Impulsivity and negatively with BIS reactivity. This means that individuals who only engage in aggressive behaviors are more likely to exhibit higher levels of BAS-Impulsivity (approach reward without planning) and lower levels of BIS (anxiety) compared to individuals who only engage in NSSI. The odds of belonging to the dual-harm group over the NSSI-only group (comparison 5 of Table 1 ) are negatively associated with EC, implying that individuals who engage in both NSSI and aggressive behaviors report less EC (conscientiousness) than individuals who only engage in NSSI (see also Supplementary Materials 2, for an overview of the mean scores of EC across subgroups). Finally, the odds of belonging to the dual-harm group compared to the aggression-only group (comparison 6 of Table 1 ) are positively related to BIS. 
This result implies that individuals who engage in both NSSI and aggressive behaviors are more likely to show higher BIS (anxiety) than those who only engage in aggressive behaviors. The objective of this study was to explore which reactive and regulative temperamental traits can serve as transdiagnostic or unique factors underlying engagement in self-oriented, other-oriented, and dual-harmful behaviors in a sample of emerging adults. The following findings are of particular significance. When comparing the no-harm group with individuals who engage in only NSSI, only aggressive behaviors, or a combination of both, the results support earlier findings which related temperamental dimensions against the presence or absence of either harmful behavior. First, BIS, i.e., high anxiety, has a positive impact on the odds of belonging to the NSSI-only group versus the no-harm group. This finding is in accordance with previous studies that support higher BIS reactivity among individuals engaging in NSSI compared to those without NSSI . These individuals high in BIS often exhibit higher levels of affect-related dysregulation, as found in depression , which may drive individuals to the use of NSSI as an affect-regulatory strategy to alleviate intense negative emotions . BAS-Impulsivity, i.e., a tendency to approach reward without planning, significantly increases the odds of aggressive behavior over no harmful behavior. These results are in line with a considerable body of evidence showing that individuals exhibiting broader impulse control issues show a higher propensity for direct aggressive behavior . Finally, high BIS and high BAS-Impulsivity increase the odds of engaging in both NSSI and aggressive behaviors over engaging in neither harmful behavior. The profile of high BIS and high BAS-Impulsivity is typically seen in individuals with borderline personality disorder who also report intense emotionality and high disinhibition and impulsivity . The above findings support interventions targeting emotional regulation and impulse control, as found in empirically supported treatments such as dialectical behavior therapy , to reduce dual-harmful behaviors . Multiple studies have demonstrated the efficacy of DBT in individuals engaging in NSSI and aggressive behaviors . Future research should seek to investigate the effectiveness of these interventions to those engaging in dual-harm. The flight-freeze system does not seem to differentiate between no-harm and the (co-)occurrence of harmful behaviors, i.e., only NSSI, only aggression, or dual-harm. This is an important result as the relationship between the flight-freeze system and harmful behaviors has not been previously explored. While the flight-freeze system and BIS are jointly part of a general avoidance tendency, the results demonstrate that BIS explains more variance in harmful behaviors than the flight-freeze system. As far as the authors know, no studies compared temperamental reactivity between an NSSI-only group and a group with aggression-only or dual-harm. The results show that BIS (anxiety) has a negative impact on the odds of aggression-only as opposed NSSI-only, whereas BAS-impulsivity (impulsive action without thinking about the consequences of one’s behavior) positively impacts the odds of engaging in aggression-only above NSSI. These findings support earlier research that elevated BAS and reduced BIS are related to aggressive behaviors . These findings also align with the profile of psychopathy . 
Individuals with a weak BIS (primary psychopathy) do not experience sufficient anxiety to inhibit antisocial behaviors, whereas a strong BAS, especially BAS-Impulsivity (secondary psychopathy), drives aggressive behaviors due to the tendency to act without thinking. The inverse findings hold for the NSSI-only group compared to the aggression-only group: the NSSI-only group is characterized by high BIS and low BAS-Impulsivity reactivity relative to the aggression-only group. Several prior studies have supported a negative relationship between BAS-Impulsivity and NSSI, but the results so far were inconclusive and based on the original RST instead of the r-RST. Elevated BIS and reduced BAS fit the temperamental profile of depression, which can explain the link between NSSI and depressive symptomatology. Presumably, individuals may resort to NSSI engagement to evoke positive feelings (missing due to low BAS) and/or to reduce their negative affect. EC, i.e., conscientiousness, is lower in individuals engaging in dual-harm compared to individuals engaging in NSSI-only. This finding is in line with studies by Slade et al., Richmond-Rakerd et al., and Spaan et al., which showed that individuals who engage in dual-harm struggle with top-down control (or EC) to regulate reactive emotions when facing distressing situations. Studies have not yet examined differences in temperamental reactivity between individuals who engage exclusively in aggressive behaviors and those who engage in both aggressive behaviors and NSSI (dual-harm). The results indicate that high BIS, i.e., anxiety, increases the likelihood of engaging in dual-harm as opposed to only engaging in aggressive behaviors. These findings highlight the role of BIS in dual-harm, where elevated anxiety may contribute to a repetitive cycle of harmful actions. In contrast, the aggression-only group tends to report lower sensitivity to anxiety, suggesting the behavior is impulsive rather than driven by internalized distress. These findings support the cognitive-emotional model of dual-harm, which suggests that individuals prone to emotional instability, interpersonal difficulties, and maladaptive coping (factors all positively correlated with BIS) may be particularly susceptible to dual-harm. The discussed findings have both theoretical and clinical implications. In terms of theoretical implications, this study underscores the importance of administering the r-RST rather than the original RST. The r-RST seems to offer more nuance; for example, BIS and the flight-freeze system show two different dynamics even though they are both part of a general avoidance tendency system. BAS-Impulsivity seems to play a more important role in differentiating engagement in harmful behaviors than the other BAS-subscales. Clinically, the findings in this study support the need for tailored interventions for individuals engaging in different harmful behaviors. Evidence-based treatments of NSSI and aggression, such as Dialectical Behavior Therapy or Cognitive-Behavioral Therapy, often include training strategies to replace harmful behaviors with more adaptive behavioral strategies. Based on the findings in this manuscript, which show that BIS, BAS-Impulsivity, and EC differentiate between no-harm, NSSI-only, aggression-only, and dual-harm, we need to encourage individuals who engage in NSSI to develop emotion-regulating behaviors that are not harmful, whereas for individuals who engage in aggression, we recommend focusing on impulse-regulation skills. 
In the case of dual-harm, both emotion and impulse regulation skills are needed, which is the case in programs such as DBT. Although the present study offers valuable insights as one of the few studies currently available that considers temperamental dimensions underlying both self-oriented and other-oriented forms of harmful behavior, several limitations warrant consideration. The study used cross-sectional data from a community sample collected through snowball sampling. As we collected only limited sociodemographic information (age, sex, and nationality), our ability to assess the relevance of these findings across different demographic groups is restricted. Although this recruitment method is practical for reaching individuals who engage in NSSI and/or aggressive behaviors, it may limit the generalizability of the findings. Future research could address this limitation by employing randomized sampling methods. Replicating the study in clinical populations would also provide a more comprehensive understanding of these behaviors and allow for exploration of the potential benefits of therapeutic interventions on self-oriented, other-oriented, or dual-harm. Additionally, longitudinal research is needed to examine developmental trajectories and ascertain whether engaging in NSSI, aggressive behavior and dual-harm are more than merely associated with temperamental traits. Future studies should explore whether reactive and regulative temperament differentially predict NSSI, aggression, and dual-harm. Longitudinal research can also contribute to our understanding of engagement in NSSI and aggressive behavior as possible risks for more adverse outcomes over time. Moreover, in the present study, we included a combination of NSSI and aggressive behavior to operationalize dual-harmful behaviors. However, there is no consensus on which types of harmful behaviors should be included to constitute dual-harm, or on whether there should be a cutoff to establish recency or severity of the behaviors included. Future studies should examine a broader range of harmful behaviors in relation to each other, also considering their diverse characteristics, such as behavioral expressions, persistence, and thoughts related to the harmful behaviors. In that perspective, the work of Bresin offers a meaningful overview of diverse types of harmful behaviors (e.g., aggression, NSSI, as well as substance use, binge eating and gambling). Finally, there is a close relationship between intrapersonal functioning (e.g., temperament, emotion regulation) and interpersonal functioning (e.g., parental criticism, abuse), which are accredited in conceptual models of both NSSI and aggressive behaviors . The present study focused on intrapersonal factors, i.e., temperamental dimensions, and did not include other factors that may have mediated or moderated the relation between temperament and harmful behaviors. Building upon the present study, future research could examine how interpersonal factors, as well as interactions between intrapersonal and interpersonal factors might play a role in the association between temperament and harmful behaviors. In summary, the findings of this study reveal that reactive and regulative temperament are important transdiagnostic factors underlying engagement in self-oriented, other-oriented, and dual-harmful behaviors. 
Specifically, an elevated BIS and a decreased BAS-Impulsivity are linked to a greater likelihood of engaging in NSSI, as opposed to reporting no harmful behaviors or only aggressive behaviors. Conversely, a decreased BIS and an elevated BAS-Impulsivity are linked with a propensity for engaging in aggressive behaviors, compared to reporting no harmful behaviors or only NSSI. Individuals exhibiting dual-harmful behaviors demonstrate a deficit in EC, indicating lower levels of self-regulation, compared to individuals engaging only in NSSI, and high BIS and BAS-Impulsivity compared to those engaging only in aggressive behaviors or in neither NSSI nor aggressive behaviors. These differential associations highlight the nuanced interplay between temperamental traits and specific manifestations of harmful behaviors among emerging adults, which should be considered in future research and clinical practice. The data supporting the findings of this study are available upon reasonable request; please contact the first author for access. The additional files for this article can be found as follows: | Study | biomedical | en | 0.999997
PMC11697619 | In 2015, the Lancet Commission on Global Surgery (LCoGS) proposed six core surgical indicators to monitor access to safe and affordable surgical and anesthesia care . The indicators were developed to define, assess, and inform the surgical system on preparedness, service delivery, and cost‑efficiency. The first indicator measures the proportion of a country’s population living within 2 h of a bellwether‑capable facility, which provides cesarean section, laparotomy, and management of open fractures . This metric serves as a proxy for timely access to essential surgery, with an 80% 2‑h access (2HA) rate considered adequate . Geospatial mapping of 2HA has been conducted in various regions worldwide; however, comprehensive studies in Indonesia are lacking. Indonesia, an archipelagic nation comprising approximately 17,000 islands, presents unique geographical challenges for healthcare delivery . As the world’s fourth most populated country, with a population exceeding 273 million in 2020, the dispersion of its population across numerous islands hinders the provision of timely healthcare services. Previous studies have highlighted significant barriers to timely, safe, and affordable surgery in Southeast Asia, underscoring the need for targeted research in this context . In this study, we conducted a geospatial analysis of access to emergency obstetric services within a 2‑h drive or 30‑min walk in Indonesia, focusing on the country’s substantial geographical barriers. There are two main objectives in this study: first, to determine the proportion of the reproductive‑age population in Indonesia that can reach a hospital with emergency obstetric services within a specified timeframe and, second, to identify areas lacking adequate access and suggest potential sites for infrastructure improvements. The purpose is to provide a comprehensive estimate of access based on population distribution, hospital locations, and road networks, which can assist the Indonesian government and other stakeholders in making informed national decisions. In this study, we adopted an observational cross‑sectional design to evaluate geospatial access to emergency obstetric surgery services in Indonesia. Secondary data sources included the hospital location from the Ministry of Health (MoH), obstetric gynecologist (OBGYN) practice location from the electronic management office (EMOP) of the Indonesian Society of Obstetrics and Gynecology (ISOG/POGI), and population estimates from the Facebook high‑resolution settlement layer (HRSL). We focused on women of reproductive age (15–49 years) and evaluated their access to hospitals with obstetric services within a 2‑h drive or 30‑min walk. The analysis identified underserved areas by mapping population density and hospital distribution and aimed to inform surgical workforce planning and infrastructure development in Indonesia. The latest maternal mortality ratio (MMR) data available for 2020 were obtained from www.bps.go.id . Indonesia can be divided into seven main island groups: Sumatra, Java, Bali and Lesser Sunda, Kalimantan, Sulawesi, the Maluku Islands, and Papua. These islands are divided into 38 provinces distributed across three time zones; of these provinces, five are Special Autonomy Provinces (Aceh, Yogyakarta, Jakarta, Papua, and West Papua). In December 2023, all Indonesian hospitals were identified using a comprehensive database of accredited hospitals published by the MoH . According to the Indonesian MoH Decree No. 
56 of 2014, all hospitals are classified into two types using the type of service provided. General hospitals provide healthcare services across all fields and for all disease types. In contrast, specialty hospitals primarily focus on a specific field or disease type based on discipline, age group, organ, type of disease, and other specializations. Hospitals are classified into public and private using their management system. Public hospitals are managed by the state or local government or a non‑profit legal entity. In addition, private hospitals are managed by legal entities with the purpose of profiting from the private organizations or legal entities. For this study, all 3,202 recorded hospitals from the Indonesian MoH database were retrieved on 4 September 2024 and matched with the database of OBGYN, including their practice locations, from ISOG. Following manual data cleaning, a final list of 2,566 hospitals was obtained. Subsequently, the Global Positioning System (GPS) coordinates of each hospital were identified using Google Earth Pro, which automatically tabulated the latitude and longitude of hospitals using their address. Hospital names and their GPS coordinates were input into ArcGIS Pro (version 3.2) and stored as point features. The spatial population data for women of reproductive age (WRA), defined as those aged 15–49 years, were obtained from Facebook HRSL for 2020, the most recent available year . The detailed methodology for these data, combining census data, satellite imagery, and machine learning algorithms, was explained in another study . The LCoGS outlined timely access to surgical care as 2‑h access to a hospital providing surgical care. In addition, 30‑min access was measured to evaluate access to essential obstetric care, as recommended by the American College of Obstetricians and Gynecologists (ACOG). This suggests a 30‑minute benchmark for access to emergency cesarean sections (CS). We utilized the Network Analyst tool from ArcGIS Pro with a road network database sourced from the Indonesian Geospatial Information Agency (Badan Informasi Geospatial Indonesia), scaled at 1:250,000, to estimate walking and driving time. The walking speed was estimated to be 5 km/h; in contrast, the vehicle driving speed was set to 50 km/h. Service area maps were generated around each hospital with available OBGYNs, delineating areas reachable within 30 min of walking and 2 h of driving with a vehicle. Speed limits were embedded within the road network dataset, with the assumption that all patients always adhered to these speed limits. First, a service‑area analysis layer was created by importing a road network map. Subsequently, driving time parameters and hospital locations were selected, and a solving tool was used to perform the analysis. The population estimate raster was combined with the driving time analysis areas to calculate the total population for each travel time. These data were summarized using zonal statistics. The detailed methodology for geographic information system (GIS) analysis was previously described in another study . This study was approved by the Health Research Ethics Committee of the Faculty of Public Health, Universitas Airlangga . There are 2,855 hospitals across Indonesia with an available obstetric gynecologist (OBGYN) providing emergency obstetric surgical services . In Indonesia, 89.2% of 3,202 hospitals have an obstetrician‑gynecologist who can provide emergency obstetric surgical services. 
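To make the service-area step described in the methods concrete, the sketch below reproduces the core idea with open-source tools instead of ArcGIS Pro Network Analyst: road-graph edges carry travel times derived from segment length and the assumed 50 km/h driving speed, a multi-source shortest-path search finds every node reachable from any hospital within 120 minutes, and the WRA population assigned to those nodes is summed. The toy graph, node-level population counts, and function names are placeholders, not the national road network dataset used in the study.

# Simplified service-area sketch (open-source substitute for Network Analyst).
import networkx as nx

DRIVE_KMH = 50.0      # assumed driving speed from the methods
CUTOFF_MIN = 120.0    # 2-hour access threshold (first LCoGS indicator)

def edge_minutes(length_km, speed_kmh=DRIVE_KMH):
    """Travel time in minutes for a road segment of a given length."""
    return 60.0 * length_km / speed_kmh

def two_hour_coverage(road_graph, hospital_nodes, wra_per_node):
    """Share of the WRA population within the 2-hour driving service area."""
    # Minutes from the nearest hospital to every node reachable within the cutoff
    minutes = nx.multi_source_dijkstra_path_length(
        road_graph, hospital_nodes, cutoff=CUTOFF_MIN, weight="minutes"
    )
    covered = sum(wra_per_node.get(node, 0) for node in minutes)
    total = sum(wra_per_node.values())
    return covered / total if total else 0.0

# Toy usage: three road nodes, one hospital at node "A"
G = nx.Graph()
G.add_edge("A", "B", minutes=edge_minutes(40))   # 40 km -> 48 min
G.add_edge("B", "C", minutes=edge_minutes(80))   # 80 km -> 96 min (C is beyond 2 h)
print(two_hour_coverage(G, {"A"}, {"A": 1000, "B": 500, "C": 200}))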
Overall, 94.5% of the population lives within 2 h of a hospital that provides emergency obstetric surgery, which is notably above the LCoGS target rate of 80% for every country by 2030. Among the seven island groups, five met the LCoGS indicator, ensuring that at least 80% of the population reached a hospital capable of performing emergency CS within 2 h of travel time. The total WRA population was highest in the Java Island group, with 99.2% having 2‑h access to emergency obstetric surgical care (EOSC). In contrast, the Maluku Islands and Papua had the lowest WRA populations and the lowest 2‑h access coverage at 69.2% and 60.7%, respectively . Of 5,305 OBGYNs in Indonesia, 448 (8.4%) were either inactive or retired. The provinces with the highest percentages of inactive or retired OBGYNs were the Riau Islands (29.8%) and Riau (21.2%), indicating significant regional disparities in active OBGYN practitioners. In addition, 120 OBGYNs (2.3%) worked exclusively in private clinics, with the highest proportions in the Bangka Belitung Islands (6.5%) and East Kalimantan (5.0%) (Supplementary Table 1). Meanwhile, 189 and 20 general and surgical hospitals (7.0% and 64.5%), respectively, across various provinces did not have actively practicing OBGYNs (APO). Notably, in regions such as Bengkulu, North Maluku, and Southeast Sulawesi, all surgical hospitals lacked OBGYNs. In addition, other hospital categories consistently demonstrated the highest proportion without OBGYNs, reaching 85.6% nationally (Supplementary Table 2). Furthermore, 108 Class D hospitals (12.3%) and 57 Class D primary hospitals (82.6%) lacked actively practicing OBGYNs, indicating a significant gap in specialist availability among lower‑classified hospitals. Notably, class C hospitals were affected, with 109 facilities (6.3%) lacking OBGYNs, which reflects the disparities in the availability of OBGYNs across various hospital classes, particularly in rural or underserved areas . At the provincial level, the geospatial analysis showed that eight provinces did not achieve the first LCoGS indicator target of 80%, including West Kalimantan, the Riau Islands, Maluku, North Maluku, Papua, Papua Mountains, South Papua, and Central Papua. Access to EOSC is lowest (42.8%) in South Papua. In contrast, 2HA to EOSC is highest in Jakarta, which is the current capital of Indonesia, with 100% and 95% of the WRA population within 2HA and 30‑min walking time, respectively. The 2HA to EOSC is largely affected by population distribution compared with the island distribution and land area. The densely populated areas had higher access rates. For example, Jakarta has a 2HA of 100% and the highest population density at 4,849 WRA per km 2 . Conversely, access rates were lowest in South Papua (42.8%) and the Papua Mountains (46.7%), with WRA population densities of 2 and 5 per km 2 , respectively ( Table 2 ). Indonesia’s national maternal mortality ratio (MMR) in 2020 was 186 per 100,000 live births. However, the MMR levels vary greatly in each province in Indonesia . As shown in Figure 4 , the MMR (per 100,000) is significantly and negatively correlated with the number of APOs , the OBGYN‑to‑10,000‑WRA ratio , the percentage of the WRA population within a 30‑minute walking distance , and the percentage of the WRA population within 2HA . However, no significant correlation was observed between the MMR and the number of hospitals with an actively practicing OBGYN , or between the MMR and the number of maternal and child HAPOs . 
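A province-level check of the reported relationships could be run along the lines of the sketch below; the paper does not state which correlation coefficient was used, so Pearson's r is shown here as one reasonable choice, and the column names are assumed labels rather than fields from the actual BPS, MoH, or ISOG datasets.

# Illustrative correlation check between provincial MMR and access indicators.
from scipy.stats import pearsonr

ACCESS_COLUMNS = [
    "active_obgyns",          # number of actively practicing OBGYNs (APO)
    "obgyn_per_10000_wra",    # OBGYN-to-10,000-WRA ratio
    "pct_wra_30min_walk",     # % of WRA within a 30-minute walk
    "pct_wra_2h_drive",       # % of WRA within 2-hour driving access
]

def mmr_correlations(provinces):
    """Pearson correlation of each indicator with provincial MMR per 100,000."""
    results = {}
    for col in ACCESS_COLUMNS:
        r, p = pearsonr(provinces[col], provinces["mmr_per_100000"])
        results[col] = {"r": round(r, 2), "p": round(p, 4)}
    return results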
This is a novel study on the location and coverage of obstetric‑gynecological services in Indonesia nationally and in the provinces. In addition, it is the first study to utilize geospatial analysis in calculating reproductive‑age population estimates with access to emergency obstetric services in these provinces within a set time. This aligns with and adds to findings from similar studies in Southeast Asia, specifically in Malaysia and the Philippines, where the national 2HA target set by the LCoGS was achieved . However, regional disparities are evident in remote and underserved areas. These studies highlight that significant gaps in access persist locally, particularly in regions with challenging geography and lower population densities, despite averages meeting global standards. Access to facilities with emergency obstetric services in Indonesia varies depending on geographic location. Based on region, there were evident disparities in access to essential surgery, as eight provinces did not meet the LCoGS target. This implies that health equity in surgery varies across the different regions of Indonesia. Moreover, despite having more than 4,857 active OBGYNs, only 2,855 of the 3,202 hospitals had OBGYNs. This implies that many OBGYNs have multisite practices in locations where other OBGYNs are present. Such clustering may lead to inaccuracies in assessing physician availability in specific areas, which can hinder the development of effective interventions and public policies, as observed in the United States . Specialist doctors, particularly OBGYNs, usually establish their practices in densely populated areas, as previously reported in an Indonesian study . This clustering can be attributed to multiple factors, including the assurance of better welfare, access to advanced medical facilities, professional growth opportunities, and higher patient volumes . Urban areas often offer more robust healthcare infrastructure, better educational opportunities for children, and enhanced personal and professional networks, making these locations more attractive for specialists . However, this concentration underscores the disparity in healthcare access between urban and rural areas, highlighting the need for targeted policies to distribute medical professionals evenly across regions. Global surgery, which encompasses essential obstetric surgical services, plays a pivotal role in achieving the Sustainable Development Goals 2030 (SDGs) by addressing key objectives such as eradicating poverty (SDG 1), enhancing health and well‑being (SDG 3), promoting economic growth and decent work (SDG 8), and reducing inequalities (SDGs 5 and 10) . This broad impact highlights the importance of accessible and comprehensive surgical, anesthetic, and obstetric (SAO) care, particularly in regions with geographical challenges limiting service availability. Geographical access to SAO care must be clearly defined, as it can facilitate the expansion of surgical care and strategy development to enhance geographic access for the population. SAO care includes a wide array of procedures, with obstetric care forming a crucial foundation that significantly affects WRA. Numerous studies have evaluated geographical surgical access using GIS software in various countries [ 5 , 6 , 11 , 17 – 23 ]. Any effort to scale up surgical care access requires a thorough evaluation of the distribution of facilities providing such care. 
The data gathered in this study should be integrated into Indonesia's National Surgical, Obstetric, Anesthesia, and Nursing Plan (NSOANP), representing an initial critical step toward enhancing the surgical system and improving surgical care. The geospatial analysis results can inform national surgical planning and policy development, aiming to improve access to safe, affordable, and timely surgical care. According to the geospatial analysis, eight provinces have failed to meet the first LCoGS indicator objective of 80%. The higher MMR levels in certain provinces (for instance, 565 per 100,000 live births in Papua province) are consistent with this reality. This figure is significantly higher than the national MMR (186 per 100,000 live births) and contrasts sharply with the MMR of 48 per 100,000 in Jakarta province, which has the highest 2HA to EOSC coverage. The significant negative correlation between the MMR and the number of actively practicing obstetricians and gynecologists (APOs), as well as the OBGYN-to-10,000-WRA ratio, underscores the critical role of skilled obstetric care availability in reducing maternal deaths. This relationship suggests that provinces with a higher density of practicing obstetricians experience lower maternal mortality, likely due to improved access to timely and competent care during pregnancy and childbirth. This finding aligns with previous research indicating that the availability of human resources, such as obstetricians and midwives, is associated with fewer inadequate referrals and better maternal outcomes. Conversely, the absence of a significant correlation between MMR and the number of hospitals with an available OBGYN (HAO) suggests that merely increasing the number of such facilities does not necessarily lead to improved maternal health outcomes. This discrepancy may be attributed to factors such as the uneven distribution of obstetricians across hospitals, variations in service quality and readiness, and barriers to accessing these facilities, including geographic distance and socioeconomic constraints. Studies have highlighted that, despite the presence of healthcare facilities, disparities in access and quality of care persist, contributing to high maternal mortality rates. Therefore, addressing maternal mortality requires a comprehensive approach that ensures equitable distribution of qualified healthcare providers and addresses barriers to accessing high-quality maternal care. Indonesia has achieved a national CS rate of 17.6%, based on the 2018 Indonesian Demographic Health Survey. This surpasses the World Health Organization's recommended threshold of 10–15% of deliveries for reducing MMR levels; however, the country still reports high MMR levels. This discrepancy underscores a significant gap in maternal health outcomes. While factors such as proximity to health facilities and urban residency contribute to higher CS rates, they do not fully address the complexities of maternal mortality. This indicates potential gaps in the healthcare system, such as variations in the quality of care, rural access problems, and underlying health conditions in pregnant women, which are not resolved by higher CS rates alone. A deeper evaluation of these factors is essential to understand the persistently high MMR despite meeting the CS targets, suggesting the need for more comprehensive maternal healthcare strategies. 
The LCoGS identified three factors causing delays in patient care: (1) lack of knowledge about health systems, poor health-seeking behavior, or distrust in the health system and cultural beliefs; (2) poor accessibility to healthcare facilities due to costs or infrastructure; and (3) insufficient capacity of health services to provide necessary care upon arrival. The 2-h bellwether access in our geospatial analysis only addresses the travel aspect of the second delay and does not account for travel costs, including ambulance services, which are not free in Indonesia and often face logistical challenges along with long response times. These travel costs and personal family needs can contribute to the first and second delays. In addition, a late referral system can worsen delays, leading to maternal mortality, as evidenced by a previous study. Therefore, further studies should consider these other "delays" to improve access to SAO care. Our study had some methodological limitations. First, the Indonesian archipelago, with over 17,000 islands and a total area of approximately 5.1 million km², consists predominantly of water (approximately 70%). Similar to a study conducted in the Philippines, we excluded boat and air transportation from our ArcGIS analysis. Because air and sea travel times vary with weather conditions and transportation type (from ferries to canoes), and because challenging terrain in mountainous regions such as Papua can make access dependent solely on helicopters or non-commercial flights with infrequent schedules, we standardized our measurements to land transportation times. Consequently, residents of remote islands requiring air or sea travel to reach healthcare facilities were categorized as outside the 2HA zone for hospitals with available OBGYNs. We also did not consider other influential travel factors, such as economic, social, or cultural aspects. There is a possibility that the 2HA estimations are optimistic because we did not account for variability in traffic conditions and temporary road closures due to seasonal climate change, monsoons, floods, or earthquakes, which are frequent in Indonesia, a tropical country located in the Ring of Fire. In addition, the road network dataset in Indonesia may not accurately represent travel conditions due to the exclusion of unnamed and smaller roads. Furthermore, population datasets may either under- or overestimate the population of certain areas, leading to distorted results. Moreover, not all Indonesians can afford cars or other motorized vehicles, which may result in variations in the calculated 2HA estimates. According to the Indonesian Central Bureau of Statistics, the number of registered cars and motorcycles in 2022 was 17,168,862 and 125,305,332, respectively. This translates to an estimated six cars and 45 motorcycles per 100 persons. Although these figures are relatively low, there are multiple alternative and affordable modes of public transportation tailored to local customs in various regions, including angkot (minivans), bajaj (three-wheeled taxis), mikrolet (minibuses), becak (pedicabs), bentor (motorized pedicabs), bendi (horse-drawn carriages), and oplet (vans). In addition, online motorcycle and car taxi services, such as Gojek, are available in almost all major cities and regencies in Indonesia and can be booked 24/7, depending on driver availability. In this study, we assumed that hospitals can provide emergency obstetric surgical procedures 24/7. 
However, some facilities may not operate continuously, and the availability of OBGYNs and functional operating theaters is limited in certain regions. The surgical safety checklist for cesarean sections developed by the Society for Maternal-Fetal Medicine (SMFM) may not be followed by all hospitals in Indonesia. Furthermore, the shortage of anesthesiologists in some areas may result in surgeries being performed without adequate anesthesia, which is suboptimal and can lead to undesirable outcomes. Owing to technical and logistical constraints, we were unable to conduct a facility assessment across all 2,566 of the 3,117 hospitals in Indonesia. We recommend that future studies include such assessments, because data discrepancies are a significant problem in Indonesia and are simultaneously being addressed and improved through the government's One Data Indonesia initiative. In conclusion, 94.5% of the Indonesian WRA population was able to reach a hospital with EOSC within 2 h. However, the provincial analysis showed that access falls below the LCoGS recommendation of 80% in eight provinces. The results of this study will help inform the Indonesian government's national obstetric planning. | Other | biomedical | en | 0.999998
PMC11697623 | In recent decades there has been increasing interest in the impact of research. Late phase clinical trials and systematic reviews of trials may find results that have the potential to improve health outcomes for people. However, there are often delays in the results influencing clinical practice. Previous research has found that it can take almost two decades, on average, for research results to go from discovery to practical application . These delays in implementing evidence-based approaches have serious implications for patients and the health care system. The most obvious effect is that, due to this delay, many patients and service users miss out on the benefits of evidence-based care [ 1 – 3 ]. These delays are not inevitable; for example, during the COVID-19 pandemic guidelines incorporating the latest evidence from trials and meta-analyses were developed at pace, and practice changed rapidly in response to emerging evidence . Against this backdrop, the concept of knowledge transfer and exchange has developed, which seeks to encourage the movement of research knowledge into action . Originally developed by the Canadian Institute of Health Research, many research funders now encourage grant applicants to think about how their research will be translated into action from this early stage of the development of ideas. This is of particular interest to public and charitable research funders, who want to be able to demonstrate to tax payers and donors that their investment in research has resulted in changes in policy and practice. Having a knowledge transfer and exchange strategy is a requirement of the Medical Research Council for University Units it funds, which includes our department. Part of the vision of our department is delivering a swifter and more effective translation of scientific research into patient benefits. Many models and frameworks to understand the knowledge to practice process exist [ 6 – 17 ], but these may be hard for busy clinical trialists to translate into practical actions. We therefore sought to develop a knowledge transfer and exchange strategy for our clinical trials unit, to support research teams to think through the actions they can take at different stages of their research to maximise and accelerate the impact of that research on policy and practice. This letter describes the strategy we developed, and how it was developed. The Medical Research Council Clinical Trials Unit at UCL (MRCCTU at UCL) is a large clinical trials unit carrying out mostly late-phase trials in the areas of infectious diseases, cancer and neurodegenerative diseases. We work in both high and low- and middle-income settings. Our aim is to deliver a swifter and more effective translation of our trial and meta-analysis results into health benefits. Effective knowledge transfer and exchange is essential to achieving this. We have a small team of research communications professionals who support the knowledge transfer and exchange activities of the unit. The first step in developing the strategy occurred at a senior staff away day, where attendees were asked to list the activities they did as part of their studies to encourage knowledge transfer and exchange. These activities were grouped into 5 ‘strands’, described in Table 1 . 
Partnerships Communication Maximising the scientific value of our studies Strengthening capacity Learning and sharing Table 1 Description of the strands of our knowledge transfer and exchange strategy Strand Description Partnerships with external stakeholders Including collaborators involved in implementing our research; patient and public involvement, and stakeholder engagement activities Communication Activities to communicate about our research to various audiences, throughout the study process Maximising the scientific value of studies Actions to ensure our studies generate the range of evidence needed by stakeholders (such as including multi-disciplinary sub-studies) and that evidence is accessible to stakeholders (such as through open access publications and data sharing) Strengthening capacity Including efforts to build the capacity of our staff and partners around knowledge transfer and exchange, and to build the capacity of stakeholders to understand and apply the results of our studies Learning and sharing Evaluating the impact of our studies and knowledge transfer and exchange work to inform future studies; sharing our learning internally and externally, and seconding people to and from other organisations, so we can learn and share our knowledge with them We then formed a Knowledge Transfer and Exchange Working Group, made up of representatives from the Infections Cancer, and Methodology Research Themes together with members of the Communications Team. This group was tasked with developing the Knowledge Transfer and Exchange strategy for the unit. The group met approximately monthly throughout 2022. The group decided the strategy needed to cover activities that happen at the unit-level and those that happen at the study-level. It was agreed that there were substantial differences between the sorts of study-level activities appropriate for clinical trials, observational studies and meta-analyses, and those relevant for methodological research into the design, conduct and analysis of clinical trials and meta-analyses. A sub-group was formed to focus on developing a version that was relevant to methodological studies. This letter shares the strategy developed for clinical trials, observational studies, meta-analyses and other studies where primary data are being collected. Activities were included in the strategy if they have been used in at least some of our studies. Those that were mandatory in order to comply with department or funder policies (such as open access publication, and patient and public involvement) were categorised as essential. Those that are likely to be useful and appropriate for most of our studies were highly recommended, while those which may only be relevant in some contexts (but useful in those situations) were categorised as for consideration. We excluded activities that, although known to be effective at promoting research impact, were unlikely to be feasible for our studies, such as academic detailing (outreach) interventions . Through discussion, the working group developed separate tables showing the activities happening at unit (Table 2 ) and study level , organised by strand as identified earlier in the process. The Working Group then developed checklists for studies at different stages (planning (from initial idea through to opening of the study), conduct (from opening to closing of the study), results (from analysis of results to publication), and translation of results (activities that take place after publication)). 
The checklists contain links to relevant guidance, to help teams think through what they should be doing to encourage knowledge transfer and exchange. Examples of the different activities being applied in different studies were compiled. Table 2 Unit-level knowledge transfer and exchange activities Strand Activities Partnerships with external stakeholders Patient and Public Involvement (PPI) Group PPI input to Quality Management Advisory Group PPI on Protocol Review Committee Engaging with other external stakeholders (long-term relationships lasting over generations of trials, and new partnerships developed to respond to current challenges and opportunities), including NGOs, professional bodies, guideline developers, healthcare commissioners, ethics committees, regulators and industry partners Communication Development and implementation of Unit Communications Strategy Maintaining communications channels including Vimeo, Soundcloud, MRCCTU website, LinkedIn, YouTube and Twitter Maximising the scientific value of studies Unit infrastructure supporting open access publication Unit infrastructure supporting data sharing SSG review to look for opportunities to embed methodology studies, and other ways to maximise the scientific value of our studies Identifying IP issues that need to be considered for a study Strengthening capacity Building internal capacity to develop and implement research impact strategies Building internal capacity to involve patients and the public in research and communication of results Building internal capacity to communicate research clearly Building external capacity to do high-quality research and apply methods developed at the unit Building external capacity to use/understand research Learning and sharing Seconding people into the unit with very specific skill sets to bring to the CTU, and those seeking to gain skills and experience to further their own careers within partner organisations Seconding unit staff to partner organisations Evaluating the impact of our research, and sharing case studies internally and externally Collect examples of impact of our research annually Monitoring our unit communication channels Sharing good practice and lessons learnt Fig. 1 Clinical study-level knowledge transfer and exchange activities The Knowledge Transfer and Exchange Working Group recruited studies at different stages of the trial life-cycle, to pilot the strategy, guidance and tools. Feedback led us to clarify the wording in some places, and compile examples from previous studies to illustrate some of the activities. Trial teams who piloted the worksheets found them easy to use and thought-provoking. Teams who piloted the strategy agreed with our categorisation of activities. No additional activities to include were identified through the piloting. The strategy was revised and then launched to the unit. Study teams were offered support from the Communications Team to complete the worksheets. Further feedback and examples to use in the guidance were encouraged. The strategy is based around five strands of activity that apply at the unit and study levels, across the life-course of our research, described in Table 1 . Table 2 shows the unit-level activities under each of these strands. Figure 1 outlines the different activities that may be appropriate for our clinical studies in each of the five strands of our strategy, across the life-course of the study. Those in green are considered essential, while those in orange are highly recommended. 
Those in yellow are for consideration, as they might not be appropriate for every study. Supplementary Materials contains the worksheets for the different study stages. The strategy has been incorporated into the training we provide for our staff on ‘Planning for Impact’, and has been promoted via internal meetings, on the intranet, and in the internal newsletter. We developed a knowledge transfer and exchange strategy for our clinical studies, focusing on five areas of activity, across the lifecycle of a study, from planning through to translation of results: Partnerships with external stakeholders (including patient and public involvement) Communication Maximising the scientific value of our studies Strengthening capacity Learning and sharing The strategy and associated tools and guidance provide a structured approach to help study teams think through knowledge transfer and exchange at different stages of their project and record that thinking, which may be helpful when evaluating activities or reporting to funders. However, the process of completing the worksheets and implementing the activities does take time, which may be a barrier to some busy trial teams engaging with the strategy. There are numerous models and frameworks for knowledge transfer in the published literature [ 6 , 8 – 17 ]. Ward et al. found 28 different models in their 2009 review , from which they identified five common components of the knowledge transfer process, which overlap with the four research stages of our strategy (they go further than our research strategy, to research utilisation, which is beyond the scope of our strategy, as that is carried out by health care practitioners rather than researchers). Their problem identification and communication component links to some of our activities in the ‘planning stage’, particularly patient and public involvement to inform the research question; engaging with external stakeholders to inform research question and design; and building in multidisciplinary aspects needed to influence policy and practice. Their analysis of the context component is demonstrated in our activities of mapping key stakeholders to identify which organisations we should be engaging with; development of research impact strategies and capturing current guidelines/practice. Their knowledge transfer activities or interventions component could include many of the activities under the communication (‘distribution’) and partnership (‘linkage’) strands of our strategy, primarily at the results and translation of results stages. Where our strategy differs from many of the existing knowledge transfer models is its direct application to clinical trial, observational studies and meta-analysis research, explicitly focusing on the practical actions study teams and clinical trials units can undertake throughout the research lifecourse to enable impact. Many of the existing models and frameworks focus instead on the perspective of the (potential) information user, when seeking to apply evidence in practice , or identify factors for researchers to consider , or focus more narrowly on one strand of activities from our strategy . Our strategy considers not just the clinical implementation of study results, but also impact on science through data and sample sharing and methodological developments generated from the research. 
Another difference from most existing frameworks is that our strategy identifies patient and public involvement as an essential part of knowledge transfer and exchange (within the partnership strand of activities), from identifying research questions through to advocating for the translation of results. As such, we hope our strategy will be of use to other researchers thinking about what they can do to maximise and accelerate the impact of their research. Our strategy, focusing on five strands of knowledge transfer and exchange activities across the lifecycle of clinical trials and meta-analyses, may help researchers systematically identify actions that improve the usefulness and uptake of their study results. Supplementary Material 1. | Review | biomedical | en | 0.999997 |
PMC11697641 | Shoulder symptoms are a prevalent source of musculoskeletal pain and disability, affecting approximately one-quarter of the population. Shoulder imaging is frequently used to complement clinical examination and may detect abnormalities such as degenerative and traumatic rotator cuff injuries, labral and biceps pathology, glenohumeral and acromioclavicular joint osteoarthritis (AC OA), subacromial bursal enlargement or inflammation, and fractures, most commonly of the humeral head or clavicle. While it seems logical to associate these structural abnormalities with symptoms and to consider surgical correction if symptoms persist, many of these abnormalities are also commonly observed in asymptomatic individuals, particularly in the aging population [6–8]. Imaging modalities such as X-ray, ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) have distinct strengths and limitations in identifying structural abnormalities. X-rays are cost-effective for bony structures but lack soft tissue assessment, while US provides dynamic imaging of soft tissues but is operator-dependent. CT excels in detailed visualization of bony anatomy but entails higher radiation exposure and has limited soft tissue utility. MRI, the gold standard, effectively evaluates both soft tissue and bone but is costly and less accessible. The overall aim of the SystematiC Review of shoUlder imaging abnormaliTies IN asYmptomatic adults (SCRUTINY) study was to summarize the prevalence of shoulder imaging abnormalities in asymptomatic adults. The primary objective of this paper was to assess the prevalence of abnormalities of the acromioclavicular (AC) joint and subacromial (SA) space from (a) population-based studies, and (b) other study populations, such as volunteers, healthcare populations, and athletes. Our secondary objective was to compare the prevalence of imaging abnormalities in adults with and without symptoms from the same or comparable study populations. The SCRUTINY systematic review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 Statement and is registered with PROSPERO. This paper presents findings related to abnormalities of the AC joint and SA space. Part I of the SCRUTINY study details the findings concerning the glenohumeral joint, while Part III focuses on rotator cuff abnormalities. Observational population-based studies with asymptomatic adult participants (18 years and older) reporting on the prevalence of (i) AC OA, (ii) SA bursal abnormalities, (iii) SA space abnormalities, and (iv) SA calcification, as detected by X-ray, ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI), were included. Given the limited number of population-based studies (those conducted in general populations rather than recruiting from specific groups such as athletes or individuals with particular characteristics), we also included research involving other groups, such as community volunteers, healthcare populations, and athletes. Studies that reported on both asymptomatic and symptomatic shoulders, whether from the same individuals or from different individuals within the same study population, were also included. Detailed eligibility criteria are provided in Supplementary Table 1. We conducted a comprehensive search of Ovid MEDLINE, Embase, CINAHL, and Web of Science from their inception up to June 12, 2023, without imposing language restrictions.
The search strings used for each database are detailed in Supplementary Table 2 . Additionally, on June 16, 2023, we performed a backward and forward citation analysis of the included studies using Scopus. The titles and abstracts of identified studies were independently screened by five authors (SLS, RH, RJ, TI, and LR). Full-text papers of potentially eligible studies were then retrieved and thoroughly reviewed to determine their eligibility. Disagreements were resolved by a third author (RB or TI) in cases where consensus could not be achieved. Reasons for the exclusion of ineligible studies were documented. Pairs of reviewers (SLS & RH or TI & LR) independently evaluated each study using a modified version of the risk of bias assessment tool originally developed by Hoy et al. . This adapted version comprised seven items targeting essential domains for assessing the risk of bias in prevalence studies, mainly regarding selection bias and measurement bias. An overall judgment of the risk of bias was assigned as high, moderate, or low. Detailed information regarding the adaptations and guidance for conducting the risk of bias assessment can be found in Supplementary Table 3 . Using a pre-tested data extraction template, we extracted study details, participant demographics (population-based, athletes, or miscellaneous populations including community volunteers and healthcare populations), imaging modalities (X-ray, US, CT, or MRI), and prevalence findings (AC OA, SA bursa, SA space, SA calcification). In instances where studies conducted shoulder imaging but did not provide prevalence data categorized by shoulder symptom status, we contacted the first and last study authors via email to request this information. Given that most of the included studies presented prevalence data of AC joint and SA space abnormalities per shoulder and not per individual, we chose to analyze the data based on the number of shoulders rather than number of participants. Prevalence estimates and their corresponding 95% confidence intervals were calculated using the Freeman-Tukey double arcsine transformation and exact confidence intervals, with each calculation based on one shoulder per individual. Initially, our primary analysis was aimed at the general population. Due to clinical heterogeneity of the included studies, it was inappropriate to perform meta-analyses. We therefore conducted a narrative synthesis of the studies reporting the prevalence of imaging abnormalities in asymptomatic shoulders. We also performed a narrative synthesis of studies reporting the prevalence of structural abnormalities in both asymptomatic and symptomatic shoulders from the same individuals or study populations. However, studies comparing the prevalence of imaging abnormalities in asymptomatic individuals with a different group of participants experiencing symptoms (for example, comparing symptomatic athletes with asymptomatic non-athletes) were excluded from this analysis. Patients and the general public did not participate in the planning or conduct of this systematic review. Currently there is no specific Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) framework tailored for prevalence studies. Consequently, we adapted the GRADE approach for prognostic studies, as described by Iorio et al. (Supplementary Table 4 ). We evaluated the certainty of evidence independently for each outcome and study population. 
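The prevalence calculation described above (a Freeman-Tukey double arcsine transformation with exact confidence intervals, one shoulder per individual) can be illustrated with a short sketch. The snippet below is not the review authors' code; it is a minimal illustration, assuming Python with SciPy, of (a) the double arcsine transform of a single-study proportion and (b) an exact Clopper-Pearson 95% interval, which is one common reading of "exact confidence intervals". The example counts (19 of 20 shoulders with AC OA on X-ray) are taken from the population-based study reported later in the text.

```python
# Minimal sketch (not the authors' code): Freeman-Tukey double arcsine
# transform and an exact (Clopper-Pearson) confidence interval for a
# single-study prevalence. Assumes Python 3 with SciPy installed.
import math
from scipy.stats import beta

def freeman_tukey_double_arcsine(events: int, n: int) -> float:
    """Freeman-Tukey double arcsine transform of the proportion events/n."""
    return (math.asin(math.sqrt(events / (n + 1)))
            + math.asin(math.sqrt((events + 1) / (n + 1))))

def clopper_pearson_ci(events: int, n: int, alpha: float = 0.05) -> tuple:
    """Exact binomial (Clopper-Pearson) confidence interval for a proportion."""
    lower = 0.0 if events == 0 else beta.ppf(alpha / 2, events, n - events + 1)
    upper = 1.0 if events == n else beta.ppf(1 - alpha / 2, events + 1, n - events)
    return lower, upper

# Example: 19 of 20 shoulders with AC OA on X-ray (population-based study)
events, n = 19, 20
print(f"Prevalence: {events / n:.1%}")
print(f"Double arcsine transform: {freeman_tukey_double_arcsine(events, n):.3f}")
lo, hi = clopper_pearson_ci(events, n)
print(f"Exact 95% CI: {lo:.1%} to {hi:.1%}")
```

In a meta-analytic setting the transformed values would be pooled and back-transformed, but, as noted above, the review refrained from pooling because of the clinical heterogeneity of the included studies.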
Deviations from the planned methods, along with their rationale, are outlined in detail in Supplementary Table 5 . A total of 2457 records were identified through database searches, with an additional 1156 records obtained via other methods . Following a full-text review of 186 papers, 93 studies were excluded. The reasons for exclusion are detailed in Supplementary Table 6 . Studies that met the eligibility criteria but did not provide usable prevalence data are listed in Supplementary Table 7 . Fig. 1 PRISMA flow diagram showing search and screening results. Abbreviations: US, ultrasound; MRI, magnetic resonance imaging; SCRUTINY, the SystematiC Review of shoUlder imaging abnormaliTies IN asYmptomatic adults Overall, the SCRUTINY review included 90 studies reported in 93 publications. In this paper, 31 studies with usable data were included, comprising 4 X-ray [ 12 – 15 ], 11 US [ 6 , 16 – 25 ], 15 MRI [ 5 , 7 , 8 , 26 – 37 ], and 1 including both X-ray and MRI . No CT studies were found. There was one population-based study ( n = 20 shoulders) , 16 studies with miscellaneous populations (including volunteers , healthcare populations , a mixed population of volunteers and athletes , or a combination of volunteers and healthcare populations ) , and 14 studies reporting on athletes ( n = 708 shoulders) [ 13 , 15 , 19 , 23 , 25 , 28 , 31 – 37 , 39 ]. Among the athlete studies, four also included volunteers , with data presented separately for each group. Table 1 summarizes the characteristics of 31 studies reporting shoulder prevalence data across population-based, miscellaneous, and athletic groups. The single population-based study involved a longitudinal cohort of 4056 adults, with imaging findings from 30 participants aged 65 years on average, predominantly female (60%) . Table 1 Prevalence of acromioclavicular (AC) joint osteoarthritis (OA), subacromial (SA) bursa abnormality, SA space abnormality, and SA calcification in asymptomatic shoulders of population-based, miscellaneous, and athlete populations according to imaging modality (X-ray, ultrasound, and magnetic resonance imaging [MRI]) Study Study population Study location Mean age, years (range) Women, % No of participants (No of shoulders) AC-joint OA*, % (n/N) SA bursa abnormality†, % (n/N) SA space abnormality‡, % (n/N) SA calcification, % (n/N) X-ray and MRI studies Gill et al. 2014 Population-based X-ray MRI Australia 64.8 (56–74) 60 20 (20) 95 (19/20) 85 (17/20) 90 (18/20) 20 (4/20) 5 (1/20) X-ray studies Worland et al. 2003 Miscellaneous (volunteers) USA 60.2 45 (40–49) 54 (50–59) 65 (60–69) 77 (70–) 50.8 59 (118) 15 (30) 15 (30) 14 (28) 15 (30) NR NR 42.4 (50/118) 20 (6/30) 56.7 (17/30) 25 (7/28) 66.7 (20/30) NR Maquirriain et al. 2006 Miscellaneous (volunteers) Argentina 59.8 (51–76) 6 18 (36) 19.4 (7/36) NR NR 0 (0/36) Khoschnau et al. 2020 Miscellaneous (healthcare population and volunteers) Sweden 66 (50–75) 51 106 (129) 31 (40/129) NR NR NR Maquirriain et al. 2006 Athletes (former elite tennis players) Argentina 57.2 (51–75) 6 18 (36) 41.7 (15/36) NR NR 2.8 (1/36) Wright et al. 2007 Athletes (overhead, baseball pitchers) USA 29 (19–43) 0 57 (57) 47.4 (27/57) NR NR NR Ultrasound studies Wang et al. 2005 Miscellaneous (volunteers and athletes) Taiwan 21 NR 28 (56) 12.5 (7/56) NR NR NR Oschman et al. 2007 Miscellaneous (healthcare population, contralateral rotator cuff tear) South Africa 64 (40–83) 36 50 (50) NR 78 (39/50) 86 (43/50) NR Abate et al. 
2010 Miscellaneous (healthcare population, with and without diabetes) Italy 71 (65–84) 38 80 (160) NR 18.8 (30/160) NR NR Ocguder et al. 2010 Miscellaneous (asymptomatic volunteers) Turkey 25 (18–33) 30 43 (86) NR 0 (0/86) NR 0 (0/86) Girish et al. 2011 Miscellaneous (healthcare population, males with knee problems) USA 56 (40–70) 0 51 (51) 64.7 (33/51) 78.4 (40/51) 5.9 (3/51) 3.9 (2/51) Iagnocco et al. 2013 Miscellaneous (healthy volunteers from four rheumatologic units) Italy 44.2 (20–85) 26 (20–29) 34 (30–39) 44 (40–49) 55 (50–59) 66 (60–) 54 97 (194) 21 (42) 19 (38) 20 (40) 19 (38) 18 (36) 25.7 (50/194) 7.1 (3/42) 7.9 (3/38) 17.5 (7/40) 47.4 (18/38) 52.8 (19/36) 11.3 (22/194) 0 (0/42) 7.9 (3/38) 15 (6/40) 15.8 (6/38) 19.4 (7/36) 2.6 (5/194) 0 (0/42) 0 (0/38) 0 (0/40) 10.5 (4/38) 2.7 (1/36) 18 (35/194) 2.4 (1/42) 10.5 (4/38) 25 (10/40) 5.3 (2/38) 50 (18/36) Sansone et al. 2016 Miscellaneous (healthcare population, females referred to routine gynecological screening) Italy 38.5 (18–60) 100 (509) NR NR NR 13.6 (69/509) Meroni et al. 2017 Miscellaneous (volunteers, working aged women) Italy 36.7 (19–56) 100 228 (456) 0.9 (4/456) 0.2 (1/456) 0 (0/456) 5.7 (26/456) Suzuki et al. 2021 Miscellaneous (volunteers) Japan 51.2 (33–65) 60 20 (40) NR 2.5 (1/40) NR 7.5 (3/40) Eliason et al. 2022 Miscellaneous (healthcare population, primary healthcare patients with unilateral shoulder pain) Sweden 45.0 (20–59) (20–29) (30–39) (40–49) (50–59) 53 115 (115) 14 (14) 19 (19) 35 (35) 47 (47) 13 (15/115) 0 (0/14) 21.1 (4/19) 17.1 (6/35) 10.6 (5/47) 73 (84/115) 35.7 (5/14) 89.5 (17/19) 71.4 (25/35) 78.7 (37/47) NR 17.4 (20/115) 0 (0/14) 10.5 (2/19) 20 (7/35) 23.4 (11/47) Brasseur et al. 2004 Athletes (veteran tennis players) France 55 (37–77) 43 119 (119) NR 22.7 (27/119) NR 25.2 (30/119) Ocguder et al. 2010 Athletes (overhead sports) Turkey 22 (17–40) 18 45 (90) NR 20 (18/90) NR 2.2 (2/90) Suzuki et al. 2021 Athletes (masters level swimmers) Japan 51.8 (33–65) 60 40 (60) NR 11.7 (7/60) NR 16.7 (10/60) Study Study population Study location Mean age, years (range) Women, % No of participants (No of shoulders) AC-joint OA*, % (n/N) SA bursa abnormality†, % (n/N) SA space abnormality‡, % (n/N) SA calcification, % (n/N) MRI studies Chandnani et al. 1992 Miscellaneous (volunteers) USA (25–55) NR 20 (20) 35 (7/20) 0 (0/20) NR NR Neumann et al. 1992 Miscellaneous (volunteers) USA 26 (22–45) 28 55 (32) 43.6 (24/55) 20 (11/55) 3.6 (2/55) NR Needell et al. 1996 Miscellaneous (volunteers without shoulder pain participating in a sports medicine study) USA 54 (19–88) 29 (19–39) 50 (40–60) 75 (61–88) 51 100 (100) 26 (26) 26 (26) 48 (48) 76 (76/100) 38.5 (10/26) 88.5 (23/26) 89.6 (43/48) 33 (33/100) 19.2 (5/26) 19.2 (5/26) 47.9 (23/48) 39 (39/100) 15.4 (4/26) 26.9 (7/26) 58.3 (28/48) NR Stein et al. 2001 Miscellaneous (healthcare population, other musculoskeletal complaint) USA 35 (19–72) 25 (19–30) 42 (31–72) 57 42 (50) (19)(31) 82 (41/50) 68.4 (13/19)90.3 (28/31) NR NR NR Barreto et al. 2019 Miscellaneous (volunteers with unilateral shoulder pain from the community) Brazil 39.4 (18–77) 46 123 (123) 73.2 (90/123) 52.8 (65/123) 13 (16/123) NR Su et al. 2020 Miscellaneous (male volunteers from the study institution) Taiwan 25.3 (22–29) 0 30 (30) NR 13.3 (4/30) NR NR Liu et al. 2021 Miscellaneous (volunteers, healthy non-athletic young adults) USA 24 (20–29) 66 29 (58) 0 (0/58) 0 (0/58) 1.7 (1/58) NR Miniaci et al. 
2002 Athletes (professional male baseball pitchers) Canada 20.1 (18–22) 0 14 (28) 35.7 (10/28) 78.6 (22/28) 46.4 (13/28) NR Connor et al. 2003 Athletes (elite overhead athletes) USA 26.4 (18–38) NR 20 (40) NR 47.5 (19/40) NR NR Reuter et al. 2008 Athletes (Ironman participants) USA 35 (29–62) 29 7 (7) 71.4 (5/7) NR NR NR Del Grande et al. 2016 Athletes (male overhead athletes) USA 19.9 (17–22) 0 19 (19) 21.1 (4/19) 63.2 (12/19) NR NR Celliers et al. 2017 Athletes (elite swimmers) South Africa 18.9 (16–25) 45 20 (29) 34.5 (10/29) 34.5 (10/29) NR NR Hacken et al. 2019 Athletes (college and professional male ice hockey players) USA 22.1 (18–28) 0 25 (49) 8.2 (4/49) NR NR NR Lee et al. 2020 Athletes (elite volleyball players) USA 25.5 (21–30) 46 26 (26) 69.2 (18/26) NR NR NR Su et al. 2020 Athletes (male baseball players) Taiwan 25.6 (18–35) 0 68 (68) NR 55.9 (38/68) NR NR Cooper et al. 2022 Athletes (elite rock climbers) USA 34.1 (20–60) 42 50 (100) 28 (28/100) 79 (79/100) NR NR * = Osteophytes, joint effusion, bone oedema, bony ridging, elevated bone marrow signal, joint narrowing, joint degeneration, joint hypertrophy, articular surface irregularity, articular cartilage thinning, fissuring or degeneration, cortical irregularities, margin irregularity, bone sclerosis, erosions, osteoarthritis, synovial scarring, cystic change † = Bursal effusion, bursal thickening, bursal hypertrophy ‡ = SA space narrowing, SA spurs, SA enthesophytes, acromion osteophytes, acromiohumeral distance (abnormal/narrow), Type III acromion (hooked), SA impingement, AC joint osteophytes impinging the supraspinatus tendon Sixteen studies focused on miscellaneous populations, with a wide range of participants in terms of age (21–71 years) and sex distribution (0–100% female). These included asymptomatic volunteers , individuals with unrelated health conditions , and those with contralateral shoulder symptoms . Fourteen studies focused on athletic populations, including 13 on overhead sports [ 13 , 15 , 19 , 23 , 25 , 28 , 31 – 34 , 36 , 37 , 39 ] and one on ice hockey players , with participant ages ranging from 19 to 57 years and female representation varying between 0 and 60%. All included studies were deemed to have a high overall risk of bias, primarily due to concerns about the representativeness of the target population limiting the generalizability of the findings. This was because the populations studied were not closely representative of the national population, lacked sample frame representativeness, or did not utilize random selection or consecutive series for sample selection . Additionally, there was variation in the outcome definitions across the studies (Supplementary Table 8 ). Fig. 2 Risk of bias summary: review authors judgments about each risk of bias item for each study providing prevalence data per shoulder. All 31 studies included in the review were judged to have a high risk of bias overall Twenty-two studies reported the prevalence of AC OA per shoulder. This included one population-based study that included both X-ray and MRI , 12 studies within the miscellaneous group (comprising 1 X-ray study with a mixed population of volunteers and healthcare patients ; 5 US studies involving healthy volunteers , a mixed group of volunteers and athletes , and healthcare populations ; 6 MRI studies with healthy volunteers , volunteers with unilateral shoulder pain , or other musculoskeletal conditions ), as well as one X-ray and seven MRI [ 32 – 37 , 39 ] studies of athletes. 
Additionally, one X-ray study reported on both athletes and a matched cohort of volunteers. Data categorized by age group were available in four studies within the miscellaneous group [6–8, 24]. Fig. 3 Studies reporting the prevalence of acromioclavicular osteoarthritis (AC OA) (A), subacromial (SA) bursa abnormalities (B), SA space abnormalities (C), and SA calcification per shoulder (D). Studies are arranged according to mean or midpoint age within each study population. Fig. 4 Data stratified by age group were available in four studies reporting the prevalence of acromioclavicular osteoarthritis (AC OA) (A), three studies on subacromial (SA) bursa abnormalities (B), three studies on SA space abnormalities (C), and two studies on SA calcification (D). Overall, the data suggest a trend of increasing prevalence with age. In the population-based study (20 shoulders, mean participant age 65 years), findings consistent with AC OA were observed in 19 shoulders (95%) on X-ray and 17 shoulders (85%) on MRI. Among the 21 studies with non-population-based samples, 483 shoulders (27%) had findings indicative of AC OA. The sample sizes varied from 57 to 129 shoulders in the three X-ray studies, 51 to 456 shoulders in the five US studies, and 7 to 123 shoulders in the 13 MRI studies. The prevalence of AC OA findings within the individual studies ranged from 6 to 47% for X-ray, 1 to 65% for US, and 0 to 82% for MRI. Twenty-one studies reported the prevalence of SA bursa abnormalities per shoulder. This included one population-based study that used MRI, 11 studies within the miscellaneous group (comprising 6 US studies with healthy volunteers and healthcare populations; 5 MRI studies with healthy volunteers and volunteers with unilateral shoulder pain), as well as one US and five MRI studies of athletes. Additionally, two US and one MRI study reported on both athletes and a matched cohort of volunteers. Data categorized by age group were available in three studies within the miscellaneous group. In the population-based study, there were MRI abnormalities in the SA bursa in 18 shoulders (90%). Among the 20 studies with non-population-based samples, 562 shoulders (27%) had SA bursa abnormalities. Sample sizes varied from 50 to 456 in the nine US studies and from 20 to 123 in the 11 MRI studies, and the prevalence of SA bursa abnormalities ranged from 0 to 78% for US and from 0 to 79% for MRI. Eleven studies reported the prevalence of SA space abnormalities per shoulder. This included one population-based study that used X-ray, nine studies within the miscellaneous group (1 X-ray study with healthy volunteers; 4 US studies in either healthy volunteers or healthcare populations; 4 MRI studies with either healthy volunteers or volunteers with unilateral shoulder pain), and one MRI study of athletes. Data categorized by age group were available in three studies within the miscellaneous group. There were X-ray SA space abnormalities in four shoulders (20%) in the population-based study. Among the 10 studies with non-population-based samples, 172 shoulders (14%) showed SA space abnormalities. The sample size for the X-ray study was 118, while it ranged from 50 to 456 across the four US studies and from 28 to 123 across the five MRI studies. The prevalence of SA space abnormalities was 42% in the X-ray study, and ranged from 0 to 86% for US and from 2 to 46% for MRI. Ten studies reported the prevalence of SA calcification per shoulder.
This included one population-based study that used X-ray , five studies (all US) within the miscellaneous group comprising healthy volunteers and healthcare populations , and one US study of athletes . Additionally, one X-ray and two US studies reported on both athletes and a matched cohort of volunteers . Data categorized by age-group were available in two studies within the miscellaneous group . There was SA calcification in one shoulder (5%) in the population-based study . Among the nine studies with non-population-based samples , 198 shoulders (11%) showed SA calcifications. The sample size for the X-ray study was 72, while it ranged from 51 to 509 across eight US studies. The prevalence of SA calcifications was 1% in the X-ray study and ranged from 1 to 25% in the US studies. All prevalence estimates were judged to be of very low certainty. Detailed results of the grading process can be found in Supplementary Table 9 . Ten studies examined the prevalence of imaging findings in both asymptomatic and symptomatic shoulders, as detailed in Supplementary Table 10 . Two studies included findings from both shoulders in participants with unilateral shoulder pain and four studies reported on asymptomatic and symptomatic shoulders from different individuals within the same study population . The remaining studies did not clearly specify whether they reported findings within the same individuals, separate individuals, or a mix of both . Seven studies investigated the prevalence of AC OA, including one X-ray study , one US study , four MRI studies , and one study that used both X-ray and MRI . These studies collectively examined 443 asymptomatic shoulders (ranging from 7 to 129 per study) and 378 symptomatic shoulders (ranging from 10 to 123 per study). In one population-based study, the prevalence of AC OA in asymptomatic shoulders was 85% on MRI and 95% on X-ray, while in symptomatic shoulders, it was 100% on both X-ray and MRI . Across all studies, the prevalence of AC OA varied from 13 to 95% in asymptomatic shoulders and from 20 to 100% in symptomatic shoulders . Fig. 5 Studies reporting the prevalence of both asymptomatic and symptomatic shoulders for acromioclavicular osteoarthritis (AC OA) ( A ), subacromial (SA) bursa abnormalities ( B ), SA space abnormalities ( C ), and SA calcification per shoulder ( D ). Numbers under the authors express the total amount of shoulders (asymptomatic/symptomatic) in the study Seven studies, consisting of 3 US studies [ 23 – 25 ] and 4 MRI studies , investigated the prevalence of SA bursa abnormalities. These studies collectively examined 506 asymptomatic shoulders (sample sizes ranging from 20 to 123 per study) and 350 symptomatic shoulders (sample sizes ranging from 10 to 123 per study). In the single population-based study, the prevalence of SA bursa abnormalities was 90% in asymptomatic shoulders and 100% in symptomatic shoulders . Across all studies, the prevalence of SA bursa abnormalities varied from 0 to 90% in asymptomatic shoulders and from 10 to 100% in symptomatic shoulders . Two studies, one that used X-ray and one that used MRI , investigated the prevalence of SA space abnormalities. These studies collectively examined 143 asymptomatic shoulders (ranging from 20 to 123 per study) and 133 symptomatic shoulders (ranging from 10 to 123 per study). In the single population-based study, the prevalence of SA space abnormalities was 20% in asymptomatic shoulders and 30% in symptomatic shoulders . 
Across all studies, the prevalence of SA space abnormalities varied from 13 to 20% in asymptomatic shoulders and from 15 to 30% in symptomatic shoulders . Five studies, consisting of one X-ray and four US [ 21 , 23 – 25 ] studies, investigated the prevalence of SA calcifications. These studies collectively examined 823 asymptomatic shoulders (ranging from 20 to 509 per study) and 271 symptomatic shoulders (ranging from 10 to 115 per study). In the single population-based study, the prevalence of SA calcification was 5% in asymptomatic shoulders and 20% in symptomatic shoulders . Across all studies, the prevalence of subacromial calcifications varied from 5 to 25% in asymptomatic shoulders and from 20 to 39% in symptomatic shoulders . This systematic review is the first to summarize the prevalence of AC joint and SA space abnormalities in asymptomatic shoulders. We identified one population-based study and 30 additional studies with various study populations. There was considerable variation in prevalence, age groups, genders, and outcome definitions across these studies, but structural changes were frequently observed in asymptomatic shoulders in both population-based and other study populations. Overall, all studies were assessed as having a high risk of bias and their prevalence estimates were judged to be of very low certainty. The prevalence of AC joint and SA space abnormalities was nearly as high in asymptomatic shoulders as in symptomatic shoulders except for subacromial calcification, which was more prevalent in symptomatic shoulders. Since imaging abnormalities are frequently observed in both asymptomatic and symptomatic shoulders, clinicians should exercise caution when linking these findings directly to a patient’s symptoms. Similar observations have been made regarding imaging findings of the glenohumeral joint , and in reviews of other painful musculoskeletal conditions [ 41 – 48 ]. Our review underscores the lack of reliable prevalence estimates for common shoulder imaging abnormalities. Our findings should therefore be interpreted with caution due to the high risk of bias of the included studies and the consequent very low certainty evidence. To establish the true age-specific prevalence of shoulder imaging abnormalities in the general population, further studies with large, representative samples are necessary. There is also a need to establish international consensus on clinically relevant outcome definitions which would facilitate better assessment of comparability across studies, and allow pooling of data across studies which would improve the precision of the prevalence estimates. To our knowledge, this is the first systematic review to synthesize the prevalence of imaging abnormalities in the AC joint and SA space. Previous reviews have reported on the prevalence of abnormalities of the rotator cuff and the glenohumeral joint , and one review has explored the link between imaging abnormalities and symptoms . We conducted a comprehensive literature search covering all commonly used imaging modalities. To improve comparability, we restricted our analysis to studies comparing symptomatic and asymptomatic shoulders within the same populations. We meticulously evaluated the risk of bias for each included study using a modified version of an established risk of bias assessment tool for prevalence studies , and we graded the certainty of evidence for each outcome using GRADE . Our review’s findings are limited by the quality of the available studies. 
The considerable variability in prevalence estimates across studies may be partly explained by their heterogeneity. Contributing factors include differences in study populations, potential selection bias even within the same population groups, and considerable variations in outcome definitions. Unlike findings related to the glenohumeral joint, age did not appear to have as large an impact on prevalence. Participants recruited from healthcare settings had a range of health conditions, such as contralateral shoulder pain, confirmed contralateral rotator cuff tears, and other health issues; these conditions, together with the extent of upper-extremity workload in athletes, may also have affected prevalence estimates. Differences in defining symptom status may also contribute to the wide range of prevalence estimates. Some studies relied solely on symptom questionnaires or interviews, while others also included clinical examinations. Some studies included participants with prior episodes of shoulder pain, while others only enrolled individuals who had never experienced shoulder symptoms. The timeframe for defining asymptomatic shoulders also varied widely; definitions ranged from "no symptoms at recruitment" to specific durations such as one week, one month, one year, or longer. Additionally, some studies did not provide a clear explanation of symptom status or timeframe. There were also differences in how abnormalities were defined and assessed across studies. For example, AC OA definitions varied widely. Only two out of 14 MRI studies used the established Stein classification, while all included X-ray and ultrasound studies applied their own criteria, which could include diverse findings such as osteophytes, joint effusion, bone oedema, joint narrowing, degeneration, hypertrophy, articular surface irregularity, sclerosis, and cystic changes. Similarly, the assessment of SA bursa abnormalities differed. Criteria included bursal effusion, thickening, and hypertrophy. Some studies considered a size over 1 mm as abnormal, while others considered over 2 mm as abnormal. There was also variation in imaging protocols, such as differences in MRI field strength ranging from 0.25 to 3 T, which may affect the diagnostic accuracy of abnormalities. Although we applied a method to consistently count and report abnormalities, our approach was conservative, potentially leading to underestimation of the true prevalence. Additionally, we chose to report abnormalities per shoulder rather than per person. Some studies included both shoulders from the same individual, which could have biased the prevalence estimates if they assumed that when one shoulder was structurally normal the other would be as well. However, most studies reported prevalence per shoulder or included findings for only one shoulder per person. Therefore, we deemed it inappropriate to report prevalence per person in this review. In future studies, we recommend assessing and reporting the prevalence of symptoms and imaging abnormalities on both a per-shoulder and a per-person basis. The true prevalence of AC joint and SA space imaging abnormalities in asymptomatic individuals remains uncertain, with estimates suggesting rates as high as 90 to 95%. Except for SA calcifications, which appear more common in symptomatic shoulders, these abnormalities occur almost as frequently in asymptomatic individuals as in those with symptoms. This highlights the importance of exercising caution when attributing shoulder symptoms to imaging findings.
Effective management of shoulder pain requires a comprehensive assessment of the patient’s medical history and a targeted physical examination. Imaging should be employed judiciously as a supplemental tool, primarily to confirm specific clinical suspicions or to exclude serious conditions such as tumors or infections. Finally, obtaining more accurate prevalence data is critical to guide evidence-based diagnostic and treatment strategies, ensuring appropriate interventions and minimizing unnecessary procedures. Supplementary file 1. | Review | biomedical | en | 0.999997 |
PMC11697674 | Mucopolysaccharidosis type I (MPS I) is a rare autosomal recessive lysosomal storage disease (LSD) linked to pathogenic variants in the IDUA gene. IDUA encodes the α-L-iduronidase enzyme, and its deficiency leads to lysosomal storage of the glycosaminoglycans dermatan sulfate and heparan sulfate. Clinical features are variable, ranging from a severe form with onset before 1 year of age to milder forms with later onset: the Hurler-Scheie and Scheie types. The incidence of this pathology is estimated to range from 1 in 100,000 live births for the Hurler type to 1 in 800,000 for the Scheie type. In the majority of cases of Hurler syndrome, clinical signs appear after birth, and neonatal signs are rare. These clinical signs include musculoskeletal abnormalities (short stature, multiple dysostosis, thoraco-lumbar kyphosis), progressive thickening of facial features (protruding frontal bones, low nasal root with broad tip and anteverted nostrils, round cheeks, thickened lips), cardiomyopathy and valvular anomalies, sensorineural deafness, and enlarged tonsils and adenoids. Developmental delay, particularly in speech, typically arises between 12 and 24 months, accompanied by progressive cognitive and sensory decline. Other manifestations include organomegaly, hernia, hirsutism, hydrocephalus, and diffuse corneal opacity. The first specific clinical signs only appear after a few months of life, linked to progressive lysosomal overload. MPS I with prenatal visceral presentation is particularly rare. While the combination of hepatosplenomegaly and coarse facial features is highly suggestive of a lysosomal disease in children, these signs have never been reported prenatally in MPS I according to our literature search. Prenatal diagnosis is performed mainly on the basis of family history, and a few cases of hydrops have been described, although this is much less frequent than in other lysosomal pathologies. We present what is, to our knowledge, the first case of prenatal MPS I diagnosed based on the presence of antenatal signs of overload, including hepatosplenomegaly and coarse facial features, as early as the second trimester of pregnancy. This diagnosis was confirmed through biochemical and genetic testing. A pregnant woman was referred by a partner center at 26.5 gestational weeks (GW) to the prenatal diagnostic center of Rennes (France). This was her second pregnancy, following a previous delivery of a healthy infant. The couple was not consanguineous, their phenotypes were normal, and they had no significant personal or family history. Morphologic ultrasound examination conducted during the first trimester revealed a normal nuchal translucency of 2 mm (1.06 multiples of the median (MoM); crown-rump length: 77.6 mm) and a single umbilical artery. Additionally, vaginal bleeding related to a placental hematoma was observed. Ultrasound examination at 24.0 GW revealed hepatosplenomegaly and dysmorphic features, including a long and broad philtrum, as well as a few echogenic spots in the liver, spleen, peritoneum, and thymus. The cytomegalovirus (CMV) profile indicated long-standing immunity. Amniocentesis was performed at 26.7 GW for chromosomal microarray analysis (CMA) and trio whole-exome sequencing (WES). CMA was normal, but two likely pathogenic variants (class 4 according to ACMG classification) were identified by WES in the IDUA gene: NM_000203.5:c.[590G > A]; [1139dup]; NP_000194.2:p.[(Gly197Asp)]; [(Leu381Alafs*18)] (Table 1).
No other variants of interest were identified. MPS I was next confirmed by enzymatic analysis in cultured amniocytes, with evidence of a deficiency in α-L-iduronidase activity (Table 1 ). Fig. 1 Morphological studies on Fetal Ultrasonic image at 28 GW. (A) Hepatosplenomegaly (measurements over + 2 SD). (B) Fetal profile with blue arrow pointing to the broad philtrum. (C) Peritoneal echogenic punctuation above the stomach (blue circle) Table 1 Biology results: genetic analysis results; α-L-iduronidase enzyme activity in cultured amniocytes Whole exome and targeted sequencing (fetal DNA) Allele 1: NM_000203.5(IDUA): c.[494-57G > A;590 G > A], inherited from the mother Allele 2: NM_000203.5(IDUA): c.1139dup, inherited from the father α-L-iduronidase enzyme activity in cultured amniocytes. Measured value Laboratory control α-L-iduronidase activity 0.3 µkat/kg 33.8 µkat/kg Hexosaminidase activity (control enzyme) 827 µkat/kg 1687 µkat/kg The couple elected for a medical termination of pregnancy, which was carried out at 35 GW. In France, pregnancy terminations for medical reasons are permitted until its term when a disease of particular severity is diagnosed in the fetus and is incurable at the time of diagnosis, as is the case for severe MPS I. At the parents’ request, only an external examination was performed. The infant’s birth biometrics were as follows: weight, 3140 g (94th percentile); length, 48 cm (80th percentile); occipitofrontal circumference 34 cm (83rd percentile). External examination confirmed hepatomegaly, with hepatic overhang of 4 cm and dysmorphic features, including coarse facial features, bulging or forward-projecting philtrum, broad nasal tip, micrognathia, thin upper lip vermilion, hypertelorism, plagiocephaly, microretrognatism, full and drooping cheeks, large, badly hemmed ears with bulky lobes, bulging eyes and marked suborbital folds . Placenta analysis showed single umbilical artery and micro vacuolized appearance of Hofbauer cells, compatible with lysosomal overload . Fig. 2 Post-termination studies. (A) External examination post-medical abortion at 35 GW; coarse facial features with broad philtrum, broad nasal tip, micrognathia, thin upper lip vermilion, hypertelorism, plagiocephaly, microretrognatism, full and drooping cheeks, bulging or forward-projecting philtrum, large, badly hemmed ears with bulky lobes, bulging eyes, marked suborbital folds. (B) Optical microscopic image showing vacuolization of Hofbauer cells (H&E stain; ×100) Given that the substitution variant (c.590G > A) is located at the canonical acceptor site of exon 6, we investigated the possible splicing impact. This was achieved through the use of a Minigene assay (as detailed in Gaildrat et al. ). In this construct, the c.590G > A variant is responsible for the appearance of a major transcript with complete retention of intron 5, as well as a few alternative transcripts with retention of the last 22, 25 and 28 nucleotides of intron 5. Complete retention of intron 5 leads to a premature stop codon, p.(Phe198Valfs*127). A second construction, using a longer sequence, revealed the complementary role of a 2nd rare variant (c.494-57G > A), in cis of the c.590G > A variant, also altering splicing. This variant creates an additional cryptic splicing site, resulting in the retention of the final 55 nucleotides of intron 4. This, in turn, leads to the formation of a premature stop codon (p.(Arg166Valfs*18)).)). 
These functional studies (enzyme activity and transcript studies) allowed us to reclassify these variants as pathogenic (class 5 according to ACMG classification). The etiology of fetal hepatosplenomegaly is multifactorial. It is crucial to determine the underlying cause, as some diagnoses are amenable to treatment or may have subsequent gestational implications (e.g., neonatal hemochromatosis). Major contributors include fetal infections, summarized by the acronym TORCH (Toxoplasmosis, Other infections (Parvovirus, Syphilis, Zika, Chickenpox, HIV), Rubella, Cytomegalovirus, Herpes Simplex). Hepatomegaly may also result from fetal anemia or hepatic tumor, such as hepatoblastoma, hemangiomas, mesenchymal hamartomas… . Among constitutional genetic causes, trisomy 21 is responsible in 5–10% of cases for transient abnormal myelopoiesis , a pre-leukemic syndrome which is responsible for hepatomegaly in fetuses and newborns . Wiedemann-Beckwith syndrome combines macroglossia, omphalocele, polyhydramnios, macrosomia and visceromegaly with hepatosplenomegaly . Lysosomal storage diseases (LSD) are a classic yet rare cause of hepatosplenomegaly, with few cases arising during the prenatal period and often associated with others signs like hydrops fetalis and/or fetal ascitis. Indeed, hydrops fetalis is the most frequent presentation indicator of lysosomal pathology, while associated antenatal hepatomegaly is seldom documented. In a context of nonimmune hydrops fetalis, the estimated prevalence of LSD is between 1.3 and 8% [ 10 – 12 ] with the most frequently diagnosed conditions (> 70% of cases) being mucopolysaccharidosis type VII , galactosialidosis and sialidosis , infantile free sialic acid storage disease , Gaucher disease , and GM1 gangliosidosis . In addition, a significant number of other lysosomal pathologies have been identified at least once as a cause of hydrops fetalis , including, but rarely, a few cases of MPS I. Another way LSD may manifest during the prenatal period is chondrodysplasia punctata, as observed in mucolipidosis type II and GM1 gangliosidosis , or multiple dysostosis, as in mucolipidosis type II . In MPS I, the earliest signs typically manifest after birth, and are often present from the first month of life but are not necessarily specific: breathing difficulties, otitis media, hearing loss, hernias, hypotonia, feeding difficulties . Consequently, the diagnosis is often made later, except in countries where newborn screening has been introduced . In the MPS I registry study of 115 individuals with Hurler form with no family history, the median age at diagnosis was 0.8 years . The most specific signs are kyphosis, corneal opacity, characteristic coarse facial features and hepatomegaly. However, hepatomegaly is classically one of the later signs, present in 61.4% of patients and detected after a median of 9.8 months in this study . In prenatal care, only a few isolated cases of MPS I with hydrops have been published , with most prenatal diagnoses being made because of family history. In France, pregnancy monitoring includes 3 systematic ultrasounds (first 9–11 WG, second 20–22 WG and third trimester 30–32 WG). It is challenging to diagnose lysosomal pathology prior to the second trimester ultrasound, given that the initial ultrasound signs were documented in the literature at this gestational age. Here, the fetus exhibited indications of visceral overload from the prenatal period. 
This severe expression of the pathology is consistent with the genetic studies, which identified two variants resulting in premature stop codons; the corresponding transcripts are therefore probably degraded by nonsense-mediated decay, leaving little or no mRNA. To date, over 300 pathogenic variants have been described and reported in the IDUA gene. These include some over-represented variants (e.g., p.Trp402Ter, p.Gln70Ter, p.Pro533Arg) as well as more complex and difficult-to-interpret pseudodeficient alleles (e.g., p.His82Gln, p.Ala300Thr). In severe forms, > 79% of genotypes include at least one nonsense/splice/frameshift variant; however, in many cases (i.e., > 20%), the combination of variants is unique to a single patient. The enzymatic studies corroborate this finding, with a marked decrease in α-L-iduronidase activity. Given the grave ramifications of this disease and the 25% risk of recurrence in future pregnancies, prenatal or pre-implantation diagnosis can be offered to the couple. From a genetic standpoint, it is noteworthy that only the c.590G > A variant of the c.[494-57G > A;590G > A] complex allele was identified during the WES. The c.494-57G > A variant is located more than 50 bp from the intron-exon junction and was therefore not covered by WES (the presence of this second variant was confirmed by targeted sequencing in the fetus (Table 1)). However, as it lies upstream of the c.590G > A variant, the c.494-57G > A variant probably has the greater biological impact, although another substitution (c.589G > A, p.(Gly197Ser)) affecting the same codon as the first variant has already been reported as pathogenic. Had it not been associated with the c.590G > A variant, the c.494-57G > A variant alone might not have been detected, and the diagnosis of MPS I might therefore have been delayed or not made at all. In the absence of any previous description of prenatal visceral overload in MPS I, as reported in this case, it is unlikely that the pathology would have been sought by targeted enzymatic techniques, as has been done historically, and as was done here following the genetic suspicion. The advent of prenatal genomics will provide better coverage of such intronic variants and therefore improve diagnostic yield. This also underlines the importance of histological analysis of the placenta (and of the fetus when feasible) in instances of suspected lysosomal disease, as this can assist the diagnostic process when a definitive diagnosis has not been reached during the prenatal period. Microscopically, macrophage overload is a constant feature in lysosomal storage disorders, with macrophages being particularly rich in lysosomes. Lysosomal overload is identified by the presence of vacuoles in affected cells, for example in the placenta, and particularly in Hofbauer cells. Vacuoles may be present as early as the first trimester, though they may only be visible under electron microscopy. In some cases, the location and composition of the vacuoles can assist in formulating a diagnosis. The chorionic villi from placentas of fetuses with MPS I displayed a remarkable degree of vacuolation of stromal cells, with vacuoles being relatively scarce within the cytotrophoblast and occurring more regularly in fibroblasts and endothelial cells.
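The degree of enzymatic deficiency reported in Table 1 can be made explicit with a small calculation. The snippet below is an illustration only, not part of the original analysis; it expresses the measured α-L-iduronidase activity in cultured amniocytes (0.3 µkat/kg) as a percentage of the laboratory control (33.8 µkat/kg), alongside the hexosaminidase control enzyme values from the same table.

```python
# Illustrative calculation only (values taken from Table 1 of this report):
# residual enzyme activity expressed as a percentage of the laboratory control.
iduronidase_patient, iduronidase_control = 0.3, 33.8        # µkat/kg
hexosaminidase_patient, hexosaminidase_control = 827, 1687  # µkat/kg (control enzyme)

residual_idua = 100 * iduronidase_patient / iduronidase_control
residual_hex = 100 * hexosaminidase_patient / hexosaminidase_control

print(f"Residual α-L-iduronidase activity: {residual_idua:.1f}% of control")  # ~0.9%
print(f"Hexosaminidase (control enzyme): {residual_hex:.0f}% of control")     # ~49%
```

Roughly 1% residual α-L-iduronidase activity relative to the laboratory control, with measurable activity of the control enzyme, is what the report describes as a marked deficiency.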
This case illustrates the growing interest in prenatal exome- and genome-level studies for the diagnosis of rare genetic diseases, which make it possible to broaden the clinical spectrum of these diseases, to make informed decisions for the current pregnancy (particularly when ultrasound signs are not specific), and to offer prenatal diagnosis in subsequent pregnancies. | Clinical case | biomedical | en | 0.999996 |