dataset (stringclasses, 1 value) | id (stringlengths, 1 to 5) | messages (listlengths, 2 to 2) |
---|---|---|
query-laysum
|
22944
|
[
{
"role": "user",
"content": "Abstract: Background\nNon-small cell lung cancer (NSCLC) is the most common lung cancer, accounting for approximately 80% to 85% of all cases. For people with localised NSCLC (stages I to III), it has been speculated that immunotherapy may be helpful for reducing postoperative recurrence rates, or improving the clinical outcomes of current treatment for unresectable tumours. This is an update of a Cochrane Review first published in 2017 and it includes two new randomised controlled trials (RCTs).\nObjectives\nTo assess the effectiveness and safety of immunotherapy (excluding checkpoint inhibitors) among people with localised NSCLC of stages I to III who received curative intent of radiotherapy or surgery.\nSearch methods\nWe searched the following databases (from inception to 19 May 2021): CENTRAL, MEDLINE, Embase, CINAHL, and five trial registers. We also searched conference proceedings and reference lists of included trials.\nSelection criteria\nWe included RCTs conducted in adults (≥ 18 years) diagnosed with NSCLC stage I to III after surgical resection, and those with unresectable locally advanced stage III NSCLC receiving radiotherapy with curative intent. We included participants who underwent primary surgical treatment, postoperative radiotherapy or chemoradiotherapy if the same strategy was provided for both intervention and control groups.\nData collection and analysis\nTwo review authors independently selected eligible trials, assessed risk of bias, and extracted data. We used survival analysis to pool time-to-event data, using hazard ratios (HRs). We used risk ratios (RRs) for dichotomous data, and mean differences (MDs) for continuous data, with 95% confidence intervals (CIs). Due to clinical heterogeneity (immunotherapeutic agents with different underlying mechanisms), we combined data by applying random-effects models.\nMain results\nWe included 11 RCTs involving 5128 participants (this included 2 new trials with 188 participants since the last search dated 20 January 2017). Participants who underwent surgical resection or received curative radiotherapy were randomised to either an immunotherapy group or a control group. The immunological interventions were active immunotherapy Bacillus Calmette-Guérin (BCG) adoptive cell transfer (i.e. transfer factor (TF), tumour-infiltrating lymphocytes (TIL), dendritic cell/cytokine-induced killer (DC/CIK), antigen-specific cancer vaccines (melanoma-associated antigen 3 (MAGE-A3) and L-BLP25), and targeted natural killer (NK) cells. Seven trials were at high risk of bias for at least one of the risk of bias domains. Three trials were at low risk of bias across all domains and one small trial was at unclear risk of bias as it provided insufficient information. We included data from nine of the 11 trials in the meta-analyses involving 4863 participants.\nThere was no evidence of a difference between the immunotherapy agents and the controls on any of the following outcomes: overall survival (HR 0.94, 95% CI 0.84 to 1.05; P = 0.27; 4 trials, 3848 participants; high-quality evidence), progression-free survival (HR 0.94, 95% CI 0.86 to 1.03; P = 0.19; moderate-quality evidence), adverse events (RR 1.12, 95% CI 0.97 to 1.28; P = 0.11; 4 trials, 4126 evaluated participants; low-quality evidence), and severe adverse events (RR 1.14, 95% CI 0.92 to 1.40; 6 trials, 4546 evaluated participants; low-quality evidence).\nSurvival rates at different time points showed no evidence of a difference between immunotherapy agents and the controls. 
Survival rate at 1-year follow-up (RR 1.02, 95% CI 0.96 to 1.08; I2 = 57%; 7 trials, 4420 participants; low-quality evidence), 2-year follow-up (RR 1.02, 95% CI 0.93 to 1.12; 7 trials, 4420 participants; moderate-quality evidence), 3-year follow-up (RR 0.99, 95% CI 0.90 to 1.09; 7 trials, 4420 participants; I2 = 22%; moderate-quality evidence) and at 5-year follow-up (RR 0.98, 95% CI 0.86 to 1.12; I2 = 0%; 7 trials, 4389 participants; moderate-quality evidence).\nOnly one trial reported overall response rates. Two trials provided health-related quality of life results with contradicting results.\nAuthors' conclusions\nBased on this updated review, the current literature does not provide evidence that suggests a survival benefit from adding immunotherapy (excluding checkpoint inhibitors) to conventional curative surgery or radiotherapy, for people with localised NSCLC (stages I to III). Several ongoing trials with immune checkpoints inhibitors (PD-1/PD-L1) might bring new insights into the role of immunotherapy for people with stages I to III NSCLC.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found that giving immunotherapy, mainly vaccine-based (aiming to activate the host immune system to induce human immune response to tumour-specific antigens), after surgery or radiotherapy did not make people live longer. People who were given vaccine-based immunotherapy did not seem to experience more side effects than the others. We did not find results that could tell us whether the addition of immunotherapy improved quality of life. At the moment, there is no evidence to support or refute giving immunotherapy (mainly vaccine-based) to people with localised NSCLC (stages I to III). RCTs in progress are testing new, more promising immunotherapy drugs (e.g. checkpoint inhibitors)."
}
] |
query-laysum
|
22945
|
[
{
"role": "user",
"content": "Abstract: Background\nDecreased exercise capacity and health-related quality of life (HRQoL) are common in people following lung resection for non-small cell lung cancer (NSCLC). Exercise training has been demonstrated to confer gains in exercise capacity and HRQoL for people with a range of chronic conditions, including chronic obstructive pulmonary disease and heart failure, as well as in people with prostate and breast cancer. A programme of exercise training may also confer gains in these outcomes for people following lung resection for NSCLC. This systematic review updates our 2013 systematic review.\nObjectives\nThe primary aim of this review was to determine the effects of exercise training on exercise capacity and adverse events in people following lung resection (with or without chemotherapy) for NSCLC. The secondary aims were to determine the effects of exercise training on other outcomes such as HRQoL, force-generating capacity of peripheral muscles, pressure-generating capacity of the respiratory muscles, dyspnoea and fatigue, feelings of anxiety and depression, lung function, and mortality.\nSearch methods\nWe searched for additional randomised controlled trials (RCTs) in the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library 2019, Issue 2 of 12), MEDLINE (via PubMed) (2013 to February 2019), Embase (via Ovid) (2013 to February 2019), SciELO (The Scientific Electronic Library Online) (2013 to February 2019), and PEDro (Physiotherapy Evidence Database) (2013 to February 2019).\nSelection criteria\nWe included RCTs in which participants with NSCLC who underwent lung resection were allocated to receive either exercise training, which included aerobic exercise, resistance exercise, or a combination of both, or no exercise training.\nData collection and analysis\nTwo review authors screened the studies and identified those eligible for inclusion. We used either postintervention values (with their respective standard deviation (SD)) or mean changes (with their respective SD) in the meta-analyses that reported results as mean difference (MD). In meta-analyses that reported results as standardised mean difference (SMD), we placed studies that reported postintervention values and those that reported mean changes in separate subgroups. We assessed the certainty of evidence for each outcome by downgrading or upgrading the evidence according to GRADE criteria.\nMain results\nAlong with the three RCTs included in the original version of this review (2013), we identified an additional five RCTs in this update, resulting in a total of eight RCTs involving 450 participants (180 (40%) females). The risk of selection bias in the included studies was low and the risk of performance bias high. Six studies explored the effects of combined aerobic and resistance training; one explored the effects of combined aerobic and inspiratory muscle training; and one explored the effects of combined aerobic, resistance, inspiratory muscle training and balance training. On completion of the intervention period, compared to the control group, exercise capacity expressed as the peak rate of oxygen uptake (VO2peak) and six-minute walk distance (6MWD) was greater in the intervention group (VO2peak: MD 2.97 mL/kg/min, 95% confidence interval (CI) 1.93 to 4.02 mL/kg/min, 4 studies, 135 participants, moderate-certainty evidence; 6MWD: MD 57 m, 95% CI 34 to 80 m, 5 studies, 182 participants, high-certainty evidence). 
One adverse event (hip fracture) related to the intervention was reported in one of the included studies. The intervention group also achieved greater improvements in the physical component of general HRQoL (MD 5.0 points, 95% CI 2.3 to 7.7 points, 4 studies, 208 participants, low-certainty evidence); improved force-generating capacity of the quadriceps muscle (SMD 0.75, 95% CI 0.4 to 1.1, 4 studies, 133 participants, moderate-certainty evidence); and less dyspnoea (SMD −0.43, 95% CI −0.81 to −0.05, 3 studies, 110 participants, very low-certainty evidence). We observed uncertain effects on the mental component of general HRQoL, disease-specific HRQoL, handgrip force, fatigue, and lung function. There were insufficient data to comment on the effect of exercise training on maximal inspiratory and expiratory pressures and feelings of anxiety and depression. Mortality was not reported in the included studies.\nAuthors' conclusions\nExercise training increased exercise capacity and quadriceps muscle force of people following lung resection for NSCLC. Our findings also suggest improvements on the physical component score of general HRQoL and decreased dyspnoea. This systematic review emphasises the importance of exercise training as part of the postoperative management of people with NSCLC.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We updated the evidence about the effect of exercise training after lung surgery for non-small cell lung cancer (NSCLC) on fitness level; adverse events; quality of life; strength of the leg, hand, and breathing muscles; breathlessness; fatigue; feelings of anxiety and depression; and lung function."
}
] |
query-laysum
|
22946
|
[
{
"role": "user",
"content": "Abstract: Background\nAlopecia areata is an autoimmune disease leading to nonscarring hair loss on the scalp or body. There are different treatments including immunosuppressants, hair growth stimulants, and contact immunotherapy.\nObjectives\nTo assess the benefits and harms of the treatments for alopecia areata (AA), alopecia totalis (AT), and alopecia universalis (AU) in children and adults.\nSearch methods\nThe Cochrane Skin Specialised Register, CENTRAL, MEDLINE, Embase, ClinicalTrials.gov and WHO ICTRP were searched up to July 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) that evaluated classical immunosuppressants, biologics, small molecule inhibitors, contact immunotherapy, hair growth stimulants, and other therapies in paediatric and adult populations with AA.\nData collection and analysis\nWe used the standard procedures expected by Cochrane including assessment of risks of bias using RoB2 and the certainty of the evidence using GRADE. The primary outcomes were short-term hair regrowth ≥ 75% (between 12 and 26 weeks of follow-up), and incidence of serious adverse events. The secondary outcomes were long-term hair regrowth ≥ 75% (greater than 26 weeks of follow-up) and health-related quality of life. We could not perform a network meta-analysis as very few trials compared the same treatments. We presented direct comparisons and made a narrative description of the findings.\nMain results\nWe included 63 studies that tested 47 different treatments in 4817 randomised participants. All trials used a parallel-group design except one that used a cross-over design. The mean sample size was 78 participants. All trials recruited outpatients from dermatology clinics. Participants were between 2 and 74 years old. The trials included patients with AA (n = 25), AT (n = 1), AU (n = 1), mixed cases (n = 31), and unclear types of alopecia (n = 4).\nThirty-three out of 63 studies (52.3%) reported the proportion of participants achieving short-term hair regrowth ≥ 75% (between 12 and 26 weeks). Forty-seven studies (74.6%) reported serious adverse events and only one study (1.5%) reported health-related quality of life. Five studies (7.9%) reported the proportion of participants with long-term hair regrowth ≥ 75% (greater than 26 weeks).\nAmongst the variety of interventions found, we prioritised some groups of interventions for their relevance to clinical practice: systemic therapies (classical immunosuppressants, biologics, and small molecule inhibitors), and local therapies (intralesional corticosteroids, topical small molecule inhibitors, contact immunotherapy, hair growth stimulants and cryotherapy).\nConsidering only the prioritised interventions, 14 studies from 12 comparisons reported short-term hair regrowth ≥ 75% and 22 studies from 10 comparisons reported serious adverse events (18 reported zero events and 4 reported at least one). 
One study (1 comparison) reported quality of life, and two studies (1 comparison) reported long-term hair regrowth ≥ 75%.\nFor the main outcome of short-term hair regrowth ≥ 75%, the evidence is very uncertain about the effect of oral prednisolone or cyclosporine versus placebo (RR 4.68, 95% CI 0.57 to 38.27; 79 participants; 2 studies; very low-certainty evidence), intralesional betamethasone or triamcinolone versus placebo (RR 13.84, 95% CI 0.87 to 219.76; 231 participants; 1 study; very low-certainty evidence), oral ruxolitinib versus oral tofacitinib (RR 1.08, 95% CI 0.77 to 1.52; 80 participants; 1 study; very low-certainty evidence), diphencyprone or squaric acid dibutil ester versus placebo (RR 1.16, 95% CI 0.79 to 1.71; 99 participants; 1 study; very-low-certainty evidence), diphencyprone or squaric acid dibutyl ester versus topical minoxidil (RR 1.16, 95% CI 0.79 to 1.71; 99 participants; 1 study; very low-certainty evidence), diphencyprone plus topical minoxidil versus diphencyprone (RR 0.67, 95% CI 0.13 to 3.44; 30 participants; 1 study; very low-certainty evidence), topical minoxidil 1% and 2% versus placebo (RR 2.31, 95% CI 1.34 to 3.96; 202 participants; 2 studies; very low-certainty evidence) and cryotherapy versus fractional CO2 laser (RR 0.31, 95% CI 0.11 to 0.86; 80 participants; 1 study; very low-certainty evidence). The evidence suggests oral betamethasone may increase short-term hair regrowth ≥ 75% compared to prednisolone or azathioprine (RR 1.67, 95% CI 0.96 to 2.88; 80 participants; 2 studies; low-certainty evidence). There may be little to no difference between subcutaneous dupilumab and placebo in short-term hair regrowth ≥ 75% (RR 3.59, 95% CI 0.19 to 66.22; 60 participants; 1 study; low-certainty evidence) as well as between topical ruxolitinib and placebo (RR 5.00, 95% CI 0.25 to 100.89; 78 participants; 1 study; low-certainty evidence). However, baricitinib results in an increase in short-term hair regrowth ≥ 75% when compared to placebo (RR 7.54, 95% CI 3.90 to 14.58; 1200 participants; 2 studies; high-certainty evidence).\nFor the incidence of serious adverse events, the evidence is very uncertain about the effect of topical ruxolitinib versus placebo (RR 0.33, 95% CI 0.01 to 7.94; 78 participants; 1 study; very low-certainty evidence). Baricitinib and apremilast may result in little to no difference in the incidence of serious adverse events versus placebo (RR 1.47, 95% CI 0.60 to 3.60; 1224 participants; 3 studies; low-certainty evidence). The same result is observed for subcutaneous dupilumab compared to placebo (RR 1.54, 95% CI 0.07 to 36.11; 60 participants; 1 study; low-certainty evidence).\nFor health-related quality of life, the evidence is very uncertain about the effect of oral cyclosporine compared to placebo (MD 0.01, 95% CI -0.04 to 0.07; very low-certainty evidence).\nBaricitinib results in an increase in long-term hair regrowth ≥ 75% compared to placebo (RR 8.49, 95% CI 4.70 to 15.34; 1200 participants; 2 studies; high-certainty evidence).\nRegarding the risk of bias, the most relevant issues were the lack of details about randomisation and allocation concealment, the limited efforts to keep patients and assessors unaware of the assigned intervention, and losses to follow-up.\nAuthors' conclusions\nWe found that treatment with baricitinib results in an increase in short- and long-term hair regrowth compared to placebo. 
Although we found inconclusive results for the risk of serious adverse effects with baricitinib, the reported small incidence of serious adverse events in the baricitinib arm should be balanced with the expected benefits. We also found that the impact of other treatments on hair regrowth is very uncertain. Evidence for health-related quality of life is still scant.\n\nGiven the provided abstract, please respond to the following query: \"Limitations of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our confidence in the evidence is only high for one comparison (baricitinib compared to placebo) and, for the remaining identified evidence, our confidence is generally low since most of the therapies have been evaluated in studies with weaknesses in their design, with few patients, and that have not been replicated to evaluate the consistency of the results."
}
] |
query-laysum
|
22947
|
[
{
"role": "user",
"content": "Abstract: Background\nHistorically, women have generally been attended and supported by other women during labour. However, in hospitals worldwide, continuous support during labour has often become the exception rather than the routine.\nObjectives\nThe primary objective was to assess the effects, on women and their babies, of continuous, one-to-one intrapartum support compared with usual care, in any setting. Secondary objectives were to determine whether the effects of continuous support are influenced by:\n1. Routine practices and policies in the birth environment that may affect a woman's autonomy, freedom of movement and ability to cope with labour, including: policies about the presence of support people of the woman's own choosing; epidural analgesia; and continuous electronic fetal monitoring.\n2. The provider's relationship to the woman and to the facility: staff member of the facility (and thus has additional loyalties or responsibilities); not a staff member and not part of the woman's social network (present solely for the purpose of providing continuous support, e.g. a doula); or a person chosen by the woman from family members and friends;\n3. Timing of onset (early or later in labour);\n4. Model of support (support provided only around the time of childbirth or extended to include support during the antenatal and postpartum periods);\n5. Country income level (high-income compared to low- and middle-income).\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 October 2016), ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (1 June 2017) and reference lists of retrieved studies.\nSelection criteria\nAll published and unpublished randomised controlled trials, cluster-randomised trials comparing continuous support during labour with usual care. Quasi-randomised and cross-over designs were not eligible for inclusion.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We sought additional information from the trial authors. The quality of the evidence was assessed using the GRADE approach.\nMain results\nWe included a total of 27 trials, and 26 trials involving 15,858 women provided usable outcome data for analysis. These trials were conducted in 17 different countries: 13 trials were conducted in high-income settings; 13 trials in middle-income settings; and no studies in low-income settings. Women allocated to continuous support were more likely to have a spontaneous vaginal birth (average RR 1.08, 95% confidence interval (CI) 1.04 to 1.12; 21 trials, 14,369 women; low-quality evidence) and less likely to report negative ratings of or feelings about their childbirth experience (average RR 0.69, 95% CI 0.59 to 0.79; 11 trials, 11,133 women; low-quality evidence) and to use any intrapartum analgesia (average RR 0.90, 95% CI 0.84 to 0.96; 15 trials, 12,433 women). In addition, their labours were shorter (MD -0.69 hours, 95% CI -1.04 to -0.34; 13 trials, 5429 women; low-quality evidence), they were less likely to have a caesarean birth (average RR 0.75, 95% CI 0.64 to 0.88; 24 trials, 15,347 women; low-quality evidence) or instrumental vaginal birth (RR 0.90, 95% CI 0.85 to 0.96; 19 trials, 14,118 women), regional analgesia (average RR 0.93, 95% CI 0.88 to 0.99; 9 trials, 11,444 women), or a baby with a low five-minute Apgar score (RR 0.62, 95% CI 0.46 to 0.85; 14 trials, 12,615 women). 
Data from two trials for postpartum depression were not combined due to differences in women, hospitals and care providers included; both trials found fewer women developed depressive symptomatology if they had been supported in birth, although this may have been a chance result in one of the studies (low-quality evidence). There was no apparent impact on other intrapartum interventions, maternal or neonatal complications, such as admission to special care nursery (average RR 0.97, 95% CI 0.76 to 1.25; 7 trials, 8897 women; low-quality evidence), and exclusive or any breastfeeding at any time point (average RR 1.05, 95% CI 0.96 to 1.16; 4 trials, 5584 women; low-quality evidence).\nSubgroup analyses suggested that continuous support was most effective at reducing caesarean birth, when the provider was present in a doula role, and in settings in which epidural analgesia was not routinely available. Continuous labour support in settings where women were not permitted to have companions of their choosing with them in labour, was associated with greater likelihood of spontaneous vaginal birth and lower likelihood of a caesarean birth. Subgroup analysis of trials conducted in high-income compared with trials in middle-income countries suggests that continuous labour support offers similar benefits to women and babies for most outcomes, with the exception of caesarean birth, where studies from middle-income countries showed a larger reduction in caesarean birth. No conclusions could be drawn about low-income settings, electronic fetal monitoring, the timing of onset of continuous support or model of support.\nRisk of bias varied in included studies: no study clearly blinded women and personnel; only one study sufficiently blinded outcome assessors. All other domains were of varying degrees of risk of bias. The quality of evidence was downgraded for lack of blinding in studies and other limitations in study designs, inconsistency, or imprecision of effect estimates.\nAuthors' conclusions\nContinuous support during labour may improve outcomes for women and infants, including increased spontaneous vaginal birth, shorter duration of labour, and decreased caesarean birth, instrumental vaginal birth, use of any analgesia, use of regional analgesia, low five-minute Apgar score and negative feelings about childbirth experiences. We found no evidence of harms of continuous labour support. Subgroup analyses should be interpreted with caution, and considered as exploratory and hypothesis-generating, but evidence suggests continuous support with certain provider characteristics, in settings where epidural analgesia was not routinely available, in settings where women were not permitted to have companions of their choosing in labour, and in middle-income country settings, may have a favourable impact on outcomes such as caesarean birth. Future research on continuous support during labour could focus on longer-term outcomes (breastfeeding, mother-infant interactions, postpartum depression, self-esteem, difficulty mothering) and include more woman-centred outcomes in low-income settings.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Research shows that women value and benefit from the presence of a support person during labour and childbirth. This support may include emotional support (continuous presence, reassurance and praise) and information about labour progress. It may also include advice about coping techniques, comfort measures (comforting touch, massage, warm baths/showers, encouraging mobility, promoting adequate fluid intake and output) and speaking up when needed on behalf of the woman. Lack of continuous support during childbirth has led to concerns that the experience of labour and birth may have become dehumanised.\nModern obstetric care frequently means women are required to experience institutional routines. These may have adverse effects on the quality, outcomes and experience of care during labour and childbirth. Supportive care during labour may enhance physiological labour processes, as well as women's feelings of control and confidence in their own strength and ability to give birth. This may reduce the need for obstetric intervention and also improve women's experiences."
}
] |
query-laysum
|
22948
|
[
{
"role": "user",
"content": "Abstract: Background\nWork-related musculoskeletal disorders are a group of musculoskeletal disorders that comprise one of the most common disorders related to occupational sick leave worldwide. Musculoskeletal disorders accounted for 21% to 28% of work absenteeism days in 2017/2018 in the Netherlands, Germany and the UK. There are several interventions that may be effective in tackling the high prevalence of work-related musculoskeletal disorders among workers, such as physical, cognitive and organisational interventions. In this review, we will focus on work breaks as a measure of primary prevention, which are a type of organisational intervention.\nObjectives\nTo compare the effectiveness of different work-break schedules for preventing work-related musculoskeletal symptoms and disorders in healthy workers, when compared to conventional or alternate work-break schedules.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, PsycINFO, SCOPUS, Web of Science, ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform, to April/May 2019. In addition, we searched references of the included studies and of relevant literature reviews.\nSelection criteria\nWe included randomised controlled trials (RCTs) of work-break interventions for preventing work-related musculoskeletal symptoms and disorders among workers. The studies were eligible for inclusion when intervening on work-break frequency, duration and/or type, compared to conventional or an alternate work-break intervention. We included only those studies in which the investigated population included healthy, adult workers, who were free of musculoskeletal complaints during study enrolment, without restrictions to sex or occupation. The primary outcomes were newly diagnosed musculoskeletal disorders, self-reported musculoskeletal pain, discomfort or fatigue, and productivity or work performance. We considered workload changes as secondary outcomes.\nData collection and analysis\nTwo review authors independently screened titles, abstracts and full texts for study eligibility, extracted data and assessed risk of bias. We contacted authors for additional study data where required. We performed meta-analyses, where possible, and we assessed the overall quality of the evidence for each outcome of each comparison using the five GRADE considerations.\nMain results\nWe included six studies (373 workers), four parallel RCTs, one cross-over RCT, and one combined parallel plus cross-over RCT. At least 295 of the employees were female and at least 39 male; for the remaining 39 employees, the sex was not specified in the study trial. The studies investigated different work-break frequencies (five studies) and different work-break types (two studies). None of the studies investigated different work-break durations. We judged all studies to have a high risk of bias. The quality of the evidence for the primary outcomes of self-reported musculoskeletal pain, discomfort and fatigue was low; the quality of the evidence for the primary outcomes of productivity and work performance was very low. The studies were executed in Europe or Northern America, with none from low- to middle-income countries. 
One study could not be included in the data analyses, because no detailed results have been reported.\nChanges in the frequency of work breaks\nThere is low-quality evidence that additional work breaks may not have a considerable effect on musculoskeletal pain, discomfort or fatigue, when compared with no additional work breaks (standardised mean difference (SMD) -0.08; 95% CI -0.35 to 0.18; three studies; 225 participants). Additional breaks may not have a positive effect on productivity or work performance, when compared with no additional work breaks (SMD -0.07; 95% CI -0.33 to 0.19; three studies; 225 participants; very low-quality evidence).\nWe found low-quality evidence that additional work breaks may not have a considerable effect on participant-reported musculoskeletal pain, discomfort or fatigue (MD 1.80 on a 100-mm VAS scale; 95% CI -41.07 to 64.37; one study; 15 participants), when compared to work breaks as needed (i.e. microbreaks taken at own discretion). There is very low-quality evidence that additional work breaks may have a positive effect on productivity or work performance, when compared to work breaks as needed (MD 542.5 number of words typed per 3-hour recording session; 95% CI 177.22 to 907.78; one study; 15 participants).\nAdditional higher frequency work breaks may not have a considerable effect on participant-reported musculoskeletal pain, discomfort or fatigue (MD 11.65 on a 100-mm VAS scale; 95% CI -41.07 to 64.37; one study; 10 participants; low-quality evidence), when compared to additional lower frequency work breaks. We found very low-quality evidence that additional higher frequency work breaks may not have a considerable effect on productivity or work performance (MD -83.00 number of words typed per 3-hour recording session; 95% CI -305.27 to 139.27; one study; 10 participants), when compared to additional lower frequency work breaks.\nChanges in the duration of work breaks\nNo trials were identified that assessed the effect of different durations of work breaks.\nChanges in the type of work break\nWe found low-quality evidence that active breaks may not have a considerable positive effect on participant-reported musculoskeletal pain, discomfort and fatigue (MD -0.17 on a 1-7 NRS scale; 95% CI -0.71 to 0.37; one study; 153 participants), when compared to passive work breaks.\nRelaxation work breaks may not have a considerable effect on participant-reported musculoskeletal pain, discomfort or fatigue, when compared to physical work breaks (MD 0.20 on a 1-7 NRS scale; 95% CI -0.43 to 0.82; one study; 97 participants; low-quality evidence).\nAuthors' conclusions\nWe found low-quality evidence that different work-break frequencies may have no effect on participant-reported musculoskeletal pain, discomfort and fatigue. For productivity and work performance, evidence was of very low-quality that different work-break frequencies may have a positive effect. For different types of break, there may be no effect on participant-reported musculoskeletal pain, discomfort and fatigue according to low-quality evidence. Further high-quality studies are needed to determine the effectiveness of frequency, duration and type of work-break interventions among workers, if possible, with much higher sample sizes than the studies included in the current review. 
Furthermore, work-break interventions should be reconsidered, taking into account worker populations other than office workers, and taking into account the possibility of combining work-break interventions with other interventions such as ergonomic training or counselling, which may possibly have an effect on musculoskeletal outcomes and work performance.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "At present, we conclude that there is very low- to low-quality evidence that different work-break frequencies and types may not considerably reduce the incidence of musculoskeletal disorders. Although the results suggest that there may be a positive effect of different work-break frequencies on productivity and work performance, there is a need for high-quality studies with large enough sample sizes to assess the effectiveness of different work-break interventions. Furthermore, work-break interventions should be reconsidered, taking into account worker populations other than office workers and the possibility of combining work-break interventions with other interventions such as ergonomic training or counselling, which may possibly prevent musculoskeletal disorders."
}
] |
query-laysum
|
22949
|
[
{
"role": "user",
"content": "Abstract: Background\nGlaucoma is a group of optic neuropathies characterized by progressive degeneration of the retinal ganglion cells, axonal loss and irreversible visual field defects. Glaucoma is classified as primary or secondary, and worldwide, primary glaucoma is a leading cause of irreversible blindness. Several subtypes of glaucoma exist, and primary open-angle glaucoma (POAG) is the most common. The etiology of POAG is unknown, but current treatments aim to reduce intraocular pressure (IOP), thus preventing the onset and progression of the disease. Compared with traditional antiglaucomatous treatments, rho kinase inhibitors (ROKi) have a different pharmacodynamic. ROKi is the only current treatment that effectively lowers IOP by modulating the drainage of aqueous humor through the trabecular meshwork and Schlemm's canal. As ROKi are introduced into the market more widely, it is important to assess the efficacy and potential AEs of the treatment.\nObjectives\nTo compare the efficacy and safety of ROKi with placebo or other glaucoma medication in people diagnosed with open-angle glaucoma (OAG), primary open-angle glaucoma (POAG) or ocular hypertension (OHT).\nSearch methods\nWe used standard Cochrane methods and searched databases on 11 December 2020.\nSelection criteria\nWe included randomized clinical trials examining commercially available ROKi-based monotherapy or combination therapy compared with placebo or other IOP-lowering medical treatments in people diagnosed with (P)OAG or OHT. We included trials where ROKi were administered according to official glaucoma guidelines. There were no restrictions regarding type, year or status of the publication.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. Two review authors independently screened studies, extracted data, and evaluated risk of bias by using Cochrane's RoB 2 tool.\nMain results\nWe included 17 trials with 4953 participants diagnosed with (P)OAG or OHT. Fifteen were multicenter trials and 15 were masked trials. All participants were aged above 18 years. Trial duration varied from 24 hours to 12 months. Trials were conducted in the USA, Canada and Japan. Sixteen trials were funded by pharmaceutical companies, and one trial provided no information about funding sources. The trials compared ROKi monotherapy (netarsudil or ripasudil) or combination therapy with latanoprost (prostaglandin analog) or timolol (beta-blocker) with placebo, timolol, latanoprost or netarsudil. Reported outcomes were IOP and safety. Meta-analyses were applied to 13 trials (IOP reduction from baseline) and 15 trials (ocular AEs).\nOf the trials evaluating IOP, seven were at low risk, three had some concerns, and three were at high risk of bias. Three trials found that netarsudil monotherapy may be superior to placebo (mean difference [MD] 3.11 mmHg, 95% confidence interval [CI] 2.59 to 3.62; I2 = 0%; 155 participants; low-certainty evidence). Evidence from three trials found that timolol may be superior to netarsudil with an MD of 0.66 mmHg (95% CI 0.41 to 0.91; I2 = 0%; 1415 participants; low-certainty evidence). 
Evidence from four trials found that latanoprost may be superior to netarsudil with an MD of 0.97 mmHg (95% CI 0.67 to 1.27; I2 = 4%; 1283 participants; moderate-certainty evidence).\nEvidence from three trials showed that, compared with monotherapy with latanoprost, combination therapy with netarsudil and latanoprost probably led to an additional pooled mean IOP reduction from baseline of 1.64 mmHg (95% CI −2.16 to −1.11; 1114 participants). Evidence from three trials showed that, compared with monotherapy with netarsudil, combination therapy with netarsudil and latanoprost probably led to an additional pooled mean IOP reduction from baseline of 2.66 mmHg (95% CI −2.98 to −2.35; 1132 participants). The certainty of evidence was moderate. One trial showed that, compared with timolol monotherapy, combination therapy with ripasudil and timolol may lead to an IOP reduction from baseline of 0.75 mmHg (95% CI −1.29 to −0.21; 208 participants). The certainty of evidence was moderate.\nOf the trials assessing total ocular AEs, three were at low risk, four had some concerns, and eight were at high risk of bias.\nWe found very low-certainty evidence that netarsudil may lead to more ocular AEs compared with placebo, with 66 more ocular AEs per 100 person-months (95% CI 28 to 103; I2 = 86%; 4 trials, 188 participants). We found low-certainty evidence that netarsudil may lead to more ocular AEs compared with latanoprost, with 29 more ocular AEs per 100 person-months (95% CI 17 to 42; I2 = 95%; 4 trials, 1286 participants).\nWe found moderate-certainty evidence that, compared with timolol, netarsudil probably led to 21 additional ocular AEs (95% CI 14 to 27; I2 = 93%; 4 trials, 1678 participants). Data from three trials (1132 participants) showed no evidence of differences in the incidence rate of AEs between combination therapy with netarsudil and latanoprost and netarsudil monotherapy (1 more event per 100 person-months, 95% CI 0 to 3); however, the certainty of evidence was low. Similarly, we found low-certainty evidence that, compared with latanoprost, combination therapy with netarsudil and latanoprost may cause 29 more ocular events per 100 person-months (95% CI 11 to 47; 3 trials, 1116 participants). We found moderate-certainty evidence that, compared with timolol monotherapy, combination therapy with ripasudil and timolol probably causes 35 more ocular events per 100 person-months (95% CI 25 to 45; 1 trial, 208 participants). In all included trials, ROKi was reportedly not associated with any particular serious AEs.\nAuthors' conclusions\nThe current evidence suggests that in people diagnosed with OHT or (P)OAG, the hypotensive effect of netarsudil may be inferior to latanoprost and slightly inferior to timolol. Combining netarsudil and latanoprost probably further reduces IOP compared with monotherapy. Netarsudil as mono- or combination therapy may result in more ocular AEs. However, the certainty of evidence was very low or low for all comparisons except timolol. In general, AEs were described as mild, transient, and reversible upon treatment discontinuation. ROKi was not associated with any particular serious AEs. Future trials of sufficient size and follow-up should be conducted to provide reliable information about glaucoma progression, relevant IOP measurements and a detailed description of AEs using similar terminology. 
This would ensure the robustness and confidence of the results and assess the intermediate- and long-term efficacy and safety of ROKi.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched medical databases on 11 December 2020."
}
] |
query-laysum
|
22950
|
[
{
"role": "user",
"content": "Abstract: Background\nGuidelines have provided positive recommendations for pulmonary rehabilitation after exacerbations of chronic obstructive pulmonary disease (COPD), but recent studies indicate that postexacerbation rehabilitation may not always be effective in patients with unstable COPD.\nObjectives\nTo assess effects of pulmonary rehabilitation after COPD exacerbations on hospital admissions (primary outcome) and other patient-important outcomes (mortality, health-related quality of life (HRQL) and exercise capacity).\nSearch methods\nWe identified studies through searches of the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, PEDro (Physiotherapy Evidence Database) and the Cochrane Airways Review Group Register of Trials. Searches were current as of 20 October 2015, and handsearches were run up to 5 April 2016.\nSelection criteria\nRandomised controlled trials (RCTs) comparing pulmonary rehabilitation of any duration after exacerbation of COPD versus conventional care. Pulmonary rehabilitation programmes had to include at least physical exercise (endurance or strength exercise, or both). We did not apply a criterion for the minimum number of exercise sessions a rehabilitation programme had to offer to be included in the review. Control groups received conventional community care without rehabilitation.\nData collection and analysis\nWe expected substantial heterogeneity across trials in terms of how extensive rehabilitation programmes were (i.e. in terms of number of completed exercise sessions; type, intensity and supervision of exercise training; and patient education), duration of follow-up (< 3 months vs ≥ 3 months) and risk of bias (generation of random sequence, concealment of random allocation and blinding); therefore, we performed subgroup analyses that were defined before we carried them out. We used standard methods expected by Cochrane in preparing this update, and we used GRADE for assessing the quality of evidence.\nMain results\nFor this update, we added 11 studies and included a total of 20 studies (1477 participants). Rehabilitation programmes showed great diversity in terms of exercise training (number of completed exercise sessions; type, intensity and supervision), patient education (from none to extensive self-management programmes) and how they were organised (within one setting, e.g. pulmonary rehabilitation, to across several settings, e.g. hospital, outpatient centre and home). In eight studies, participants completed extensive pulmonary rehabilitation, and in 12 studies, participants completed pulmonary rehabilitation ranging from not extensive to moderately extensive.\nEight studies involving 810 participants contributed data on hospital readmissions. Moderate-quality evidence indicates that pulmonary rehabilitation reduced hospital readmissions (pooled odds ratio (OR) 0.44, 95% confidence interval (CI) 0.21 to 0.91), but results were heterogenous (I2 = 77%). Extensiveness of rehabilitation programmes and risk of bias may offer an explanation for the heterogeneity, but subgroup analyses were not statistically significant (P values for subgroup effects were between 0.07 and 0.11). Six studies including 670 participants contributed data on mortality. The quality of evidence was low, and the meta-analysis did not show a statistically significant effect of rehabilitation on mortality (pooled OR 0.68, 95% CI 0.28 to 1.67). Again, results were heterogenous (I2 = 59%). 
Subgroup analyses showed statistically significant differences in subgroup effects between trials with more and less extensive rehabilitation programmes and between trials at low and high risk for bias, indicating possible explanations for the heterogeneity. Hospital readmissions and mortality studies newly included in this update showed, on average, significantly smaller effects of rehabilitation than were seen in earlier studies.\nHigh-quality evidence suggests that pulmonary rehabilitation after an exacerbation improves health-related quality of life. The eight studies that used St George's Respiratory Questionnaire (SGRQ) reported a statistically significant effect on SGRQ total score, which was above the minimal important difference (MID) of four points (mean difference (MD) -7.80, 95% CI -12.12 to -3.47; I2 = 64%). Investigators also noted statistically significant and important effects (greater than MID) for the impact and activities domains of the SGRQ. Effects were not statistically significant for the SGRQ symptoms domain. Again, all of these analyses showed heterogeneity, but most studies showed positive effects of pulmonary rehabilitation, some studies showed large effects and others smaller but statistically significant effects. Trials at high risk of bias because of lack of concealment of random allocation showed statistically significantly larger effects on the SGRQ than trials at low risk of bias. High-quality evidence shows that six-minute walk distance (6MWD) improved, on average, by 62 meters (95% CI 38 to 86; I2 = 87%). Heterogeneity was driven particularly by differences between studies showing very large effects and studies showing smaller but statistically significant effects. For both health-related quality of life and exercise capacity, studies newly included in this update showed, on average, smaller effects of rehabilitation than were seen in earlier studies, but the overall results of this review have not changed to an important extent compared with results reported in the earlier version of this review.\nFive studies involving 278 participants explicitly recorded adverse events, four studies reported no adverse events during rehabilitation programmes and one study reported one serious event.\nAuthors' conclusions\nOverall, evidence of high quality shows moderate to large effects of rehabilitation on health-related quality of life and exercise capacity in patients with COPD after an exacerbation. Some recent studies showed no benefit of rehabilitation on hospital readmissions and mortality and introduced heterogeneity as compared with the last update of this review. Such heterogeneity of effects on hospital readmissions and mortality may be explained to some extent by the extensiveness of rehabilitation programmes and by the methodological quality of the included studies. Future researchers must investigate how the extent of rehabilitation programmes in terms of exercise sessions, self-management education and other components affects the outcomes, and how the organisation of such programmes within specific healthcare systems determines their effects after COPD exacerbations on hospital readmissions and mortality.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Uncertainty about reasons for differences across trials in terms of hospital readmissions and mortality led to downgrading of the quality of evidence (moderate-quality evidence for reduction in hospital readmissions and low-quality evidence for reduction in mortality). The quality of evidence was high for quality of life and exercise capacity."
}
] |
query-laysum
|
22951
|
[
{
"role": "user",
"content": "Abstract: Background\nPain is a common symptom in people with cancer; 30% to 50% of people with cancer will experience moderate-to-severe pain. This can have a major negative impact on their quality of life. Opioid (morphine-like) medications are commonly used to treat moderate or severe cancer pain, and are recommended for this purpose in the World Health Organization (WHO) pain treatment ladder. Pain is not sufficiently relieved by opioid medications in 10% to 15% of people with cancer. In people with insufficient relief of cancer pain, new analgesics are needed to effectively and safely supplement or replace opioids.\nObjectives\nTo evaluate the benefits and harms of cannabis-based medicines, including medical cannabis, for treating pain and other symptoms in adults with cancer compared to placebo or any other established analgesic for cancer pain.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 26 January 2023.\nSelection criteria\nWe selected double-blind randomised, controlled trials (RCT) of medical cannabis, plant-derived and synthetic cannabis-based medicines against placebo or any other active treatment for cancer pain in adults, with any treatment duration and at least 10 participants per treatment arm.\nData collection and analysis\nWe used standard Cochrane methods. The primary outcomes were 1. proportions of participants reporting no worse than mild pain; 2. Patient Global Impression of Change (PGIC) of much improved or very much improved and 3. withdrawals due to adverse events. Secondary outcomes were 4. number of participants who reported pain relief of 30% or greater and overall opioid use reduced or stable; 5. number of participants who reported pain relief of 30% or greater, or 50% or greater; 6. pain intensity; 7. sleep problems; 8. depression and anxiety; 9. daily maintenance and breakthrough opioid dosage; 10. dropouts due to lack of efficacy; 11. all central nervous system adverse events. We used GRADE to assess certainty of evidence for each outcome.\nMain results\nWe identified 14 studies involving 1823 participants. No study assessed the proportions of participants reporting no worse than mild pain on treatment by 14 days after start of treatment.\nWe found five RCTs assessing oromucosal nabiximols (tetrahydrocannabinol (THC) and cannabidiol (CBD)) or THC alone involving 1539 participants with moderate or severe pain despite opioid therapy. The double-blind periods of the RCTs ranged between two and five weeks. Four studies with a parallel design and 1333 participants were available for meta-analysis.\nThere was moderate-certainty evidence that there was no clinically relevant benefit for proportions of PGIC much or very much improved (risk difference (RD) 0.06, 95% confidence interval (CI) 0.01 to 0.12; number needed to treat for an additional beneficial outcome (NNTB) 16, 95% CI 8 to 100). There was moderate-certainty evidence for no clinically relevant difference in the proportion of withdrawals due to adverse events (RD 0.04, 95% CI 0 to 0.08; number needed to treat for an additional harmful outcome (NNTH) 25, 95% CI 16 to endless). There was moderate-certainty evidence for no difference between nabiximols or THC and placebo in the frequency of serious adverse events (RD 0.02, 95% CI −0.03 to 0.07). 
There was moderate-certainty evidence that nabiximols and THC used as add-on treatment for opioid-refractory cancer pain did not differ from placebo in reducing mean pain intensity (standardised mean difference (SMD) −0.19, 95% CI −0.40 to 0.02).\nThere was low-certainty evidence that a synthetic THC analogue (nabilone) delivered over eight weeks was not superior to placebo in reducing pain associated with chemotherapy or radiochemotherapy in people with head and neck cancer and non-small cell lung cancer (2 studies, 89 participants, qualitative analysis). Analyses of tolerability and safety were not possible for these studies.\nThere was low-certainty evidence that synthetic THC analogues were superior to placebo (SMD −0.98, 95% CI −1.36 to −0.60), but not superior to low-dose codeine (SMD 0.03, 95% CI −0.25 to 0.32; 5 single-dose trials; 126 participants) in reducing moderate-to-severe cancer pain after cessation of previous analgesic treatment for three to four and a half hours (2 single-dose trials; 66 participants). Analyses of tolerability and safety were not possible for these studies.\nThere was low-certainty evidence that CBD oil did not add value to specialist palliative care alone in the reduction of pain intensity in people with advanced cancer. There was no difference in the number of dropouts due to adverse events and serious adverse events (1 study, 144 participants, qualitative analysis).\nWe found no studies using herbal cannabis.\nAuthors' conclusions\nThere is moderate-certainty evidence that oromucosal nabiximols and THC are ineffective in relieving moderate-to-severe opioid-refractory cancer pain. There is low-certainty evidence that nabilone is ineffective in reducing pain associated with (radio-) chemotherapy in people with head and neck cancer and non-small cell lung cancer. There is low-certainty evidence that a single dose of synthetic THC analogues is not superior to a single low-dose morphine equivalent in reducing moderate-to-severe cancer pain. There is low-certainty evidence that CBD does not add value to specialist palliative care alone in the reduction of pain in people with advanced cancer.\n\nGiven the provided abstract, please respond to the following query: \"Main results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Mouth spray with a plant-derived combination of THC and CBD was probably not better than placebo in reducing pain in people with moderate-to-severe cancer pain despite opioid treatment. Thirty-two out of 100 people reported to be much or very much improved by cannabis-based mouth spray and 23 out of 100 people with mouth spray with placebo. A total of 19 out of 100 people withdrew early because of side effects by cannabis-based mouth spray and 16 out of 100 people by mouth spray with placebo. There was no difference in serious side effects between the cannabis-based mouth spray and a placebo mouth spray.\nArtificial cannabinoid mimicking the effects of THC may not be better than a fake medication in reducing pain associated with chemotherapy or radiochemotherapy in people with head and neck cancer and a certain type of lung cancer.\nA single dose of an artificial cannabinoid mimicking the effects of THC may be better than a single dose of placebo, but may not differ from a single small dose of a morphine-like medication in reducing moderate-to-severe cancer pain after cessation of previous analgesic treatment for three to four and a half hours.\nCBD may not add value to specialist palliative care alone in the reduction of pain in people with advanced cancer.\nWe found no studies with medical cannabis."
}
] |
query-laysum
|
22952
|
[
{
"role": "user",
"content": "Abstract: Background\nVitamin K is necessary for the synthesis of coagulation factors. Term infants, especially those who are exclusively breast fed, are deficient in vitamin K and consequently may have vitamin K deficiency bleeding (VKDB). Preterm infants are potentially at greater risk for VKDB because of delayed feeding and subsequent delay in the colonization of their gastrointestinal system with vitamin K producing microflora, as well as immature hepatic and hemostatic function.\nObjectives\nTo determine the effect of vitamin K prophylaxis in the prevention of vitamin K deficiency bleeding (VKDB) in preterm infants.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL 2016, Issue 11), MEDLINE via PubMed (1966 to 5 December 2016), Embase (1980 to 5 December 2016), and CINAHL (1982 to 5 December 2016). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles.\nSelection criteria\nRandomized controlled trials (RCTs) or quasi-RCTs of any preparation of vitamin K given to preterm infants.\nData collection and analysis\nWe evaluated potential studies and extracted data in accordance with the recommendations of Cochrane Neonatal.\nMain results\nWe did not identify any eligible studies that compared vitamin K to no treatment.\nOne study compared intravenous (IV) to intramuscular (IM) administration of vitamin K and compared various dosages of vitamin K. Three different prophylactic regimes of vitamin K (0.5 mg IM, 0.2 mg vitamin K1, or 0.2 mg IV) were given to infants less than 32 weeks' gestation. Given that only one small study met the inclusion criteria, we assessed the quality of the evidence for the outcomes evaluated as low.\nIntramuscular versus intravenous\nThere was no statistically significant difference in vitamin K levels in the 0.2 mg IV group when compared to the infants that received either 0.2 or 0.5 mg vitamin K IM (control) on day 5. By day 25, vitamin K1 levels had declined in all of the groups, but infants who received 0.5 mg vitamin K IM had higher levels of vitamin K1 than either the 0.2 mg IV group or the 0.2 mg IM group.\nVitamin K1 2,3-epoxide (vitamin K1O) levels in the infants that received 0.2 mg IV were not statistically different from those in the control group on day 5 or 25 of the study. All of the infants had normal or supraphysiologic levels of vitamin K1 concentrations and either no detectable or insignificant amounts of prothrombin induced by vitamin K absence-II (PIVKA II).\nDosage comparisons\nDay 5 vitamin K1 levels and vitamin K1O levels were significantly lower in the 0.2 mg IM group when compared to the 0.5 mg IM group. On day 25, vitamin K1O levels and vitamin K1 levels in the 0.2 mg IM group and the 0.5 mg IM group were not significantly different. Presence of PIVKA II proteins in the 0.2 mg IM group versus the 0.5 mg IM group was not significantly different at day 5 or 25 of the study.\nAuthors' conclusions\nPreterm infants have low levels of vitamin K and develop detectable PIVKA proteins during the first week of life. Despite being at risk for VKDB, there are no studies comparing vitamin K versus non-treatment and few studies that address potential dosing strategies for effective treatment. Dosage studies suggest that we are currently giving doses of vitamin K to preterm infants that lead to supraphysiologic levels. 
Because of current uncertainty, clinicians will have to extrapolate data from term infants to preterm infants. Since there is no available evidence that vitamin K is harmful or ineffective and since vitamin K is an inexpensive drug, it seems prudent to follow the recommendations of expert bodies and give vitamin K to preterm infants. However, further research on appropriate dose and route of administration is warranted.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "In searches completed up to 5 December 2016, we found no studies in preterm infants that compare prophylactic vitamin K to non treatment and one study that compared dosage and route of vitamin K administration."
}
] |
query-laysum
|
22953
|
[
{
"role": "user",
"content": "Abstract: Background\nPeople with dementia in nursing homes often experience pain, but often do not receive adequate pain therapy. The experience of pain has a significant impact on quality of life in people with dementia, and is associated with negative health outcomes. Untreated pain is also considered to be one of the causes of challenging behaviour, such as agitation or aggression, in this population. One approach to reducing pain in people with dementia in nursing homes is an algorithm-based pain management strategy, i.e. the use of a structured protocol that involves pain assessment and a series of predefined treatment steps consisting of various non-pharmacological and pharmacological pain management interventions.\nObjectives\nTo assess the effects of algorithm-based pain management interventions to reduce pain and challenging behaviour in people with dementia living in nursing homes.\nTo describe the components of the interventions and the content of the algorithms.\nSearch methods\nWe searched ALOIS, the Cochrane Dementia and Cognitive Improvement Group's register, MEDLINE, Embase, PsycINFO, CINAHL (Cumulative Index to Nursing and Allied Health Literature), Web of Science Core Collection (ISI Web of Science), LILACS (Latin American and Caribbean Health Science Information database), ClinicalTrials.gov and the World Health Organization's meta-register the International Clinical Trials Registry Portal on 30 June 2021.\nSelection criteria\nWe included randomised controlled trials investigating the effects of algorithm-based pain management interventions for people with dementia living in nursing homes. All interventions had to include an initial pain assessment, a treatment algorithm (a treatment plan consisting of at least two different non-pharmacological or pharmacological treatment steps to reduce pain), and criteria to assess the success of each treatment step. The control groups could receive usual care or an active control intervention. Primary outcomes for this review were pain-related outcomes, e.g. the number of participants with pain (self- or proxy-rated), challenging behaviour (we used a broad definition that could also include agitation or behavioural and psychological symptoms assessed with any validated instrument), and serious adverse events.\nData collection and analysis\nTwo authors independently selected the articles for inclusion, extracted data and assessed the risk of bias of all included studies. We reported results narratively as there were too few studies for a meta-analysis. We used GRADE methods to rate the certainty of the results.\nMain results\nWe included three cluster-randomised controlled trials with a total of 808 participants (mean age 82 to 89 years). In two studies, participants had severe cognitive impairment and in one study mild to moderate impairment. The algorithms used in the studies varied in the number of treatment steps. The comparator was pain education for nursing staff in two studies and usual care in one study.\nWe judged the risk of detection bias to be high in one study. The risk of selection bias and performance bias was unclear in all studies.\nSelf-rated pain (i.e. pain rated by participants themselves) was reported in two studies. 
In one study, all residents in the nursing homes were included, but fewer than half of the participants experienced pain at baseline, and the mean values of self-rated and proxy-rated pain at baseline and follow-up in both study groups were below the threshold of pain that may require treatment. We considered the evidence from this study to be very low-certainty and therefore are uncertain whether the algorithm-based pain management intervention had an effect on self-rated pain intensity compared with pain education (MD -0.27, 95% CI -0.49 to -0.05, 170 participants; Verbal Descriptor Scale, range 0 to 3). In the other study, all participants had mild to moderate pain at baseline. Here, we found low-certainty evidence that an algorithm-based pain management intervention may have little to no effect on self-rated pain intensity compared with pain education (MD 0.4, 95% CI -0.58 to 1.38, 246 participants; Iowa Pain Thermometer, range 0 to 12).\nPain was rated by proxy in all three studies. Again, we considered the evidence from the study in which mean pain scores indicated no pain, or almost no pain, at baseline to be very low-certainty and were uncertain whether the algorithm-based pain management intervention had an effect on proxy-rated pain intensity compared with pain education. For participants with mild to moderate pain at baseline, we found low-certainty evidence that an algorithm-based pain management intervention may reduce proxy-rated pain intensity in comparison with usual care (MD -1.49, 95% CI -2.11 to -0.87, 1 study, 128 participants; Pain Assessment in Advanced Dementia Scale-Chinese version, range 0 to 10), but may not be more effective than pain education (MD -0.2, 95% CI -0.79 to 0.39, 1 study, 383 participants; Iowa Pain Thermometer, range 0 to 12).\nFor challenging behaviour, we found very low-certainty evidence from one study in which mean pain scores indicated no pain, or almost no pain, at baseline. We were uncertain whether the algorithm-based pain management intervention had any more effect than education for nursing staff on challenging behaviour of participants (MD -0.21, 95% CI -1.88 to 1.46, 1 study, 170 participants; Cohen-Mansfield Agitation Inventory-Chinese version, range 7 to 203).\nNone of the studies systematically assessed adverse effects or serious adverse effects and no study reported information about the occurrence of any adverse effect. None of the studies assessed any of the other outcomes of this review.\nAuthors' conclusions\nThere is no clear evidence for a benefit of an algorithm-based pain management intervention in comparison with pain education for reducing pain intensity or challenging behaviour in people with dementia in nursing homes. We found that the intervention may reduce proxy-rated pain compared with usual care. However, the certainty of evidence is low because of the small number of studies, small sample sizes, methodological limitations, and the clinical heterogeneity of the study populations (e.g. pain level and cognitive status). The results should be interpreted with caution. Future studies should also focus on the implementation of algorithms and their impact in clinical practice.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People with dementia in nursing homes often experience pain. However, they cannot always tell their caregivers if they are in pain, so it can be difficult to recognise, and we know that nursing home residents with dementia receive less pain medication than those without dementia. Untreated pain can have a negative impact on well-being and health, and can also be one reason for challenging behaviour, such as aggression. The use of detailed step-by-step guidance for nursing staff, in this review called an algorithm, is designed to improve pain management. Algorithms start with a structured pain assessment and then set out different treatment steps, which can be non-medication or medication treatments for reducing pain. If pain is detected, the treatment described in the first step is applied. If this treatment does not reduce pain, the treatment from the next step is applied, and so on."
}
] |
query-laysum
|
22954
|
[
{
"role": "user",
"content": "Abstract: Background\nFoot ulceration is a major problem in people with diabetes and is the leading cause of hospitalisation and limb amputations. Skin grafts and tissue replacements can be used to reconstruct skin defects for people with diabetic foot ulcers in addition to providing them with standard care. Skin substitutes can consist of bioengineered or artificial skin, autografts (taken from the patient), allografts (taken from another person) or xenografts (taken from animals).\nObjectives\nTo determine the benefits and harms of skin grafting and tissue replacement for treating foot ulcers in people with diabetes.\nSearch methods\nIn April 2015 we searched: The Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE and EBSCO CINAHL. We also searched clinical trial registries to identify ongoing studies. We did not apply restrictions to language, date of publication or study setting.\nSelection criteria\nRandomised clinical trials (RCTs) of skin grafts or tissue replacements for treating foot ulcers in people with diabetes.\nData collection and analysis\nTwo review authors independently extracted data and assessed the quality of the included studies.\nMain results\nWe included seventeen studies with a total of 1655 randomised participants in this review. Risk of bias was variable among studies. Blinding of participants, personnel and outcome assessment was not possible in most trials because of obvious differences between the treatments. The lack of a blinded outcome assessor may have caused detection bias when ulcer healing was assessed. However, possible detection bias is hard to prevent due to the nature of the skin replacement products we assessed, and the fact that they are easily recognisable. Strikingly, nearly all studies (15/17) reported industry involvement; at least one of the authors was connected to a commercial organisation or the study was funded by a commercial organisation. In addition, the funnel plot for assessing risk of bias appeared to be asymmetrical; suggesting that small studies with 'negative' results are less likely to be published.\nThirteen of the studies included in this review compared a skin graft or tissue replacement with standard care. Four studies compared two grafts or tissue replacements with each other. When we pooled the results of all the individual studies, the skin grafts and tissue replacement products that were used in the trials increased the healing rate of foot ulcers in patients with diabetes compared to standard care (risk ratio (RR) 1.55, 95% confidence interval (CI) 1.30 to 1.85, low quality of evidence). However, the strength of effect was variable depending on the specific product that was used (e.g. EpiFix® RR 11.08, 95% CI 1.69 to 72.82 and OrCel® RR 1.75, 95% CI 0.61 to 5.05). Based on the four included studies that directly compared two products, no specific type of skin graft or tissue replacement showed a superior effect on ulcer healing over another type of skin graft or tissue replacement.\nSixteen of the included studies reported on adverse events in various ways. No study reported a statistically significant difference in the occurrence of adverse events between the intervention and the control group.\nOnly two of the included studies reported on total incidence of lower limb amputations. 
We found fewer amputations in the experimental group compared with the standard care group when we pooled the results of these two studies, although the absolute risk reduction for amputation was small (RR 0.43, 95% CI 0.23 to 0.81; risk difference (RD) -0.06, 95% CI -0.10 to -0.01, very low quality of evidence).\nAuthors' conclusions\nBased on the studies included in this review, the overall therapeutic effect of skin grafts and tissue replacements used in conjunction with standard care shows an increase in the healing rate of foot ulcers and slightly fewer amputations in people with diabetes compared with standard care alone. However, the data available to us was insufficient for us to draw conclusions on the effectiveness of different types of skin grafts or tissue replacement therapies. In addition, evidence of long term effectiveness is lacking and cost-effectiveness is uncertain.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Foot ulceration is a major problem in people with diabetes and is the leading cause of hospital admissions and limb amputations. Despite the current variety of strategies available for the treatment of foot ulcers in people with diabetes, not all ulcers heal completely. Additional treatments with skin grafts and tissue replacement products have been developed to help complete wound closure."
}
] |
query-laysum
|
22955
|
[
{
"role": "user",
"content": "Abstract: Background\nDepressive disorders are the most common psychiatric comorbidity in people with epilepsy, affecting around one-third, with a significant negative impact on quality of life. There is concern that people may not be receiving appropriate treatment for their depression because of uncertainty regarding which antidepressant or class works best, and the perceived risk of exacerbating seizures. This review aimed to address these issues, and inform clinical practice and future research.\nThis is an updated version of the original Cochrane Review published in Issue 12, 2014.\nObjectives\nTo evaluate the efficacy and safety of antidepressants in treating depressive symptoms and the effect on seizure recurrence, in people with epilepsy and depression.\nSearch methods\nFor this update, we searched CRS Web, MEDLINE, SCOPUS, PsycINFO, and ClinicalTrials.gov (February 2021). We searched the World Health Organization Clinical Trials Registry in October 2019, but were unable to update it because it was inaccessible. There were no language restrictions.\nSelection criteria\nWe included randomised controlled trials (RCTs) and prospective non-randomised studies of interventions (NRSIs), investigating children or adults with epilepsy, who were treated with an antidepressant and compared to placebo, comparative antidepressant, psychotherapy, or no treatment for depressive symptoms.\nData collection and analysis\nThe primary outcomes were changes in depression scores (proportion with a greater than 50% improvement, mean difference, and proportion who achieved complete remission) and change in seizure frequency (mean difference, proportion with a seizure recurrence, or episode of status epilepticus). Secondary outcomes included the number of participants who withdrew from the study and reasons for withdrawal, quality of life, cognitive functioning, and adverse events.\nTwo review authors independently extracted data for each included study. We then cross-checked the data extraction. We assessed risk of bias using the Cochrane tool for RCTs, and the ROBINS-I for NRSIs. We presented binary outcomes as risk ratios (RRs) with 95% confidence intervals (CIs) or 99% CIs for specific adverse events. We presented continuous outcomes as standardised mean differences (SMDs) with 95% CIs, and mean differences (MDs) with 95% CIs.\nMain results\nWe included 10 studies in the review (four RCTs and six NRSIs), with 626 participants with epilepsy and depression, examining the effects of antidepressants. One RCT was a multi-centre study comparing an antidepressant with cognitive behavioural therapy (CBT). The other three RCTs were single-centre studies comparing an antidepressant with an active control, placebo, or no treatment. The NRSIs reported on outcomes mainly in participants with focal epilepsy before and after treatment for depression with a selective serotonin reuptake inhibitor (SSRI); one NRSI compared SSRIs to CBT.\nWe rated one RCT at low risk of bias, three RCTs at unclear risk of bias, and all six NRSIs at serious risk of bias. We were unable to conduct any meta-analysis of RCT data due to heterogeneity of treatment comparisons. 
We judged the certainty of evidence to be moderate to very low across comparisons, because single studies contributed limited outcome data, and because of risk of bias, particularly for NRSIs, which did not adjust for confounding variables.\nMore than 50% improvement in depressive symptoms ranged from 43% to 82% in RCTs, and from 24% to 97% in NRSIs, depending on the antidepressant given. Venlafaxine improved depressive symptoms by more than 50% compared to no treatment (mean difference (MD) -7.59, 95% confidence interval (CI) -11.52 to -3.66; 1 study, 64 participants; low-certainty evidence); the results between other comparisons were inconclusive. Two studies comparing SSRIs to CBT reported inconclusive results for the proportion of participants who achieved complete remission of depressive symptoms.\nSeizure frequency data did not suggest an increased risk of seizures with antidepressants compared to control treatments or baseline. Two studies measured quality of life; antidepressants did not appear to improve quality of life over control. No studies reported on cognitive functioning.\nTwo RCTs and one NRSI reported comparative data on adverse events; antidepressants did not appear to increase the severity or number of adverse events compared to controls. The NRSIs reported higher rates of withdrawals due to adverse events than due to lack of efficacy. Reported adverse events for antidepressants included nausea, dizziness, sedation, headache, gastrointestinal disturbance, insomnia, and sexual dysfunction.\nAuthors' conclusions\nExisting evidence on the effectiveness of antidepressants in treating depressive symptoms associated with epilepsy is still very limited. Rates of response to antidepressants were highly variable. There is low-certainty evidence from one small RCT (64 participants) that venlafaxine may improve depressive symptoms more than no treatment; this evidence is limited to treatment between 8 and 16 weeks, and does not inform longer-term effects. Moderate- to low-certainty evidence suggests neither an increase in nor an exacerbation of seizures with SSRIs.\nThere are no available comparative data to inform the choice of antidepressant drug or classes of drug for efficacy or safety for treating people with epilepsy and depression.\nRCTs of antidepressants utilising interventions from other treatment classes besides SSRIs, in large samples of patients with epilepsy and depression, are needed to better inform treatment policy. Future studies should assess interventions across a longer treatment duration to account for delayed onset of action, sustainability of treatment responses, and to provide a better understanding of the impact on seizure control.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Depressive disorders occur in approximately one-third of people with epilepsy, often requiring antidepressant treatment. However, depression often goes untreated in people with epilepsy, partly due to fear that antidepressants might cause seizures. There are different classes of antidepressants, however they all aim to increase key nerve chemicals in the brain, thereby alleviating depressive symptoms."
}
] |
query-laysum
|
22956
|
[
{
"role": "user",
"content": "Abstract: Background\nDuloxetine is a balanced serotonin and noradrenaline reuptake inhibitor licensed for the treatment of major depressive disorders, urinary stress incontinence and the management of neuropathic pain associated with diabetic peripheral neuropathy. A number of trials have been conducted to investigate the use of duloxetine in neuropathic and nociceptive painful conditions. This is the first update of a review first published in 2010.\nObjectives\nTo assess the benefits and harms of duloxetine for treating painful neuropathy and different types of chronic pain.\nSearch methods\nOn 19th November 2013, we searched The Cochrane Neuromuscular Group Specialized Register, CENTRAL, DARE, HTA, NHSEED, MEDLINE, and EMBASE. We searched ClinicalTrials.gov for ongoing trials in April 2013. We also searched the reference lists of identified publications for trials of duloxetine for the treatment of painful peripheral neuropathy or chronic pain.\nSelection criteria\nWe selected all randomised or quasi-randomised trials of any formulation of duloxetine, used for the treatment of painful peripheral neuropathy or chronic pain in adults.\nData collection and analysis\nWe used standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe identified 18 trials, which included 6407 participants. We found 12 of these studies in the literature search for this update. Eight studies included a total of 2728 participants with painful diabetic neuropathy and six studies involved 2249 participants with fibromyalgia. Three studies included participants with depression and painful physical symptoms and one included participants with central neuropathic pain. Studies were mostly at low risk of bias, although significant drop outs, imputation methods and almost every study being performed or sponsored by the drug manufacturer add to the risk of bias in some domains. Duloxetine at 60 mg daily is effective in treating painful diabetic peripheral neuropathy in the short term, with a risk ratio (RR) for ≥ 50% pain reduction at 12 weeks of 1.73 (95% CI 1.44 to 2.08). The related NNTB is 5 (95% CI 4 to 7). Duloxetine at 60 mg daily is also effective for fibromyalgia over 12 weeks (RR for ≥ 50% reduction in pain 1.57, 95% CI 1.20 to 2.06; NNTB 8, 95% CI 4 to 21) and over 28 weeks (RR 1.58, 95% CI 1.10 to 2.27) as well as for painful physical symptoms in depression (RR 1.37, 95% CI 1.19 to 1.59; NNTB 8, 95% CI 5 to 14). There was no effect on central neuropathic pain in a single, small, high quality trial. In all conditions, adverse events were common in both treatment and placebo arms but more common in the treatment arm, with a dose-dependent effect. Most adverse effects were minor, but 12.6% of participants stopped the drug due to adverse effects. Serious adverse events were rare.\nAuthors' conclusions\nThere is adequate amounts of moderate quality evidence from eight studies performed by the manufacturers of duloxetine that doses of 60 mg and 120 mg daily are efficacious for treating pain in diabetic peripheral neuropathy but lower daily doses are not. Further trials are not required. In fibromyalgia, there is lower quality evidence that duloxetine is effective at similar doses to those used in diabetic peripheral neuropathy and with a similar magnitude of effect. The effect in fibromyalgia may be achieved through a greater improvement in mental symptoms than in somatic physical pain. 
There is low to moderate quality evidence that pain relief is also achieved in pain associated with depressive symptoms, but the NNTB of 8 in fibromyalgia and depression is not an indication of substantial efficacy. More trials (preferably independent investigator led studies) in these indications are required to reach an optimal information size to make convincing determinations of efficacy.\nMinor side effects are common and more common with duloxetine 60 mg and particularly with 120 mg daily, than 20 mg daily, but serious side effects are rare.\nImproved direct comparisons of duloxetine with other antidepressants and with other drugs, such as pregabalin, that have already been shown to be efficacious in neuropathic pain would be appropriate. Unbiased economic comparisons would further help decision making, but no high quality study includes economic data.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We looked at all the published scientific literature and found 18 trials, involving a total of 6407 participants, that were of sufficient quality to include in this review. Eight trials tested the effect of duloxetine on painful diabetic neuropathy and six on the pain of fibromyalgia. Three trials treated painful physical symptoms associated with depression and one small study investigated duloxetine for the pain from strokes or diseases of the spinal cord (central pain)."
}
] |
query-laysum
|
22957
|
[
{
"role": "user",
"content": "Abstract: Background\nCombined modality treatment consisting of chemotherapy followed by localised radiotherapy is the standard treatment for patients with early stage Hodgkin lymphoma (HL). However, due to long- term adverse effects such as secondary malignancies the role of radiotherapy has been questioned recently and some clinical study groups advocate chemotherapy only for this indication.\nObjectives\nTo assess the effects of chemotherapy alone compared to chemotherapy plus radiotherapy in adults with early stage HL .\nSearch methods\nFor the or i ginal version of this review, we searched MEDLINE, Embase and CENTRAL as well as conference proceedings (American Society of Hematology, American Society of Clinical Oncology and International Symposium of Hodgkin Lymphoma) from January 1980 to November 2010 for randomised controlled trials (RCTs) comparing chemotherapy alone versus chemotherapy regimens plus radiotherapy. For the updated review we searched MEDLINE, CENTRAL and conference proceedings to December 2016.\nSelection criteria\nWe included RCTs comparing chemotherapy alone with chemotherapy plus radiotherapy in patients with early stage HL. We excluded trials with more than 20% of patients in advanced stage. As the value of radiotherapy in addition to chemotherapy is still not clear, we also compared to more cycles of chemotherapy in the control arm. In this updated review, we also included a second comparison evaluating trials with varying numbers of cycles of chemotherapy between intervention and control arms, same chemotherapy regimen in both arms assumed. We excluded trials evaluating children only, therefore only trials involving adults are included in this updated review.\nData collection and analysis\nTwo review authors independently extracted data and assessed the quality of trials. We contacted study authors to obtain missing information. As effect measures we used hazard ratios (HR) for overall survival (OS) and progression-free survival (PFS) and risk ratios (RR) for response rates. Since not all trials reported PFS according to our definitions, we evaluated all similar outcomes (e.g. event-free survival) as PFS/tumour control.\nMain results\nOur search led to 5518 potentially relevant references. From these, we included seven RCTs in the analyses involving 2564 patients. In contrast to the first version of this review including five trials, we excluded trials randomising children. As a result, we excluded one trial from the former analyses and we identified three new trials.\nFive trials with 1388 patients compared the combination of chemotherapy alone and chemotherapy plus radiotherapy, with the same number of chemotherapy cycles in both arms. The addition of radiotherapy to chemotherapy has probably little or no difference on OS (HR 0.48; 95% confidence interval (CI) 0.22 to 1.06; P = 0.07, moderate- quality evidence), however two included trials had potential other high risk of bias due to a high number of patients not receiving planned radiotherapy. After excluding these trials in a sensitivity analysis, the results showed that the combination of chemotherapy and radiotherapy improved OS compared to chemotherapy alone (HR 0.31; 95% CI 0.19 to 0.52; P <0.00001, moderate- quality evidence). In contrast to chemotherapy alone the use of chemotherapy and radiotherapy improved PFS (HR 0.42; 95% CI 0.25 to 0.72; P = 0.001; moderate- quality evidence). 
Regarding infection-related mortality (RR 0.33; 95% CI 0.01 to 8.06; P = 0.5; low-quality evidence), second cancer-related mortality (RR 0.53; 95% CI 0.07 to 4.29; P = 0.55; low-quality evidence) and cardiac disease-related mortality (RR 2.94; 95% CI 0.31 to 27.55; P = 0.35; low-quality evidence), there is no evidence for a difference between the use of chemotherapy alone and chemotherapy plus radiotherapy. For complete response rate (CRR) (RR 1.08; 95% CI 0.93 to 1.25; P = 0.33; low-quality evidence), there is also no evidence for a difference between treatment groups.\nTwo trials with 1176 patients compared chemotherapy alone with chemotherapy plus radiotherapy, with different numbers of chemotherapy cycles in the two arms. OS was reported in one trial only; the use of chemotherapy alone (more chemotherapy cycles) may improve OS compared to chemotherapy plus radiotherapy (HR 2.12; 95% CI 1.03 to 4.37; P = 0.04; low-quality evidence). This trial was also at potential high risk of other bias due to a high number of patients not receiving the planned therapy. There is no evidence for a difference between chemotherapy alone and chemotherapy plus radiotherapy regarding PFS (HR 0.42; 95% CI 0.14 to 1.24; P = 0.12; low-quality evidence). After excluding the trial with patients not receiving the planned therapy in a sensitivity analysis, the results showed that the combination of chemotherapy and radiotherapy improved PFS compared to chemotherapy alone (HR 0.24; 95% CI 0.070 to 0.88; P = 0.03, based on one trial). For infection-related mortality (RR 6.90; 95% CI 0.36 to 132.34; P = 0.2; low-quality evidence), second cancer-related mortality (RR 2.22; 95% CI 0.7 to 7.03; P = 0.18; low-quality evidence) and cardiac disease-related mortality (RR 0.99; 95% CI 0.14 to 6.90; P = 0.99; low-quality evidence), there is no evidence for a difference between the use of chemotherapy alone and chemotherapy plus radiotherapy. CRR was not reported.\nAuthors' conclusions\nThis systematic review compared the effects of chemotherapy alone and chemotherapy plus radiotherapy in adults with early stage HL.\nFor the comparison with the same number of chemotherapy cycles in both arms, we found moderate-quality evidence that PFS is superior in patients receiving chemotherapy plus radiotherapy compared with those receiving chemotherapy alone. The addition of radiotherapy to chemotherapy probably makes little or no difference to OS. The sensitivity analysis without the trials at potential high risk of other bias showed that chemotherapy plus radiotherapy improves OS compared to chemotherapy alone.\nFor the comparison with different numbers of chemotherapy cycles between the arms, no conclusions can be drawn for OS or PFS because of the low quality of the evidence.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This systematic review compares overall survival (OS) and progression free survival (PFS) in adults with early stage HL after receiving chemotherapy alone or chemotherapy plus radiotherapy."
}
] |
query-laysum
|
22958
|
[
{
"role": "user",
"content": "Abstract: Background\nNontuberculous mycobacteria are mycobacteria, other than those in the Mycobacterium tuberculosis complex, and are commonly found in the environment. Nontuberculous mycobacteria species (most commonly Mycobacterium avium complex and Mycobacterium abscessus) are isolated from the respiratory tract of approximately 5% to 40% of individuals with cystic fibrosis; they can cause lung disease in people with cystic fibrosis leading to more a rapid decline in lung function and even death in certain circumstances. Although there are guidelines for the antimicrobial treatment of nontuberculous mycobacteria lung disease, these recommendations are not specific for people with cystic fibrosis and it is not clear which antibiotic regimen may be the most effective in the treatment of these individuals. This is an update of a previous review.\nObjectives\nThe objective of our review was to compare antibiotic treatment to no antibiotic treatment, or to compare different combinations of antibiotic treatment, for nontuberculous mycobacteria lung infections in people with cystic fibrosis. The primary objective was to assess the effect of treatment on lung function and pulmonary exacerbations and to quantify adverse events. The secondary objectives were to assess treatment effects on the amount of bacteria in the sputum, quality of life, mortality, nutritional parameters, hospitalizations and use of oral antibiotics.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis Trials Register, compiled from electronic database searches and hand searching of journals and conference abstract books. Date of last search: 24 February 2020.\nWe also searched a register of ongoing trials and the reference lists of relevant articles and reviews. Date of last search: 21 March 2019.\nSelection criteria\nAny randomized controlled trials comparing nontuberculous mycobacteria antibiotics to no antibiotic treatment, as well as one nontuberculous mycobacteria antibiotic regimen compared to another nontuberculous mycobacteria antibiotic regimen, in individuals with cystic fibrosis.\nData collection and analysis\nData were not collected because in the one trial identified by the search, data specific to individuals with cystic fibrosis could not be obtained from the pharmaceutical company.\nMain results\nOne completed trial was identified by the searches, but data specific to individuals with cystic fibrosis could not be obtained from the pharmaceutical company.\nAuthors' conclusions\nThis review did not find any evidence for the effectiveness of different antimicrobial treatment for nontuberculous mycobacteria lung disease in people with cystic fibrosis. Until such evidence becomes available, it is reasonable for clinicians to follow published clinical practice guidelines for the diagnosis and treatment of nodular or bronchiectatic pulmonary disease due to Mycobacterium avium complex or Mycobacterium abscessus in patients with cystic fibrosis.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Until the time when randomized controlled trial data is available for individuals with cystic fibrosis, clinicians should follow the current guidelines for the diagnosis and treatment of lung infections due to nontuberculous mycobacteria in the general population."
}
] |
query-laysum
|
22959
|
[
{
"role": "user",
"content": "Abstract: Background\nTemporomandibular disorders (TMDs) are a group of musculoskeletal disorders affecting the jaw. They are frequently associated with pain that can be difficult to manage and may become persistent (chronic). Psychological therapies aim to support people with TMDs to manage their pain, leading to reduced pain, disability and distress.\nObjectives\nTo assess the effects of psychological therapies in people (aged 12 years and over) with painful TMD lasting 3 months or longer.\nSearch methods\nCochrane Oral Health's Information Specialist searched six bibliographic databases up to 21 October 2021 and used additional search methods to identify published, unpublished and ongoing studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) of any psychological therapy (e.g. cognitive behaviour therapy (CBT), behaviour therapy (BT), acceptance and commitment therapy (ACT), mindfulness) for the management of painful TMD. We compared these against control or alternative treatment (e.g. oral appliance, medication, physiotherapy).\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We reported outcome data immediately after treatment and at the longest available follow-up.\nWe used the Cochrane RoB 1 tool to assess the risk of bias in included studies. Two review authors independently assessed each included study for any risk of bias in sequence generation, allocation concealment, blinding of outcome assessors, incomplete outcome data, selective reporting of outcomes, and other issues. We judged the certainty of the evidence for each key comparison and outcome as high, moderate, low or very low according to GRADE criteria.\nMain results\nWe identified 22 RCTs (2001 participants), carried out between 1967 and 2021. We were able to include 12 of these studies in meta-analyses. The risk of bias was high across studies, and we judged the certainty of the evidence to be low to very low overall; further research may change the findings. Our key outcomes of interest were: pain intensity, disability caused by pain, adverse events and psychological distress. Treatments varied in length, with the shortest being 4 weeks. The follow-up time ranged from 3 months to 12 months. Most studies evaluated CBT.\nAt treatment completion, there was no evidence of a benefit of CBT on pain intensity when measured against alternative treatment (standardised mean difference (SMD) 0.03, confidence interval (CI) -0.21 to 0.28; P = 0.79; 5 studies, 509 participants) or control (SMD -0.09, CI -0.30 to 0.12; P = 0.41; 6 studies, 577 participants). At follow-up, there was evidence of a small benefit of CBT for reducing pain intensity compared to alternative treatment (SMD -0.29, 95% CI -0.50 to -0.08; 5 studies, 475 participants) and control (SMD -0.30, CI -0.51 to -0.09; 6 studies, 639 participants).\nAt treatment completion, there was no evidence of a difference in disability outcomes (interference in activities caused by pain) between CBT and alternative treatment (SMD 0.15, CI -0.40 to 0.10; P = 0.25; 3 studies, 245 participants), or between CBT and control/usual care (SMD 0.02, CI -0.21 to 0.24; P = 0.88; 3 studies, 315 participants). Nor was there evidence of a difference at follow-up (CBT versus alternative treatment: SMD -0.15, CI -0.42 to 0.12; 3 studies, 245 participants; CBT versus control: SMD 0.01 CI - 0.61 to 0.64; 2 studies, 240 participants).\nThere were very few data on adverse events. 
From the data available, adverse effects associated with psychological treatment tended to be minor and to occur less often than in alternative treatment groups. There were, however, insufficient data available to draw firm conclusions.\nCBT showed a small benefit in terms of reducing psychological distress at treatment completion compared to alternative treatment (SMD -0.32, 95% CI -0.50 to -0.15; 6 studies, 553 participants), which was maintained at follow-up (SMD -0.32, 95% CI -0.51 to -0.13; 6 studies, 516 participants). For CBT versus control, only one study reported results for distress and did not find evidence of a difference between groups at treatment completion (mean difference (MD) 2.36, 95% CI -1.17 to 5.89; 101 participants) or follow-up (MD -1.02, 95% CI -4.02 to 1.98; 101 participants).\nWe assessed the certainty of the evidence to be low or very low for all comparisons and outcomes.\nThe data were insufficient to draw any reliable conclusions about psychological therapies other than CBT.\nAuthors' conclusions\nWe found mixed evidence for the effects of psychological therapies on painful temporomandibular disorders (TMDs). There is low-certainty evidence that CBT may reduce pain intensity more than alternative treatments or control when measured at longest follow-up, but not at treatment completion. There is low-certainty evidence that CBT may be better than alternative treatments, but not control, for reducing psychological distress at treatment completion and follow-up. There is low-certainty evidence that CBT may not be better than other treatments or control for pain disability outcomes.\nThere is insufficient evidence to draw conclusions about alternative psychological therapeutic approaches, and there are insufficient data to be clear about adverse effects that may be associated with psychological therapies for painful TMD.\nOverall, we found insufficient evidence on which to base a reliable judgement about the efficacy of psychological therapies for painful TMD. Further research is needed to determine whether or not psychological therapies are effective, the most effective type of therapy and delivery method, and how it can best be targeted. In particular, high-quality RCTs conducted in primary care and community settings are required, which evaluate a range of psychological approaches against alternative treatments or usual care, involve both adults and adolescents, and collect measures of pain intensity, pain disability and psychological distress until at least 12 months post-treatment.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall we found 22 relevant studies. Most of the studies reported on one particular form of psychological therapy called cognitive behaviour therapy (CBT). We did not have enough information to draw any conclusions about any other psychological therapies.\nThe results told us that CBT was no different to other treatments (e.g. oral splints, medicine) or usual care/no treatment in reducing the intensity of the TMD pain by the end of treatment. There was some evidence that people who had CBT might have slightly less pain a few months after treatment.\nThere was some evidence that CBT might be better than other treatments for reducing psychological distress both at the end of treatment and a few months later. This was not seen in the one study that compared CBT against usual care.\nIn terms of how much pain interfered with activities, there was no evidence that there was any difference between CBT and other treatments.\nThere was too little information to be sure about whether psychological treatments cause adverse effects (problems caused by treatment such as feeling unwell or worse pain or unexpected effects). Only six of the 22 studies measured what adverse effects participants experienced. In these six studies, adverse effects associated with psychological treatment seemed to be minor in general and to occur less often than in alternative treatment groups."
}
] |
query-laysum
|
22960
|
[
{
"role": "user",
"content": "Abstract: Background\nPrevious systematic reviews and randomised controlled trials have investigated the effect of post-stroke trunk training. Findings suggest that trunk training improves trunk function and activity or the execution of a task or action by an individual. But it is unclear what effect trunk training has on daily life activities, quality of life, and other outcomes.\nObjectives\nTo assess the effectiveness of trunk training after stroke on activities of daily living (ADL), trunk function, arm-hand function or activity, standing balance, leg function, walking ability, and quality of life when comparing with both dose-matched as non-dose-matched control groups.\nSearch methods\nWe searched the Cochrane Stroke Group Trials Register, CENTRAL, MEDLINE, Embase, and five other databases to 25 October 2021. We searched trial registries to identify additional relevant published, unpublished, and ongoing trials. We hand searched the bibliographies of included studies.\nSelection criteria\nWe selected randomised controlled trials comparing trunk training versus non-dose-matched or dose-matched control therapy including adults (18 years or older) with either ischaemic or haemorrhagic stroke. Outcome measures of trials included ADL, trunk function, arm-hand function or activity, standing balance, leg function, walking ability, and quality of life.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nTwo main analyses were carried out. The first analysis included trials where the therapy duration of control intervention was non-dose-matched with the therapy duration of the experimental group and the second analysis where there was comparison with a dose-matched control intervention (equal therapy duration in both the control as in the experimental group).\nMain results\nWe included 68 trials with a total of 2585 participants.\nIn the analysis of the non-dose-matched groups (pooling of all trials with different training duration in the experimental as in the control intervention), we could see that trunk training had a positive effect on ADL (standardised mean difference (SMD) 0.96; 95% confidence interval (CI) 0.69 to 1.24; P < 0.001; 5 trials; 283 participants; very low-certainty evidence), trunk function (SMD 1.49, 95% CI 1.26 to 1.71; P < 0.001; 14 trials, 466 participants; very low-certainty evidence), arm-hand function (SMD 0.67, 95% CI 0.19 to 1.15; P = 0.006; 2 trials, 74 participants; low-certainty evidence), arm-hand activity (SMD 0.84, 95% CI 0.009 to 1.59; P = 0.03; 1 trial, 30 participants; very low-certainty evidence), standing balance (SMD 0.57, 95% CI 0.35 to 0.79; P < 0.001; 11 trials, 410 participants; very low-certainty evidence), leg function (SMD 1.10, 95% CI 0.57 to 1.63; P < 0.001; 1 trial, 64 participants; very low-certainty evidence), walking ability (SMD 0.73, 95% CI 0.52 to 0.94; P < 0.001; 11 trials, 383 participants; low-certainty evidence) and quality of life (SMD 0.50, 95% CI 0.11 to 0.89; P = 0.01; 2 trials, 108 participants; low-certainty evidence). 
Non-dose-matched trunk training led to no difference for the outcome serious adverse events (odds ratio: 7.94, 95% CI 0.16 to 400.89; 6 trials, 201 participants; very low-certainty evidence).\nIn the analysis of the dose-matched groups (pooling of all trials with equal training duration in the experimental as in the control intervention), we saw that trunk training had a positive effect on trunk function (SMD 1.03, 95% CI 0.91 to 1.16; P < 0.001; 36 trials, 1217 participants; very low-certainty evidence), standing balance (SMD 1.00, 95% CI 0.86 to 1.15; P < 0.001; 22 trials, 917 participants; very low-certainty evidence), leg function (SMD 1.57, 95% CI 1.28 to 1.87; P < 0.001; 4 trials, 254 participants; very low-certainty evidence), walking ability (SMD 0.69, 95% CI 0.51 to 0.87; P < 0.001; 19 trials, 535 participants; low-certainty evidence) and quality of life (SMD 0.70, 95% CI 0.29 to 1.11; P < 0.001; 2 trials, 111 participants; low-certainty evidence), but not for ADL (SMD 0.10; 95% confidence interval (CI) -0.17 to 0.37; P = 0.48; 9 trials; 229 participants; very low-certainty evidence), arm-hand function (SMD 0.76, 95% CI -0.18 to 1.70; P = 0.11; 1 trial, 19 participants; low-certainty evidence), arm-hand activity (SMD 0.17, 95% CI -0.21 to 0.56; P = 0.38; 3 trials, 112 participants; very low-certainty evidence). Trunk training also led to no difference for the outcome serious adverse events (odds ratio (OR): 7.39, 95% CI 0.15 to 372.38; 10 trials, 381 participants; very low-certainty evidence).\nTime post stroke led to a significant subgroup difference for standing balance (P < 0.001) in non-dose-matched therapy. In non-dose-matched therapy, different trunk therapy approaches had a significant effect on ADL (< 0.001), trunk function (P < 0.001) and standing balance (< 0.001).\nWhen participants received dose-matched therapy, analysis of subgroup differences showed that the trunk therapy approach had a significant effect on ADL (P = 0.001), trunk function (P < 0.001), arm-hand activity (P < 0.001), standing balance (P = 0.002), and leg function (P = 0.002). Also for dose-matched therapy, subgroup analysis for time post stroke resulted in a significant difference for the outcomes standing balance (P < 0.001), walking ability (P = 0.003) and leg function (P < 0.001), time post stroke significantly modified the effect of intervention.\nCore-stability trunk (15 trials), selective-trunk (14 trials) and unstable-trunk (16 trials) training approaches were mostly applied in the included trials.\nAuthors' conclusions\nThere is evidence to suggest that trunk training as part of rehabilitation improves ADL, trunk function, standing balance, walking ability, upper and lower limb function, and quality of life in people after stroke. Core-stability, selective-, and unstable-trunk training were the trunk training approaches mostly applied in the included trials. When considering only trials with a low risk of bias, results were mostly confirmed, with very low to moderate certainty, depending on the outcome.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence was very low to low."
}
] |
query-laysum
|
22961
|
[
{
"role": "user",
"content": "Abstract: Background\nUterine fibroids can cause heavy menstrual bleeding. Medical treatments are considered to preserve fertility. It is unclear whether progestogens or progestogen-releasing intrauterine systems can reduce fibroid-related symptoms. This is the first update of a Cochrane Review published in 2013.\nObjectives\nTo determine the effectiveness of progestogens or progestogen-releasing intrauterine systems in treating premenopausal women with uterine fibroids.\nSearch methods\nWe searched the Cochrane Gynaecology and Fertility Group Specialised Register, CENTRAL, MEDLINE, Embase, and PsycINFO databases to July 2020. We also searched trials registers for ongoing and registered trials, and checked references of relevant trials.\nSelection criteria\nAll identified published or unpublished randomised controlled trials (RCTs) assessing the effect of progestogens or progestogen-releasing intrauterine systems in treating premenopausal women with uterine fibroids.\nData collection and analysis\nTwo authors independently extracted data, assessed risk of bias, and assessed the quality of the evidence using the GRADE approach.\nMain results\nThis updated review included four studies with 221 women with uterine fibroids. The evidence was very low quality, downgraded for serious risk of bias, due to poor reporting of study methods, and serious imprecision.\nLevonorgestrel-releasing intrauterine device (LNG-IUS) versus hysterectomy\nThere was no information on the outcomes of interest, including adverse events.\nLNG-IUS versus low dose combined oral contraceptive (COC)\nAt 12 months, we are uncertain whether LNG-IUS reduced the percentage of abnormal uterine bleeding, measured with the alkaline hematin test (mean difference (MD) 77.50%, 95% confidence interval (CI) 70.44 to 84.56; 1 RCT, 44 women; very low-quality evidence), or the pictorial blood assessment chart (PBAC; MD 34.50%, 95% CI 11.59 to 57.41; 1 RCT, 44 women; very low-quality evidence); increased haemoglobin levels (MD 1.50 g/dL, 95% CI 0.85 to 2.15; 1 RCT, 44 women; very low-quality evidence), or reduced fibroid size more than COC (MD 1.90%, 95% CI -12.24 to 16.04; 1 RCT, 44 women; very low-quality evidence). The study did not measure adverse events.\nLNG-IUS versus oral progestogen (norethisterone acetate (NETA))\nCompared to NETA, we are uncertain whether LNG-IUS reduced abnormal uterine bleeding more from baseline to six months (visual bleeding score; MD 23.75 points, 95% CI 1.26 to 46.24; 1 RCT, 45 women; very low-quality evidence); increased the percentage of change in haemoglobin from baseline to three months (MD 4.53%, 95% CI 1.46 to 7.60; 1 RCT, 48 women; very low-quality evidence), or from baseline to six months (MD 10.14%, 95% CI 5.57 to 14.71; 1 RCT, 45 women; very low-quality evidence). The study did not measure fibroid size. Spotting (adverse event) was more likely to be reported by women with the LNG-IUS (64.3%) than by those taking NETA (30%; 1 RCT, 45 women; very low-quality evidence).\nOral progestogen (dienogest, desogestrel) versus goserelin acetate\nCompared to goserelin acetate, we are uncertain whether abnormal uterine bleeding was reduced at 12 weeks with dienogest (PBAC; MD 216.00 points, 95% CI 149.35 to 282.65; 1 RCT, 14 women; very low-quality evidence) or desogestrel (PBAC; MD 78.00 points, 95% CI 28.94 to 127.06; 1 RCT, 16 women; very low-quality evidence). Vasomotor symptoms (adverse events, e.g. 
hot flashes) are only associated with goserelin acetate (55%), not with dienogest (1 RCT, 14 women; very low-quality evidence) or with desogestrel (1 RCT, 16 women; very low-quality evidence). The study did not report fibroid size.\nAuthors' conclusions\nBecause of very low-quality evidence, we are uncertain whether the LNG-IUS reduces abnormal uterine bleeding or increases haemoglobin levels in premenopausal women with uterine fibroids, compared to COC or norethisterone acetate. There was insufficient evidence to determine whether the LNG-IUS reduces the size of uterine fibroids compared to COC. We are uncertain whether oral progestogens reduce abnormal uterine bleeding as effectively as goserelin acetate, but women reported fewer adverse events, such as hot flashes.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence was of very low-quality. The main limitations of the evidence were poor reporting of study methods (high or unclear risk of bias), lack of precise findings, and small numbers of studies and participants."
}
] |
query-laysum
|
22962
|
[
{
"role": "user",
"content": "Abstract: Background\nThe impact of workplace interventions on the outcome of occupational asthma is not well understood.\nObjectives\nTo evaluate the effectiveness of workplace interventions on occupational asthma.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE (PubMed); EMBASE(Ovid); NIOSHTIC-2; and CISILO (CCOHS) up to July 31, 2019.\nSelection criteria\nWe included all eligible randomized controlled trials, controlled before and after studies and interrupted time-series of workplace interventions for occupational asthma.\nData collection and analysis\nTwo authors independently assessed study eligibility and risk of bias, and extracted data.\nMain results\nWe included 26 non-randomized controlled before and after studies with 1,695 participants that reported on three comparisons: complete removal from exposure and reduced exposure compared to continued exposure, and complete removal from exposure compared to reduced exposure. Reduction of exposure was achieved by limiting use of the agent, improving ventilation, or using protective equipment in the same job; by changing to another job with intermittent exposure; or by implementing education programs. For continued exposure, 56 per 1000 workers reported absence of symptoms at follow-up, the decrease in forced expiratory volume in one second as a percentage of a reference value (FEV1 %) was 5.4% during follow-up, and the standardized change in non-specific bronchial hyperreactivity (NSBH) was -0.18.\nIn 18 studies, authors compared removal from exposure to continued exposure. Removal may increase the likelihood of reporting absence of asthma symptoms, with risk ratio (RR) 4.80 (95% confidence interval (CI) 1.67 to 13.86), and it may improve asthma symptoms, with RR 2.47 (95% CI 1.26 to 4.84), compared to continued exposure. Change in FEV1 % may be better with removal from exposure, with a mean difference (MD) of 4.23 % (95% CI 1.14 to 7.31) compared to continued exposure. NSBH may improve with removal from exposure, with standardized mean difference (SMD) 0.43 (95% CI 0.03 to 0.82).\nIn seven studies, authors compared reduction of exposure to continued exposure. Reduction of exposure may increase the likelihood of reporting absence of symptoms, with RR 2.65 (95% CI 1.24 to 5.68). There may be no considerable difference in FEV1 % between reduction and continued exposure, with MD 2.76 % (95% CI -1.53 to 7.04) . No studies reported or enabled calculation of change in NSBH.\nIn ten studies, authors compared removal from exposure to reduction of exposure. Following removal from exposure there may be no increase in the likelihood of reporting absence of symptoms, with RR 6.05 (95% CI 0.86 to 42.34), and improvement in symptoms, with RR 1.11 (95% CI 0.84 to 1.47), as well as no considerable change in FEV1 %, with MD 2.58 % (95% CI −3.02 to 8.17). However, with all three outcomes, there may be improved results for removal from exposure in the subset of patients exposed to low molecular weight agents. No studies reported or enabled calculation of change in NSBH.\nIn two studies, authors reported that the risk of unemployment after removal from exposure may increase compared with reduction of exposure, with RR 14.28 (95% CI 2.06 to 99.16). 
Four studies reported a decrease in income of 20% to 50% after removal from exposure.\nThe quality of the evidence is very low for all outcomes.\nAuthors' conclusions\nBoth removal from exposure and reduction of exposure may improve asthma symptoms compared with continued exposure. Removal from exposure, but not reduction of exposure, may improve lung function compared to continued exposure. When we compared removal from exposure directly to reduction of exposure, the former may improve symptoms and lung function more among patients exposed to low molecular weight agents. Removal from exposure may also increase the risk of unemployment. Care providers should balance the potential clinical benefits of removal from exposure or reduction of exposure with potential detrimental effects of unemployment. Additional high-quality studies are needed to evaluate the effectiveness of workplace interventions for occupational asthma.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review is based on 26 studies that included 1,695 participants with occupational asthma. Sensitizers caused nearly all cases. We focused on the interventions of removal from exposure and reduction of exposure, which were compared with continued exposure. Outcomes were changes in asthma symptoms, lung function, and non-specific bronchial hyperreactivity between baseline and follow-up."
}
] |
query-laysum
|
22963
|
[
{
"role": "user",
"content": "Abstract: Background\nPreterm infants are susceptible to hyperglycaemia and hypoglycaemia, which may lead to adverse neurodevelopment. The use of continuous glucose monitoring (CGM) devices might help in keeping glucose levels in the normal range, and reduce the need for blood sampling. However, the use of CGM might be associated with harms in the preterm infant.\nObjectives\nTo assess the benefits and harms of CGM versus intermittent modalities to measure glycaemia in preterm infants 1. at risk of hypoglycaemia or hyperglycaemia; 2. with proven hypoglycaemia; or 3. with proven hyperglycaemia.\nSearch methods\nWe searched CENTRAL (2021, Issue 4); PubMed; Embase; and CINAHL in April 2021. We also searched clinical trials databases, conference proceedings, and reference lists of retrieved articles for randomized controlled trials (RCTs) and quasi-RCTs.\nSelection criteria\nWe included RCTs and quasi-RCTs comparing the use of CGM versus intermittent modalities to measure glycaemia in preterm infants at risk of hypoglycaemia or hyperglycaemia; with proven hypoglycaemia; or with proven hyperglycaemia.\nData collection and analysis\nWe assessed the methodological quality of included trials using Cochrane Effective Practice and Organisation of Care Group (EPOC) criteria (assessing randomization, blinding, loss to follow-up, and handling of outcome data). We evaluated treatment effects using a fixed-effect model with risk ratio (RR) with 95% confidence intervals (CI) for categorical data and mean, standard deviation (SD), and mean difference (MD) for continuous data. We used the GRADE approach to assess the certainty of the evidence.\nMain results\nWe included four trials enrolling 300 infants in our updated review. We included one new study and excluded another previously included study (because the inclusion criteria of the review have been narrowed). We compared the use of CGM to intermittent modalities in preterm infants at risk of hypoglycaemia or hyperglycaemia; however, one of these trials was analyzed separately because CGM was used as a standalone device, without being coupled to a control algorithm as in the other trials. We identified no studies in preterm infants with proven hypoglycaemia or hyperglycaemia.\nNone of the four included trials reported the neurodevelopmental outcome (i.e. the primary outcome of this review), or seizures. The effect of the use of CGM on mortality during hospitalization is uncertain (RR 0.59, 95% CI 0.16 to 2.13; RD −0.02, 95% CI −0.07 to 0.03; 230 participants; 2 studies; very low-certainty evidence). The certainty of the evidence was very low for all outcomes because of limitations in study design, and imprecision of estimates. One study is ongoing (estimated sample size 60 infants) and planned to be completed in 2022.\nAuthors' conclusions\nThere is insufficient evidence to determine if CGM affects preterm infant mortality or morbidities. We are very uncertain of the safety of CGM and the available management algorithms, and many morbidities remain unreported. Preterm infants at risk of hypoglycaemia or hyperglycaemia were enrolled in all four included studies. No studies have been conducted in preterm infants with proven hypoglycaemia or hyperglycaemia. Long-term outcomes were not reported. Events of necrotizing enterocolitis, reported in the study published in 2021, were lower in the CGM group. However, the effect of CGM on this outcome remains very uncertain. 
Clinical trials are required to determine the most effective CGM and glycaemic management regimens in preterm infants before larger studies can be performed to assess the efficacy of CGM for reducing mortality, morbidity, and long-term neurodevelopmental impairments.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Newborns born too early ('preterm') are susceptible to levels of blood glucose that are too high or too low. Most preterm babies with these abnormal concentrations have a full recovery, or only mild problems. For some preterm babies with extremely high or low (or more prolonged) blood glucose, this may lead to death or to problems later in life.\nThe aim of this updated review was to assess whether the use of CGM could improve the long-term development or reduce deaths in preterm newborns. CGM devices are inserted subcutaneously, and provide data on blood glucose in real time. The standard method of measuring blood glucose consists of measuring blood glucose concentrations intermittently by withdrawing small amounts of blood, often by heel pricks."
}
] |
query-laysum
|
22964
|
[
{
"role": "user",
"content": "Abstract: Background\nIn people with portal hypertension, gastric varices are less prevalent than oesophageal varices. The risk of bleeding from gastric varices seems to be lower than from oesophageal varices; however, when gastric varices bleed, it is often severe and associated with higher mortality. Endoscopic sclerotherapy of bleeding gastric varices with N-butyl-2-cyanoacrylate glue (cyanoacrylate) is considered the best haemostasis with a lower risk of re-bleeding compared with other endoscopic methods. However, there are some inconsistencies between trials regarding mortality, incidence of re-bleeding, and adverse effects.\nObjectives\nTo assess the benefits and harms of sclerotherapy using cyanoacrylate compared with other endoscopic sclerotherapy procedures or with variceal band ligation for treating acute gastric variceal bleeding with or without vasoactive drugs in people with portal hypertension and to assess the best dosage of cyanoacrylate.\nSearch methods\nWe searched the Cochrane Hepato-Biliary Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index Expanded from inception to September 2014 and reference lists of articles. We included trials irrespective of trial setting, language, publication status, or date of publication.\nSelection criteria\nRandomised clinical trials comparing sclerotherapy using cyanoacrylate versus other endoscopic methods (sclerotherapy using alcohol-based compounds or endoscopy band ligation) for acute gastric variceal bleeding in people with portal hypertension.\nData collection and analysis\nWe performed the review following the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions and the Cochrane Hepato-Biliary Module.\nWe presented results as risk ratios (RR) with 95% confidence intervals (CI), with I2 statistic values as a measure of intertrial heterogeneity. We analysed data with both fixed-effect and random-effects models, and reported the results with random-effects models. We performed subgroup, sensitivity, and trial sequential analyses to evaluate the robustness of the overall results, risk of bias, sources of intertrial heterogeneity, and risk of random errors.\nMain results\nWe included six randomised clinical trials with three different comparisons: one trial compared two different doses of cyanoacrylate in 91 adults, bleeding actively from all types of gastric varices; one trial compared cyanoacrylate versus alcohol-based compounds in 37 adults with active or acute bleeding from isolated gastric varices only; and four trials compared cyanoacrylate versus endoscopic band ligation in 365 adults, with active or acute bleeding from all types of gastric varices. Main outcomes in the included trials were bleeding-related mortality, failure of intervention, re-bleeding, adverse events, and control of bleeding. Follow-up varied from six to 26 months. The participants included in these trials had chronic liver disease of different severities, were predominantly men, and most were from Eastern countries. We judged all trials at high risk of bias. 
Application of quality criteria for all outcomes yielded very low quality grade of the evidence in the three analyses, except for the low quality evidence rated for the re-bleeding outcome in the cyanoacrylate versus endoscopic band ligation comparison.\nTwo different doses of cyanoacrylate: we found very low quality evidence from one trial for the effect of 0.5 mL compared with 1.0 mL of cyanoacrylate on all-cause mortality (20/44 (45.5%) with 0.5 mL versus 21/47 (45%) with 1.0 mL; RR 1.02; 95% CI 0.65 to 1.60), 30-day mortality (RR 1.07; 95% CI 0.41 to 2.80), failure of intervention (RR 1.07; 95% CI 0.56 to 2.05), prevention of re-bleeding (RR 1.30; 95% CI 0.73 to 2.31), adverse events reported as fever (RR 0.56; 95% CI 0.32 to 0.98), and control of bleeding (RR 1.04; 95% CI 0.78 to 1.38).\nCyanoacrylate versus alcohol-based compounds: we found very low quality evidence from one trial for the effect of cyanoacrylate versus alcohol-based compounds on 30-day mortality (2/20 (10%) with cyanoacrylate versus 4/17 (23.5%) with alcohol-based compound; RR 0.43; 95% CI 0.09 to 2.04), failure of intervention (RR 0.36; 95% CI 0.09 to 1.35), prevention of re-bleeding (RR 0.85; 95% CI 0.30 to 2.45), adverse events reported as fever (RR 0.43; 95% CI 0.22 to 0.80), and control of bleeding (RR 1.79; 95% CI 1.13 to 2.84).\nCyanoacrylate versus endoscopic band ligation: we found very low quality evidence for the effect of cyanoacrylate versus endoscopic band ligation on bleeding-related mortality (44/185 (23.7%) with cyanoacrylate versus 50/181 (27.6%) with endoscopic band ligation; RR 0.83; 95% CI 0.52 to 1.31), failure of intervention (RR 1.13; 95% CI 0.23 to 5.69), complications (RR 2.81; 95% CI 0.69 to 11.49), and control of bleeding (RR 1.07; 95% CI 0.90 to 1.27). There was low quality evidence for the prevention of re-bleeding (RR 0.60; 95% CI 0.41 to 0.88). Trial sequential analysis showed that the analyses were underpowered (diversity-adjusted required information size was 5290 participants for bleeding-related mortality).\nAuthors' conclusions\nThis review suggests that endoscopic sclerotherapy using cyanoacrylate may be more effective than endoscopic band ligation in terms of preventing re-bleeding from gastric varices. However, due to the very low quality of the evidence, we are very uncertain about our estimates on all-cause and bleeding-related mortality, failure of intervention, adverse events, and control of bleeding. The trials were at high risk of bias; the number of the included randomised clinical trials and number of participants included in each trial was small; and there was evidence of internal heterogeneity across trials, indirectness of evidence in terms of population, and possible publication bias.\nThe effectiveness of different doses of cyanoacrylate and the comparison of cyanoacrylate versus alcohol compounds to treat variceal bleeding in people with portal hypertension is uncertain due to the very low quality of the evidence.\nThe shortcomings mentioned call for more evidence from larger trials that need to be conducted according to the SPIRIT statement and reported according to CONSORT guidelines.\n\nGiven the provided abstract, please respond to the following query: \"Characteristic of included studies\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "This review includes six trials (following search of scientific databases through to September 2014) of three different comparisons regarding the use of cyanoacrylate: comparison of different dosages of cyanoacrylate (one trial, 91 participants), cyanoacrylate compared with alcohol-based compounds (one trial, 37 participants), and cyanoacrylate compared with endoscopic band ligation (where enlarged veins are tied off using elastic bands; four trials, 366 participants). Risk of bias (that is overestimation of benefits and underestimation of harms) was high in all trials. Outcomes assessed included death, bleeding-related death, treatment failure, re-bleeding, side effects, and bleeding control. Follow-up of people varied from six to 26 months. All people included in these trials had chronic liver disease of different severities and were predominantly men. Most of the trials came from Eastern countries, although it must be noted that prevalence of chronic liver disease is fairly similar worldwide, with differences in causes that may have no effect on variceal bleeding."
}
] |
query-laysum
|
22965
|
[
{
"role": "user",
"content": "Abstract: Background\nRadical cystectomy (RC) is the primary surgical treatment for muscle-invasive urothelial carcinoma of the bladder. This major operation is typically associated with an extended hospital stay, a prolonged recovery period and potentially major complications. Nutritional interventions are beneficial in some people with other types of cancer and may be of value in this setting too.\nObjectives\nTo assess the effects of perioperative nutrition in people undergoing radical cystectomy for the treatment of bladder cancer.\nSearch methods\nWe performed a comprehensive search using multiple databases (Evidence Based Medicine Reviews, MEDLINE, Embase, AMED, CINAHL), trials registries, other sources of grey literature, and conference proceedings published up to 22 February 2019, with no restrictions on the language or status of publication.\nSelection criteria\nWe included parallel-group randomised controlled trials (RCTs) of adults undergoing RC for bladder cancer. The intervention was any perioperative nutrition support.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion, extracted data, and assessed risk of bias and the quality of evidence using GRADE. Primary outcomes were postoperative complications at 90 days and length of hospital stay. The secondary outcome was mortality up to 90 days after surgery. When 90-day outcome data were not available, we reported 30-day data.\nMain results\nThe search identified eight trials including 500 participants. Six trials were conducted in the USA and two in Europe.\n1. Parenteral nutrition (PN) versus oral nutrition: based on one study with 157 participants, PN may increase postoperative complications within 30 days (risk ratio (RR) 1.40, 95% confidence interval (CI) 1.07 to 1.82; low-quality evidence). We downgraded the quality of evidence for serious study limitations (unclear risk of selection, performance and selective reporting bias) and serious imprecision. This corresponds to 198 more complications per 1000 participants (95% CI 35 more to 405 more). Length of hospital stay may be similar (mean difference (MD) 0.5 days higher, CI not reported; low-quality evidence).\n2. Immuno-enhancing nutrition versus standard nutrition: based on one study including 29 participants, immuno-enhancing nutrition may reduce 90-day postoperative complications (RR 0.31, 95% CI 0.08 to 1.23; low-quality evidence). These findings correspond to 322 fewer complications per 1000 participants (95% CI 429 fewer to 107 more). Length of hospital stay may be similar (MD 0.20 days, 95% CI 1.69 lower to 2.09 higher; low-quality evidence). We downgraded the quality of evidence of both outcomes for very serious imprecision.\n3. Preoperative oral nutritional support versus normal diet: based on one study including 28 participants, we are very uncertain if preoperative oral supplements reduces postoperative complications. We downgraded quality for serious study limitations (unclear risk of selection, performance, attrition and selective reporting bias) and very serious imprecision. The study did not report on length of hospital stay.\n4. Early postoperative feeding versus standard postoperative management: based on one study with 102 participants, early postoperative feeding may increase postoperative complications (very low-quality evidence) but we are very uncertain of this finding. 
We downgraded the quality of evidence for serious study limitations (unclear risk of selection and performance bias) and very serious imprecision. Length of hospital stay may be similar (MD 0.95 days less, CI not reported; low-quality evidence). We downgraded the quality of evidence for serious study limitations (unclear risk of selection and performance bias) and serious imprecision.\n5. Amino acid with dextrose versus dextrose: based on two studies with 104 participants, we are very uncertain whether amino acids reduce postoperative complications (very low-quality evidence). We are also very uncertain whether length of hospital stay is similar (very low-quality evidence). We downgraded the quality of evidence for both outcomes for serious study limitations (unclear and high risk of selection bias; unclear risk of performance, detection and selective reporting bias), serious indirectness related to the patient population and very serious imprecision.\n6. Branch chain amino acids versus dextrose only: based on one study including 19 participants, we are very uncertain whether complication rates are similar (very low-quality evidence). We downgraded the quality of evidence for serious study limitations (unclear risk of selection, performance, detection, attrition and selective reporting bias), serious indirectness related to the patient population and very serious imprecision. The study did not report on length of hospital stay.\n7. Perioperative oral nutritional supplements versus oral multivitamin and mineral supplement: based on one study with 61 participants, oral supplements compared to a multivitamin and mineral supplement may slightly decrease postoperative complications (low-quality evidence). These findings correspond to 135 fewer occurrences per 1000 participants (95% CI 256 fewer to 65 more). Length of hospital stay may be similar (low-quality evidence). We downgraded the quality of evidence of both outcomes for study limitations and imprecision.\nAuthors' conclusions\nBased on few, small and dated studies, with serious methodological limitations, we found limited evidence for a benefit of perioperative nutrition interventions. We rated the quality of evidence as low or very low, which underscores the urgent need for high-quality research studies to better inform nutritional support interventions for people undergoing surgery for bladder cancer.\n\nGiven the provided abstract, please respond to the following query: \"Review objective\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "To assess the effects of perioperative nutrition in participants undergoing an operation for treating bladder cancer."
}
] |
query-laysum
|
22966
|
[
{
"role": "user",
"content": "Abstract: Background\nPenetrating abdominal trauma (PAT) is a common type of trauma leading to admission to hospital, which often progresses to septic complications. Antibiotics are commonly administered as prophylaxis prior to laparotomy for PAT. However, an earlier Cochrane Review intending to compare antibiotics with placebo identified no relevant randomised controlled trials (RCTs). Despite this, many RCTs have been carried out that compare different agents and durations of antibiotic therapy. To date, no systematic review of these trials has been performed.\nObjectives\nTo assess the effects of antibiotics in penetrating abdominal trauma, with respect to the type of agent administered and the duration of therapy.\nSearch methods\nWe searched the following electronic databases for relevant randomised controlled trials, from database inception to 23 July 2019; Cochrane Injuries Group's Specialised Register, CENTRAL, MEDLINE Ovid, MEDLINE Ovid In-Process & Other Non-Indexed Citations, MEDLINE Ovid Daily and Ovid OLDMEDLINE, Embase Classic + Embase Ovid, ISI Web of Science (SCI-EXPANDED, SSCI, CPCI-S & CPSI-SSH), and two clinical trials registers. We also searched reference lists from included studies. We applied no restrictions on language or date of publication.\nSelection criteria\nWe included RCTs only. We included studies involving participants of all ages, which were conducted in secondary care hospitals only. We included studies of participants who had an isolated penetrating abdominal wound that breached the peritoneum, who were not already taking antibiotics.\nData collection and analysis\nTwo study authors independently extracted data and assessed risk of bias. We used standard Cochrane methods. We aggregated study results using a random-effects model. We also conducted trial sequential analysis (TSA) to help reduce type I and II errors in our analyses.\nMain results\nWe included 29 RCTs, involving a total of 4458 participants. We deemed 23 trials to be at high risk of bias in at least one domain.\nWe are uncertain of the effect of a long course of antibiotic prophylaxis (> 24 hours) compared to a short course (≤ 24 hours) on abdominal surgical site infection (RR 1.00, 95% CI 0.81 to 1.23; I² = 0%; 7 studies, 1261 participants; very low-quality evidence), mortality (Peto OR 1.67, 95% CI 0.73 to 3.82; I² = 8%; 7 studies, 1261 participants; very low-quality evidence), or intra-abdominal infection (RR 1.23, 95% CI 0.84 to 1.80; I² = 0%; 6 studies, 111 participants; very-low quality evidence).\nBased on very low-quality evidence from fifteen studies, involving 2020 participants, which compared different drug regimens with activity against three classes of gastrointestinal flora (gram positive, gram negative, anaerobic), we are uncertain whether there is a benefit of one regimen over another.\nTSA showed the majority of comparisons did not cross the alpha adjusted boundary for benefit or harm, or reached the required information size, indicating that further studies are required for these analyses. 
However, in the three analyses which crossed the boundary for futility, further studies are unlikely to show benefit or harm.\nAuthors' conclusions\nVery low-quality evidence means that we are uncertain about the effect of either the duration of antibiotic prophylaxis, or the superiority of one drug regimen over another for penetrating abdominal trauma on abdominal surgical site infection rates, mortality, or intra-abdominal infections.\nFuture RCTs should be adequately powered, test currently used antibiotics, known to be effective against gut flora, use methodology to minimise the risk of bias, and adequately report the level of peritoneal contamination encountered at laparotomy.\n\nGiven the provided abstract, please respond to the following query: \"Study Characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for trials involving participants of any age or sex, who underwent an emergency operation to treat penetrating abdominal trauma. The evidence is current to 23 July 2019. We included 29 studies that included 4458 participants. There were problems with the design and conduct of all of these studies, which means that we were uncertain about the results. Most of these studies were carried out over 20 years ago, using antibiotics that are not often used today. Surgical techniques and practice have also evolved substantially during this time. Seven out of the 29 studies received funding from pharmaceutical companies, whilst the other studies did not state their funding sources."
}
] |
query-laysum
|
22967
|
[
{
"role": "user",
"content": "Abstract: Background\nLactoferrin, a normal component of human colostrum and milk, can enhance host defenses and may be effective for prevention of sepsis and necrotizing enterocolitis (NEC) in preterm neonates.\nObjectives\nTo assess the safety and effectiveness of lactoferrin supplementation to enteral feeds for prevention of sepsis and NEC in preterm neonates. Secondarily, we assessed the effects of lactoferrin supplementation to enteral feeds on the duration of positive-pressure ventilation, development of chronic lung disease (CLD) or periventricular leukomalacia (PVL), length of hospital stay to discharge among survivors, and adverse neurological outcomes at two years of age or later.\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to update our search. We searched the Cochrane Central Register of Controlled Trials (CENTRAL 2019, Issue 9), MEDLINE via PubMed (1966 to 20 January 2020), PREMEDLINE (1996 to 20 January 2020), Embase (1980 to 20 January 2020), and CINAHL (1982 to 20 January 2020). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomized controlled trials and quasi-randomized trials.\nSelection criteria\nIn our search, we included randomized controlled trials (RCTs) evaluating enteral lactoferrin supplementation at any dose or duration to prevent sepsis or NEC in preterm neonates.\nData collection and analysis\nWe used the standard methods of Cochrane Neonatal and the GRADE approach to assess the certainty of evidence.\nMain results\nMeta-analysis of data from twelve randomized controlled trials showed that lactoferrin supplementation to enteral feeds decreased suspected or confirmed late-onset sepsis (typical RR 0.80, 95% CI 0.72 to 0.89; typical RD -0.05, 95% CI, -0.07 to -0.02; NNTB 20, 95% CI 14 to 50; 12 studies, 5425 participants, low-certainty evidence) and decreased length of hospital stay (MD -2.38 to 95% CI, -4.67 to -0.09; 3 studies, 1079 participants, low-certainty evidence). A subgroup analysis including data of infants with confirmed sepsis demonstrates a decrease in confirmed late-onset sepsis (typical RR 0.83, 95% CI 0.73 to 0.94; typical RD -0.03, 95% CI, -0.04 to -0.01; NNTB 33, 95% CI 25 to 100; 12 studies, 5425 participants, low-certainty evidence).\nSensitivity analysis including only good methodological certainty studies suggested a decrease in late-onset sepsis (both suspected and confirmed) with enteral lactoferrin supplementation (typical RR 0.82, 95% CI, 0.74 to 0.91; typical RD -0.04, 95% CI, -0.06 to -0.02; NNTB 20, 95% CI 14 to 50; 9 studies, 4702 participants, low-certainty evidence).\nThere were no differences in NEC stage II or III (typical RR 1.10, 95% CI, 0.86 to 1.41; typical RD -0.00, 95% CI, -0.02 to 0.01; 7 studies, 4874 participants; low-certainty evidence) or 'all-cause mortality' (typical RR 0.90, 95% CI 0.69 to 1.17; typical RD -0.00, 95% CI, -0.01 to 0.01; 11 studies, 5510 participants; moderate-certainty evidence). 
One study reported no differences in neurodevelopmental testing by Mullen's or Bayley III at 24 months of age after enteral lactoferrin supplementation (one study, 292 participants, low-certainty evidence).\nLactoferrin supplementation to enteral feeds with probiotics decreased late-onset sepsis (RR 0.25, 95% CI 0.14 to 0.46; RD -0.13, 95% CI -0.18 to -0.08; NNTB 8, 95% CI 6 to 13; 3 studies, 564 participants; low-certainty evidence) and NEC stage II or III (RR 0.04, 95% CI 0.00 to 0.62; RD -0.05, 95% CI -0.08 to -0.03; NNTB 20, 95% CI 12.5 to 33.3; 1 study, 496 participants; very low-certainty evidence), but not 'all-cause mortality' (very low-certainty evidence).\nLactoferrin supplementation to enteral feeds with or without probiotics had no effect on CLD, duration of mechanical ventilation or threshold retinopathy of prematurity (low-certainty evidence). Investigators reported no adverse effects in the included studies.\nAuthors' conclusions\nWe found low-certainty evidence from studies of good methodological quality that lactoferrin supplementation of enteral feeds decreases late-onset sepsis (both suspected and confirmed, and confirmed only) but not NEC ≥ stage II or 'all cause mortality' or neurodevelopmental outcomes at 24 months of age in preterm infants without adverse effects. Low- to very low-certainty evidence suggests that lactoferrin supplementation of enteral feeds in combination with probiotics decreases late-onset sepsis (data from confirmed sepsis only) and NEC ≥ stage II in preterm infants without adverse effects, however, there were few included studies of poor methodological quality. The presence of publication bias and small studies of poor methodology that may inflate the effect size make recommendations for clinical practice difficult.\n\nGiven the provided abstract, please respond to the following query: \"Certainty of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Low to very low"
}
] |
query-laysum
|
22968
|
[
{
"role": "user",
"content": "Abstract: Background\nThe management of anticoagulation therapy around the time of catheter ablation (CA) procedure for adults with arrhythmia is critical and yet is variable in clinical practice. The ideal approach for safe and effective perioperative management should balance the risk of bleeding during uninterrupted anticoagulation while minimising the risk of thromboembolic events with interrupted therapy.\nObjectives\nTo compare the efficacy and harms of interrupted versus uninterrupted anticoagulation therapy for catheter ablation (CA) in adults with arrhythmias.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, and SCI-Expanded on the Web of Science for randomised controlled trials on 5 January 2021. We also searched three registers on 29 May 2021 to identify ongoing or unpublished trials. We performed backward and forward searches on reference lists of included trials and other systematic reviews and contacted experts in the field. We applied no restrictions on language or publication status.\nSelection criteria\nWe included randomised controlled trials comparing uninterrupted anticoagulation with any modality of interruption with or without heparin bridging for CA in adults aged 18 years or older with arrhythmia.\nData collection and analysis\nTwo review authors conducted independent screening, data extraction, and assessment of risk of bias. A third review author resolved disagreements. We extracted data on study population, interruption strategy, ablation procedure, thromboembolic events (stroke or systemic embolism), major and minor bleeding, asymptomatic thromboembolic events, cardiovascular and all-cause mortality, quality of life (QoL), length of hospital stay, cost, and source of funding. We used GRADE to assess the certainty of the evidence.\nMain results\nWe identified 12 studies (4714 participants) that compared uninterrupted periprocedural anticoagulation with interrupted anticoagulation. Studies performed an interruption strategy by either a complete interruption (one study) or by a minimal interruption (11 studies), of which a single-dose skipped strategy was used (nine studies) or two-dose skipped strategy (two studies), with or without heparin bridging.\nStudies included participants with a mean age of 65 years or greater, with only two studies conducted in relatively younger individuals (mean age less than 60 years). Paroxysmal atrial fibrillation (AF) was the primary type of AF in all studies, and seven studies included other types of AF (persistent and long-standing persistent). Most participants had CHADS2 or CHADS2-VASc demonstrating a low–moderate risk of stroke, with almost all participants having normal or mildly reduced renal function. Ablation source using radiofrequency energy was the most common (seven studies).\nTen studies (2835 participants) were conducted in East Asian countries (Japan, China, and South Korea), while the remaining two studies were conducted in the USA. Eight studies were conducted in a single centre. Postablation follow-up was variable among studies at less than 30 days (three studies), 30 days (six studies), and more than 30 days postablation (three studies).\nOverall, the meta-analysis showed high uncertainty of the effect between the interrupted strategy compared to uninterrupted strategy on the primary outcomes of thromboembolic events (risk ratio (RR) 1.76, 95% confidence interval (CI) 0.33 to 9.46; I2 = 59%; 6 studies, 3468 participants; very low-certainty evidence). 
However, subgroup analysis showed that uninterrupted vitamin K antagonist (VKA) is associated with a lower risk of thromboembolic events without increasing the risk of bleeding. There is also uncertainty on the outcome of major bleeding events (RR 1.10, 95% CI 0.59 to 2.05; I2 = 6%; 10 studies, 4584 participants; low-certainty evidence). The uncertainty was also evident for the secondary outcomes of minor bleeding (RR 1.01, 95% CI 0.46 to 2.22; I2 = 87%; 9 studies, 3843 participants; very low-certainty evidence), all-cause mortality (RR 0.34, 95% CI 0.01 to 8.21; 442 participants; low-certainty evidence) and asymptomatic thromboembolic events (RR 1.45, 95% CI 0.85 to 2.47; I2 = 56%; 6 studies, 1268 participants; very low-certainty evidence). There was a lower risk of the composite endpoint of thromboembolic events (stroke, systemic embolism, major bleeding, and all-cause mortality) in the interrupted compared to uninterrupted arm (RR 0.23, 95% CI 0.07 to 0.81; 1 study, 442 participants; low-certainty evidence).\nIn general, the low event rates, different comparator anticoagulants, and use of different ablation procedures may be the cause of imprecision and heterogeneity observed.\nAuthors' conclusions\nThis meta-analysis showed that the evidence is uncertain to inform the decision to either interrupt or continue anticoagulation therapy around CA procedure in adults with arrhythmia on outcomes of thromboembolic events, major and minor bleeding, all-cause mortality, asymptomatic thromboembolic events, and a composite endpoint of thromboembolic events (stroke, systemic embolism, major bleeding, and all-cause mortality).\nMost studies in the review adopted a minimal interruption strategy which has the advantage of reducing the risk of bleeding while maintaining a lower level of anticoagulation to prevent periprocedural thromboembolism, hence low event rates on the primary outcomes of thromboembolism and bleeding. The one study that adopted a complete interruption of VKA showed that uninterrupted VKA reduces the risk of thromboembolism without increasing the risk of bleeding. Hence, future trials with larger samples, tailored to a more generalisable population and using homogeneous periprocedural anticoagulant therapy and ablation source are required to address the safety and efficacy of the optimal management of anticoagulant therapy prior to ablation.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched scientific databases and identified 12 clinical trials involving 4714 participants aged 18 years and older with atrial fibrillation. The participants were randomly (like a flip of a coin) put into one of two or more groups. The groups were: continuing the blood thinner at the time of the procedure or stopping the blood thinner either for several days or by one or two doses before the procedure with or without giving additional heparin.\nThe trials reported on the risks of stroke, major bleeding, and minor bleeding after a follow-up period of about 30 days after the procedure. The included trials examined different ablation procedures as well as different blood thinners, including Coumadin and the new oral blood thinners such as apixaban, rivaroxaban, edoxaban, and dabigatran. Ten trials were conducted in East Asian populations (Japan, China, and South Korea). Drug companies funded two trials. The evidence is current to 5 January 2021."
}
] |
query-laysum
|
22969
|
[
{
"role": "user",
"content": "Abstract: Background\nKnee arthroscopy (KA) is a routine orthopedic procedure recommended to repair cruciate ligaments and meniscus injuries and, in suitable cases, to assist the diagnosis of persistent knee pain. There is a small risk of thromboembolic events associated with KA. This systematic review aims to assess if pharmacological or non-pharmacological interventions may reduce this risk. This is an update of an earlier Cochrane Review.\nObjectives\nTo evaluate the efficacy and safety of interventions – whether mechanical, pharmacological, or a combination of both – for thromboprophylaxis in adults undergoing KA.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was 1 June 2021.\nSelection criteria\nWe included randomized controlled trials (RCTs) and controlled clinical trials (CCTs), blinded or unblinded, of all types of interventions used to prevent deep vein thrombosis (DVT) in men and women aged 18 years and older undergoing KA.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were pulmonary embolism (PE), symptomatic DVT, asymptomatic DVT, and all-cause mortality. Our secondary outcomes were adverse effects, major bleeding, and minor bleeding. We used GRADE criteria to assess the certainty of the evidence.\nMain results\nWe did not identify any new studies for this update. This review includes eight studies involving 3818 adults with no history of thromboembolic disease. Five studies compared daily subcutaneous low-molecular-weight heparin (LMWH) versus no prophylaxis; one study compared oral rivaroxaban 10 mg versus placebo; one study compared daily subcutaneous LMWH versus graduated compression stockings; and one study compared aspirin versus no prophylaxis.\nThe incidence of PE in all studies combined was low, with seven cases in 3818 participants. There were no deaths in any of the intervention or control groups.\nLow-molecular-weight heparin versus no prophylaxis\nWhen compared with no prophylaxis, LMWH probably results in little to no difference in the incidence of PE in people undergoing KA (risk ratio [RR] 1.81, 95% confidence interval [CI] 0.49 to 6.65; 3 studies, 1820 participants; moderate-certainty evidence). LMWH may make little or no difference to the incidence of symptomatic DVT (RR 0.61, 95% CI 0.18 to 2.03; 4 studies, 1848 participants; low-certainty evidence). It is uncertain whether LMWH reduces the risk of asymptomatic DVT (RR 0.14, 95% CI 0.03 to 0.61; 2 studies, 369 participants; very low-certainty evidence). LMWH probably makes little or no difference to the risk of all adverse effects combined (RR 1.85, 95% CI 0.95 to 3.59; 5 studies, 1978 participants; moderate-certainty evidence), major bleeding (RR 0.98, 95% CI 0.06 to 15.72; 1451 participants; moderate-certainty evidence), or minor bleeding (RR 1.79, 95% CI 0.84 to 3.84; 5 studies, 1978 participants; moderate-certainty evidence).\nRivaroxaban versus placebo\nOne study with 234 participants compared oral rivaroxaban 10 mg versus placebo. There were no cases of PE reported. Rivaroxaban probably led to little or no difference in symptomatic DVT (RR 0.16, 95% CI 0.02 to 1.29; moderate-certainty evidence). It is uncertain whether rivaroxaban reduces the risk of asymptomatic DVT because the certainty of the evidence is very low (RR 0.95, 95% CI 0.06 to 15.01). The study only reported bleeding adverse effects. 
No major bleeds occurred in either group, and rivaroxaban probably made little or no difference to minor bleeding (RR 0.63, 95% CI 0.18 to 2.19; moderate-certainty evidence).\nAspirin versus no prophylaxis\nOne study compared aspirin with no prophylaxis. There were no PE, DVT or asymptomatic events detected in either group. The study authors reported adverse effects including pain and swelling, but without clarifying which groups these occurred in. There were no bleeds reported.\nLow-molecular-weight heparin versus compression stockings\nOne study with 1317 participants compared LMWH versus compression stockings. LMWH may lead to little or no difference in the risk of PE compared to compression stockings (RR 1.00, 95% CI 0.14 to 7.05; low-certainty evidence), but it may reduce the risk of symptomatic DVT (RR 0.17, 95% CI 0.04 to 0.75; low-certainty evidence). It is uncertain whether LMWH has any effect on asymptomatic DVT (RR 0.47, 95% CI 0.21 to 1.09; very low-certainty evidence). The results suggest LMWH probably leads to little or no difference in major bleeding (RR 3.01, 95% CI 0.61 to 14.88; moderate-certainty evidence), or minor bleeding (RR 1.16, 95% CI 0.64 to 2.08; moderate-certainty evidence).\nWe downgraded the certainty of the evidence for imprecision due to overall small event numbers, for risk of bias due to concerns about lack of blinding, and for indirectness due to uncertainty about the direct clinical relevance of asymptomatic DVT detection.\nAuthors' conclusions\nThere is a small risk that healthy adults undergoing KA will develop venous thromboembolism (PE or DVT). We found moderate- to low-certainty evidence of little or no benefit from LMWH, or rivaroxaban in reducing this small risk of PE or symptomatic DVT. The studies provided very low-certainty evidence that LMWH may reduce the risk of asymptomatic DVT compared to no prophylaxis, but it is uncertain how this directly relates to incidence of DVT or PE in healthy people undergoing KA. There is probably little or no difference in adverse effects (including major and minor bleeding), but data relating to these outcomes were limited by low numbers of events in the studies reporting these outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Key message\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "• We did not find clear evidence that the use of clot-preventing medications (low-molecular-weight heparin (LMWH), aspirin or rivaroxaban) in healthy adults who undergo minimally invasive knee surgery can reduce the small risk of blood clots developing in their deep veins, or the small risk that these blood clots travel to a blood vessel in their lungs.\n• We did not find any clear evidence that LMWH, aspirin or rivaroxaban cause harmful effects such as bleeding in these people."
}
] |
query-laysum
|
22970
|
[
{
"role": "user",
"content": "Abstract: Background\nProton pump inhibitors (PPIs) are a class of medications that reduce acid secretion and are used for treating many conditions such as gastroesophageal reflux disease (GERD), dyspepsia, reflux esophagitis, peptic ulcer disease, and hypersecretory conditions (e.g. Zollinger-Ellison syndrome), and as part of the eradication therapy for Helicobacter pylori bacteria. However, approximately 25% to 70% of people are prescribed a PPI inappropriately. Chronic PPI use without reassessment contributes to polypharmacy and puts people at risk of experiencing drug interactions and adverse events (e.g. Clostridium difficile infection, pneumonia, hypomagnesaemia, and fractures).\nObjectives\nTo determine the effects (benefits and harms) associated with deprescribing long-term PPI therapy in adults, compared to chronic daily use (28 days or greater).\nSearch methods\nWe searched the following databases: Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 10), MEDLINE, Embase, clinicaltrials.gov, and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP). The last date of search was November 2016. We handsearched the reference lists of relevant studies. We screened 2357 articles (2317 identified through search strategy, 40 through other resources). Of these articles, we assessed 89 for eligibility.\nSelection criteria\nWe included randomized controlled trials (RCTs) and quasi-randomized trials comparing at least one deprescribing modality (e.g. stopping PPI or reducing PPI) with a control consisting of no change in continuous daily PPI use in adult chronic users. Outcomes of interest were: change in gastrointestinal (GI) symptoms, drug burden/PPI use, cost/resource use, negative and positive drug withdrawal events, and participant satisfaction.\nData collection and analysis\nTwo review authors independently reviewed and extracted data and completed the risk of bias assessment. A third review author independently confirmed risk of bias assessment. We used Review Manager 5 software for data analysis. We contacted study authors if there was missing information.\nMain results\nThe review included six trials (n = 1758). Trial participants were aged 48 to 57 years, except for one trial that had a mean age of 73 years. All participants were from the outpatient setting and had either nonerosive reflux disease or milder grades of esophagitis (LA grade A or B). Five trials investigated on-demand deprescribing and one trial examined abrupt discontinuation. There was low quality evidence that on-demand use of PPI may increase risk of 'lack of symptom control' compared with continuous PPI use (risk ratio (RR) 1.71, 95% confidence interval (CI) 1.31 to 2.21), thereby favoring continuous PPI use (five trials, n = 1653). There was a clinically significant reduction in 'drug burden', measured as PPI pill use per week with on-demand therapy (mean difference (MD) -3.79, 95% CI -4.73 to -2.84), favoring deprescribing based on moderate quality evidence (four trials, n = 1152). There was also low quality evidence that on-demand PPI use may be associated with reduced participant satisfaction compared with continuous PPI use. None of the included studies reported cost/resource use or positive drug withdrawal effects.\nAuthors' conclusions\nIn people with mild GERD, on-demand deprescribing may lead to an increase in GI symptoms (e.g. dyspepsia, regurgitation) and probably a reduction in pill burden. 
There was a decline in participant satisfaction, although heterogeneity was high. There were insufficient data to make a conclusion regarding long-term benefits and harms of PPI discontinuation, although two trials (one on-demand trial and one abrupt discontinuation trial) reported endoscopic findings in their intervention groups at study end.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Overall, the quality of evidence for this review ranged from very low to moderate. Trials were inconsistent with how they reported symptom control. There were also limitations in how the studies were conducted (e.g. participants and investigators may have known which medicine they received), which lowered the quality of evidence. Other contributing factors included small sample sizes for most trials and inconsistent results between studies."
}
] |
query-laysum
|
22971
|
[
{
"role": "user",
"content": "Abstract: Background\nAnxiety frequently coexists with depression and adding benzodiazepines to antidepressant treatment is common practice to treat people with major depression. However, more evidence is needed to determine whether this combined treatment is more effective and not any more harmful than antidepressants alone. It has been suggested that benzodiazepines may lose their efficacy with long-term administration and their chronic use carries risks of dependence.\nThis is the 2019 updated version of a Cochrane Review first published in 2001, and previously updated in 2005. This update follows a new protocol to conform with the most recent Cochrane methodology guidelines, with the inclusion of 'Summary of findings' tables and GRADE evaluations for quality of evidence.\nObjectives\nTo assess the effects of combining antidepressants with benzodiazepines compared with antidepressants alone for major depression in adults.\nSearch methods\nWe searched the Cochrane Common Mental Disorders Group's Controlled Trials Register (CCMDCTR), the Cochrane Central Register of Controlled Trials, MEDLINE, Embase and PsycINFO to May 2019. We searched the World Health Organization (WHO) trials portal and ClinicalTrials.gov to identify any additional unpublished or ongoing studies.\nSelection criteria\nAll randomised controlled trials that compared combined antidepressant plus benzodiazepine treatment with antidepressants alone for adults with major depression. We excluded studies administering psychosocial therapies targeted at depression and anxiety disorders concurrently. Antidepressants had to be prescribed, on average, at or above the minimum effective dose as presented by Hansen 2009 or according to the North American or European regulations. The combination therapy had to last at least four weeks.\nData collection and analysis\nTwo review authors independently extracted data and assessed risk of bias in the included studies, according to the criteria of the Cochrane Handbook for Systematic Reviews of Interventions. We entered data into Review Manager 5. We used intention-to-treat data. We combined continuous outcome variables of depressive and anxiety severity using standardised mean differences (SMD) with 95% confidence intervals (CIs). For dichotomous efficacy outcomes, we calculated the risk ratio (RR) with 95% CI. Regarding the primary outcome of acceptability, only overall dropout rates were available for all studies.\nMain results\nWe identified 10 studies published between 1978 to 2002 involving 731 participants. Six studies used tricyclic antidepressants (TCAs), two studies used selective serotonin reuptake inhibitors (SSRIs), one study used another heterocyclic antidepressant and one study used TCA or heterocyclic antidepressant.\nCombined therapy of benzodiazepines plus antidepressants was more effective than antidepressants alone for depressive severity in the early phase (four weeks) (SMD –0.25, 95% CI –0.46 to –0.03; 10 studies, 598 participants; moderate-quality evidence), but there was no difference between treatments in the acute phase (five to 12 weeks) (SMD –0.18, 95% CI –0.40 to 0.03; 7 studies, 347 participants; low-quality evidence) or in the continuous phase (more than 12 weeks) (SMD –0.21, 95% CI –0.76 to 0.35; 1 study, 50 participants; low-quality evidence). 
For acceptability of treatment, there was no difference in the dropouts due to any reason between combined therapy and antidepressants alone (RR 0.76, 95% CI 0.54 to 1.07; 10 studies, 731 participants; moderate-quality evidence).\nFor response in depression, combined therapy was more effective than antidepressants alone in the early phase (RR 1.34, 95% CI 1.13 to 1.58; 10 studies, 731 participants), but there was no evidence of a difference in the acute phase (RR 1.12, 95% CI 0.93 to 1.35; 7 studies, 383 participants) or in the continuous phase (RR 0.97, 95% CI 0.73 to 1.29; 1 study, 52 participants). For remission in depression, combined therapy was more effective than antidepressants alone in the early phase (RR 1.39, 95% CI 1.03 to 1.90, 10 studies, 731 participants), but there was no evidence of a difference in the acute phase (RR 1.27, 95% CI 0.99 to 1.63; 7 studies, 383 participants) or in the continuous phase (RR 1.31, 95% CI 0.80 to 2.16; 1 study, 52 participants). There was no evidence of a difference between combined therapy and antidepressants alone for anxiety severity in the early phase (SMD –0.76, 95% CI –1.67 to 0.14; 3 studies, 129 participants) or in the acute phase (SMD –0.48, 95% CI –1.06 to 0.10; 3 studies, 129 participants). No studies measured severity of insomnia. In terms of adverse effects, the dropout rates due to adverse events were lower for combined therapy than for antidepressants alone (RR 0.54, 95% CI 0.32 to 0.90; 10 studies, 731 participants; moderate-quality evidence). However, participants in the combined therapy group reported at least one adverse effect more often than participants who received antidepressants alone (RR 1.12, 95% CI 1.01 to 1.23; 7 studies, 510 participants; moderate-quality evidence).\nMost domains of risk of bias in the majority of the included studies were unclear. Random sequence generation, allocation concealment, blinding and selective outcome reporting were problematic due to insufficient details reported in most of the included studies and lack of availability of the study protocols. The greatest limitation in the quality of evidence was issues with attrition.\nAuthors' conclusions\nCombined antidepressant plus benzodiazepine therapy was more effective than antidepressants alone in improving depression severity, response in depression and remission in depression in the early phase. However, these effects were not maintained in the acute or the continuous phase. Combined therapy resulted in fewer dropouts due to adverse events than antidepressants alone, but combined therapy was associated with a greater proportion of participants reporting at least one adverse effect.\nThe moderate quality evidence of benefits of adding a benzodiazepine to an antidepressant in the early phase must be balanced judiciously against possible harms and consideration given to other alternative treatment strategies when antidepressant monotherapy may be considered inadequate. We need long-term, pragmatic randomised controlled trials to compare combination therapy against the monotherapy of antidepressant in major depression.\n\nGiven the provided abstract, please respond to the following query: \"Why is this review important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Major depression is characterised by depressed mood, loss of interest or pleasure, diminished energy, fatigue, difficulties with concentration, changes in appetite, sleep disturbances and morbid thoughts of death. Depression often presents with anxiety. Depression and anxiety have negative impacts on the person and on society, often over the long term."
}
] |
query-laysum
|
22972
|
[
{
"role": "user",
"content": "Abstract: Background\nCough causes concern for parents and is a major cause of outpatient visits. Cough can impact quality of life, cause anxiety, and affect sleep in children and their parents. Honey has been used to alleviate cough symptoms. This is an update of reviews previously published in 2014, 2012, and 2010.\nObjectives\nTo evaluate the effectiveness of honey for acute cough in children in ambulatory settings.\nSearch methods\nWe searched CENTRAL (2018, Issue 2), which includes the Cochrane Acute Respiratory Infections Group's Specialised Register, MEDLINE (2014 to 8 February 2018), Embase (2014 to 8 February 2018), CINAHL (2014 to 8 February 2018), EBSCO (2014 to 8 February 2018), Web of Science (2014 to 8 February 2018), and LILACS (2014 to 8 February 2018). We also searched ClinicalTrials.gov and the World Health Organization International Clinical Trial Registry Platform (WHO ICTRP) on 12 February 2018. The 2014 review included searches of AMED and CAB Abstracts, but these were not searched for this update due to lack of institutional access.\nSelection criteria\nRandomised controlled trials comparing honey alone, or in combination with antibiotics, versus no treatment, placebo, honey-based cough syrup, or other over-the-counter cough medications for children aged 12 months to 18 years for acute cough in ambulatory settings.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nWe included six randomised controlled trials involving 899 children; we added three studies (331 children) in this update.\nWe assessed two studies as at high risk of performance and detection bias; three studies as at unclear risk of attrition bias; and three studies as at unclear risk of other bias.\nStudies compared honey with dextromethorphan, diphenhydramine, salbutamol, bromelin (an enzyme from the Bromeliaceae (pineapple) family), no treatment, and placebo. Five studies used 7-point Likert scales to measure symptomatic relief of cough; one used an unclear 5-point scale. In all studies, low score indicated better cough symptom relief.\nUsing a 7-point Likert scale, honey probably reduces cough frequency better than no treatment or placebo (no treatment: mean difference (MD) -1.05, 95% confidence interval (CI) -1.48 to -0.62; I² = 0%; 2 studies; 154 children; moderate-certainty evidence; placebo: MD -1.62, 95% CI -3.02 to -0.22; I² = 0%; 2 studies; 402 children; moderate-certainty evidence). Honey may have a similar effect as dextromethorphan in reducing cough frequency (MD -0.07, 95% CI -1.07 to 0.94; I² = 87%; 2 studies; 149 children; low-certainty evidence). Honey may be better than diphenhydramine in reducing cough frequency (MD -0.57, 95% CI -0.90 to -0.24; 1 study; 80 children; low-certainty evidence).\nGiving honey for up to three days is probably more effective in relieving cough symptoms compared with placebo or salbutamol. Beyond three days honey probably had no advantage over salbutamol or placebo in reducing cough severity, bothersome cough, and impact of cough on sleep for parents and children (moderate-certainty evidence). 
With a 5-point cough scale, there was probably little or no difference between the effects of honey and bromelin mixed with honey in reducing cough frequency and severity.\nAdverse events included nervousness, insomnia, and hyperactivity, experienced by seven children (9.3%) treated with honey and two children (2.7%) treated with dextromethorphan (risk ratio (RR) 2.94, 95% CI 0.74 to 11.71; I² = 0%; 2 studies; 149 children; low-certainty evidence). Three children (7.5%) in the diphenhydramine group experienced somnolence (RR 0.14, 95% CI 0.01 to 2.68; 1 study; 80 children; low-certainty evidence). When honey was compared with placebo, 34 children (12%) in the honey group and 13 (11%) in the placebo group complained of gastrointestinal symptoms (RR 1.91, 95% CI 1.12 to 3.24; I² = 0%; 2 studies; 402 children; moderate-certainty evidence). Four children who received salbutamol had rashes compared to one child in the honey group (RR 0.19, 95% CI 0.02 to 1.63; 1 study; 100 children; moderate-certainty evidence). No adverse events were reported in the no-treatment group.\nAuthors' conclusions\nHoney probably relieves cough symptoms to a greater extent than no treatment, diphenhydramine, and placebo, but may make little or no difference compared to dextromethorphan. Honey probably reduces cough duration better than placebo and salbutamol. There was no strong evidence for or against using honey. Most of the children received treatment for one night, which is a limitation to the results of this review. There was no difference in occurrence of adverse events between the honey and control arms.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Cough causes concern for parents and is a major reason for outpatient visits. Honey is believed to prevent growth of germs and reduce inflammation."
}
] |
query-laysum
|
22973
|
[
{
"role": "user",
"content": "Abstract: Background\nJaundice is a very common condition in newborns, affecting up to 60% of term newborns and 80% of preterm newborns in the first week of life. Jaundice is caused by increased bilirubin in the blood from the breakdown of red blood cells. The gold standard for measuring bilirubin levels is obtaining a blood sample and processing it in a laboratory. However, noninvasive transcutaneous bilirubin (TcB) measurement devices are widely available and used in many settings to estimate total serum bilirubin (TSB) levels.\nObjectives\nTo determine the diagnostic accuracy of transcutaneous bilirubin measurement for detecting hyperbilirubinaemia in newborns.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL and trial registries up to 18 August 2022. We also checked the reference lists of all included studies and relevant systematic reviews for other potentially eligible studies.\nSelection criteria\nWe included cross-sectional and prospective cohort studies that evaluated the accuracy of any TcB device compared to TSB measurement in term or preterm newborn infants (0 to 28 days postnatal age). All included studies provided sufficient data and information to create a 2 × 2 table for the calculation of measures of diagnostic accuracy, including sensitivities and specificities. We excluded studies that only reported correlation coefficients.\nData collection and analysis\nTwo review authors independently applied the eligibility criteria to all citations from the search and extracted data from the included studies using a standard data extraction form. We summarised the available results narratively and, where possible, we combined study data in a meta-analysis.\nMain results\nWe included 23 studies, involving 5058 participants. All studies had low risk of bias as measured by the QUADAS 2 tool. The studies were conducted in different countries and settings, included newborns of different gestational and postnatal ages, compared various TcB devices (including the JM 101, JM 102, JM 103, BiliChek, Bilitest and JH20-1C) and used different cutoff values for a positive result. In most studies, the TcB measurement was taken from the forehead, sternum, or both. The sensitivity of various TcB cutoff values to detect significant hyperbilirubinaemia ranged from 74% to 100%, and specificity ranged from 18% to 89%.\nAuthors' conclusions\nThe high sensitivity of TcB to detect hyperbilirubinaemia suggests that TcB devices are reliable screening tests for ruling out hyperbilirubinaemia in newborn infants. Positive test results would require confirmation through serum bilirubin measurement.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found 23 studies (5058 participants) that were conducted in different countries and settings, used different transcutaneous bilirubin measuring devices, and defined hyperbilirubinaemia with different bilirubin values. Some of the infants were premature and others were born at term (from 37 weeks' pregnancy); their ages ranged from birth to one month of life. Overall, the findings of the studies suggest that transcutaneous bilirubin measurement is a good screening tool for detecting hyperbilirubinaemia in newborns. The included studies found different degrees or levels of accuracy for the use of transcutaneous bilirubin measurement. However, due to the differences between studies, we could not provide an overall combined summary of the accuracy of the different tests. The differences in these studies included factors like the threshold values for hyperbilirubinaemia, the types of transcutaneous bilirubin measuring devices, and age and ethnicity/skin colour of the included infants."
}
] |
query-laysum
|
22974
|
[
{
"role": "user",
"content": "Abstract: Background\nEvidence from animal models and observational studies in humans has suggested that there is an inverse relationship between dietary intake of omega 3 long-chain polyunsaturated fatty acids (LCPUFA) and risk of developing age-related macular degeneration (AMD) or progression to advanced AMD.\nObjectives\nTo review the evidence that increasing the levels of omega 3 LCPUFA in the diet (either by eating more foods rich in omega 3 or by taking nutritional supplements) prevents AMD or slows the progression of AMD.\nSearch methods\nWe searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2015, Issue 1), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to February 2015), EMBASE (January 1980 to February 2015), Latin American and Caribbean Health Sciences Literature Database (LILACS) (January 1982 to February 2015), the ISRCTN registry (www.isrctn.com/editAdvancedSearch), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 2 February 2015.\nSelection criteria\nWe included randomised controlled trials (RCTs) where increased dietary intake of omega 3 LCPUFA was compared to placebo or no intervention with the aim of preventing the development of AMD, or slowing its progression.\nData collection and analysis\nBoth authors independently selected studies, assessed them for risk of bias and extracted data. One author entered data into RevMan 5 and the other author checked the data entry. We conducted a meta-analysis for one primary outcome, progression of AMD, using a fixed-effect inverse variance model.\nMain results\nWe included two RCTs in this review, in which 2343 participants with AMD were randomised to receive either omega 3 fatty acid supplements or a placebo. The trials, which had a low risk of bias, were conducted in the USA and France. Overall, there was no evidence that people who took omega 3 fatty acid supplements were at decreased (or increased risk) of progression to advanced AMD (pooled hazard ratio (HR) 0.96, 95% confidence interval (CI) 0.84 to 1.10, high quality evidence). Similarly, people taking these supplements were no more (or less) likely to lose 15 or more letters of visual acuity (USA study HR 0.96, 95% CI 0.84 to 1.10; French study at 36 months risk ratio (RR) 1.25, 95% CI 0.69 to 2.26, participants = 230). The number of adverse events was similar in the intervention and placebo groups (USA study participants with one or more serious adverse event RR 1.00, 95% CI 0.91 to 1.09, participants = 2080; French study total adverse events RR 1.05, 95% CI 0.97 to 1.13, participants = 263).\nAuthors' conclusions\nThis review found that omega 3 LCPUFA supplementation in people with AMD for periods up to five years does not reduce the risk of progression to advanced AMD or the development of moderate to severe visual loss. No published randomised trials were identified on dietary omega 3 fatty acids for primary prevention of AMD. Currently available evidence does not support increasing dietary intake of omega 3 LCPUFA for the explicit purpose of preventing or slowing the progression of AMD.\n\nGiven the provided abstract, please respond to the following query: \"Background\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Age-related macular degeneration (AMD) is an eye condition that affects the central area of the retina (light-sensitive tissue at the back of the eye). AMD is associated with a loss of detailed vision and adversely affects tasks such as reading, driving and face recognition. In the absence of a cure, there has been considerable interest in the role of modifiable risk factors to prevent or slow down the progression of AMD. Evidence from population studies suggests that people who have a diet with relatively high levels of omega 3 fatty acids (such as those derived from fish oils) are less likely to develop AMD."
}
] |
query-laysum
|
22975
|
[
{
"role": "user",
"content": "Abstract: Background\nContraception provides significant benefits for women's and children's health, yet many women have an unmet need for contraception. Rapid expansion in the use of mobile phones in recent years has had a dramatic impact on interpersonal communication. Within the health domain text messages and smartphone applications offer means of communication between clients and healthcare providers. This review focuses on interventions delivered by mobile phone and their effect on use of contraception.\nObjectives\nTo evaluate the benefits and harms of mobile phone-based interventions for improving contraception use.\nSearch methods\nWe used standard, extensive Cochrane search methods. The latest search date was August 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) of mobile phone-based interventions to improve forms of contraception use amongst users or potential users of contraception.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcomes were 1. uptake of contraception, 2. uptake of a specific method of contraception, 3. adherence to contraception method, 4. safe method switching, 5. discontinuation of contraception and 6. pregnancy or abortion. Our secondary outcomes were 7. road traffic accidents, 8. any physical or psychological effect reported and 9. violence or domestic abuse.\nMain results\nTwenty-three RCTs (12,793 participants) from 11 countries met our inclusion criteria. Eleven studies were conducted in high-income resource settings and 12 were in low-income settings. Thirteen studies used unidirectional text messaging-based interventions, six studies used interactive text messaging, four used voice message-based interventions and two used mobile-phone apps to improve contraception use. All studies received funding from non-commercial bodies.\nMobile phone-based interventions probably increase contraception use compared to the control (odds ratio (OR) 1.30, 95% confidence interval (CI) 1.06 to 1.60; 16 studies, 8972 participants; moderate-certainty evidence).\nThere may be little or no difference in rates of unintended pregnancy with the use of mobile phone-based interventions compared to control (OR 0.82, 95% CI 0.48 to 1.38; 8 trials, 2947 participants; moderate-certainty evidence).\nSubgroup analysis assessing unidirectional mobile phone interventions versus interactive mobile phone interventions found evidence of a difference between the subgroups favouring interactive interventions (P = 0.003, I2 = 88.5%). Interactive interventions had an OR of 1.71 (95% CI 1.28 to 2.29; P = 0.0003, I2 = 63%; 8 trials, 3089 participants) whilst unidirectional interventions had an OR of 1.03 (95% CI 0.87 to 1.22; P = 0.72, I2 = 17%; 9 trials, 5883 participants).\nSubgroup analysis assessing high-income versus low-income trial settings found no difference between groups (subgroup difference test: P = 0.70, I2 = 0%).\nOnly six trials reported on safety and unintended outcomes; one trial reported increased partner violence whilst another four trials reported no difference in physical violence rates between control and intervention groups. One trial reported no road traffic accidents with mobile phone intervention use.\nAuthors' conclusions\nThis review demonstrates there is evidence to support the use of mobile phone-based interventions in improving the use of contraception, with moderate-certainty evidence. 
Interactive mobile phone interventions appear more effective than unidirectional methods.\nThe cost-effectiveness, cost benefits, safety and long-term effects of these interventions remain unknown, as does the evidence of this approach to support contraception use among specific populations.\nFuture research should investigate the effectiveness and safety of mobile phone-based interventions with better quality trials to help establish the effects of interventions delivered by mobile phone on contraception use. This review is limited by the quality of the studies due to flaws in methodology, bias or imprecision of results.\n\nGiven the provided abstract, please respond to the following query: \"How did we identify and evaluate the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched medical databases for studies that assessed the use of interventions delivered by mobile phones and their impact on the use of contraception. We found 23 trials of 12,793 women undertaken in 11 countries in both high-income (11 studies) and low-income (12 studies) settings. These studies compared the standard of care to a mobile phone intervention – such as one-way text message reminders, interactive messages (which required a response from clients), voice messages or a mobile app."
}
] |
query-laysum
|
22976
|
[
{
"role": "user",
"content": "Abstract: Background\nHydrocephalus is a common neurological disorder, caused by a progressive accumulation of cerebrospinal fluid (CSF) within the intracranial space that can lead to increased intracranial pressure, enlargement of the ventricles (ventriculomegaly) and, consequently, to brain damage. Ventriculo-peritoneal shunt systems are the mainstay therapy for this condition, however there are different types of shunt systems.\nObjectives\nTo compare the effectiveness and adverse effects of conventional and complex shunt devices for CSF diversion in people with hydrocephalus.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (2020 Issue 2); Ovid MEDLINE (1946 to February 2020); Embase (Elsevier) (1974 to February 2020); Latin American and Caribbean Health Science Information Database (LILACS) (1980 to February 2020); ClinicalTrials.gov; and World Health Organization International Clinical Trials Registry Platform.\nSelection criteria\nWe selected randomised controlled trials or quasi-randomised trials of different types of ventriculo-peritoneal shunting devices for people with hydrocephalus. Primary outcomes included: treatment failure, adverse events and mortality.\nData collection and analysis\nTwo review authors screened studies for selection, assessed risk of bias and extracted data. Due to the scarcity of data, we performed a Synthesis Without Meta-analysis (SWiM) incorporating GRADE for the quality of the evidence.\nMain results\nWe included six studies with 962 participants assessing the effects of standard valves compared to anti-syphon valves, other types of standard valves, self-adjusting CSF flow-regulating valves and external differential programmable pressure valves. All included studies started in a hospital setting and offered ambulatory follow-up. Most studies were conducted in infants or children with hydrocephalus from diverse causes. The certainty of the evidence for most comparisons was low to very low.\n1. Standard valve versus anti-syphon valve\nThree studies with 296 randomised participants were included under this comparison. We are uncertain about the incidence of treatment failure in participants with standard valve and anti-syphon valves (very low certainty of the evidence). The incidence of adverse events may be similar in those with standard valves (range 0 to 1.9%) and anti-syphon valves (range 0 to 2.9%) (low certainty of the evidence). Mortality may be similar in those with standard valves (0%) and anti-syphon valves (0.9%) (RD 0.01%, 95% CI -0.02% to 0.03%, low certainty of the evidence). Ventricular size and head circumference may be similar in those with standard valves and anti-syphon valves (low certainty of the evidence). None of the included studies reported the quality of life of participants.\n2. Comparison between different types of standard valves\nTwo studies with 174 randomised participants were included under this comparison. We are uncertain about the incidence of treatment failure in participants with different types of standard valves (early postoperative period: RR 0.41, 95% CI 0.13 to 1.27; at 12 months follow-up: RR 1.17, 95% CI 0.72 to 1.92, very low certainty of the evidence). None of the included studies reported adverse events beyond those included under \"treatment failure\". We are uncertain about the effects of different types of standard valves on mortality (range 2% to 17%, very low certainty of the evidence). 
The included studies did not report the effects of these interventions on quality of life, ventricular size reduction or head circumference.\n3. Standard valve versus self-adjusting CSF flow-regulating valve\nOne study with 229 randomised participants addressed this comparison. The incidence of treatment failure may be similar in those with standard valves (42.98%) and self-adjusting CSF flow-regulating valves (39.13%) (low certainty of the evidence). The incidence of adverse events may be similar in those with standard valves (range 0 to 1.9%) and those with self-adjusting CSF flow-regulating valves (range 0 to 7.2%) (low certainty of the evidence). The included study reported no deaths in either group in the postoperative period. Beyond the early postoperative period, the authors stated that nine patients died (no disaggregated data by each type of intervention was available, low certainty of the evidence). The included studies did not report the effects of these interventions on quality of life, ventricular size reduction or head circumference.\n4. External differential programmable pressure valve versus non-programmable valve\nOne study with 377 randomised participants addressed this comparison. The incidence of treatment failure may be similar in those with programmable valves (52%) and non-programmable valves (52%) (RR 1.02, 95% CI 0.84 to 1.24, low certainty of the evidence). The incidence of adverse events may be similar in those with programmable valves (6.19%) and non-programmable valves (6.01%) (RR 0.97, 95% CI 0.44 to 2.15, low certainty of the evidence). The included study did not report the effect of these interventions on mortality, quality of life or head circumference. Ventricular size reduction may be similar in those with programmable valves and non-programmable valves (low certainty of the evidence).\nAuthors' conclusions\nStandard shunt valves for hydrocephalus compared to anti-syphon or self-adjusting CSF flow-regulating valves may cause little to no difference on the main outcomes of this review, however we are very uncertain due to the low to very low certainty of evidence. Similarly, different types of standard valves and external differential programmable pressure valves versus non-programmable valves may be associated with similar outcomes. Nevertheless, this review did not include valves with the latest technology, for which we need high-quality randomised controlled trials focusing on patient-important outcomes including costs.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The certainty of the evidence was mostly low to very low since the studies were poorly conducted, with a small number of participants. Furthermore, many studies did not report critical outcomes such as mortality."
}
] |
query-laysum
|
22977
|
[
{
"role": "user",
"content": "Abstract: Background\nThe effective control of moisture and microbes is necessary for the success of restoration procedures. The rubber dam, as an isolation method, has been widely used in dental restorative treatments. The effects of rubber dam usage on the longevity and quality of dental restorations still require evidence-based discussion. This review compares the effects of rubber dam with other isolation methods in dental restorative treatments. This is an update of the Cochrane Review first published in 2016.\nObjectives\nTo assess the effects of rubber dam isolation compared with other types of isolation used for direct and indirect restorative treatments in dental patients.\nSearch methods\nCochrane Oral Health's Information specialist searched the following electronic databases: Cochrane Oral Health's Trials Register (searched 13 January 2021), Cochrane Central Register of Controlled Trials (CENTRAL; 2020, Issue 12) in the Cochrane Library (searched 13 January 2021), MEDLINE Ovid (1946 to 13 January 2021), Embase Ovid (1980 to 13 January 2021), LILACS BIREME Virtual Health Library (Latin American and Caribbean Health Science Information database; 1982 to 13 January 2021), and SciELO BIREME Virtual Health Library (1998 to 13 January 2021). We also searched Chinese BioMedical Literature Database (CBM, in Chinese) (1978 to 13 January 2021), VIP database (in Chinese) (1989 to 13 January 2021), and China National Knowledge Infrastructure (CNKI, in Chinese) (1994 to 13 January 2021). We searched ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform, OpenGrey, and Sciencepaper Online (in Chinese) for ongoing trials. There were no restrictions on the language or date of publication when searching the electronic databases.\nSelection criteria\nWe included randomised controlled trials (including split-mouth trials) over one month in length assessing the effects of rubber dam compared with alternative isolation methods for dental restorative treatments.\nData collection and analysis\nTwo review authors independently screened the results of the electronic searches, extracted data, and assessed the risk of bias of the included studies. Disagreement was resolved by discussion. We strictly followed Cochrane's statistical guidelines and assessed the certainty of the evidence using GRADE.\nMain results\nWe included six studies conducted worldwide between 2010 and 2015 involving a total of 1342 participants (of which 233 participants were lost to follow-up). All the included studies were at high risk of bias.\nFive studies compared rubber dam with traditional cotton rolls isolation. One study was excluded from the analysis due to inconsistencies in the presented data. Of the four remaining trials, three reported survival rates of the restorations with a minimum follow-up of six months. Pooled results from two studies involving 192 participants indicated that the use of rubber dam isolation may increase the survival rates of direct composite restorations of non-carious cervical lesions (NCCLs) at six months (odds ratio (OR) 2.29, 95% confidence interval (CI) 1.05 to 4.99; low-certainty evidence). 
However, the use of rubber dam in NCCLs composite restorations may have little to no effect on the survival rates of the restorations compared to cotton rolls at 12 months (OR 1.38, 95% CI 0.45 to 4.28; 1 study, 30 participants; very low-certainty evidence) and at 18 months (OR 1.00, 95% CI 0.45 to 2.25; 1 study, 30 participants; very low-certainty evidence) but the evidence is very uncertain. At 24 months, the use of rubber dam may decrease the risk of failure of the restorations in children undergoing proximal atraumatic restorative treatment in primary molars but the evidence is very uncertain (hazard ratio (HR) 0.80, 95% CI 0.66 to 0.97; 1 study, 559 participants; very low-certainty evidence).\nNone of the included studies mentioned adverse effects or reported the direct cost of the treatment.\nAuthors' conclusions\nThis review found some low-certainty evidence that the use of rubber dam in dental direct restorative treatments may lead to a lower failure rate of the restorations compared with cotton roll usage after six months. At other time points, the evidence is very uncertain. Further high-quality research evaluating the effects of rubber dam usage on different types of restorative treatments is required.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up to date to January 2021."
}
] |
query-laysum
|
22978
|
[
{
"role": "user",
"content": "Abstract: Background\nThe standard way most people are advised to stop smoking is by quitting abruptly on a designated quit day. However, many people who smoke have tried to quit many times and may like to try an alternative method. Reducing smoking behaviour before quitting could be an alternative approach to cessation. However, before this method can be recommended it is important to ensure that abrupt quitting is not more effective than reducing to quit, and to determine whether there are ways to optimise reduction methods to increase the chances of cessation.\nObjectives\nTo assess the effect of reduction-to-quit interventions on long-term smoking cessation.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group Specialised Register, MEDLINE, Embase and PsycINFO for studies, using the terms: cold turkey, schedul*, cut* down, cut-down, gradual*, abrupt*, fading, reduc*, taper*, controlled smoking and smoking reduction. We also searched trial registries to identify unpublished studies. Date of the most recent search: 29 October 2018.\nSelection criteria\nRandomised controlled trials in which people who smoked were advised to reduce their smoking consumption before quitting smoking altogether in at least one trial arm. This advice could be delivered using self-help materials or behavioural support, and provided alongside smoking cessation pharmacotherapies or not. We excluded trials that did not assess cessation as an outcome, with follow-up of less than six months, where participants spontaneously reduced without being advised to do so, where the goal of reduction was not to quit altogether, or where participants were advised to switch to cigarettes with lower nicotine levels without reducing the amount of cigarettes smoked or the length of time spent smoking. We also excluded trials carried out in pregnant women.\nData collection and analysis\nWe followed standard Cochrane methods. Smoking cessation was measured after at least six months, using the most rigorous definition available, on an intention-to-treat basis. We calculated risk ratios (RRs) and 95% confidence intervals (CIs) for smoking cessation for each study, where possible. We grouped eligible studies according to the type of comparison (no smoking cessation treatment, abrupt quitting interventions, and other reduction-to-quit interventions) and carried out meta-analyses where appropriate, using a Mantel-Haenszel random-effects model. We also extracted data on quit attempts, pre-quit smoking reduction, adverse events (AEs), serious adverse events (SAEs) and nicotine withdrawal symptoms, and meta-analysed these where sufficient data were available.\nMain results\nWe identified 51 trials with 22,509 participants. Most recruited adults from the community using media or local advertising. People enrolled in the studies typically smoked an average of 23 cigarettes a day. We judged 18 of the studies to be at high risk of bias, but restricting the analysis only to the five studies at low or to the 28 studies at unclear risk of bias did not significantly alter results.\nWe identified very low-certainty evidence, limited by risk of bias, inconsistency and imprecision, comparing the effect of reduction-to-quit interventions with no treatment on cessation rates (RR 1.74, 95% CI 0.90 to 3.38; I2 = 45%; 6 studies, 1599 participants). However, when comparing reduction-to-quit interventions with abrupt quitting (standard care) we found evidence that neither approach resulted in superior quit rates (RR 1. 
01, 95% CI 0.87 to 1.17; I2 = 29%; 22 studies, 9219 participants). We judged this estimate to be of moderate certainty, due to imprecision. Subgroup analysis provided some evidence (P = 0.01, I2 = 77%) that reduction-to-quit interventions may result in more favourable quit rates than abrupt quitting if varenicline is used as a reduction aid. Our analysis comparing reduction using pharmacotherapy with reduction alone found low-certainty evidence, limited by inconsistency and imprecision, that reduction aided by pharmacotherapy resulted in higher quit rates (RR 1. 68, 95% CI 1.09 to 2.58; I2 = 78%; 11 studies, 8636 participants). However, a significant subgroup analysis (P < 0.001, I2 = 80% for subgroup differences) suggests that this may only be true when fast-acting NRT or varenicline are used (both moderate-certainty evidence) and not when nicotine patch, combination NRT or bupropion are used as an aid (all low- or very low-quality evidence). More evidence is likely to change the interpretation of the latter effects.\nAlthough there was some evidence from within-study comparisons that behavioural support for reduction to quit resulted in higher quit rates than self-help resources alone, the relative efficacy of various other characteristics of reduction-to-quit interventions investigated through within- and between-study comparisons did not provide any evidence that they enhanced the success of reduction-to-quit interventions. Pre-quit AEs, SAEs and nicotine withdrawal symptoms were measured variably and infrequently across studies. There was some evidence that AEs occurred more frequently in studies that compared reduction using pharmacotherapy versus no pharmacotherapy; however, the AEs reported were mild and usual symptoms associated with NRT use. There was no clear evidence that the number of people reporting SAEs, or changes in withdrawal symptoms, differed between trial arms.\nAuthors' conclusions\nThere is moderate-certainty evidence that neither reduction-to-quit nor abrupt quitting interventions result in superior long-term quit rates when compared with one another. Evidence comparing the efficacy of reduction-to-quit interventions with no treatment was inconclusive and of low certainty. There is also low-certainty evidence to suggest that reduction-to-quit interventions may be more effective when pharmacotherapy is used as an aid, particularly fast-acting NRT or varenicline (moderate-certainty evidence). Evidence for any adverse effects of reduction-to-quit interventions was sparse, but available data suggested no excess of pre-quit SAEs or withdrawal symptoms. We downgraded the evidence across comparisons due to risk of bias, inconsistency and imprecision. Future research should aim to match any additional components of multicomponent reduction-to-quit interventions across study arms, so that the effect of reduction can be isolated. In particular, well-conducted, adequately-powered studies should focus on investigating the most effective features of reduction-to-quit interventions to maximise cessation rates.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "There is very low-quality evidence looking at whether cutting down smoking before quitting helps more people to quit smoking than no treatment. We rated the quality as very low, as there were problems with the design of studies, findings of studies were very different from one another, and not enough people took part, making it difficult to tell whether cutting down helps people to quit smoking. However, there is moderate-certainty evidence that cutting down before quitting may result in similar quit rates to quitting all at once, which suggests that cutting down may be a helpful approach. We rated this evidence as moderate because there is a chance that future studies may find that cutting down helps slightly more or slightly fewer people to quit than when people quit all at once. There is also moderate-quality evidence that people may be more likely to quit by cutting down first when they use a stop-smoking medicine like varenicline or a type of fast-acting NRT to help them. We rated this evidence as moderate certainty because there were not enough people taking part; more studies are needed."
}
] |
query-laysum
|
22979
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic obstructive pulmonary disease (COPD) is a respiratory condition causing accumulation of mucus in the airways, cough, and breathlessness; the disease is progressive and is the fourth most common cause of death worldwide. Current treatment strategies for COPD are multi-modal and aim to reduce morbidity and mortality and increase patients' quality of life by slowing disease progression and preventing exacerbations. Fixed-dose combinations (FDCs) of a long-acting beta2-agonist (LABA) plus a long-acting muscarinic antagonist (LAMA) delivered via a single inhaler are approved by regulatory authorities in the USA, Europe, and Japan for the treatment of COPD. Several LABA/LAMA FDCs are available and recent meta-analyses have clarified their utility versus their mono-components in COPD. Evaluation of the efficacy and safety of once-daily LABA/LAMA FDCs versus placebo will facilitate the comparison of different FDCs in future network meta-analyses.\nObjectives\nWe assessed the evidence for once-daily LABA/LAMA combinations (delivered in a single inhaler) versus placebo on clinically meaningful outcomes in patients with stable COPD.\nSearch methods\nWe identified trials from Cochrane Airways' Specialised Register (CASR) and also conducted a search of the US National Institutes of Health Ongoing Trials Register ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization International Clinical Trials Registry Platform (apps.who.int/trialsearch). We searched CASR and trial registries from their inception to 3 December 2018; we imposed no restriction on language of publication.\nSelection criteria\nWe included parallel-group and cross-over randomised controlled trials (RCTs) comparing once-daily LABA/LAMA FDC versus placebo. We included studies reported as full-text, those published as abstract only, and unpublished data. We excluded very short-term trials with a duration of less than 3 weeks. We included adults (≥ 40 years old) with a diagnosis of stable COPD. We included studies that allowed participants to continue using their ICS during the trial as long as the ICS was not part of the randomised treatment.\nData collection and analysis\nTwo review authors independently screened the search results to determine included studies, extracted data on prespecified outcomes of interest, and assessed the risk of bias of included studies; we resolved disagreements by discussion with a third review author. Where possible, we used a random-effects model to meta-analyse extracted data. We rated all outcomes using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) system and presented results in 'Summary of findings’ tables.\nMain results\nWe identified and included 22 RCTs randomly assigning 8641 people with COPD to either once-daily LABA/LAMA FDC (6252 participants) or placebo (3819 participants); nine studies had a cross-over design. Studies had a duration of between three and 52 weeks (median 12 weeks). The mean age of participants across the included studies ranged from 59 to 65 years and in 21 of 22 studies, participants had GOLD stage II or III COPD. Concomitant inhaled corticosteroid (ICS) use was permitted in all of the included studies (where stated); across the included studies, between 28% to 58% of participants were using ICS at baseline. 
Six studies evaluated the once-daily combination of IND/GLY (110/50 μg), seven studies evaluated TIO/OLO (2.5/5 or 5/5 μg), eight studies evaluated UMEC/VI (62.5/5, 125/25 or 500/25 μg) and one study evaluated ACD/FOR (200/6, 200/12 or 200/18 μg); all LABA/LAMA combinations were compared with placebo.\nThe risk of bias was generally considered to be low or unknown (insufficient detail provided), with only one study per domain considered to have a high risk of bias except for the domain 'other bias' which was determined to be at high risk of bias in four studies (in three studies, disease severity was greater at baseline in participants receiving LABA/LAMA compared with participants receiving placebo, which would be expected to shift the treatment effect in favour of placebo).\nCompared to the placebo, the pooled results for the primary outcomes for the once-daily LABA/LAMA arm were as follows: all-cause mortality, OR 1.88 (95% CI 0.81 to 4.36, low-certainty evidence); all-cause serious adverse events (SAEs), OR 1.06 (95% CI 0.88 to 1.28, high-certainty evidence); acute exacerbations of COPD (AECOPD), OR 0.53 (95% CI 0.36 to 0.78, moderate-certainty evidence); adjusted St George's Respiratory Questionnaire (SGRQ) score, MD -4.08 (95% CI -4.80 to -3.36, high-certainty evidence); proportion of SGRQ responders, OR 1.75 (95% CI 1.54 to 1.99). Compared with placebo, the pooled results for the secondary outcomes for the once-daily LABA/LAMA arm were as follows: adjusted trough forced expiratory volume in one second (FEV1), MD 0.20 L (95% CI 0.19 to 0.21, moderate-certainty evidence); adjusted peak FEV1, MD 0.31 L (95% CI 0.29 to 0.32, moderate-certainty evidence); and all-cause AEs, OR 0.95 (95% CI 0.86 to 1.04; high-certainty evidence). No studies reported data for the 6-minute walk test. The results were generally consistent across subgroups for different LABA/LAMA combinations and doses.\nAuthors' conclusions\nCompared with placebo, once-daily LABA/LAMA (either IND/GLY, UMEC/VI or TIO/OLO) via a combination inhaler is associated with a clinically significant improvement in lung function and health-related quality of life in patients with mild-to-moderate COPD; UMEC/VI appears to reduce the rate of exacerbations in this population. These conclusions are supported by moderate or high certainty evidence based on studies with an observation period of up to one year.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Twenty-two studies (including 8641 people with COPD) compared once-daily LABA/LAMA in a single inhaler with a dummy inhaler. People were allowed to continue to use their inhaled corticosteroids (ICS) during the studies; approximately a third to a half of people were using their ICS at the beginning of each study. The evidence presented in this review is current up to December 2018. The majority of people who took part in the studies had mild-to-moderate COPD and the average age of people in each study ranged from 59 to 65 years. Six studies evaluated the once-daily combination of indacaterol/glycopyrronium, seven studies evaluated tiotropium/olodaterol, eight studies evaluated umeclidinium/vilanterol and one study evaluated aclidinium/formoterol.\nPeople who took once-daily LABA/LAMA using a single inhaler showed a greater improvement in quality of life than those taking placebo in a dummy inhaler; lung function was also improved in people taking once-daily LABA/LAMA. People taking umeclidinium/vilanterol had fewer flare-ups (exacerbations). There was no significant difference between groups (LABA/LAMA versus placebo) in the number of people who died, or in the number of people who experienced serious adverse events or any adverse event. The results were similar for the different LABA/LAMA combinations and doses that we evaluated.\nThe included studies were generally well designed and well reported. People in the studies and those performing the research did not know which treatment people were receiving, which ensures a fair evaluation of the treatments.\nIn three of the studies, people who were taking once-daily LABA/LAMA had more severe COPD at the start of the study than people taking dummy inhalers; this could have reduced the treatment effect seen with LABA/LAMA in these studies so we can be confident that our findings do not overestimate the effect seen with once-daily LABA/LAMA. One of the outcomes of interest (how far a person is able to walk in six minutes) was not reported by any of the included studies. Overall, we can be confident in the conclusions of this review."
}
] |
query-laysum
|
22980
|
[
{
"role": "user",
"content": "Abstract: Background\nPreterm infants are at risk of lung atelectasis due to various anatomical and physiological immaturities, placing them at high risk of respiratory failure and associated harms. Nasal continuous positive airway pressure (CPAP) is a positive pressure applied to the airways via the nares. It helps prevent atelectasis and supports adequate gas exchange in spontaneously breathing infants. Nasal CPAP is used in the care of preterm infants around the world. Despite its common use, the appropriate pressure levels to apply during nasal CPAP use remain uncertain.\nObjectives\nTo assess the effects of 'low' (≤ 5 cm H2O) versus 'moderate-high' (> 5 cm H2O) initial nasal CPAP pressure levels in preterm infants receiving CPAP either: 1) for initial respiratory support after birth and neonatal resuscitation or 2) following mechanical ventilation and endotracheal extubation.\nSearch methods\nWe ran a comprehensive search on 6 November 2020 in the following databases: CENTRAL via CRS Web and MEDLINE via Ovid. We also searched clinical trials databases and the reference lists of retrieved articles for randomized controlled trials (RCTs) and quasi-randomized trials.\nSelection criteria\nWe included RCTs, quasi-RCTs, cluster-RCTs and cross-over RCTs randomizing preterm infants of gestational age < 37 weeks or birth weight < 2500 grams within the first 28 days of life to different nasal CPAP levels.\nData collection and analysis\nWe used the standard methods of Cochrane Neonatal to collect and analyze data. We used the GRADE approach to assess the certainty of the evidence for the prespecified primary outcomes.\nMain results\nEleven trials met inclusion criteria of the review. Four trials were parallel-group RCTs reporting our prespecified primary or secondary outcomes. Two trials randomized 316 infants to low versus moderate-high nasal CPAP for initial respiratory support, and two trials randomized 117 infants to low versus moderate-high nasal CPAP following endotracheal extubation. The remaining seven studies were cross-over trials reporting short-term physiological outcomes. The most common potential sources of bias were absent or unclear blinding of personnel and assessors and uncertain selective reporting.\nNasal CPAP for initial respiratory support after birth and neonatal resuscitation\nNone of the six primary outcomes prespecified for inclusion in the summary of findings was eligible for meta-analysis. No trials reported on moderate-severe neurodevelopmental impairment at 18 to 26 months. The remaining five outcomes were reported in a single trial. On the basis of this trial, we are uncertain whether low or moderate-high nasal CPAP levels improve the outcomes of: death or bronchopulmonary dysplasia (BPD) at 36 weeks' postmenstrual age (PMA) (risk ratio (RR) 1.02, 95% confidence interval (CI) 0.56 to 1.85; 1 trial, 271 participants); mortality by hospital discharge (RR 1.04, 95% CI 0.51 to 2.12; 1 trial, 271 participants); BPD at 28 days of age (RR 1.10, 95% CI 0.56 to 2.17; 1 trial, 271 participants); BPD at 36 weeks' PMA (RR 0.80, 95% CI 0.25 to 2.57; 1 trial, 271 participants), and treatment failure or need for mechanical ventilation (RR 1.00, 95% CI 0.63 to 1.57; 1 trial, 271 participants). 
We assessed the certainty of the evidence as very low for all five outcomes due to risk of bias, a lack of consistency across multiple studies, and imprecise effect estimates.\nNasal CPAP following mechanical ventilation and endotracheal extubation\nOne of the six primary outcomes prespecified for inclusion in the summary of findings was eligible for meta-analysis. On the basis of these data, we are uncertain whether low or moderate-high nasal CPAP levels improve the outcome of treatment failure or need for mechanical ventilation (RR 1.52, 95% CI 0.92 to 2.50; 2 trials, 117 participants; I2 = 17%; risk difference 0.15, 95% CI −0.02 to 0.32; number needed to treat for an additional beneficial outcome 7, 95% CI −50 to 3). We assessed the certainty of the evidence as very low due to risk of bias, inconsistency across the studies, and imprecise effect estimates. No trials reported on moderate-severe neurodevelopmental impairment at 18 to 26 months or BPD at 28 days of age. The remaining three outcomes were reported in a single trial. On the basis of this trial, we are uncertain whether low or moderate-high nasal CPAP levels improve the outcomes of: death or BPD at 36 weeks' PMA (RR 0.87, 95% CI 0.51 to 1.49; 1 trial, 93 participants); mortality by hospital discharge (RR 2.94, 95% CI 0.12 to 70.30; 1 trial, 93 participants), and BPD at 36 weeks' PMA (RR 0.87, 95% CI 0.51 to 1.49; 1 trial, 93 participants). We assessed the certainty of the evidence as very low for all three outcomes due to risk of bias, a lack of consistency across multiple studies, and imprecise effect estimates.\nAuthors' conclusions\nThere are insufficient data from randomized trials to guide nasal CPAP level selection in preterm infants, whether provided as initial respiratory support or following extubation from invasive mechanical ventilation. We are uncertain as to whether low or moderate-high nasal CPAP levels improve morbidity and mortality in preterm infants. Well-designed trials evaluating this important aspect of a commonly used neonatal therapy are needed.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "How does giving preterm infants low versus moderate-high levels of nasal continuous positive airway pressure (CPAP) impact important health outcomes?"
}
] |
query-laysum
|
22981
|
[
{
"role": "user",
"content": "Abstract: Background\nSickle cell disease is one of the commonest severe monogenic disorders in the world, due to the inheritance of two abnormal haemoglobin (beta globin) genes. Sickle cell disease can cause severe pain, significant end-organ damage, pulmonary complications, and premature death. Stroke affects around 10% of children with sickle cell anaemia (HbSS). Chronic blood transfusions may reduce the risk of vaso-occlusion and stroke by diluting the proportion of sickled cells in the circulation.\nThis is an update of a Cochrane Review first published in 2002, and last updated in 2017.\nObjectives\nTo assess risks and benefits of chronic blood transfusion regimens in people with sickle cell disease for primary and secondary stroke prevention (excluding silent cerebral infarcts).\nSearch methods\nWe searched for relevant trials in the Cochrane Library, MEDLINE (from 1946), Embase (from 1974), the Transfusion Evidence Library (from 1980), and ongoing trial databases; all searches current to 8 October 2019.\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register: 19 September 2019.\nSelection criteria\nRandomised controlled trials comparing red blood cell transfusions as prophylaxis for stroke in people with sickle cell disease to alternative or standard treatment. There were no restrictions by outcomes examined, language or publication status.\nData collection and analysis\nTwo authors independently assessed trial eligibility and the risk of bias and extracted data.\nMain results\nWe included five trials (660 participants) published between 1998 and 2016. Four of these trials were terminated early. The vast majority of participants had the haemoglobin (Hb)SS form of sickle cell disease.\nThree trials compared regular red cell transfusions to standard care in primary prevention of stroke: two in children with no previous long-term transfusions; and one in children and adolescents on long-term transfusion.\nTwo trials compared the drug hydroxyurea (hydroxycarbamide) and phlebotomy to long-term transfusions and iron chelation therapy: one in primary prevention (children); and one in secondary prevention (children and adolescents).\nThe quality of the evidence was very low to moderate across different outcomes according to GRADE methodology. 
This was due to the trials being at a high risk of bias due to lack of blinding, indirectness and imprecise outcome estimates.\nRed cell transfusions versus standard care\nChildren with no previous long-term transfusions\nLong-term transfusions probably reduce the incidence of clinical stroke in children with a higher risk of stroke (abnormal transcranial doppler velocities or previous history of silent cerebral infarct), risk ratio 0.12 (95% confidence interval 0.03 to 0.49) (two trials, 326 participants), moderate quality evidence.\nLong-term transfusions may: reduce the incidence of other sickle cell disease-related complications (acute chest syndrome, risk ratio 0.24 (95% confidence interval 0.12 to 0.48)) (two trials, 326 participants); increase quality of life (difference estimate -0.54, 95% confidence interval -0.92 to -0.17) (one trial, 166 participants); but make little or no difference to IQ scores (least square mean: 1.7, standard error 95% confidence interval -1.1 to 4.4) (one trial, 166 participants), low quality evidence.\nWe are very uncertain whether long-term transfusions: reduce the risk of transient ischaemic attacks, Peto odds ratio 0.13 (95% confidence interval 0.01 to 2.11) (two trials, 323 participants); have any effect on all-cause mortality, no deaths reported (two trials, 326 participants); or increase the risk of alloimmunisation, risk ratio 3.16 (95% confidence interval 0.18 to 57.17) (one trial, 121 participants), very low quality evidence.\nChildren and adolescents with previous long-term transfusions (one trial, 79 participants)\nWe are very uncertain whether continuing long-term transfusions reduces the incidence of: stroke, risk ratio 0.22 (95% confidence interval 0.01 to 4.35); or all-cause mortality, Peto odds ratio 8.00 (95% confidence interval 0.16 to 404.12), very low quality evidence.\nSeveral review outcomes were only reported in one trial arm (sickle cell disease-related complications, alloimmunisation, transient ischaemic attacks).\nThe trial did not report neurological impairment, or quality of life.\nHydroxyurea and phlebotomy versus red cell transfusions and chelation\nNeither trial reported on neurological impairment, alloimmunisation, or quality of life.\nPrimary prevention, children (one trial, 121 participants)\nSwitching to hydroxyurea and phlebotomy may have little or no effect on liver iron concentrations, mean difference -1.80 mg Fe/g dry-weight liver (95% confidence interval -5.16 to 1.56), low quality evidence.\nWe are very uncertain whether switching to hydroxyurea and phlebotomy has any effect on: risk of stroke (no strokes); all-cause mortality (no deaths); transient ischaemic attacks, risk ratio 1.02 (95% confidence interval 0.21 to 4.84); or other sickle cell disease-related complications (acute chest syndrome, risk ratio 2.03 (95% confidence interval 0.39 to 10.69)), very low quality evidence.\nSecondary prevention, children and adolescents (one trial, 133 participants)\nSwitching to hydroxyurea and phlebotomy may: increase the risk of sickle cell disease-related serious adverse events, risk ratio 3.10 (95% confidence interval 1.42 to 6.75); but have little or no effect on median liver iron concentrations (hydroxyurea, 17.3 mg Fe/g dry-weight liver (interquartile range 10.0 to 30.6)); transfusion 17.3 mg Fe/g dry-weight liver (interquartile range 8.8 to 30.7), low quality evidence.\nWe are very uncertain whether switching to hydroxyurea and phlebotomy: increases the risk of stroke, risk ratio 14.78 (95% confidence interval 0.86 
to 253.66); or has any effect on all-cause mortality, Peto odds ratio 0.98 (95% confidence interval 0.06 to 15.92); or transient ischaemic attacks, risk ratio 0.66 (95% confidence interval 0.25 to 1.74), very low quality evidence.\nAuthors' conclusions\nThere is no evidence for managing adults, or children who do not have HbSS sickle cell disease.\nIn children who are at higher risk of stroke and have not had previous long-term transfusions, there is moderate quality evidence that long-term red cell transfusions reduce the risk of stroke, and low quality evidence they also reduce the risk of other sickle cell disease-related complications.\nIn primary and secondary prevention of stroke there is low quality evidence that switching to hydroxyurea with phlebotomy has little or no effect on the liver iron concentration.\nIn secondary prevention of stroke there is low-quality evidence that switching to hydroxyurea with phlebotomy increases the risk of sickle cell disease-related events.\nAll other evidence in this review is of very low quality.\n\nGiven the provided abstract, please respond to the following query: \"Search date\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is current to: 8 October 2019."
}
] |
query-laysum
|
22982
|
[
{
"role": "user",
"content": "Abstract: Background\nCarotid patch angioplasty may reduce the risk of acute occlusion or long-term restenosis of the carotid artery and subsequent ischaemic stroke in people undergoing carotid endarterectomy (CEA). This is an update of a Cochrane Review originally published in 1995 and updated in 2008.\nObjectives\nTo assess the safety and efficacy of routine or selective carotid patch angioplasty with either a venous patch or a synthetic patch compared with primary closure in people undergoing CEA. We wished to test the primary hypothesis that carotid patch angioplasty results in a lower rate of severe arterial restenosis and therefore fewer recurrent strokes and stroke-related deaths, without a considerable increase in perioperative complications.\nSearch methods\nWe searched the Cochrane Stroke Group trials register, CENTRAL, MEDLINE, Embase, two other databases, and two trial registries in September 2021.\nSelection criteria\nRandomised controlled trials and quasi-randomised trials comparing carotid patch angioplasty with primary closure in people undergoing CEA.\nData collection and analysis\nTwo review authors independently assessed eligibility and risk of bias; extracted data; and determined the certainty of evidence using the GRADE approach. Outcomes of interest included stroke, death, significant complications related to surgery, and artery restenosis or occlusion during the perioperative period (within 30 days of the operation) or during long-term follow-up.\nMain results\nWe included 11 trials involving 2100 participants undergoing 2304 CEA operations. The quality of trials was generally poor. Follow-up varied from hospital discharge to five years. Compared with primary closure, carotid patch angioplasty may make little or no difference to reduction in risk of any stroke during the perioperative period (odds ratio (OR) 0.57, 95% confidence interval (CI) 0.31 to 1.03; P = 0.063; 8 studies, 1769 participants; very low-certainty evidence), but may lower the risk of any stroke during long-term follow-up (OR 0.49, 95% CI 0.27 to 0.90; P = 0.022; 7 studies, 1332 participants; very low-certainty evidence). In the included studies, carotid patch angioplasty resulted in a lower risk of ipsilateral stroke during the perioperative period (OR 0.31, 95% CI 0.15 to 0.63; P = 0.001; 7 studies, 1201 participants; very low-certainty evidence), and during long-term follow-up (OR 0.32, 95% CI 0.16 to 0.63; P = 0.001; 6 studies, 1141 participants; very low-certainty evidence). The intervention was associated with a reduction in the risk of any stroke or death during long-term follow-up (OR 0.59, 95% CI 0.42 to 0.84; P = 0.003; 6 studies, 1019 participants; very low-certainty evidence). In addition, the included studies suggest that carotid patch angioplasty may reduce the risk of perioperative arterial occlusion (OR 0.18, 95% CI 0.08 to 0.41; P < 0.0001; 7 studies, 1435 participants; low-certainty evidence), and may reduce the risk of restenosis during long-term follow-up (OR 0.24, 95% CI 0.17 to 0.34; P < 0.00001; 8 studies, 1719 participants; low-certainty evidence). The studies recorded very few arterial complications, including haemorrhage, infection, cranial nerve palsies and pseudo-aneurysm formation, with either patch or primary closure. 
We found no evidence of an association between the use of patch angioplasty and the risk of either perioperative or long-term stroke-related death or all-cause death.\nAuthors' conclusions\nCompared with primary closure, carotid patch angioplasty may reduce the risk of perioperative arterial occlusion and long-term restenosis of the operated artery. It would appear to reduce the risk of ipsilateral stroke during the perioperative and long-term periods and reduce the risk of any stroke in the long term when compared with primary closure. However, the evidence is uncertain due to the limited quality of included trials.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that compared patch angioplasty and primary closure in people who had carotid endarterectomy. We compared and summarised the results, and rated our confidence in the evidence, based on factors such as study methods and study size."
}
] |
query-laysum
|
22983
|
[
{
"role": "user",
"content": "Abstract: Background\nMany systematic reviews exist on interventions to improve safe and effective medicines use by consumers, but research is distributed across diseases, populations and settings. The scope and focus of such reviews also vary widely, creating challenges for decision-makers seeking to inform decisions by using the evidence on consumers’ medicines use.\nThis is an update of a 2011 overview of systematic reviews, which synthesises the evidence, irrespective of disease, medicine type, population or setting, on the effectiveness of interventions to improve consumers' medicines use.\nObjectives\nTo assess the effects of interventions which target healthcare consumers to promote safe and effective medicines use, by synthesising review-level evidence.\nMethods\nSearch methods: We included systematic reviews published on the Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effects. We identified relevant reviews by handsearching databases from their start dates to March 2012.\nSelection criteria: We screened and ranked reviews based on relevance to consumers’ medicines use, using criteria developed for this overview.\nData collection and analysis: We used standardised forms to extract data, and assessed reviews for methodological quality using the AMSTAR tool. We used standardised language to summarise results within and across reviews; and gave bottom-line statements about intervention effectiveness. Two review authors screened and selected reviews, and extracted and analysed data. We used a taxonomy of interventions to categorise reviews and guide syntheses.\nMain results\nWe included 75 systematic reviews of varied methodological quality. Reviews assessed interventions with diverse aims including support for behaviour change, risk minimisation and skills acquisition. No reviews aimed to promote systems-level consumer participation in medicines-related activities. Medicines adherence was the most frequently-reported outcome, but others such as knowledge, clinical and service-use outcomes were also reported. Adverse events were less commonly identified, while those associated with the interventions themselves, or costs, were rarely reported.\nLooking across reviews, for most outcomes, medicines self-monitoring and self-management programmes appear generally effective to improve medicines use, adherence, adverse events and clinical outcomes; and to reduce mortality in people self-managing antithrombotic therapy. However, some participants were unable to complete these interventions, suggesting they may not be suitable for everyone.\nOther promising interventions to improve adherence and other key medicines-use outcomes, which require further investigation to be more certain of their effects, include:\n· simplified dosing regimens: with positive effects on adherence;\n· interventions involving pharmacists in medicines management, such as medicines reviews (with positive effects on adherence and use, medicines problems and clinical outcomes) and pharmaceutical care services (consultation between pharmacist and patient to resolve medicines problems, develop a care plan and provide follow-up; with positive effects on adherence and knowledge).\nSeveral other strategies showed some positive effects, particularly relating to adherence, and other outcomes, but their effects were less consistent overall and so need further study. 
These included:\n· delayed antibiotic prescriptions: effective to decrease antibiotic use but with mixed effects on clinical outcomes, adverse effects and satisfaction;\n· practical strategies like reminders, cues and/or organisers, reminder packaging and material incentives: with positive, although somewhat mixed effects on adherence;\n· education delivered with self-management skills training, counselling, support, training or enhanced follow-up; information and counselling delivered together; or education/information as part of pharmacist-delivered packages of care: with positive effects on adherence, medicines use, clinical outcomes and knowledge, but with mixed effects in some studies;\n· financial incentives: with positive, but mixed, effects on adherence.\nSeveral strategies also showed promise in promoting immunisation uptake, but require further study to be more certain of their effects. These included organisational interventions; reminders and recall; financial incentives; home visits; free vaccination; lay health worker interventions; and facilitators working with physicians to promote immunisation uptake. Education and/or information strategies also showed some positive but even less consistent effects on immunisation uptake, and need further assessment of effectiveness and investigation of heterogeneity.\nThere are many different potential pathways through which consumers' use of medicines could be targeted to improve outcomes, and simple interventions may be as effective as complex strategies. However, no single intervention assessed was effective to improve all medicines-use outcomes across all diseases, medicines, populations or settings.\nEven where interventions showed promise, the assembled evidence often only provided part of the picture: for example, simplified dosing regimens seem effective for improving adherence, but there is not yet sufficient information to identify an optimal regimen.\nIn some instances interventions appear ineffective: for example, the evidence suggests that directly observed therapy may be generally ineffective for improving treatment completion, adherence or clinical outcomes.\nIn other cases, interventions may have variable effects across outcomes. As an example, strategies providing information or education as single interventions appear ineffective to improve medicines adherence or clinical outcomes, but may be effective to improve knowledge; an important outcome for promoting consumers' informed medicines choices.\nDespite a doubling in the number of reviews included in this updated overview, uncertainty still exists about the effectiveness of many interventions, and the evidence on what works remains sparse for several populations, including children and young people, carers, and people with multimorbidity.\nAuthors' conclusions\nThis overview presents evidence from 75 reviews that have synthesised trials and other studies evaluating the effects of interventions to improve consumers' medicines use.\nSystematically assembling the evidence across reviews allows identification of effective or promising interventions to improve consumers’ medicines use, as well as those for which the evidence indicates ineffectiveness or uncertainty.\nDecision makers faced with implementing interventions to improve consumers' medicines use can use this overview to inform decisions about which interventions may be most promising to improve particular outcomes. 
The intervention taxonomy may also assist people to consider the strategies available in relation to specific purposes, for example, gaining skills or being involved in decision making. Researchers and funders can use this overview to identify where more research is needed and assess its priority. The limitations of the available literature due to the lack of evidence for important outcomes and important populations, such as people with multimorbidity, should also be considered in practice and policy decisions.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Included reviews often had methodological limitations - at study level, review level, or both - meaning results should be interpreted with caution."
}
] |
query-laysum
|
22984
|
[
{
"role": "user",
"content": "Abstract: Background\nMany patients with an exacerbation of chronic obstructive pulmonary disease (COPD) are treated with antibiotics. However, the value of antibiotics remains uncertain, as systematic reviews and clinical trials have shown conflicting results.\nObjectives\nTo assess effects of antibiotics on treatment failure as observed between seven days and one month after treatment initiation (primary outcome) for management of acute COPD exacerbations, as well as their effects on other patient-important outcomes (mortality, adverse events, length of hospital stay, time to next exacerbation).\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), in the Cochrane Library, MEDLINE, Embase, and other electronically available databases up to 26 September 2018.\nSelection criteria\nWe sought to find randomised controlled trials (RCTs) including people with acute COPD exacerbations comparing antibiotic therapy and placebo and providing follow-up of at least seven days.\nData collection and analysis\nTwo review authors independently screened references and extracted data from trial reports. We kept the three groups of outpatients, inpatients, and patients admitted to the intensive care unit (ICU) separate for benefit outcomes and mortality because we considered them to be clinically too different to be summarised as a single group. We considered outpatients to have a mild to moderate exacerbation, inpatients to have a severe exacerbation, and ICU patients to have a very severe exacerbation. When authors of primary studies did not report outcomes or study details, we contacted them to request missing data. We calculated pooled risk ratios (RRs) for treatment failure, Peto odds ratios (ORs) for rare events (mortality and adverse events), and mean differences (MDs) for continuous outcomes using random-effects models. We used GRADE to assess the quality of the evidence. The primary outcome was treatment failure as observed between seven days and one month after treatment initiation.\nMain results\nWe included 19 trials with 2663 participants (11 with outpatients, seven with inpatients, and one with ICU patients).\nFor outpatients (with mild to moderate exacerbations), evidence of low quality suggests that currently available antibiotics statistically significantly reduced the risk for treatment failure between seven days and one month after treatment initiation (RR 0.72, 95% confidence interval (CI) 0.56 to 0.94; I² = 31%; in absolute terms, reduction in treatment failures from 295 to 212 per 1000 treated participants, 95% CI 165 to 277). Studies providing older antibiotics not in use anymore yielded an RR of 0.69 (95% CI 0.53 to 0.90; I² = 31%). Evidence of low quality from one trial in outpatients suggested no effects of antibiotics on mortality (Peto OR 1.27, 95% CI 0.49 to 3.30). One trial reported no effects of antibiotics on re-exacerbations between two and six weeks after treatment initiation. Only one trial (N = 35) reported health-related quality of life but did not show a statistically significant difference between treatment and control groups.\nEvidence of moderate quality does not show that currently used antibiotics statistically significantly reduced the risk of treatment failure among inpatients with severe exacerbations (i.e. for inpatients excluding ICU patients) (RR 0.65, 95% CI 0.38 to 1.12; I² = 50%), but trial results remain uncertain. 
In turn, the effect was statistically significant when trials included older antibiotics no longer in clinical use (RR 0.76, 95% CI 0.58 to 1.00; I² = 39%). Evidence of moderate quality from two trials including inpatients shows no beneficial effects of antibiotics on mortality (Peto OR 2.48, 95% CI 0.94 to 6.55). Length of hospital stay (in days) was similar in antibiotic and placebo groups.\nThe only trial with 93 patients admitted to the ICU showed a large and statistically significant effect on treatment failure (RR 0.19, 95% CI 0.08 to 0.45; moderate-quality evidence; in absolute terms, reduction in treatment failures from 565 to 107 per 1000 treated participants, 95% CI 45 to 254). Results of this trial show a statistically significant effect on mortality (Peto OR 0.21, 95% CI 0.06 to 0.72; moderate-quality evidence) and on length of hospital stay (MD -9.60 days, 95% CI -12.84 to -6.36; low-quality evidence).\nEvidence of moderate quality gathered from trials conducted in all settings shows no statistically significant effect on overall incidence of adverse events (Peto OR 1.20, 95% CI 0.89 to 1.63; moderate-quality evidence) nor on diarrhoea (Peto OR 1.68, 95% CI 0.92 to 3.07; moderate-quality evidence).\nAuthors' conclusions\nResearchers have found that antibiotics have some effect on inpatients and outpatients, but these effects are small, and they are inconsistent for some outcomes (treatment failure) and absent for other outcomes (mortality, length of hospital stay). Analyses show a strong beneficial effect of antibiotics among ICU patients. Few data are available on the effects of antibiotics on health-related quality of life or on other patient-reported symptoms, and data show no statistically significant increase in the risk of adverse events with antibiotics compared to placebo. These inconsistent effects call for research into clinical signs and biomarkers that can help identify patients who would benefit from antibiotics, while sparing antibiotics for patients who are unlikely to experience benefit and for whom downsides of antibiotics (side effects, costs, and multi-resistance) should be avoided.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Chronic obstructive pulmonary disease (COPD) is a chronic condition (most often caused by smoking or environmental exposure) that affects the passage of air into and out of the lungs. As a consequence, patients experience shortness of breath and coughing. Flare-ups of COPD are a hallmark of more advanced stages of the disease. Flare-ups are defined as sustained worsening of symptoms from the patient's usual stable state. Commonly reported symptoms include worsening breathlessness, cough, increased sputum production, and change in sputum colour. Clinicians frequently prescribe antibiotics for flare-ups in patients with COPD, although the cause of flare-ups is often difficult to determine (viral, bacterial, environmental)."
}
] |
query-laysum
|
22985
|
[
{
"role": "user",
"content": "Abstract: Background\nAnticoagulation may improve survival in patients with cancer through a speculated anti-tumour effect, in addition to the antithrombotic effect, although may increase the risk of bleeding.\nObjectives\nTo evaluate the efficacy and safety of parenteral anticoagulants in ambulatory patients with cancer who, typically, are undergoing chemotherapy, hormonal therapy, immunotherapy or radiotherapy, but otherwise have no standard therapeutic or prophylactic indication for anticoagulation.\nSearch methods\nA comprehensive search included (1) a major electronic search (February 2016) of the following databases: Cochrane Central Register of Controlled Trials (CENTRAL) (2016, Issue 1), MEDLINE (1946 to February 2016; accessed via OVID) and Embase (1980 to February 2016; accessed via OVID); (2) handsearching of conference proceedings; (3) checking of references of included studies; (4) use of the 'related citation' feature in PubMed and (5) a search for ongoing studies in trial registries. As part of the living systematic review approach, we are running searches continually and we will incorporate new evidence rapidly after it is identified. This update of the systematic review is based on the findings of a literature search conducted on 14 August 2017.\nSelection criteria\nRandomized controlled trials (RCTs) assessing the benefits and harms of parenteral anticoagulation in ambulatory patients with cancer. Typically, these patients are undergoing chemotherapy, hormonal therapy, immunotherapy or radiotherapy, but otherwise have no standard therapeutic or prophylactic indication for anticoagulation.\nData collection and analysis\nUsing a standardized form we extracted data in duplicate on study design, participants, interventions outcomes of interest, and risk of bias. Outcomes of interested included all-cause mortality, symptomatic venous thromboembolism (VTE), symptomatic deep vein thrombosis (DVT), pulmonary embolism (PE), major bleeding, minor bleeding, and quality of life. We assessed the certainty of evidence for each outcome using the GRADE approach (GRADE handbook [GRADE handbook]).\nMain results\nOf 6947 identified citations, 19 RCTs fulfilled the eligibility criteria. These trials enrolled 9650 participants. Trial registries' searches identified nine registered but unpublished trials, two of which were labeled as 'ongoing trials'. In all included RCTs, the intervention consisted of heparin (either unfractionated heparin or low molecular weight heparin). Overall, heparin appears to have no effect on mortality at 12 months (risk ratio (RR) 0.98; 95% confidence interval (CI) 0.93 to 1.03; risk difference (RD) 10 fewer per 1000; 95% CI 35 fewer to 15 more; moderate certainty of evidence) and mortality at 24 months (RR 0.99; 95% CI 0.96 to 1.01; RD 8 fewer per 1000; 95% CI 31 fewer to 8 more; moderate certainty of evidence). Heparin therapy reduces the risk of symptomatic VTE (RR 0.56; 95% CI 0.47 to 0.68; RD 30 fewer per 1000; 95% CI 36 fewer to 22 fewer; high certainty of evidence), while it increases in the risks of major bleeding (RR 1.30; 95% 0.94 to 1.79; RD 4 more per 1000; 95% CI 1 fewer to 11 more; moderate certainty of evidence) and minor bleeding (RR 1.70; 95% 1.13 to 2.55; RD 17 more per 1000; 95% CI 3 more to 37 more; high certainty of evidence). 
Results failed to confirm or to exclude a beneficial or detrimental effect of heparin on thrombocytopenia (RR 0.69; 95% CI 0.37 to 1.27; RD 33 fewer per 1000; 95% CI 66 fewer to 28 more; moderate certainty of evidence), or on quality of life (moderate certainty of evidence).\nAuthors' conclusions\nHeparin appears to have no effect on mortality at 12 months and 24 months. It reduces symptomatic VTE and likely increases major and minor bleeding. Future research should further investigate the survival benefit of different types of anticoagulants in patients with different types and stages of cancer. The decision for a patient with cancer to start heparin therapy should balance the benefits and downsides, and should integrate the patient's values and preferences.\nEditorial note: This is a living systematic review. Living systematic reviews offer a new approach to review updating in which the review is continually updated, incorporating relevant new evidence, as it becomes available. Please refer to the Cochrane Database of Systematic Reviews for the current status of this review.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Research evidence suggests that blood thinners may improve the survival of patients with cancer, by preventing life-threatening blood clots and might also have a direct anticancer effect. However, blood thinners can also increase the risk of bleeding, which can be serious and reduce survival. It is therefore important to understand the pros and cons of treatment to allow patients and their doctors to be aware of the balance of risks and benefits."
}
] |
query-laysum
|
22986
|
[
{
"role": "user",
"content": "Abstract: Background\nGlaucoma is a leading cause of blindness worldwide. It results in a progressive loss of peripheral vision and, in late stages, loss of central vision leading to blindness. Early treatment of glaucoma aims to prevent or delay vision loss. Elevated intraocular pressure (IOP) is the main causal modifiable risk factor for glaucoma. Aqueous outflow obstruction is the main cause of IOP elevation, which can be mitigated either by increasing outflow or reducing aqueous humor production. Cyclodestructive procedures use various methods to target and destroy the ciliary body epithelium, the site of aqueous humor production, thereby lowering IOP. The most common approach is laser cyclophotocoagulation.\nObjectives\nTo assess the effectiveness and safety of cyclodestructive procedures for the management of non-refractory glaucoma (i.e. glaucoma in an eye that has not undergone incisional glaucoma surgery). We also aimed to compare the effect of different routes of administration, laser delivery instruments, and parameters of cyclophotocoagulation with respect to IOP control, visual acuity, pain control, and adverse events.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 8); Ovid MEDLINE; Embase.com; LILACS; the metaRegister of Controlled Trials (mRCT) and ClinicalTrials.gov. The date of the search was 7 August 2017. We also searched the reference lists of reports from included studies.\nSelection criteria\nWe included randomized controlled trials of participants who had undergone cyclodestruction as a primary treatment for glaucoma. We included only head-to-head trials that had compared cyclophotocoagulation to other procedural interventions, or compared cyclophotocoagulation using different types of lasers, delivery methods, parameters, or a combination of these factors.\nData collection and analysis\nTwo review authors independently screened search results, assessed risks of bias, extracted data, and graded the certainty of the evidence in accordance with Cochrane standards.\nMain results\nWe included one trial (92 eyes of 92 participants) that evaluated the efficacy of diode transscleral cyclophotocoagulation (TSCPC) as primary surgical therapy. We identified no other eligible ongoing or completed trial. The included trial compared low-energy versus high-energy TSCPC in eyes with primary open-angle glaucoma. The trial was conducted in Ghana and had a mean follow-up period of 13.2 months post-treatment. In this trial, low-energy TSCPC was defined as 45.0 J delivered, high-energy as 65.5 J delivered; it is worth noting that other trials have defined high- and low-energy TSCPC differently. We assessed this trial to have had low risk of selection bias and reporting bias, unclear risk of performance bias, and high risk of detection bias and attrition bias. Trial authors excluded 13 participants with missing follow-up data; the analyses therefore included 40 (85%) of 47 participants in the low-energy group and 39 (87%) of 45 participants in the high-energy group.\nControl of IOP, defined as a decrease in IOP by 20% from baseline value, was achieved in 47% of eyes, at similar rates in the low-energy group and the high-energy groups; the small study size creates uncertainty about the significance of the difference, if any, between energy settings (risk ratio (RR) 1.03, 95% confidence interval (CI) 0.64 to 1.65; 79 participants; low-certainty evidence). 
The difference in effect between energy settings based on mean decrease in IOP, if any exists, also was uncertain (mean difference (MD) -0.50 mmHg, 95% CI -5.79 to 4.79; 79 participants; low-certainty evidence).\nDecreased vision was defined as the proportion of participants with a decrease of 2 or more lines on the Snellen chart or one or more categories of visual acuity when unable to read the eye chart. Twenty-three percent of eyes had a decrease in vision. The size of any difference between the low-energy group and the high-energy group was uncertain (RR 1.22, 95% CI 0.54 to 2.76; 79 participants; low-certainty evidence). Data were not available for mean visual acuity and proportion of participants with vision change defined as greater than 1 line on the Snellen chart.\nThe difference in the mean number of glaucoma medications used after cyclophotocoagulation was similar when comparing treatment groups (MD 0.10, 95% CI -0.43 to 0.63; 79 participants; moderate-certainty evidence). Twenty percent of eyes were retreated; the estimated effect of energy settings on the need for retreatment was inconclusive (RR 0.76, 95% CI 0.31 to 1.84; 79 participants; low-certainty evidence). No data for visual field, cost effectiveness, or quality-of-life outcomes were reported by the trial investigators.\nAdverse events were reported for the total study population, rather than by treatment group. The trial authors stated that most participants reported mild to moderate pain after the procedure, and many had transient conjunctival burns (percentages not reported). Severe iritis occurred in two eyes and hyphema occurred in three eyes. No instances of hypotony or phthisis bulbi were reported. The only adverse outcome that was reported by the treatment group was atonic pupil (RR 0.89 in the low-energy group, 95% CI 0.47 to 1.68; 92 participants; low-certainty evidence).\nAuthors' conclusions\nThere is insufficient evidence to evaluate the relative effectiveness and safety of cyclodestructive procedures for the primary procedural management of non-refractory glaucoma. Results from the one included trial did not compare cyclophotocoagulation to other procedural interventions and yielded uncertainty about any difference in outcomes when comparing low-energy versus high-energy diode TSCPC. Overall, the effect of laser treatment on IOP control was modest and the number of eyes experiencing vision loss was limited. More research is needed specific to the management of non-refractory glaucoma.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We do not know whether this type of laser surgery is safer or more effective than other surgeries for treating glaucoma. We included only one study in this review. This study compared low-energy versus high-energy diode lasers, and the results were too similar between treatment groups to draw any conclusions. Additionally, this study did not compare destructive laser surgery to other surgical approaches. More research is needed to understand the usefulness of destructive laser procedures for primary glaucoma treatment."
}
] |
query-laysum
|
22987
|
[
{
"role": "user",
"content": "Abstract: Background\nPatient-reported outcomes measures (PROMs) assess a patient’s subjective appraisal of health outcomes from their own perspective. Despite hypothesised benefits that feedback on PROMs can support decision-making in clinical practice and improve outcomes, there is uncertainty surrounding the effectiveness of PROMs feedback.\nObjectives\nTo assess the effects of PROMs feedback to patients, or healthcare workers, or both on patient-reported health outcomes and processes of care.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, two other databases and two clinical trial registries on 5 October 2020. We searched grey literature and consulted experts in the field.\nSelection criteria\nTwo review authors independently screened and selected studies for inclusion. We included randomised trials directly comparing the effects on outcomes and processes of care of PROMs feedback to healthcare professionals and patients, or both with the impact of not providing such information.\nData collection and analysis\nTwo groups of two authors independently extracted data from the included studies and evaluated study quality. We followed standard methodological procedures expected by Cochrane and EPOC. We used the GRADE approach to assess the certainty of the evidence. We conducted meta-analyses of the results where possible.\nMain results\nWe identified 116 randomised trials which assessed the effectiveness of PROMs feedback in improving processes or outcomes of care, or both in a broad range of disciplines including psychiatry, primary care, and oncology. Studies were conducted across diverse ambulatory primary and secondary care settings in North America, Europe and Australasia. A total of 49,785 patients were included across all the studies.\nThe certainty of the evidence varied between very low and moderate. Many of the studies included in the review were at risk of performance and detection bias.\nThe evidence suggests moderate certainty that PROMs feedback probably improves quality of life (standardised mean difference (SMD) 0.15, 95% confidence interval (CI) 0.05 to 0.26; 11 studies; 2687 participants), and leads to an increase in patient-physician communication (SMD 0.36, 95% CI 0.21 to 0.52; 5 studies; 658 participants), diagnosis and notation (risk ratio (RR) 1.73, 95% CI 1.44 to 2.08; 21 studies; 7223 participants), and disease control (RR 1.25, 95% CI 1.10 to 1.41; 14 studies; 2806 participants). The intervention probably makes little or no difference for general health perceptions (SMD 0.04, 95% CI -0.17 to 0.24; 2 studies, 552 participants; low-certainty evidence), social functioning (SMD 0.02, 95% CI -0.06 to 0.09; 15 studies; 2632 participants; moderate-certainty evidence), and pain (SMD 0.00, 95% CI -0.09 to 0.08; 9 studies; 2386 participants; moderate-certainty evidence). We are uncertain about the effect of PROMs feedback on physical functioning (14 studies; 2788 participants) and mental functioning (34 studies; 7782 participants), as well as fatigue (4 studies; 741 participants), as the certainty of the evidence was very low. We did not find studies reporting on adverse effects defined as distress following or related to PROM completion.\nAuthors' conclusions\nPROM feedback probably produces moderate improvements in communication between healthcare professionals and patients as well as in diagnosis and notation, and disease control, and small improvements to quality of life. 
Our confidence in the effects is limited by the risk of bias, heterogeneity and small number of trials conducted to assess outcomes of interest. It is unclear whether many of these improvements are clinically meaningful or sustainable in the long term. There is a need for more high-quality studies in this area, particularly studies which employ cluster designs and utilise techniques to maintain allocation concealment.\n\nGiven the provided abstract, please respond to the following query: \"Key messages\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Patient questionnaire responses fed back to health workers and patients may result in moderate benefits for patient-provider communication and small benefits for patients' quality of life. Healthcare workers probably make and record more diagnoses and take more notes. The intervention probably makes little or no difference for patient's general perceptions of their health, social functioning, and pain. There appears to be no impact on physical and mental functioning, and fatigue. Our confidence in these results is limited by the quality and number of included studies for each outcome."
}
] |
query-laysum
|
22988
|
[
{
"role": "user",
"content": "Abstract: Background\nAcute respiratory tract infections (ARTIs) are common in children and can involve both upper and lower airways. Many children experience frequent ARTI episodes or recurrent respiratory tract infections (RRTIs) in early life, which creates challenges for paediatricians, primary care physicians, parents and carers of children.\nIn China, Astragalus (Huang qi), alone or in combination with other herbs, is used by Traditional Chinese Medicine (TCM) practitioners in the form of a water extract, to reduce the risk of ARTIs; it is believed to stimulate the immune system. Better understanding of the therapeutic mechanisms of Astragalus may provide insights into ARTI prevention, and consequently reduced antibiotic use.\nObjectives\nTo assess the effectiveness and safety of oral Astragalus for preventing frequent episodes of acute respiratory tract infections (ARTIs) in children in community settings.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL, Issue 12, 2015), MEDLINE (Ovid) (1946 to 31 December 2015), Embase (Elsevier) (1974 to 31 December 2015), AMED (Ovid) (1985 to 31 December 2015), Chinese National Knowledge Infrastructure (CNKI) (1979 to 31 December 2015) and Chinese Scientific Journals full text database (CQVIP) (1989 to 31 December 2015), China Biology Medicine disc (CBM 1976 to 31 December 2015) and Wanfang Data Knowledge Service Platform (WanFang) (1998 to 31 December 2015).\nSelection criteria\nWe included randomised controlled trials (RCTs) comparing oral Astragalus as a sole Chinese herbal preparation with placebo to prevent frequent episodes of ARTIs in children.\nData collection and analysis\nWe used standard Cochrane methodological procedures for this review. We assessed search results to identify relevant studies. We planned to extract data using standardised forms. Disagreements were to be resolved through discussion. Risk of bias was to be assessed using the Cochrane 'Risk of bias' tool. We planned to use mean difference (MD) or standardised mean difference (SMD) for continuous data and risk ratio (RR) or odds ratio (OR) to analyse dichotomous data, both with 95% confidence intervals (CIs).\nMain results\nWe identified 6080 records: 3352 from English language databases, 2724 from Chinese databases, and four from other sources. Following initial screening and deduplication, we obtained 120 full-text papers for assessment. Of these, 21 were not RCTs; 55 did not meet the inclusion criteria because: participants were aged over 14 years; definition was not included for recurrent or frequent episodes;Astragalus preparation was not an intervention; Astragalus preparation was in the formula but was not the sole agent; the Astragalus preparation was not administered orally; or Astragalus was used for treatment rather than prevention of ARTI. A further 44 studies were excluded because they were not placebo-controlled, although other inclusion criteria were fulfilled.\nNo RCTs met our inclusion criteria.\nAuthors' conclusions\nWe found insufficient evidence to enable assessment of the effectiveness and safety of oral Astragalus as a sole intervention to prevent frequent ARTIs in children aged up to 14 years.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We could not assess evidence quality."
}
] |
query-laysum
|
22989
|
[
{
"role": "user",
"content": "Abstract: Background\nThe role of vitamin D supplementation as a treatment for COVID-19 has been a subject of considerable discussion. A thorough understanding of the current evidence regarding the effectiveness and safety of vitamin D supplementation for COVID-19 based on randomised controlled trials is required.\nObjectives\nTo assess whether vitamin D supplementation is effective and safe for the treatment of COVID-19 in comparison to an active comparator, placebo, or standard of care alone, and to maintain the currency of the evidence, using a living systematic review approach.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register, Web of Science and the WHO COVID-19 Global literature on coronavirus disease to identify completed and ongoing studies without language restrictions to 11 March 2021.\nSelection criteria\nWe followed standard Cochrane methodology. We included randomised controlled trials (RCTs) evaluating vitamin D supplementation for people with COVID-19, irrespective of disease severity, age, gender or ethnicity.\nWe excluded studies investigating preventive effects, or studies including populations with other coronavirus diseases (severe acute respiratory syndrome (SARS) or Middle East respiratory syndrome (MERS)).\nData collection and analysis\nWe followed standard Cochrane methodology.\nTo assess bias in included studies, we used the Cochrane risk of bias tool (ROB 2) for RCTs. We rated the certainty of evidence using the GRADE approach for the following prioritised outcome categories: individuals with moderate or severe COVID-19: all-cause mortality, clinical status, quality of life, adverse events, serious adverse events, and for individuals with asymptomatic or mild disease: all-cause mortality, development of severe clinical COVID-19 symptoms, quality of life, adverse events, serious adverse events.\nMain results\nWe identified three RCTs with 356 participants, of whom 183 received vitamin D. In accordance with the World Health Organization (WHO) clinical progression scale, two studies investigated participants with moderate or severe disease, and one study individuals with mild or asymptomatic disease. The control groups consisted of placebo treatment or standard of care alone.\nEffectiveness of vitamin D supplementation for people with COVID-19 and moderate to severe disease\nWe included two studies with 313 participants. Due to substantial clinical and methodological diversity of both studies, we were not able to pool data. Vitamin D status was unknown in one study, whereas the other study reported data for vitamin D deficient participants. One study administered multiple doses of oral calcifediol at days 1, 3 and 7, whereas the other study gave a single high dose of oral cholecalciferol at baseline. We assessed one study with low risk of bias for effectiveness outcomes, and the other with some concerns about randomisation and selective reporting.\nAll-cause mortality at hospital discharge (313 participants)\nWe found two studies reporting data for this outcome. One study reported no deaths when treated with vitamin D out of 50 participants, compared to two deaths out of 26 participants in the control group (Risk ratio (RR) 0.11, 95% confidence interval (CI) 0.01 to 2.13). The other study reported nine deaths out of 119 individuals in the vitamin D group, whereas six participants out of 118 died in the placebo group (RR 1.49, 95% CI 0.55 to 4.04]. 
We are very uncertain whether vitamin D has an effect on all-cause mortality at hospital discharge (very low-certainty evidence).\nClinical status assessed by the need for invasive mechanical ventilation (237 participants)\nWe found one study reporting data for this outcome. Nine out of 119 participants needed invasive mechanical ventilation when treated with vitamin D, compared to 17 out of 118 participants in the placebo group (RR 0.52, 95% CI 0.24 to 1.13). Vitamin D supplementation may decrease need for invasive mechanical ventilation, but the evidence is uncertain (low-certainty evidence).\nQuality of life\nWe did not find data for quality of life.\nSafety of vitamin D supplementation for people with COVID-19 and moderate to severe disease\nWe did not include data from one study, because assessment of serious adverse events was not described and we are concerned that data might have been inconsistently measured. This study reported vomiting in one out of 119 participants immediately after vitamin D intake (RR 2.98, 95% CI 0.12 to 72.30). We are very uncertain whether vitamin D supplementation is associated with higher risk for adverse events (very low-certainty).\nEffectiveness and safety of vitamin D supplementation for people with COVID-19 and asymptomatic or mild disease\nWe found one study including 40 individuals, which did not report our prioritised outcomes, but instead data for viral clearance, inflammatory markers, and vitamin D serum levels. The authors reported no events of hypercalcaemia, but recording and assessment of further adverse events remains unclear. Authors administered oral cholecalciferol in daily doses for at least 14 days, and continued with weekly doses if vitamin D blood levels were > 50 ng/mL.\nAuthors' conclusions\nThere is currently insufficient evidence to determine the benefits and harms of vitamin D supplementation as a treatment of COVID-19. The evidence for the effectiveness of vitamin D supplementation for the treatment of COVID-19 is very uncertain. Moreover, we found only limited safety information, and were concerned about consistency in measurement and recording of these outcomes.\nThere was substantial clinical and methodological heterogeneity of included studies, mainly because of different supplementation strategies, formulations, vitamin D status of participants, and reported outcomes.\nThere is an urgent need for well-designed and adequately powered randomised controlled trials (RCTs) with an appropriate randomisation procedure, comparability of study arms and preferably double-blinding. We identified 21 ongoing and three completed studies without published results, which indicates that these needs will be addressed and that our findings are subject to change in the future. Due to the living approach of this work, we will update the review periodically.\n\nGiven the provided abstract, please respond to the following query: \"What is the link between vitamin D and COVID-19?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Some studies have shown that people who are in hospital with severe COVID-19 also have low levels of vitamin D (vitamin D deficiency). However, the risk factors for developing severe COVID-19 are the same as those for developing vitamin D deficiency, so it is difficult to tell if vitamin D deficiency itself is a risk factor for severe COVID-19. Risk factors include general ill-health, a poor diet, and pre-existing health conditions, such as diabetes, and liver and kidney disease.\nVitamin D is important for healthy bones, teeth and muscles. It helps to regulate blood sugar, the heart and blood vessels, and the lungs and airways. It also has a role in boosting the body’s immune system. These are areas affected by COVID-19, so giving vitamin D to people with COVID-19 might help them to recover more quickly or have the disease less severely."
}
] |
query-laysum
|
22990
|
[
{
"role": "user",
"content": "Abstract: Background\nRheumatoid arthritis (RA) is a chronic, systemic, inflammatory, autoimmune disease that results in joint deformity and immobility of the musculoskeletal system. The major goals of treatment are to relieve pain, reduce inflammation, slow down or stop joint damage, prevent disability, and preserve or improve the person's sense of well-being and ability to function. Tai Chi, interchangeably known as Tai Chi Chuan, is an ancient Chinese health-promoting martial art form that has been recognized in China as an effective arthritis therapy for centuries. This is an update of a review published in 2004.\nObjectives\nTo assess the benefits and harms of Tai Chi as a treatment for people with rheumatoid arthritis (RA).\nSearch methods\nWe updated the search of CENTRAL, MEDLINE, Embase, and clinical trial registries from 2002 to September 2018.\nSelection criteria\nWe selected randomized controlled trials and controlled clinical trials examining the benefits (ACR improvement criteria or pain, disease progression, function, and radiographic progression), and harms (adverse events and withdrawals) of exercise programs with Tai Chi instruction or incorporating principles of Tai Chi philosophy. We included studies of any duration that included control groups who received either no therapy or alternate therapy.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane.\nMain results\nAdding three studies (156 additional participants) to the original review, this update contains a total of seven trials with 345 participants. Participants were mostly women with RA, ranging in age from 16 to 80 years, who were treated in outpatient settings in China, South Korea, and the USA. The majority of the trials were at high risk of bias for performance and detection bias, due to the lack of blinding of participants or assessors. Almost 75% of the studies did not report random sequence generation, and we judged the risk of bias as unclear for allocation concealment in the majority of studies. The duration of the Tai Chi programs ranged from 8 to 12 weeks.\nIt is uncertain whether Tai Chi-based exercise programs provide a clinically important improvement in pain among Tai Chi participants compared to no therapy or alternate therapy. 
The change in mean pain in control groups, measured on visual analog scale (VAS 0 to 10 score, reduced score means less pain) ranged from a decrease of 0.51 to an increase of 1.6 at 12 weeks; in the Tai Chi groups, pain was reduced by a mean difference (MD) of -2.15 (95% confidence interval (CI) -3.19 to -1.11); 22% absolute improvement (95% CI 11% to 32% improvement); 2 studies, 81 participants; very low-quality evidence, downgraded for imprecision, blinding and attrition bias.\nThere was very low-quality evidence, downgraded for blinding and attrition, that was inconclusive for an important difference in disease activity, measured using Disease Activity Scale (DAS-28-ESR) scores (0 to 10 scale, lower score means less disease activity), with no change in the control group and 0.40 reduction (95% CI -1.10 to 0.30) with Tai Chi; 4% absolute improvement (95% CI 11% improvement to 3% worsening); 1 study, 43 participants.\nFor the assessment of function, the change in mean Health Assessment Questionnaire (HAQ; 0 to 3 scale, lower score means better function) ranged from 0 to 0.1 in the control group, and reduced by MD 0.33 in the Tai Chi group (95% CI -0.79 to 0.12); 11% absolute improvement (95% CI 26% improvement to 4% worsening); 2 studies, 63 participants; very low-quality evidence, downgraded for imprecision, blinding, and attrition. We are unsure whether there is an important improvement, as the results were inconclusive.\nParticipants in Tai Chi programs were less likely than those in a control group to withdraw from studies at 8 to 12 weeks (19/180 in intervention groups versus 49/165 in control groups; risk ratio (RR) 0.40 (95% CI 0.19 to 0.86); absolute difference 17% fewer (95% CI 30% fewer to 3% fewer); 7 studies, 289 participants; low-quality evidence, downgraded for imprecision and blinding).\nThere were no data available for radiographic progression. Short-term adverse events were not reported by group, but in two studies there was some narrative description of joint and muscle soreness and cramps; long-term adverse events were not reported.\nAuthors' conclusions\nIt is uncertain whether Tai Chi has any effect on clinical outcomes (joint pain, activity limitation, function) in RA, and important effects cannot be confirmed or excluded, since all outcomes had very low-quality evidence. Withdrawals from study were greater in the control groups than the Tai Chi groups, based on low-quality evidence. Although the incidence of adverse events is likely to be low with Tai Chi, we are uncertain, as studies failed to explicitly report such events. Few minor adverse events (joint and muscle soreness and cramps) were described qualitatively in the narrative of two of the studies. This updated review provides minimal change in the conclusions from the previous review, i.e. a pain outcome.\n\nGiven the provided abstract, please respond to the following query: \"What happened to people with rheumatoid arthritis who did Tai Chi?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Pain (measured on a visual analog scale (VAS) at 12 weeks)\n- People who did Tai Chi rated their pain 2.15 points lower (better) on a scale of 0 to 10, compared to the change in the control group (22% absolute improvement). The quality of the evidence was very low, due to a low number of participants and concerns about study design (2 studies, 81 participants).\n- People who did not do Tai Chi reported a mean change in pain that ranged from 0.5 lower to 1.6 points higher.\nDisease Activity (measured with the Disability Activity Scale (DAS-28-ESR) at 12 weeks)\n- People who did Tai Chi scored 0.4 points lower (better) on a scale of 0 to 10 for disease activity compared to the control group (4% absolute improvement). The quality of the evidence was very low, due to concerns about study design and a high number of withdrawals (1 study, 43 participants).\n- People who did not do Tai Chi reported no change in disease activity.\nFunction (measured by the Health Assessment Questionnaire (HAQ) at 12 weeks)\n- People who did Tai Chi scored 0.33 points lower (better) on a scale of 0 to 3 for function compared to the control group (11% absolute improvement). The quality of the evidence was very low, due to concerns about study design and a high number of withdrawals (2 studies, 63 participants).\n- People who did not do Tai Chi reported a mean change in function ranging from no change to 0.1 points higher.\nOverall withdrawals\n- 17/100 fewer people withdrew from the Tai Chi groups at 12 weeks (17% absolute improvement). The quality of the evidence was low, due to the low number of participants and concerns about study design (7 studies, 289 participants).\nWe found no studies that looked specifically at radiographic progression, short-term or long-term adverse events, although two studies described some joint and muscle soreness and cramps in the text."
}
] |
query-laysum
|
22991
|
[
{
"role": "user",
"content": "Abstract: Background\nParastomal herniation is a common problem following formation of a stoma after both elective and emergency abdominal surgery. Symptomatic hernias give rise to a significant amount of patient morbidity, and in some cases mortality, and therefore may necessitate surgical treatment to repair the hernial defect and/or re-site the stoma. In an effort to reduce this complication, recent research has focused on the application of a synthetic or biological mesh, inserted during stoma formation to help strengthen the abdominal wall.\nObjectives\nThe primary objective was to evaluate whether mesh reinforcement during stoma formation reduces the incidence of parastomal herniation. Secondary objectives included the safety or potential harms or both of mesh placement in terms of stoma-related infections, mesh-related infections, patient-reported symptoms/postoperative quality of life, and re-hospitalisation/ambulatory visits.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; the Cochrane Library 2018, Issue 1), Ovid MEDLINE (1970 to 11 January 2018), Ovid Embase (1974 to 11 January 2018), and Science Citation Index Expanded (1970 to 11 January 2018). To identify ongoing studies, we also searched the metaRegister of Controlled Trials (mRCT) on 11 January 2018.\nSelection criteria\nWe considered for inclusion all randomised controlled trials (RCTs) of prosthetic mesh (including biological/composite mesh) placement versus a control group (no mesh) for the prevention of parastomal hernia.\nData collection and analysis\nTwo review authors independently assessed the studies identified by the literature search for potential eligibility. We obtained the full articles for all studies that potentially met the inclusion criteria and included all those that met the criteria. Any differences in opinion between review authors were resolved by consensus. We pooled study data into a meta-analysis. We assessed heterogeneity by calculation of I2 and expressed results for each variable as a risk ratio (RR) with corresponding 95% confidence intervals (CI). We expressed continous outcomes as mean difference (MD) with corresponding 95% CIs.\nMain results\nWe included 10 RCTs involving a total of 844 participants. The primary outcome was overall incidence of parastomal herniation. Secondary outcomes were rate of reoperation at 12 months, operative time, postoperative length of hospital stay, stoma-related infections, mesh-related infections, quality of life, and rehospitalisation rate. We judged the risk of bias across all domains to be low in six trials. We judged four trials to have an overall high risk of bias.\nThe overall incidence of parastomal hernia was less in participants receiving a prophylactic mesh compared to those who had a standard ostomy formation (RR 0.53, 95% CI 0.43 to 0.66; 10 studies, 771 participants; I2 = 69%; low-quality evidence). 
In absolute numbers, the incidence of parastomal hernia was 22 per 100 participants (18 to 27) receiving prophylactic mesh compared to 41 per 100 participants having a standard ostomy formation.\nThere were no differences in the need for reoperation (RR 0.90, 95% CI 0.50 to 1.64; 9 studies, 757 participants; I2 = 0%; low-quality evidence); operative time (MD -6.50 (min), 95% CI -18.24 to 5.24; 6 studies, 671 participants; low-quality evidence); postoperative length of hospital stay (MD -0.95 (days), 95% CI -2.03 to 0.70; 4 studies, 500 participants; moderate-quality evidence); or stoma-related infections (RR 0.89, 95% CI 0.32 to 2.50; 6 studies, 472 participants; I2 = 0%; low-quality evidence) between the two groups.\nWe were unable to analyse mesh-related infections, quality of life, and rehospitalisation rate due to sparse data or because the outcome was not reported in the included studies.\nAuthors' conclusions\nThis Cochrane Review included 10 RCTs with a total of 844 participants. The review demonstrated a reduction in the incidence of parastomal hernia in people who had a prophylactic synthetic mesh placed at the time of the index operation compared to a standard ostomy formation. However, our confidence in this estimate is low due to the presence of a large degree of clinical heterogeneity, as well as high variability in follow-up duration and technique of parastomal herniation detection. We found the rate of stoma-related infection to be similar in both the intervention and control groups.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Hernia formation around a stoma affects up to 50% of people undergoing formation of a stoma. The hernia might enlarge over time, which can cause considerable patient discomfort which in turn may lead patients to restrict their work and other physical activities. Reoperation and cosmetic concerns may also arise."
}
] |
query-laysum
|
22992
|
[
{
"role": "user",
"content": "Abstract: Background\nMore than one in five patients who undergo treatment for breast cancer will develop breast cancer-related lymphedema (BCRL). BCRL can occur as a result of breast cancer surgery and/or radiation therapy. BCRL can negatively impact comfort, function, and quality of life (QoL). Manual lymphatic drainage (MLD), a type of hands-on therapy, is frequently used for BCRL and often as part of complex decongestive therapy (CDT). CDT is a fourfold conservative treatment which includes MLD, compression therapy (consisting of compression bandages, compression sleeves, or other types of compression garments), skin care, and lymph-reducing exercises (LREs). Phase 1 of CDT is to reduce swelling; Phase 2 is to maintain the reduced swelling.\nObjectives\nTo assess the efficacy and safety of MLD in treating BCRL.\nSearch methods\nWe searched Medline, EMBASE, CENTRAL, WHO ICTRP (World Health Organization's International Clinical Trial Registry Platform), and Cochrane Breast Cancer Group's Specialised Register from root to 24 May 2013. No language restrictions were applied.\nSelection criteria\nWe included randomized controlled trials (RCTs) or quasi-RCTs of women with BCRL. The intervention was MLD. The primary outcomes were (1) volumetric changes, (2) adverse events. Secondary outcomes were (1) function, (2) subjective sensations, (3) QoL, (4) cost of care.\nData collection and analysis\nWe collected data on three volumetric outcomes. (1) LE (lymphedema) volume was defined as the amount of excess fluid left in the arm after treatment, calculated as volume in mL of affected arm post-treatment minus unaffected arm post-treatment. (2) Volume reduction was defined as the amount of fluid reduction in mL from before to after treatment calculated as the pretreatment LE volume of the affected arm minus the post-treatment LE volume of the affected arm. (3) Per cent reduction was defined as the proportion of fluid reduced relative to the baseline excess volume, calculated as volume reduction divided by baseline LE volume multiplied by 100. We entered trial data into Review Manger 5.2 (RevMan), pooled data using a fixed-effect model, and analyzed continuous data as mean differences (MDs) with 95% confidence intervals (CIs). We also explored subgroups to determine whether mild BCRL compared to moderate or severe BCRL, and BCRL less than a year compared to more than a year was associated with a better response to MLD.\nMain results\nSix trials were included. Based on similar designs, trials clustered in three categories.\n(1) MLD + standard physiotherapy versus standard physiotherapy (one trial) showed significant improvements in both groups from baseline but no significant between-groups differences for per cent reduction.\n(2) MLD + compression bandaging versus compression bandaging (two trials) showed significant per cent reductions of 30% to 38.6% for compression bandaging alone, and an additional 7.11% reduction for MLD (MD 7.11%, 95% CI 1.75% to 12.47%; two RCTs; 83 participants). Volume reduction was borderline significant (P = 0.06). LE volume was not significant. Subgroup analyses was significant showing that participants with mild-to-moderate BCRL were better responders to MLD than were moderate-to-severe participants.\n(3) MLD + compression therapy versus nonMLD treatment + compression therapy (three trials) were too varied to pool. One of the trials compared compression sleeve plus MLD to compression sleeve plus pneumatic pump. 
Volume reduction was statistically significant favoring MLD (MD 47.00 mL, 95% CI 15.25 mL to 78.75 mL; 1 RCT; 24 participants), per cent reduction was borderline significant (P=0.07), and LE volume was not significant. A second trial compared compression sleeve plus MLD to compression sleeve plus self-administered simple lymphatic drainage (SLD), and was significant for MLD for LE volume (MD -230.00 mL, 95% CI -450.84 mL to -9.16 mL; 1 RCT; 31 participants) but not for volume reduction or per cent reduction. A third trial of MLD + compression bandaging versus SLD + compression bandaging was not significant (P = 0.10) for per cent reduction, the only outcome measured (MD 11.80%, 95% CI -2.47% to 26.07%, 28 participants).\nMLD was well tolerated and safe in all trials.\nTwo trials measured function as range of motion with conflicting results. One trial reported significant within-groups gains for both groups, but no between-groups differences. The other trial reported there were no significant within-groups gains and did not report between-groups results. One trial measured strength and reported no significant changes in either group.\nTwo trials measured QoL, but results were not usable because one trial did not report any results, and the other trial did not report between-groups results.\nFour trials measured sensations such as pain and heaviness. Overall, the sensations were significantly reduced in both groups over baseline, but with no between-groups differences. No trials reported cost of care.\nTrials were small ranging from 24 to 45 participants. Most trials appeared to randomize participants adequately. However, in four trials the person measuring the swelling knew what treatment the participants were receiving, and this could have biased results.\nAuthors' conclusions\nMLD is safe and may offer additional benefit to compression bandaging for swelling reduction. Compared to individuals with moderate-to-severe BCRL, those with mild-to-moderate BCRL may be the ones who benefit from adding MLD to an intensive course of treatment with compression bandaging. This finding, however, needs to be confirmed by randomized data.\nIn trials where MLD and sleeve were compared with a nonMLD treatment and sleeve, volumetric outcomes were inconsistent within the same trial. Research is needed to identify the most clinically meaningful volumetric measurement, to incorporate newer technologies in LE assessment, and to assess other clinically relevant outcomes such as fibrotic tissue formation.\nFindings were contradictory for function (range of motion), and inconclusive for quality of life.\nFor symptoms such as pain and heaviness, 60% to 80% of participants reported feeling better regardless of which treatment they received.\nOne-year follow-up suggests that once swelling had been reduced, participants were likely to keep their swelling down if they continued to use a custom-made sleeve.\n\nGiven the provided abstract, please respond to the following query: \"Key Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "When women were treated with a course of intensive compression bandaging, their swelling went down about 30% to 37%. When MLD was added to the intensive course of compression bandaging, their swelling went down another 7.11%. Thus, MLD may offer benefit when added to compression bandaging.\nExamining this finding more closely showed that this significant reduction benefit was observed in people with mild-to-moderate lymphedema when compared to participants with moderate-to-severe lymphedema. Thus, our findings suggest that individuals with mild-to-moderate BCRL are the ones who may benefit from adding MLD to an intensive course of treatment with compression bandaging. This finding, however, needs to be confirmed by further research.\nWhen women were given a standard elastic compression sleeve plus MLD and compared to women who received a standard compression sleeve plus a nonMLD treatment, results were mixed (sometimes favoring MLD and sometimes favoring neither treatment.)\nOne-year follow-up suggests that once swelling had been reduced, participants were likely to keep their swelling down if they continued to use a custom-made sleeve.\nMLD is safe and well tolerated.\nFindings were contradictory for function (range of motion), with one trial showing benefit and the other not. Two trials measured quality of life, but neither trial presented results comparing the treatment group to the control, so findings are inconclusive.\nNo trial measured cost of care."
}
] |
query-laysum | 22993 | [
{
"role": "user",
"content": "Abstract: Background\nFear of dental pain is a major barrier to treatment for children who need dental care. The use of preoperative analgesics has the potential to reduce postoperative discomfort and intraoperative pain. We reviewed the available evidence to determine whether further research is warranted and to inform the development of prescribing guidelines. This is an update of a Cochrane review published in 2012.\nObjectives\nTo assess the effects of preoperative analgesics for intraoperative or postoperative pain relief (or both) in children and adolescents undergoing dental treatment without general anaesthesia or sedation.\nSearch methods\nWe searched the following electronic databases: Cochrane Oral Health's Trials Register (to 5 January 2016), the Cochrane Central Register of Controlled Trials (CENTRAL) (the Cochrane Library 2015, Issue 12), MEDLINE via OVID (1946 to 5 January 2016), EMBASE via OVID (1980 to 5 January 2016), LILACS via BIREME (1982 to 5 January 2016) and the ISI Web of Science (1945 to 5 January 2016). We searched ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform for ongoing trials to 5 January 2016. There were no restrictions regarding language or date of publication in the searches of the electronic databases. We handsearched several specialist journals dating from 2000 to 2011.\nWe checked the reference lists of all eligible trials for additional studies. We contacted specialists in the field for any unpublished data.\nSelection criteria\nRandomised controlled clinical trials of analgesics given before dental treatment versus placebo or no analgesics in children and adolescents up to 17 years of age. We excluded children and adolescents having dental treatment under sedation (including nitrous oxide/oxygen) or general anaesthesia.\nData collection and analysis\nTwo review authors assessed titles and abstracts of the articles obtained from the searches for eligibility, undertook data extraction and assessed the risk of bias in the included studies. We assessed the quality of the evidence using GRADE criteria.\nMain results\nWe included five trials in the review, with 190 participants in total. We did not identify any new studies for inclusion from the updated search in January 2016.\nThree trials were related to dental treatment, i.e. restorative and extraction treatments; two trials related to orthodontic treatment. We did not judge any of the included trials to be at low risk of bias.\nThree of the included trials compared paracetamol with placebo, only two of which provided data for analysis (presence or absence of parent-reported postoperative pain behaviour). Meta-analysis of the two trials gave arisk ratio (RR) for postoperative pain of 0.81 (95% confidence interval (CI) 0.53 to 1.22; two trials, 100 participants; P = 0.31), which showed no evidence of a benefit in taking paracetamol preoperatively (52% reporting pain in the placebo group versus 42% in the paracetamol group). One of these trials was at unclear risk of bias, and the other was at high risk. The quality of the evidence is low. One study did not have any adverse events; the other two trials did not mention adverse events.\nFour of the included trials compared ibuprofen with placebo. Three of these trials provided useable data. One trial reported no statistical difference in postoperative pain experienced by the ibuprofen group and the control group for children undergoing dental treatment. 
We pooled the data from the other two trials, which included participants who were having orthodontic separator replacement without a general anaesthetic, to determine the effect of preoperative ibuprofen on the severity of postoperative pain. There was a statistically significant mean difference in severity of postoperative pain of -13.44 (95% CI -23.01 to -3.88; two trials, 85 participants; P = 0.006) on a visual analogue scale (0 to 100), which indicated a probable benefit for preoperative ibuprofen before this orthodontic procedure. However, both trials were at high risk of bias. The quality of the evidence is low. Only one of the trials reported adverse events (one participant from the ibuprofen group and one from the placebo group reporting a lip or cheek biting injury).\nAuthors' conclusions\nFrom the available evidence, we cannot determine whether or not preoperative analgesics are of benefit in paediatric dentistry for procedures under local anaesthetic. There is probably a benefit in using preoperative analgesics prior to orthodontic separator placement. The quality of the evidence is low. Further randomised clinical trials should be completed with appropriate sample sizes and well defined outcome measures.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "None of the included studies were at low risk of bias. The quality of the evidence is low."
}
] |
query-laysum | 22994 | [
{
"role": "user",
"content": "Abstract: Background\nGestational weight gain is positively associated with fetal growth, and observational studies of food supplementation in pregnancy have reported increases in gestational weight gain and fetal growth.\nObjectives\nTo assess the effects of education during pregnancy to increase energy and protein intake, or of actual energy and protein supplementation, on energy and protein intake, and the effect on maternal and infant health outcomes.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 January 2015), reference lists of retrieved studies and contacted researchers in the field.\nSelection criteria\nRandomised controlled trials of dietary education to increase energy and protein intake, or of actual energy and protein supplementation, during pregnancy.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and assessed risk of bias. Two review authors independently extracted data and checked for accuracy. Extracted data were supplemented by additional information from the trialists we contacted.\nMain results\nWe examined 149 reports corresponding to 65 trials. Of these trials, 17 were included, 46 were excluded, and two are ongoing. Overall, 17 trials involving 9030 women were included. For this update, we assessed methodological quality of the included trials using the standard Cochrane criteria (risk of bias) and the GRADE approach. The overall risk of bias was unclear.\nNutritional education (five trials, 1090 women)\nWomen given nutritional education had a lower relative risk of having a preterm birth (two trials, 449 women) (risk ratio (RR) 0.46, 95% CI 0.21 to 0.98, low-quality evidence), and low birthweight (one trial, 300 women) (RR 0.04, 95% CI 0.01 to 0.14). Head circumference at birth was increased in one trial (389 women) (mean difference (MD) 0.99 cm, 95% CI 0.43 to 1.55), while birthweight was significantly increased among undernourished women in two trials (320 women) (MD 489.76 g, 95% CI 427.93 to 551.59, low-quality evidence), but did not significantly increase for adequately nourished women (MD 15.00, 95% CI -76.30 to 106.30, one trial, 406 women). Protein intake increased significantly (three trials, 632 women) (protein intake: MD +6.99 g/day, 95% CI 3.02 to 10.97). No significant differences were observed on any other outcomes such as neonatal death (RR 1.28, 95% CI 0.35 to 4.72, one trial, 448 women, low-quality evidence), stillbirth (RR 0.37, 95% CI 0.07 to 1.90, one trial, 431 women, low-quality evidence), small-for-gestational age (RR 0.97, 95% CI 0.45 to 2.11, one trial, 404 women, low-quality evidence) and total gestational weight gain (MD -0.41, 95% CI -4.41 to 3.59, two trials, 233 women). There were no data on perinatal death.\nBalanced energy and protein supplementation (12 trials, 6705 women)\nRisk of stillbirth was significantly reduced for women given balanced energy and protein supplementation (RR 0.60, 95% CI 0.39 to 0.94, five trials, 3408 women, moderate-quality evidence), and the mean birthweight was significantly increased (random-effects MD +40.96 g, 95% CI 4.66 to 77.26, Tau² = 1744, I² = 44%, 11 trials, 5385 women, moderate-quality evidence). There was also a significant reduction in the risk of small-for-gestational age (RR 0.79, 95% CI 0.69 to 0.90, I² = 16%, seven trials, 4408 women, moderate-quality evidence). 
No significant effect was detected for preterm birth (RR 0.96, 95% CI 0.80 to 1.16, five trials, 3384 women, moderate-quality evidence) or neonatal death (RR 0.68, 95% CI 0.43 to 1.07, five trials, 3381 women, low-quality evidence). Weekly gestational weight gain was not significantly increased (MD 18.63, 95% CI -1.81 to 39.07, nine trials, 2391 women, very low quality evidence). There were no data reported on perinatal death and low birthweight.\nHigh-protein supplementation (one trial, 1051 women)\nHigh-protein supplementation (one trial, 505 women), was associated with a significantly increased risk of small-for-gestational age babies (RR 1.58, 95% CI 1.03 to 2.41, moderate-quality evidence). There was no significant effect for stillbirth (RR 0.81, 95% CI 0.31 to 2.15, one trial, 529 women), neonatal death (RR 2.78, 95% CI 0.75 to 10.36, one trial, 529 women), preterm birth (RR 1.14, 95% CI 0.83 to 1.56, one trial, 505 women), birthweight (MD -73.00, 95% CI -171.26 to 25.26, one trial, 504 women) and weekly gestational weight gain (MD 4.50, 95% CI -33.55 to 42.55, one trial, 486 women, low-quality evidence). No data were reported on perinatal death.\nIsocaloric protein supplementation (two trials, 184 women)\nIsocaloric protein supplementation (two trials, 184 women) had no significant effect on birthweight (MD 108.25, 95% CI -220.89 to 437.40) and weekly gestational weight gain (MD 110.45, 95% CI -82.87 to 303.76, very low-quality evidence). No data reported on perinatal mortality, stillbirth, neonatal death, small-for-gestational age, and preterm birth.\nAuthors' conclusions\nThis review provides encouraging evidence that antenatal nutritional education with the aim of increasing energy and protein intake in the general obstetric population appears to be effective in reducing the risk of preterm birth, low birthweight, increasing head circumference at birth, increasing birthweight among undernourished women, and increasing protein intake. There was no evidence of benefit or adverse effect for any other outcome reported.\nBalanced energy and protein supplementation seems to improve fetal growth, and may reduce the risk of stillbirth and infants born small-for-gestational age. High-protein supplementation does not seem to be beneficial and may be harmful to the fetus. Balanced-protein supplementation alone had no significant effects on perinatal outcomes.\nThe results of this review should be interpreted with caution. The risk of bias was either unclear or high for at least one category examined in several of the included trials, and the quality of the evidence was low for several important outcomes. Also, as the anthropometric characteristics of the general obstetric population is changing, those developing interventions aimed at altering energy and protein intake should ensure that only those women likely to benefit are included. Large, well-designed randomised trials are needed to assess the effects of increasing energy and protein intake during pregnancy in women whose intake is below recommended levels.\n\nGiven the provided abstract, please respond to the following query: \"What does this mean?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Providing nutritional advice or balanced energy and protein supplements to women during pregnancy may be beneficial. However, there is not enough evidence on isocaloric protein supplements which currently appear to be unhelpful and high-protein supplements may be harmful."
}
] |
query-laysum | 22995 | [
{
"role": "user",
"content": "Abstract: Background\nUstekinumab (CNTO 1275) and briakinumab (ABT-874) are monoclonal antibodies that target the standard p40 subunit of the cytokines interleukin-12 and interleukin-23 (IL-12/23p40), which are involved in the pathogenesis of Crohn's disease.\nObjectives\nThe objectives of this review were to assess the efficacy and safety of anti-IL-12/23p40 antibodies for induction of remission in Crohn's disease.\nSearch methods\nWe searched the following databases from inception to 12 September 2016: PubMed, MEDLINE, EMBASE, and the Cochrane Library (CENTRAL). References and conference abstracts were searched to identify additional studies.\nSelection criteria\nRandomized controlled trials (RCTs) trials in which monoclonal antibodies against IL-12/23p40 were compared to placebo or another active comparator in patients with active Crohn's disease were included.\nData collection and analysis\nTwo authors independently screened studies for inclusion and extracted data. Methodological quality was assessed using the Cochrane risk of bias tool. The primary outcome was failure to induce clinical remission, defined as a Crohn's disease activity index (CDAI) of < 150 points. Secondary outcomes included failure to induce clinical improvement, adverse events, serious adverse events, and withdrawals due to adverse events. Clinical improvement was defined as decreases of > 70 or > 100 points in the CDAI from baseline. We calculated the risk ratio (RR) and 95% confidence intervals (95% CI) for each outcome. Data were analyzed on an intention-to-treat basis. The overall quality of the evidence supporting the outcomes was evaluated using the GRADE criteria.\nMain results\nSix RCTs (n = 2324 patients) met the inclusion criteria. A low risk of bias was assigned to all studies. The two briakinumab trials were not pooled due to differences in doses and time points for analysis. In both studies there was no statistically significant difference in remission rates. One study (n = 79) compared doses of 1 mg/kg and 3 mg/kg to placebo. In the briakinumab group 70% (44/63) of patients failed to enter clinical remission at 6 or 9 weeks compared to 81% (13/16) of placebo patients (RR 0.86, 95% CI 0.65 to 1.14). Subgroup analysis revealed no significant differences by dose. The other briakinumab study (n = 230) compared intravenous doses of 200 mg, 400 mg and 700 mg with placebo. Eighty-four per cent (154/184) of briakinumab patients failed to enter clinical remission at six weeks compared to 91% (42/46) of placebo patients (RR 0.92, 95% CI 0.83 to 1.03). Subgroup analysis revealed no significant differences by dose. GRADE analyses of the briakinumab studies rated the overall quality of the evidence for the outcome clinical remission as low. Based on the results of these two studies the manufacturers of briakinumab stopped production of this medication. The ustekinumab studies were pooled despite differences in intravenous doses (i.e. 1mg/kg, 3 mg/kg, 4.5 mg/kg, and 6 mg/kg), however the subcutaneous dose group was not included in the analysis, as it was unclear if subcutaneous was equivalent to intravenous dosing. There was a statistically significant difference in remission rates. At week six, 84% (764/914) of ustekinumab patients failed to enter remission compared to 90% (367/406) of placebo patients (RR 0.92, 95% CI 0.88 to 0.96; 3 studies; high-quality evidence). Subgroup analysis showed a statistically significant difference for the 6.0 mg/kg dose group (moderate-quality evidence). 
There were statistically significant differences in clinical improvement between ustekinumab and placebo-treated patients. In the ustekinumab group, 55% (502/914) of patients failed to improve clinically (i.e. 70-point decline in CDAI score), compared to 71% (287/406) of placebo patients (RR 0.78, 95% CI 0.71 to 0.85; 3 studies). Subgroup analysis revealed significant differences compared to placebo for the 1 mg/kg, 4.5 mg/kg and 6 mg/kg dosage subgroups. Similarly for a 100-point decline in CDAI, 64% (588/914) of patients in the ustekinumab group failed to improve clinically compared to 78% (318/406) of placebo patients (RR 0.82, 95% CI 0.77 to 0.88; 3 studies; high-quality evidence). Subgroup analysis showed a significant difference compared to placebo for the 4.5 mg/kg and 6.0 mg/kg (high-quality evidence) dose groups. There were no statistically significant differences in the incidence of adverse events, serious adverse events or withdrawal due to adverse events. Sixty-two per cent (860/1386) of ustekinumab patients developed at least one adverse event compared to 64% (407/637) of placebo patients (RR 0.97, 95% CI 0.90 to 1.04; 4 studies; high-quality evidence). Five per cent (75/1386) of ustekinumab patients had a serious adverse event compared to 6% (41/637) of placebo patients (RR 0.83, 95% CI 0.58 to 1.20; 4 studies; moderate-quality evidence). The most common adverse events in briakinumab patients were injection site reactions and infections. Infections were the most common adverse event in ustekinumab patients. Worsening of Crohn's disease and serious infections were the most common serious adverse events.\nAuthors' conclusions\nHigh quality evidence suggests that ustekinumab is effective for induction of clinical remission and clinical improvement in patients with moderate to severe Crohn's disease. Moderate to high quality evidence suggests that the optimal dosage of ustekinumab is 6 mg/kg. Briakinumab and ustekinumab appear to be safe. Moderate quality evidence suggests no increased risk of serious adverse events. Future studies are required to determine the long-term efficacy and safety of ustekinumab in patients with moderate to severe Crohn's disease.\n\nGiven the provided abstract, please respond to the following query: \"What are ustekinumab and briakinumab?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Ustekinumab and briakinumab are biologic medications. These medications can be injected under the skin using a syringe or directly infused into a vein (intravenous). Biologic therapies suppress the immune system and reduce the inflammation associated with Crohn's disease. When people with Crohn's disease are experiencing symptoms of the disease it is said to be ‘active’; periods when the symptoms stop are called ‘remission’."
}
] |
query-laysum | 22996 | [
{
"role": "user",
"content": "Abstract: Background\nAsthma is a respiratory disease characterised by variable airflow limitation and the presence of respiratory symptoms including wheeze, chest tightness, cough and/or dyspnoea. Exercise training is beneficial for people with asthma; however, the response to conventional models of pulmonary rehabilitation is less clear.\nObjectives\nTo evaluate, in adults with asthma, the effectiveness of pulmonary rehabilitation compared to usual care on exercise performance, asthma control, and quality of life (co-primary outcomes), incidence of severe asthma exacerbations/hospitalisations, mental health, muscle strength, physical activity levels, inflammatory biomarkers, and adverse events.\nSearch methods\nWe identified studies from the Cochrane Airways Trials Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform, from their inception to May 2021, as well as the reference lists of all primary studies and review articles.\nSelection criteria\nWe included randomised controlled trials in which pulmonary rehabilitation was compared to usual care in adults with asthma. Pulmonary rehabilitation must have included a minimum of four weeks (or eight sessions) aerobic training and education or self-management. Co-interventions were permitted; however, exercise training alone was not.\nData collection and analysis\nFollowing the use of Cochrane's Screen4Me workflow, two review authors independently screened and selected trials for inclusion, extracted study characteristics and outcome data, and assessed risk of bias using the Cochrane risk of bias tool. We contacted study authors to retrieve missing data. We calculated between-group effects via mean differences (MD) or standardised mean differences (SMD) using a random-effects model. We evaluated the certainty of evidence using GRADE methodology.\nMain results\nWe included 10 studies involving 894 participants (range 24 to 412 participants (n = 2 studies involving n > 100, one contributing to meta-analysis), mean age range 27 to 54 years). We identified one ongoing study and three studies awaiting classification. One study was synthesised narratively, and another involved participants specifically with asthma-COPD overlap. Most programmes were outpatient-based, lasting from three to four weeks (inpatient) or eight to 12 weeks (outpatient). Education or self-management components included breathing retraining and relaxation, nutritional advice and psychological counselling. One programme was specifically tailored for people with severe asthma.\nPulmonary rehabilitation compared to usual care may increase maximal oxygen uptake (VO2 max) after programme completion, but the evidence is very uncertain for data derived using mL/kg/min (MD between groups of 3.63 mL/kg/min, 95% confidence interval (CI) 1.48 to 5.77; 3 studies; n = 129) and uncertain for data derived from % predicted VO2 max (MD 14.88%, 95% CI 9.66 to 20.1%; 2 studies; n = 60). The evidence is very uncertain about the effects of pulmonary rehabilitation compared to usual care on incremental shuttle walk test distance (MD between groups 74.0 metres, 95% CI 26.4 to 121.4; 1 study; n = 30). 
Pulmonary rehabilitation may have little to no effect on VO2 max at longer-term follow up (9 to 12 months), but the evidence is very uncertain (MD −0.69 mL/kg/min, 95% CI −4.79 to 3.42; I2 = 49%; 3 studies; n = 66).\nPulmonary rehabilitation likely improves functional exercise capacity as measured by 6-minute walk distance, with MD between groups after programme completion of 79.8 metres (95% CI 66.5 to 93.1; 5 studies; n = 529; moderate certainty evidence). This magnitude of mean change exceeds the minimally clinically important difference (MCID) threshold for people with chronic respiratory disease. The evidence is very uncertain about the longer-term effects one year after pulmonary rehabilitation for this outcome (MD 52.29 metres, 95% CI 0.7 to 103.88; 2 studies; n = 42).\nPulmonary rehabilitation may result in a small improvement in asthma control compared to usual care as measured by Asthma Control Questionnaire (ACQ), with an MD between groups of −0.46 (95% CI −0.76 to −0.17; 2 studies; n = 93; low certainty evidence); however, data derived from the Asthma Control Test were very uncertain (MD between groups 3.34, 95% CI −2.32 to 9.01; 2 studies; n = 442). The ACQ finding approximates the MCID of 0.5 points. Pulmonary rehabilitation results in little to no difference in asthma control as measured by ACQ at nine to 12 months follow-up (MD 0.09, 95% CI −0.35 to 0.53; 2 studies; n = 48; low certainty evidence).\nPulmonary rehabilitation likely results in a large improvement in quality of life as assessed by the St George's Respiratory Questionnaire (SGRQ) total score (MD −18.51, 95% CI −20.77 to −16.25; 2 studies; n = 440; moderate certainty evidence), with this magnitude of change exceeding the MCID. However, pulmonary rehabilitation may have little to no effect on Asthma Quality of Life Questionnaire (AQLQ) total scores, with the evidence being very uncertain (MD 0.87, 95% CI −0.13 to 1.86; 2 studies; n = 442). Longer-term follow-up data suggested improvements in quality of life may occur as measured by SGRQ (MD −13.4, 95% CI −15.93 to −10.88; 2 studies; n = 430) but not AQLQ (MD 0.58, 95% CI −0.23 to 1.38; 2 studies; n = 435); however, the evidence is very uncertain.\nOne study reported no difference between groups in the proportion of participants who experienced an asthma exacerbation during the intervention period. Data from one study suggest adverse events attributable to the intervention are rare.\nOverall risk of bias was most commonly impacted by performance bias attributed to a lack of participant blinding to knowledge of the intervention. This is inherently challenging to overcome in rehabilitation studies.\nAuthors' conclusions\nModerate certainty evidence shows that pulmonary rehabilitation is probably associated with clinically meaningful improvements in functional exercise capacity and quality of life upon programme completion in adults with asthma. The certainty of evidence relating to maximal exercise capacity was very low to low. Pulmonary rehabilitation appears to confer minimal effect on asthma control, although the certainty of evidence is very low to low. Unclear reporting of study methods and small sample sizes limits our certainty in the overall body of evidence, whilst heterogenous study designs and interventions likely contribute to inconsistent findings across clinical outcomes and studies. There remains considerable scope for future research.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "- We found 10 studies involving 894 adults with asthma.\n- The studies ranged in size from 24 to 412 people.\n- Most studies were conducted in Europe.\n- Where reported, most study participants were female, with the average age ranging from 27 to 54 years.\n- One study specifically included people with severe forms of asthma. Another study specifically included people who had a condition involving overlapping features of both asthma and chronic obstructive pulmonary disease (COPD).\n- The way pulmonary rehabilitation was delivered varied across studies. Inpatient programmes lasted 3 to 4 weeks, whilst outpatient programmes lasted 8 to 12 weeks.\n- The specific nature of exercise or education components amongst the included studies varied widely."
}
] |
query-laysum | 22997 | [
{
"role": "user",
"content": "Abstract: Background\nWomen with inflammatory bowel disease (IBD) may require surgery, which may result in higher risk of infertility. Restorative proctocolectomy with ileal anal pouch anastomosis (IPAA) may increase infertility, but the degree to which IPAA affects infertility remains unclear, and the impact of other surgical interventions on infertility is unknown.\nObjectives\nPrimary objective\n• To determine the effects of surgical interventions for IBD on female infertility.\nSecondary objectives\n• To evaluate the impact of surgical interventions on the need for assisted reproductive technology (ART), time to pregnancy, miscarriage, stillbirth, prematurity, mode of delivery (spontaneous vaginal, instrumental vaginal, or Caesarean section), infant requirement for resuscitation and neonatal intensive care, low and very low birth weight, small for gestational age, antenatal and postpartum hemorrhage, retained placenta, postpartum depression, gestational diabetes, and gestational hypertension/preeclampsia.\nSearch methods\nWe searched MEDLINE, Embase, CENTRAL, and the Cochrane IBD Group Specialized Register from inception to September 27, 2018, to identify relevant studies. We also searched references of relevant articles, conference abstracts, grey literature, and trials registers.\nSelection criteria\nWe included observational studies that compared women of reproductive age (≥ 12 years of age) who underwent surgery to women with IBD who had a different type of surgery or no surgery (i.e. treated medically). We also included studies comparing women before and after surgery. Any type of IBD-related surgery was permitted. Infertility was defined as an inability to become pregnant following 12 months of unprotected intercourse. Infertility at 6, 18, and 24 months was included as a secondary outcome. We excluded studies that included women without IBD and those comparing women with IBD to women without IBD..\nData collection and analysis\nTwo review authors independently screened studies and extracted data. We used the Newcastle-Ottawa Scale to assess bias and GRADE to assess the overall certainty of evidence. We calculated the pooled risk ratio (RR) and 95% confidence interval (CI) using random-effects models. When individual studies reported odds ratios (ORs) and did not provide raw numbers, we pooled ORs instead.\nMain results\nWe identified 16 observational studies for inclusion. Ten studies were included in meta-analyses, of which nine compared women with and without a previous IBD-related surgery and the other compared women with open and laparoscopic IPAA. Of the ten studies included in meta-analyses, four evaluated infertility, one evaluated ART, and seven reported on pregnancy-related outcomes. Seven studies in which women were compared before and after colectomy and/or IPAA were summarized qualitatively, of which five included a comparison of infertility, three included the use of ART, and three included other pregnancy-related outcomes. One study included a comparison of women with and without IPAA, as well as before and after IPAA, and was therefore included in both the meta-analysis and the qualitative summary. All studies were at high risk of bias for at least two domains.\nWe are very uncertain of the effect of IBD surgery on infertility at 12 months (RR 5.45, 95% CI 0.41 to 72.57; 114 participants; 2 studies) and at 24 months (RR 3.59, 95% CI 1.32 to 9.73; 190 participants; 1 study). 
Infertility was lower in women who received laparoscopic surgery compared to open restorative proctocolectomy at 12 months (RR 0.70, 95% CI 0.38 to 1.27; 37 participants; 1 study).\nWe are very uncertain of the effect of IBD surgery on pregnancy-related outcomes, including miscarriage (OR 2.03, 95% CI 1.14 to 3.60; 776 pregnancies; 5 studies), use of ART (RR 25.09, 95% CI 1.56 to 403.76; 106 participants; 1 study), delivery via Caesarean section (RR 2.23, 95% CI 1.00 to 4.95; 20 pregnancies; 1 study), stillbirth (RR 1.96, 95% CI 0.42 to 9.18; 246 pregnancies; 3 studies), preterm birth (RR 1.91, 95% CI 0.67 to 5.48; 194 pregnancies; 3 studies), low birth weight (RR 0.61, 95% CI 0.08 to 4.83), and small for gestational age (RR 2.54, 95% CI 0.80 to 8.01; 65 pregnancies; 1 study).\nStudies comparing infertility before and after IBD-related surgery reported numerically higher rates of infertility at six months (before: 1/5, 20.0%; after: 9/15, 60.0%; 1 study), at 12 months (before: 68/327, 20.8%; after: 239/377, 63.4%; 5 studies), and at 24 months (before: 14/89, 15.7%; after: 115/164, 70.1%; 2 studies); use of ART (before: 5.3% to 42.2%; after: 30.3% to 34.3%; proportions varied across studies due to differences in which women were identified as at risk of using ART); and delivery via Caesarean section (before: 8/73, 11.0%; after: 36/75, 48.0%; 2 studies). In addition, women had a longer time to conception after surgery (two to five months; 2 studies) than before surgery (5 to 16 months; 2 studies). The proportions of women experiencing miscarriage (before: 19/123, 15.4%; after: 21/134, 15.7%; 3 studies) and stillbirth (before: 2/38, 5.3%; after: 3/80: 3.8%; 2 studies) were similar before and after surgery. Fewer women experienced gestational diabetes after surgery (before: 3/37, 8.1%; after: 0/37; 1 study), and the risk of preeclampsia was similar before and after surgery (before: 2/37, 5.4%; after: 0/37; 1 study). We are very uncertain of the effects of IBD-related surgery on these outcomes due to poor quality evidence, including confounding bias due to increased age of women after surgery.\nWe rated evidence for all outcomes and comparisons as very low quality due to the observational nature of the data, inclusion of small studies with imprecise estimates, and high risk of bias among included studies.\nAuthors' conclusions\nThe effect of surgical therapy for IBD on female infertility is uncertain. It is also uncertain if there are any differences in infertility among those undergoing open versus laparoscopic procedures. Previous surgery was associated with higher risk of miscarriage, use of ART, Caesarean section delivery, and giving birth to a low birth weight infant, but was not associated with risk of stillbirth, preterm delivery, or delivery of a small for gestational age infant. These findings are based on very low-quality evidence. As a result, definitive conclusions cannot be made, and future well-designed studies are needed to fully understand the impact of surgery on infertility and pregnancy outcomes.\n\nGiven the provided abstract, please respond to the following query: \"Conclusions\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Studies on the impact of IBD-related surgery on a woman's ability to get pregnant and on the outcomes of pregnancy are rare and of low quality. As a result, information reviewed in the present study on associations between IBD-related surgery and adverse pregnancy outcomes has high risk of bias, and we have very little confidence in these conclusions. Our results should be interpreted with caution. Additional well-designed studies on this topic are needed."
}
] |
query-laysum | 22998 | [
{
"role": "user",
"content": "Abstract: Background\nImproving upper limb function is a core element of stroke rehabilitation needed to maximise patient outcomes and reduce disability. Evidence about effects of individual treatment techniques and modalities is synthesised within many reviews. For selection of effective rehabilitation treatment, the relative effectiveness of interventions must be known. However, a comprehensive overview of systematic reviews in this area is currently lacking.\nObjectives\nTo carry out a Cochrane overview by synthesising systematic reviews of interventions provided to improve upper limb function after stroke.\nMethods\nSearch methods: We comprehensively searched the Cochrane Database of Systematic Reviews; the Database of Reviews of Effects; and PROSPERO (an international prospective register of systematic reviews) (June 2013). We also contacted review authors in an effort to identify further relevant reviews.\nSelection criteria: We included Cochrane and non-Cochrane reviews of randomised controlled trials (RCTs) of patients with stroke comparing upper limb interventions with no treatment, usual care or alternative treatments. Our primary outcome of interest was upper limb function; secondary outcomes included motor impairment and performance of activities of daily living. When we identified overlapping reviews, we systematically identified the most up-to-date and comprehensive review and excluded reviews that overlapped with this.\nData collection and analysis: Two overview authors independently applied the selection criteria, excluding reviews that were superseded by more up-to-date reviews including the same (or similar) studies. Two overview authors independently assessed the methodological quality of reviews (using a modified version of the AMSTAR tool) and extracted data. Quality of evidence within each comparison in each review was determined using objective criteria (based on numbers of participants, risk of bias, heterogeneity and review quality) to apply GRADE (Grades of Recommendation, Assessment, Development and Evaluation) levels of evidence. We resolved disagreements through discussion. We systematically tabulated the effects of interventions and used quality of evidence to determine implications for clinical practice and to make recommendations for future research.\nMain results\nOur searches identified 1840 records, from which we included 40 completed reviews (19 Cochrane; 21 non-Cochrane), covering 18 individual interventions and dose and setting of interventions. The 40 reviews contain 503 studies (18,078 participants). We extracted pooled data from 31 reviews related to 127 comparisons. We judged the quality of evidence to be high for 1/127 comparisons (transcranial direct current stimulation (tDCS) demonstrating no benefit for outcomes of activities of daily living (ADLs)); moderate for 49/127 comparisons (covering seven individual interventions) and low or very low for 77/127 comparisons.\nModerate-quality evidence showed a beneficial effect of constraint-induced movement therapy (CIMT), mental practice, mirror therapy, interventions for sensory impairment, virtual reality and a relatively high dose of repetitive task practice, suggesting that these may be effective interventions; moderate-quality evidence also indicated that unilateral arm training may be more effective than bilateral arm training. 
Information was insufficient to reveal the relative effectiveness of different interventions.\nModerate-quality evidence from subgroup analyses comparing greater and lesser doses of mental practice, repetitive task training and virtual reality demonstrates a beneficial effect for the group given the greater dose, although not for the group given the smaller dose; however tests for subgroup differences do not suggest a statistically significant difference between these groups. Future research related to dose is essential.\nSpecific recommendations for future research are derived from current evidence. These recommendations include but are not limited to adequately powered, high-quality RCTs to confirm the benefit of CIMT, mental practice, mirror therapy, virtual reality and a relatively high dose of repetitive task practice; high-quality RCTs to explore the effects of repetitive transcranial magnetic stimulation (rTMS), tDCS, hands-on therapy, music therapy, pharmacological interventions and interventions for sensory impairment; and up-to-date reviews related to biofeedback, Bobath therapy, electrical stimulation, reach-to-grasp exercise, repetitive task training, strength training and stretching and positioning.\nAuthors' conclusions\nLarge numbers of overlapping reviews related to interventions to improve upper limb function following stroke have been identified, and this overview serves to signpost clinicians and policy makers toward relevant systematic reviews to support clinical decisions, providing one accessible, comprehensive document, which should support clinicians and policy makers in clinical decision making for stroke rehabilitation.\nCurrently, no high-quality evidence can be found for any interventions that are currently used as part of routine practice, and evidence is insufficient to enable comparison of the relative effectiveness of interventions. Effective collaboration is urgently needed to support large, robust RCTs of interventions currently used routinely within clinical practice. Evidence related to dose of interventions is particularly needed, as this information has widespread clinical and research implications.\n\nGiven the provided abstract, please respond to the following query: \"Research question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Which interventions help to promote arm and hand recovery after a person has had a stroke?"
}
] |
query-laysum | 22999 | [
{
"role": "user",
"content": "Abstract: Background\nThis review is one of a series of rapid reviews that Cochrane contributors have prepared to inform the 2020 COVID-19 pandemic.\nWhen new respiratory infectious diseases become widespread, such as during the COVID-19 pandemic, healthcare workers’ adherence to infection prevention and control (IPC) guidelines becomes even more important. Strategies in these guidelines include the use of personal protective equipment (PPE) such as masks, face shields, gloves and gowns; the separation of patients with respiratory infections from others; and stricter cleaning routines. These strategies can be difficult and time-consuming to adhere to in practice. Authorities and healthcare facilities therefore need to consider how best to support healthcare workers to implement them.\nObjectives\nTo identify barriers and facilitators to healthcare workers’ adherence to IPC guidelines for respiratory infectious diseases.\nSearch methods\nWe searched OVID MEDLINE on 26 March 2020. As we searched only one database due to time constraints, we also undertook a rigorous and comprehensive scoping exercise and search of the reference lists of key papers. We did not apply any date limit or language limits.\nSelection criteria\nWe included qualitative and mixed-methods studies (with an identifiable qualitative component) that focused on the experiences and perceptions of healthcare workers towards factors that impact on their ability to adhere to IPC guidelines for respiratory infectious diseases. We included studies of any type of healthcare worker with responsibility for patient care. We included studies that focused on IPC guidelines (local, national or international) for respiratory infectious diseases in any healthcare setting. These selection criteria were framed by an understanding of the needs of health workers during the COVID-19 pandemic.\nData collection and analysis\nFour review authors independently assessed the titles, abstracts and full texts identified by the search. We used a prespecified sampling frame to sample from the eligible studies, aiming to capture diverse respiratory infectious disease types, geographical spread and data-rich studies. We extracted data using a data extraction form designed for this synthesis. We assessed methodological limitations using an adapted version of the Critical Skills Appraisal Programme (CASP) tool. We used a ‘best fit framework approach’ to analyse and synthesise the evidence. This provided upfront analytical categories, with scope for further thematic analysis. We used the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach to assess our confidence in each finding. We examined each review finding to identify factors that may influence intervention implementation and developed implications for practice.\nMain results\nWe found 36 relevant studies and sampled 20 of these studies for our analysis. Ten of these studies were from Asia, four from Africa, four from Central and North America and two from Australia. The studies explored the views and experiences of nurses, doctors and other healthcare workers when dealing with severe acute respiratory syndrome (SARS), H1N1, MERS (Middle East respiratory syndrome), tuberculosis (TB), or seasonal influenza. Most of these healthcare workers worked in hospitals; others worked in primary and community care settings.\nThe review points to several barriers and facilitators that influenced healthcare workers’ ability to adhere to IPC guidelines. 
The following factors are based on findings assessed as of moderate to high confidence.\nHealthcare workers felt unsure as to how to adhere to local guidelines when they were lengthy and ambiguous or did not reflect national or international guidelines. They could feel overwhelmed because local guidelines were constantly changing. They also described how IPC strategies led to increased workloads and fatigue, for instance because they had to use PPE and take on additional cleaning. Healthcare workers described how their responses to IPC guidelines were influenced by the level of support they felt that they received from their management team.\nClear communication about IPC guidelines was seen as vital. But healthcare workers pointed to a lack of training about the infection itself and about how to use PPE. They also thought it was a problem when training was not mandatory.\nSufficient space to isolate patients was also seen as vital. A lack of isolation rooms, anterooms and shower facilities was identified as a problem. Other important practical measures described by healthcare workers included minimising overcrowding, fast-tracking infected patients, restricting visitors, and providing easy access to handwashing facilities.\nA lack of PPE, and provision of equipment that was of poor quality, was a serious concern for healthcare workers and managers. They also pointed to the need to adjust the volume of supplies as infection outbreaks continued.\nHealthcare workers believed that they followed IPC guidance more closely when they saw its value. Some healthcare workers felt motivated to follow the guidance because of fear of infecting themselves or their families, or because they felt responsible for their patients. Some healthcare workers found it difficult to use masks and other equipment when it made patients feel isolated, frightened or stigmatised. Healthcare workers also found masks and other equipment uncomfortable to use. The workplace culture could also influence whether healthcare workers followed IPC guidelines or not.\nAcross many of the findings, healthcare workers pointed to the importance of including all staff, including cleaning staff, porters, kitchen staff and other support staff when implementing IPC guidelines.\nAuthors' conclusions\nHealthcare workers point to several factors that influence their ability and willingness to follow IPC guidelines when managing respiratory infectious diseases. These include factors tied to the guideline itself and how it is communicated, support from managers, workplace culture, training, physical space, access to and trust in personal protective equipment, and a desire to deliver good patient care. The review also highlights the importance of including all facility staff, including support staff, when implementing IPC guidelines.\n\nGiven the provided abstract, please respond to the following query: \"What was studied in this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for studies that examined healthcare workers’ views and experiences towards IPC guidelines – particularly for respiratory diseases that start suddenly, like COVID-19. Guidelines could be local, national, or international, for any healthcare setting. We included studies of any healthcare worker who looked after patients in any healthcare setting. We searched for studies published from 2002 onwards to cover the 2003 severe acute respiratory syndrome (SARS) outbreak onwards."
}
] |
query-laysum | 23000 | [
{
"role": "user",
"content": "Abstract: Background\nObstructive sleep-disordered breathing (oSDB) is a condition that encompasses breathing problems when asleep, due to an obstruction of the upper airways, ranging in severity from simple snoring to obstructive sleep apnoea syndrome (OSAS). It affects both children and adults. In children, hypertrophy of the tonsils and adenoid tissue is thought to be the commonest cause of oSDB. As such, tonsillectomy - with or without adenoidectomy - is considered an appropriate first-line treatment for most cases of paediatric oSDB.\nObjectives\nTo assess the benefits and harms of tonsillectomy with or without adenoidectomy compared with non-surgical management of children with oSDB.\nSearch methods\nWe searched the Cochrane Register of Studies Online, PubMed, EMBASE, CINAHL, Web of Science, Clinicaltrials.gov, ICTRP and additional sources for published and unpublished trials. The date of the search was 5 March 2015.\nSelection criteria\nRandomised controlled trials comparing the effectiveness and safety of (adeno)tonsillectomy with non-surgical management in children with oSDB aged 2 to 16 years.\nData collection and analysis\nWe used the standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nThree trials (562 children) met our inclusion criteria. Two were at moderate to high risk of bias and one at low risk of bias. We did not pool the results because of substantial clinical heterogeneity. They evaluated three different groups of children: those diagnosed with mild to moderate OSAS by polysomnography (PSG) (453 children aged five to nine years; low risk of bias; CHAT trial), those with a clinical diagnosis of oSDB but with negative PSG recordings (29 children aged two to 14 years; moderate to high risk of bias; Goldstein) and children with Down syndrome or mucopolysaccharidosis (MPS) diagnosed with mild to moderate OSAS by PSG (80 children aged six to 12 years; moderate to high risk of bias; Sudarsan). Moreover, the trials included two different comparisons: adenotonsillectomy versus no surgery (CHAT trial and Goldstein) or versus continuous positive airway pressure (CPAP) (Sudarsan).\nDisease-specific quality of life and/or symptom score (using a validated instrument): first primary outcome\nIn the largest trial with lowest risk of bias (CHAT trial), at seven months, mean scores for those instruments measuring disease-specific quality of life and/or symptoms were lower (that is, better quality of life or fewer symptoms) in children receiving adenotonsillectomy than in those managed by watchful waiting:\n- OSA-18 questionnaire (scale 18 to 126): 31.8 versus 49.5 (mean difference (MD) -17.7, 95% confidence interval (CI) -21.2 to -14.2);\n- PSQ-SRBD questionnaire (scale 0 to 1): 0.2 versus 0.5 (MD -0.3, 95% CI -0.31 to -0.26);\n- Modified Epworth Sleepiness Scale (scale 0 to 24): 5.1 versus 7.1 (MD -2.0, 95% CI -2.9 to -1.1).\nNo data on this primary outcome were reported in the Goldstein trial.\nIn the Sudarsan trial, the mean OSA-18 score at 12 months did not significantly differ between the adenotonsillectomy and CPAP groups. 
The mean modified Epworth Sleepiness Scale scores did not differ at six months, but were lower in the surgery group at 12 months: 5.5 versus 7.9 (MD -2.4, 95% CI -3.1 to -1.7).\nAdverse events: second primary outcome\nIn the CHAT trial, 15 children experienced a serious adverse event: 6/194 (3%) in the adenotonsillectomy group and 9/203 (4%) in the control group (RD -1%, 95% CI -5% to 2%).\nNo major complications were reported in the Goldstein trial.\nIn the Sudarsan trial, 2/37 (5%) developed a secondary haemorrhage after adenotonsillectomy, while 1/36 (3%) developed a rash on the nasal dorsum secondary to the CPAP mask (RD -3%, 95% CI -6% to 12%).\nSecondary outcomes\nIn the CHAT trial, at seven months, mean scores for generic caregiver-rated quality of life were higher in children receiving adenotonsillectomy than in those managed by watchful waiting. No data on this outcome were reported by Sudarsan and Goldstein.\nIn the CHAT trial, at seven months, more children in the surgery group had normalisation of respiratory events during sleep as measured by PSG than those allocated to watchful waiting: 153/194 (79%) versus 93/203 (46%) (RD 33%, 95% CI 24% to 42%). In the Goldstein trial, at six months, PSG recordings were similar between groups and in the Sudarsan trial resolution of OSAS (Apnoea/Hypopnoea Index score below 1) did not significantly differ between the adenotonsillectomy and CPAP groups.\nIn the CHAT trial, at seven months, neurocognitive performance and attention and executive function had not improved with surgery: scores were similar in both groups. In the CHAT trial, at seven months, mean scores for caregiver-reported ratings of behaviour were lower (that is, better behaviour) in children receiving adenotonsillectomy than in those managed by watchful waiting, however, teacher-reported ratings of behaviour did not significantly differ.\nNo data on these outcomes were reported by Goldstein and Sudarsan.\nAuthors' conclusions\nIn otherwise healthy children, without a syndrome, of older age (five to nine years), and diagnosed with mild to moderate OSAS by PSG, there is moderate quality evidence that adenotonsillectomy provides benefit in terms of quality of life, symptoms and behaviour as rated by caregivers and high quality evidence that this procedure is beneficial in terms of PSG parameters. At the same time, high quality evidence indicates no benefit in terms of objective measures of attention and neurocognitive performance compared with watchful waiting. Furthermore, PSG recordings of almost half of the children managed non-surgically had normalised by seven months, indicating that physicians and parents should carefully weigh the benefits and risks of adenotonsillectomy against watchful waiting in these children. This is a condition that may recover spontaneously over time.\nFor non-syndromic children classified as having oSDB on purely clinical grounds but with negative PSG recordings, the evidence on the effects of adenotonsillectomy is of very low quality and is inconclusive.\nLow-quality evidence suggests that adenotonsillectomy and CPAP may be equally effective in children with Down syndrome or MPS diagnosed with mild to moderate OSAS by PSG.\nWe are unable to present data on the benefits of adenotonsillectomy in children with oSDB aged under five, despite this being a population in whom this procedure is often performed for this purpose.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Moderate quality evidence is available that children without a syndrome who have been diagnosed with mild to moderate OSAS do benefit from early adenotonsillectomy in terms of quality of life, symptoms and behaviour as reported by caregiver and high quality evidence in terms of overnight sleep study findings. The evidence on the effects of adenotonsillectomy in children without a syndrome who were diagnosed as having oSDB but who had a normal overnight sleep study is of very low quality. The evidence for children with Down syndrome or MPS diagnosed with mild to moderate OSAS is of low quality."
}
] |
query-laysum
|
23001
|
[
{
"role": "user",
"content": "Abstract: Background\nEpidemiological evidence has suggested a link between beta2-agonists and increases in asthma mortality. There has been much debate about whether regular (daily) long-acting beta2-agonists (LABA) are safe when used in combination with inhaled corticosteroids (ICS). This updated Cochrane Review includes results from two large trials that recruited 23,422 adolescents and adults mandated by the US Food and Drug Administration (FDA).\nObjectives\nTo assess the risk of mortality and non-fatal serious adverse events (SAEs) in trials that randomly assign participants with chronic asthma to regular formoterol and inhaled corticosteroids versus the same dose of inhaled corticosteroid alone.\nSearch methods\nWe identified randomised trials using the Cochrane Airways Group Specialised Register of trials. We checked websites of clinical trial registers for unpublished trial data as well as FDA submissions in relation to formoterol. The date of the most recent search was February 2019.\nSelection criteria\nWe included randomised clinical trials (RCTs) with a parallel design involving adults, children, or both with asthma of any severity who received regular formoterol and ICS (separate or combined) treatment versus the same dose of ICS for at least 12 weeks.\nData collection and analysis\nWe used standard methodological procedures expected by Cochrane. We obtained unpublished data on mortality and SAEs from the sponsors of the studies. We assessed our confidence in the evidence using GRADE recommendations. The primary outcomes were all-cause mortality and all-cause non-fatal serious adverse events.\nMain results\nWe found 42 studies eligible for inclusion and included 39 studies in the analyses: 29 studies included 35,751 adults, and 10 studies included 4035 children and adolescents. Inhaled corticosteroids included beclomethasone (daily metered dosage 200 to 800 µg), budesonide (200 to 1600 µg), fluticasone (200 to 250 µg), and mometasone (200 to 800 µg). Formoterol metered dosage ranged from 12 to 48 µg daily. Fixed combination ICS was used in most of the studies. We judged the risk of selection bias, performance bias, and attrition bias as low, however most studies did not report independent assessment of causation of SAEs.\nDeaths\nSeventeen of 18,645 adults taking formoterol and ICS and 13 of 17,106 adults taking regular ICS died of any cause. The pooled Peto odds ratio (OR) was 1.25 (95% confidence interval (CI) 0.61 to 2.56, moderate-certainty evidence), which equated to one death occurring for every 1000 adults treated with ICS alone for 26 weeks; the corresponding risk amongst adults taking formoterol and ICS was also one death (95% CI 0 to 2 deaths). No deaths were reported in the trials on children and adolescents (4035 participants) (low-certainty evidence).\nIn terms of asthma-related deaths, no children and adolescents died from asthma, but three of 12,777 adults in the formoterol and ICS treatment group died of asthma (both low-certainty evidence).\nNon-fatal serious adverse events\nA total of 401 adults experienced a non-fatal SAE of any cause on formoterol with ICS, compared to 369 adults who received regular ICS. The pooled Peto OR was 1.00 (95% CI 0.87 to 1.16, high-certainty evidence, 29 studies, 35,751 adults). 
For every 1000 adults treated with ICS alone for 26 weeks, 22 adults had an SAE; the corresponding risk for those on formoterol and ICS was also 22 adults (95% CI 19 to 25).\nThirty of 2491 children and adolescents experienced an SAE of any cause when receiving formoterol with ICS, compared to 13 of 1544 children and adolescents receiving ICS alone. The pooled Peto OR was 1.33 (95% CI 0.71 to 2.49, moderate-certainty evidence, 10 studies, 4035 children and adolescents). For every 1000 children and adolescents treated with ICS alone for 12.5 weeks, 8 had an non-fatal SAE; the corresponding risk amongst those on formoterol and ICS was 11 children and adolescents (95% CI 6 to 21).\nAsthma-related serious adverse events\nNinety adults experienced an asthma-related non-fatal SAE with formoterol and ICS, compared to 102 with ICS alone. The pooled Peto OR was 0.86 (95% CI 0.64 to 1.14, moderate-certainty evidence, 28 studies, 35,158 adults). For every 1000 adults treated with ICS alone for 26 weeks, 6 adults had an asthma-related non-fatal SAE; the corresponding risk for those on formoterol and ICS was 5 adults (95% CI 4 to 7).\nAmongst children and adolescents, 9 experienced an asthma-related non-fatal SAE with formoterol and ICS, compared to 5 on ICS alone. The pooled Peto OR was 1.18 (95% CI 0.40 to 3.51, very low-certainty evidence, 10 studies, 4035 children and adolescents). For every 1000 children and adolescents treated with ICS alone for 12.5 weeks, 3 had an asthma-related non-fatal SAE; the corresponding risk on formoterol and ICS was 4 (95% CI 1 to 11).\nAuthors' conclusions\nWe did not find a difference in the risk of death (all-cause or asthma-related) in adults taking combined formoterol and ICS versus ICS alone (moderate- to low-certainty evidence). No deaths were reported in children and adolescents. The risk of dying when taking either treatment was very low, but we cannot be certain if there is a difference in mortality when taking additional formoterol to ICS (low-certainty evidence).\nWe did not find a difference in the risk of non-fatal SAEs of any cause in adults (high-certainty evidence). A previous version of the review had shown a lower risk of asthma-related SAEs in adults taking combined formoterol and ICS; however, inclusion of new studies no longer shows a difference between treatments (moderate-certainty evidence).\nThe reported number of children and adolescents with SAEs was small, so uncertainty remains in this age group.\nWe included results from large studies mandated by the FDA. Clinical decisions and information provided to patients regarding regular use of formoterol and ICS need to take into account the balance between known symptomatic benefits of formoterol and ICS versus the remaining degree of uncertainty associated with its potential harmful effects.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We analysed data from 29 studies in 35,751 adults and 10 studies in 4035 children aged up to 17 years. The participants in the studies had a range of asthma severity, with most having been previously treated with regular ICS (over a wide range of doses). There were too few children in the studies to allow us to be certain about the effects in children.\nThirty deaths were reported in 35,751 adults. Seventeen of these deaths were reported in participants taking formoterol and ICS, and 13 deaths in participants who were taking ICS alone. Three deaths reported in adults taking formoterol and ICS were due to asthma, but there were no deaths due to asthma with ICS alone. No deaths were reported in children up to 17 years age.\nThe number of people experiencing serious harms of any cause was similar in adults with and without formoterol. Although there was no difference in the risk of serious harms in adults with asthma taking regular formoterol in combination with ICS compared to ICS alone, we could not confidently exclude a reduced or increased risk of events compared to taking ICS alone."
}
] |
query-laysum
|
23002
|
[
{
"role": "user",
"content": "Abstract: Background\nStem cell therapy (SCT) has been proposed as an alternative treatment for dilated cardiomyopathy (DCM), nonetheless its effectiveness remains debatable.\nObjectives\nTo assess the effectiveness and safety of SCT in adults with non-ischaemic DCM.\nSearch methods\nWe searched CENTRAL in the Cochrane Library, MEDLINE, and Embase for relevant trials in November 2020. We also searched two clinical trials registers in May 2020.\nSelection criteria\nEligible studies were randomized controlled trials (RCT) comparing stem/progenitor cells with no cells in adults with non-ischaemic DCM. We included co-interventions such as the administration of stem cell mobilizing agents. Studies were classified and analysed into three categories according to the comparison intervention, which consisted of no intervention/placebo, cell mobilization with cytokines, or a different mode of SCT.\nThe first two comparisons (no cells in the control group) served to assess the efficacy of SCT while the third (different mode of SCT) served to complement the review with information about safety and other information of potential utility for a better understanding of the effects of SCT.\nData collection and analysis\nTwo review authors independently screened all references for eligibility, assessed trial quality, and extracted data. We undertook a quantitative evaluation of data using random-effects meta-analyses. We evaluated heterogeneity using the I² statistic. We could not explore potential effect modifiers through subgroup analyses as they were deemed uninformative due to the scarce number of trials available. We assessed the certainty of the evidence using the GRADE approach. We created summary of findings tables using GRADEpro GDT. We focused our summary of findings on all-cause mortality, safety, health-related quality of life (HRQoL), performance status, and major adverse cardiovascular events.\nMain results\nWe included 13 RCTs involving 762 participants (452 cell therapy and 310 controls). Only one study was at low risk of bias in all domains. There were many shortcomings in the publications that did not allow a precise assessment of the risk of bias in many domains. Due to the nature of the intervention, the main source of potential bias was lack of blinding of participants (performance bias). Frequently, the format of the continuous data available was not ideal for use in the meta-analysis and forced us to seek strategies for transforming data in a usable format.\nWe are uncertain whether SCT reduces all-cause mortality in people with DCM compared to no intervention/placebo (mean follow-up 12 months) (risk ratio (RR) 0.84, 95% confidence interval (CI) 0.54 to 1.31; I² = 0%; studies = 7, participants = 361; very low-certainty evidence). We are uncertain whether SCT increases the risk of procedural complications associated with cells injection in people with DCM (data could not be pooled; studies = 7; participants = 361; very low-certainty evidence). We are uncertain whether SCT improves HRQoL (standardized mean difference (SMD) 0.62, 95% CI 0.01 to 1.23; I² = 72%; studies = 5, participants = 272; very low-certainty evidence) and functional capacity (6-minute walk test) (mean difference (MD) 70.12 m, 95% CI –5.28 to 145.51; I² = 87%; studies = 5, participants = 230; very low-certainty evidence). SCT may result in a slight functional class (New York Heart Association) improvement (data could not be pooled; studies = 6, participants = 398; low-certainty evidence). 
None of the included studies reported major adverse cardiovascular events as defined in our protocol. SCT may not increase the risk of ventricular arrhythmia (data could not be pooled; studies = 8, participants = 504; low-certainty evidence).\nWhen comparing SCT to cell mobilization with granulocyte-colony stimulating factor (G-CSF), we are uncertain whether SCT reduces all-cause mortality (RR 0.46, 95% CI 0.16 to 1.31; I² = 39%; studies = 3, participants = 195; very low-certainty evidence). We are uncertain whether SCT increases the risk of procedural complications associated with cells injection (studies = 1, participants = 60; very low-certainty evidence). SCT may not improve HRQoL (MD 4.61 points, 95% CI –5.62 to 14.83; studies = 1, participants = 22; low-certainty evidence). SCT may improve functional capacity (6-minute walk test) (MD 140.14 m, 95% CI 119.51 to 160.77; I² = 0%; studies = 2, participants = 155; low-certainty evidence). None of the included studies reported MACE as defined in our protocol or ventricular arrhythmia.\nThe most commonly reported outcomes across studies were based on physiological measures of cardiac function where there were some beneficial effects suggesting potential benefits of SCT in people with non-ischaemic DCM. However, it is unclear if this intermediate effects translates into clinical benefits for these patients.\nWith regard to specific aspects related to the modality of cell therapy and its delivery, uncertainties remain as subgroup analyses could not be performed as planned, making it necessary to wait for the publication of several studies that are currently in progress before any firm conclusion can be reached.\nAuthors' conclusions\nWe are uncertain whether SCT in people with DCM reduces the risk of all-cause mortality and procedural complications, improves HRQoL, and performance status (exercise capacity). SCT may improve functional class (NYHA), compared to usual care (no cells).\nSimilarly, when compared to G-CSF, we are also uncertain whether SCT in people with DCM reduces the risk of all-cause mortality although some studies within this comparison observed a favourable effect that should be interpreted with caution. SCT may not improve HRQoL but may improve to some extent performance status (exercise capacity). Very low-quality evidence reflects uncertainty regarding procedural complications. These suggested beneficial effects of SCT, although uncertain due to the very low certainty of the evidence, are accompanied by favourable effects on some physiological measures of cardiac function.\nPresently, the most effective mode of administration of SCT and the population that could benefit the most is unclear. Therefore, it seems reasonable that use of SCT in people with DCM is limited to clinical research settings. Results of ongoing studies are likely to modify these conclusions.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Are bone marrow cells safe and effective as a treatment for non-ischaemic dilated cardiomyopathy (DCM)?"
}
] |
query-laysum
|
23003
|
[
{
"role": "user",
"content": "Abstract: Background\nSince the introduction of the Swedish back school in 1969, back schools have frequently been used for treating people with low-back pain (LBP). However, the content of back schools has changed and appears to vary widely today. In this review we defined back school as a therapeutic programme given to groups of people, which includes both education and exercise. This is an update of a Cochrane review first published in 1999, and updated in 2004. For this review update, we split the review into two distinct reviews which separated acute from chronic LBP.\nObjectives\nTo assess the effectiveness of back schools on pain and disability for people with acute or subacute non-specific LBP. We also examined the effect on work status and adverse events.\nSearch methods\nWe searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, PubMed and two clinical trials registers up to 4 August 2015. We also checked the reference lists of articles and contacted experts in the field of research on LBP.\nSelection criteria\nWe included randomised controlled trials (RCTs) or quasi-RCTs that reported on back school for acute or subacute non-specific LBP. The primary outcomes were pain and disability. The secondary outcomes were work status and adverse events. Back school had to be compared with another treatment, a placebo (or sham or attention control) or no treatment.\nData collection and analysis\nWe used the 2009 updated method guidelines for this Cochrane review. Two review authors independently screened the references, assessed the quality of the trials and extracted the data. We set the threshold for low risk of bias, a priori, as six or more of 13 internal validity criteria and no serious flaws (e.g. large drop-out rate). We classified the quality of the evidence into one of four levels (high, moderate, low or very low) using the adapted Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. We contacted study authors for additional information. We collected adverse effects information from the trials.\nMain results\nThe search update identified 273 new references, of which none fulfilled our inclusion criteria. We included four studies (643 participants) in this updated review, which were all included in the previous (2004) update. The quality of the evidence was very low for all outcomes. As data were too clinically heterogeneous to be pooled, we described individual trial results. The results indicate that there is very low quality evidence that back schools are no more effective than a placebo (or sham or attention control) or another treatment (physical therapies, myofascial therapy, joint manipulations, advice) on pain, disability, work status and adverse events at short-term, intermediate-term and long-term follow-up. There is very low quality evidence that shows a statistically significant difference between back schools and a placebo (or sham or attention control) for return to work at short-term follow-up in favour of back school. Very low quality evidence suggests that back school added to a back care programme is more effective than a back care programme alone for disability at short-term follow-up. 
Very low quality evidence also indicates that there is no difference in terms of adverse events between back school and myofascial therapy, joint manipulation and combined myofascial therapy and joint manipulation.\nAuthors' conclusions\nIt is uncertain if back schools are effective for acute and subacute non-specific LBP as there is only very low quality evidence available. While large well-conducted studies will likely provide more conclusive findings, back schools are not widely used interventions for acute and subacute LBP and further research into this area may not be a priority.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included four studies in this review, which we included in the previous version of this review, which means that we did not identify any new relevant studies for inclusion in this update. The treatment comparisons were too dissimilar to be pooled and half of the studies were at high risk of bias. The quality of the evidence was very low for all outcomes.\nOne study compared back school with a placebo (sham treatment) and found no difference between groups for pain at short-term follow-up. Concerning work status, people in the back school group had a significantly shorter duration of sick-leave than people in the placebo group at short-term follow-up.\nFour studies compared back school with another treatment (physical therapies, myofascial therapy, joint manipulations, advice). Overall, there were no differences between groups for pain, disability, work status and adverse events at any time of follow-up. Only one study showed that back school added to a back care programme was more effective than back school alone for disability at short-term follow-up."
}
] |
query-laysum
|
23004
|
[
{
"role": "user",
"content": "Abstract: Background\nTrigger finger is a common hand condition that occurs when movement of a finger flexor tendon through the first annular (A1) pulley is impaired by degeneration, inflammation, and swelling. This causes pain and restricted movement of the affected finger. Non-surgical treatment options include activity modification, oral and topical non-steroidal anti-inflammatory drugs (NSAIDs), splinting, and local injections with anti-inflammatory drugs.\nObjectives\nTo review the benefits and harms of non-steroidal anti-inflammatory drugs (NSAIDs) versus placebo, glucocorticoids, or different NSAIDs administered by the same route for trigger finger.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, CNKI (China National Knowledge Infrastructure), ProQuest Dissertations and Theses, www.ClinicalTrials.gov, and the WHO trials portal until 30 September 2020. We applied no language or publication status restrictions.\nSelection criteria\nWe searched for randomised controlled trials (RCTs) and quasi-randomised trials of adult participants with trigger finger that compared NSAIDs administered topically, orally, or by injection versus placebo, glucocorticoid, or different NSAIDs administered by the same route.\nData collection and analysis\nTwo or more review authors independently screened the reports, extracted data, and assessed risk of bias and GRADE certainty of evidence. The seven major outcomes were resolution of trigger finger symptoms, persistent moderate or severe symptoms, recurrence of symptoms, total active range of finger motion, residual pain, patient satisfaction, and adverse events. Treatment effects were reported as risk ratios (RRs) and mean differences (MDs) with 95% confidence intervals (CIs).\nMain results\nTwo RCTs conducted in an outpatient hospital setting were included (231 adult participants, mean age 58.6 years, 60% female, 95% to 100% moderate to severe disease). Both studies compared a single injection of a non-selective NSAID (12.5 mg diclofenac or 15.0 mg ketorolac) given at lower than normal doses with a single injection of a glucocorticoid (triamcinolone 20 mg or 5 mg), with maximum follow-up duration of 12 weeks or 24 weeks.\nIn both studies, we detected risk of attrition and performance bias. One study also had risk of selection bias. The effects of treatment were sensitive to assumptions about missing outcomes. All seven outcomes were reported in one study, and five in the other.\nNSAID injection may offer little to no benefit over glucocorticoid injection, based on low- to very low-certainty evidence from two trials. Evidence was downgraded for bias and imprecision. There may be little to no difference between groups in resolution of symptoms at 12 to 24 weeks (34% with NSAIDs, 41% with glucocorticoids; absolute effect 7% lower, 95% confidence interval (CI) 16% lower to 5% higher; 2 studies, 231 participants; RR 0.83, 95% CI 0.62 to 1.11; low-certainty evidence). The rate of persistent moderate to severe symptoms may be higher at 12 to 24 weeks in the NSAIDs group (28%) compared to the glucocorticoid group (14%) (absolute effect 14% higher, 95% CI 2% to 33% higher; 2 studies, 231 participants; RR 2.03, 95% CI 1.19 to 3.46; low-certainty evidence). We are uncertain whether NSAIDs result in fewer recurrences at 12 to 24 weeks (1%) compared to glucocorticoid (21%) (absolute effect 20% lower, 95% CI 21% to 13% lower; 2 studies, 231 participants; RR 0.07, 95% CI 0.01 to 0.38; very low-certainty evidence). 
There may be little to no difference between groups in mean total active motion at 24 weeks (235 degrees with NSAIDs, 240 degrees with glucocorticoid) (absolute effect 5% lower, 95% CI 34.54% lower to 24.54% higher; 1 study, 99 participants; MD -5.00, 95% CI -34.54 to 24.54; low-certainty evidence). There may be little to no difference between groups in residual pain at 12 to 24 weeks (20% with NSAIDs, 24% with glucocorticoid) (absolute effect 4% lower, 95% CI 11% lower to 7% higher; 2 studies, 231 participants; RR 0.84, 95% CI 0.54 to 1.31; low-certainty evidence). There may be little to no difference between groups in participant-reported treatment success at 24 weeks (64% with NSAIDs, 68% with glucocorticoid) (absolute effect 4% lower, 95% CI 18% lower to 15% higher; 1 study, 121 participants; RR 0.95, 95% CI 0.74 to 1.23; low-certainty evidence).\nWe are uncertain whether NSAID injection has an effect on adverse events at 12 to 24 weeks (1% with NSAIDs, 1% with glucocorticoid) (absolute effect 0% difference, 95% CI 2% lower to 3% higher; 2 studies, 231 participants; RR 2.00, 95% CI 0.19 to 21.42; very low-certainty evidence).\nAuthors' conclusions\nFor adults with trigger finger, by 24 weeks' follow-up, results from two trials show that compared to glucocorticoid injection, NSAID injection offered little to no benefit in the treatment of trigger finger. Specifically, there was no difference in resolution, symptoms, recurrence, total active motion, residual pain, participant-reported treatment success, or adverse events.\n\nGiven the provided abstract, please respond to the following query: \"Limitations of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Our confidence in the results is limited because they come from two small studies only. These studies used different doses of steroid for injection, which may have affected our comparison of the studies. Further research is likely to change these results or increase our confidence in them.."
}
] |
query-laysum
|
23005
|
[
{
"role": "user",
"content": "Abstract: Background\nOxytocin and prostaglandin are hormones responsible for uterine contraction during the third stage of labour. Receptors in the uterine muscles are stimulated by exogenous or endogenous oxytocin leading to uterine contractions. Nipple stimulation or breastfeeding are stimuli that can lead to the secretion of oxytocin and consequent uterine contractions. Consequently, uterine contractions can reduce bleeding during the third stage of labour.\nObjectives\nTo investigate the effects of breastfeeding or nipple stimulation on postpartum haemorrhage (PPH) during the third stage of labour.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (15 July 2015) and reference lists of retrieved studies.\nSelection criteria\nRandomised and quasi-randomised controlled trials comparing breast stimulation, breastfeeding or suckling for PPH in the third stage of labour were selected for this review.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion in terms of risk of bias and independently extracted data. Disagreements were resolved by a third review author.\nMain results\nWe included four trials (4608 women), but only two studies contributed data to the review's analyses (n = 4472). The studies contributing data were assessed as of high risk of bias overall. One of these studies was cluster-randomised and conducted in a low-income country and the other study was carried out in a high-income country. All four included studies assessed blood loss in the third stage of labour. Birth attendants estimated blood loss in two trials. The third trial assessed the hematocrit level on the second day postpartum to determine the effect of the bleeding. The fourth study measured PPH ≥ 500 mL.\nNipple stimulation versus no treatment\nOne study (4385 women) compared the effect of suckling versus no treatment. Blood loss was not measured in 114 women (59 in control group and 55 in suckling group). After excluding twin pregnancies, stillbirths and neonatal deaths, the main analyses for this trial were performed on 4227 vaginal deliveries. In terms of maternal death or severe morbidity, one maternal death occurred in the suckling group due to retained placenta (risk ratio (RR) 3.03, 95% confidence interval (CI) 0.12 to 74.26; one study, participants = 4227; very low quality evidence); severe morbidity was not mentioned. Severe PPH (≥ 1000 mL) was not reported in this study.\nThe incidence of PPH (≥ 500 mL) was similar in the suckling and no treatment groups (RR 0.95, 95% CI 0.77 to 1.16; one study, participants = 4227; moderate quality). There were no group differences between nipple stimulation and no treatment regarding blood loss in the third stage of labour (mean difference (MD) 2.00, 95% CI -7.39 to 11.39; one study, participants = 4227; low quality). The rates of retained placenta were similar (RR 1.01, 95% CI 0.14 to 7.16; one study, participants = 4227; very low quality evidence), as were perinatal deaths (RR 1.06, 95% CI 0.57 to 1.98; one study, participants = 4271; low quality), and maternal readmission to hospital (RR 1.01, 95% CI 0.14 to 7.16; one study, participants = 4227; very low quality). 
We downgraded the evidence for this comparison for risk of bias concerns in the one included trial (inappropriate analyses for cluster design) and for imprecision (wide CIs crossing the line of no difference and, for some outcomes, few events).\nMany maternal secondary outcomes (including side effects) were not reported. Similarly, most neonatal secondary outcomes were not reported.\nNipple stimulation versus oxytocin\nAnother study compared the effect of nipple stimulation (via a breast pump) with oxytocin. Eighty-seven women were recruited but only 85 women were analysed. Severe PPH ≥ 1000 mL and maternal death or severe morbidity were not reported.\nThere was no clear effect of nipple stimulation on blood loss (MD 15.00, 95% CI -24.50 to 54.50; one study, participants = 85; low quality evidence), or on postnatal anaemia compared to the oxytocin group (MD -0.40, 95% CI -2.22 to 1.42; one study, participants = 85; low quality evidence). We downgraded evidence for this comparison due to risk of bias concerns in the one included trial (alternate allocation) and for imprecision (wide CIs crossing the line of no difference and small sample size).\nMany maternal secondary outcomes (including side effects) were not reported, and none of this review's neonatal secondary outcomes were reported.\nAuthors' conclusions\nNone of the included studies reported one of this review's primary outcomes: severe PPH ≥ 1000 mL. Only one study reported on maternal death or severe morbidity. There were limited secondary outcome data for maternal outcomes and very few secondary outcome data for neonatal outcomes.\nThere was no clear differences between nipple stimulation (suckling) versus no treatment in relation to maternal death, the incidence of PPH (≥ 500 mL), blood loss in the third stage of labour, retained placenta, perinatal deaths or maternal readmission to hospital. Whilst these data are based on a single study with a reasonable sample size, the quality of these data are mostly low or very low.\nThere is insufficient evidence to evaluate the effect of nipple stimulation for reducing postpartum haemorrhage during the third stage of labour and more evidence from high-quality studies is needed. Further high-quality studies should recruit adequate sample sizes, assess the impact of nipple stimulation compared to uterotonic agents such as syntometrine and oxytocin, and report on important outcomes such as those listed in this review.\n\nGiven the provided abstract, please respond to the following query: \"What evidence did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched for evidence on 15 July 2015 and included four randomised controlled studies with 4608, women but only two of the studies had useable data. We assessed both studies contributing outcome data as being at a high risk of bias. One study compared the effect of the baby suckling immediately after birth with no intervention. Another study compared nipple stimulation (with a breast pump) versus oxytocin injection. Neither study reported postpartum haemorrhage. Side effects of the treatments were not reported. Similarly, there was limited information on other consequences for women and their babies.\nWhen comparing nipple stimulation (suckling) with no breastfeeding, there were no clear differences in terms of the number of maternal deaths. The incidence of severe maternal illness was not reported. One woman in the suckling group died as a result of a retained placenta. Blood loss greater than or equal to 500 mL, retained placenta, perinatal deaths and maternal readmission to hospital were not clearly different between those who breast fed and those who did not. While these data were based on a single study with a reasonable sample size (4227 women), the results were mostly of low or very low quality due to concerns related to data analysis and study methodology.\nOur comparison of nipple stimulation (using a breast pump) versus oxytocin included one small study involving 85 women only. There was no clear difference between the groups in relation to blood loss or postnatal anaemia. These results were of low quality due to our concerns about the way the trial was conducted and its small size."
}
] |
query-laysum
|
23006
|
[
{
"role": "user",
"content": "Abstract: Background\nThromboangiitis obliterans, also known as Buerger's disease, is a non-atherosclerotic, segmental inflammatory pathology that most commonly affects the small- and medium-sized arteries, veins, and nerves in the upper and lower extremities. The etiology is unknown, but involves hereditary susceptibility, tobacco exposure, immune and coagulation responses. In many cases, there is no possibility of revascularization to improve the condition. Stem cell therapy is an option for patients with severe complications, such as ischemic ulcers or rest pain.\nObjectives\nTo assess the effectiveness and safety of stem cell therapy in individuals with thromboangiitis obliterans (Buerger's disease).\nSearch methods\nThe Cochrane Vascular Information Specialist searched the Cochrane Vascular Specialised Register, CENTRAL, MEDLINE, Embase, CINAHL and AMED databases and World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov trials registers to 17 October 2017. The review authors searched the European grey literature OpenGrey Database, screened reference lists of relevant studies and contacted study authors.\nSelection criteria\nRandomized controlled trials (RCTs) or quasi-RCTs of stem cell therapy in thromboangiitis obliterans (Buerger's disease).\nData collection and analysis\nThe review authors (DC, DM, FN) independently assessed the studies, extracted data and performed data analysis.\nMain results\nWe only included one RCT (18 participants with thromboangiitis obliterans) comparing the implantation of stem cell derived from bone marrow with placebo and standard wound dressing care in this review. We identified no studies that compared stem cell therapy (bone marrow source) versus stem cell therapy (umbilical cord source), stem cell therapy (any source) versus pharmacological treatment and stem cell therapy (any source) versus sympathectomy. Ulcer healing was assessed in the form of ulcer size. The mean ulcer area decreased more in the stem cell implantation group: from 5.04 cm2 (standard deviation (SD) 0.70) to 1.48 cm2 (SD 0.56) compared with the control group: mean ulcer size area decreased from 4.68 cm2 (SD 0.62) to 3.59 cm2 (SD 0.14); mean difference (MD) -2.11 cm2, 95% confidence interval (CI) -2.49 to -1.73; 1 study, 18 participants; very low-quality evidence. Pain-free walking distance showed more of an improvement in the stem cell implantation group: from mean of 38.33 meters (SD 17.68) to 284.44 meters (SD 212.12) compared with the control group: mean walking distance increased from 35.66 meters (SD 19.79) to 78.22 meters (SD 35.35); MD 206.22 meters, 95% CI 65.73 to 346.71; 1 study; 18 participants; very low-quality evidence.\nOutcomes such as rate of amputation, pain, amputation-free survival and adverse effects were not assessed.\nThe quality of evidence was classified as very low, with only one study, small numbers of participants, high risk of bias in many domains and missing information regarding tobacco exposure status.\nAuthors' conclusions\nVery low-quality evidence suggests there may be an effect of the use of bone marrow-derived stem cells in the healing of ulcers and improvement in the pain-free walking distance in patients with Buerger's disease. High-quality trials assessing the effectiveness of stem cell therapy for treatment of patients with thromboangiitis obliterans (Buerger's disease) are needed.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Only one randomized controlled study (18 participants with thromboangiitis obliterans) comparing the implantation of stem cells derived from bone marrow with placebo and standard wound dressing care was included in this review (most recent search was 17 October 2017). We identified no studies that compared stem cell therapy from different sources, stem cell therapy versus drug treatment and stem cell therapy versus sympathectomy (surgical cutting of a sympathetic nerve). The results showed a decrease in ulcer size and improvement in pain-free walking distance in the group receiving the stem cell implantation compared with the group receiving placebo and standard wound dressing care.\nOutcomes such as rate of amputation, pain, amputation-free survival and adverse effects were not assessed."
}
] |
query-laysum
|
23007
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the Cochrane Review first published in 2011, and most recently updated in 2019.\nEpilepsy is a chronic and disabling neurological disorder, affecting approximately 1% of the population. Up to 30% of people with epilepsy have seizures that are resistant to currently available antiepileptic drugs and require treatment with multiple antiepileptic drugs in combination. Felbamate is a second-generation antiepileptic drug that can be used as add-on therapy to standard antiepileptic drugs.\nObjectives\nTo evaluate the efficacy and tolerability of felbamate versus placebo when used as an add-on treatment for people with drug-resistant focal-onset epilepsy.\nSearch methods\nFor the latest update, we searched the Cochrane Register of Studies (CRS Web) and MEDLINE (Ovid, 1946 to 13 July 2021) on 15 July 2021. There were no language or time restrictions. We reviewed the reference lists of retrieved studies to search for additional reports of relevant studies. We also contacted the manufacturers of felbamate and experts in the field for information about any unpublished or ongoing studies.\nSelection criteria\nWe searched for randomised placebo-controlled add-on studies of people of any age with drug-resistant focal seizures. The studies could be double-blind, single-blind or unblinded and could be of parallel-group or cross-over design.\nData collection and analysis\nTwo review authors independently selected studies for inclusion and extracted information. In the case of disagreements, a third review author arbitrated. Review authors assessed the following outcomes: 50% or greater reduction in seizure frequency; absolute or percentage reduction in seizure frequency; treatment withdrawal; adverse effects; quality of life.\nMain results\nWe included four randomised controlled trials, representing a total of 236 participants, in the review. Two trials had parallel-group design, the third had a two-period cross-over design, and the fourth had a three-period cross-over design. We judged all four studies to be at an unclear risk of bias overall. Bias arose from the incomplete reporting of methodological details, the incomplete and selective reporting of outcome data, and from participants having unstable drug regimens during experimental treatment in one trial. Due to significant methodological heterogeneity, clinical heterogeneity and differences in outcome measures, it was not possible to perform a meta-analysis of the extracted data.\nOnly one study reported the outcome of 50% or greater reduction in seizure frequency, whilst three studies reported percentage reduction in seizure frequency compared to placebo. One study claimed an average seizure reduction of 35.8% with add-on felbamate whilst another study claimed a more modest reduction of 4.2%. Both studies reported that seizure frequency increased with add-on placebo and that there was a significant difference in seizure reduction between felbamate and placebo (P = 0.0005 and P = 0.018, respectively). The third study reported a 14% reduction in seizure frequency with add-on felbamate but stated that the difference between treatments was not significant. There were conflicting results regarding treatment withdrawal. One study reported a higher treatment withdrawal for placebo-randomised participants, whereas the other three studies reported higher treatment withdrawal rates for felbamate-randomised participants. 
Notably, the treatment withdrawal rates for felbamate treatment groups across all four studies remained reasonably low (less than 10%), suggesting that felbamate may be well tolerated. Felbamate-randomised participants most commonly withdrew from treatment due to adverse effects. The adverse effects consistently reported by all four studies were headache, dizziness and nausea. All three adverse effects were reported by 23% to 40% of felbamate-treated participants versus 3% to 15% of placebo-treated participants.\nWe assessed the evidence for all outcomes using GRADE and rated the evidence as very low certainty, meaning that we have little confidence in the findings reported. We mainly downgraded evidence for imprecision due to the narrative synthesis conducted and the low number of events. We stress that the true effect of felbamate could likely be significantly different from that reported in this current review update.\nAuthors' conclusions\nIn view of the methodological deficiencies, the limited number of included studies and the differences in outcome measures, we have found no reliable evidence to support the use of felbamate as an add-on therapy in people with drug-resistant focal-onset epilepsy. A large-scale, randomised controlled trial conducted over a longer period of time is required to inform clinical practice.\n\nGiven the provided abstract, please respond to the following query: \"Results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "After searching the available literature, we found four trials, involving 236 participants, that investigated the use of felbamate in people with drug-resistant focal epilepsy. We included the four trials in the review.\nAlthough three of the trials reported percentage reduction in seizure frequency, they all reported very different results. One reported a 36% reduction in seizure frequency with felbamate, one reported only a 4% reduction with felbamate, and the other trial reported that there was no difference between felbamate and placebo (an inactive, dummy drug). We therefore found no clear evidence to suggest that felbamate was better than placebo at reducing seizure frequency for people with drug-resistant focal epilepsy. Additionally, there was mixed evidence about whether more people withdrew from treatment with felbamate or placebo. Notably, less than 10% of people in each trial withdrew from treatment when they were receiving felbamate, suggesting that felbamate may have good tolerability. The side effects that were reported by all four trials - suggesting that they are the most common - were headache, dizziness and nausea."
}
] |
query-laysum
|
23008
|
[
{
"role": "user",
"content": "Abstract: Background\nHealth professionals sometimes do not use the best evidence to treat their patients, in part due to unconscious acts of omission and information overload. Reminders help clinicians overcome these problems by prompting them to recall information that they already know, or by presenting information in a different and more accessible format. Manually-generated reminders delivered on paper are defined as information given to the health professional with each patient or encounter, provided on paper, in which no computer is involved in the production or delivery of the reminder. Manually-generated reminders delivered on paper are relatively cheap interventions, and are especially relevant in settings where electronic clinical records are not widely available and affordable. This review is one of three Cochrane Reviews focused on the effectiveness of reminders in health care.\nObjectives\n1. To determine the effectiveness of manually-generated reminders delivered on paper in changing professional practice and improving patient outcomes.\n2. To explore whether a number of potential effect modifiers influence the effectiveness of manually-generated reminders delivered on paper.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL and two trials registers on 5 December 2018. We searched grey literature, screened individual journals, conference proceedings and relevant systematic reviews, and reviewed reference lists and cited references of included studies.\nSelection criteria\nWe included randomised and non-randomised trials assessing the impact of manually-generated reminders delivered on paper as a single intervention (compared with usual care) or added to one or more co-interventions as a multicomponent intervention (compared with the co-intervention(s) without the reminder component) on professional practice or patients' outcomes. We also included randomised and non-randomised trials comparing manually-generated reminders with other quality improvement (QI) interventions.\nData collection and analysis\nTwo review authors screened studies for eligibility and abstracted data independently. We extracted the primary outcome as defined by the authors or calculated the median effect size across all reported outcomes in each study. We then calculated the median percentage improvement and interquartile range across the included studies that reported improvement related outcomes, and assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe identified 63 studies (41 cluster-randomised trials, 18 individual randomised trials, and four non-randomised trials) that met all inclusion criteria. Fifty-seven studies reported usable data (64 comparisons). The studies were mainly located in North America (42 studies) and the UK (eight studies). Fifty-four studies took place in outpatient/ambulatory settings. The clinical areas most commonly targeted were cardiovascular disease management (11 studies), cancer screening (10 studies) and preventive care (10 studies), and most studies had physicians as their target population (57 studies). General management of a clinical condition (17 studies), test-ordering (14 studies) and prescription (10 studies) were the behaviours more commonly targeted by the intervention.\nForty-eight studies reported changes in professional practice measured as dichotomous process adherence outcomes (e.g. 
compliance with guidelines recommendations), 16 reported those changes measured as continuous process-of-care outcomes (e.g. number of days with catheters), eight reported dichotomous patient outcomes (e.g. mortality rates) and five reported continuous patient outcomes (e.g. mean systolic blood pressure).\nManually-generated reminders delivered on paper probably improve professional practice measured as dichotomous process adherence outcomes) compared with usual care (median improvement 8.45% (IQR 2.54% to 20.58%); 39 comparisons, 40,346 participants; moderate certainty of evidence) and may make little or no difference to continuous process-of-care outcomes (8 comparisons, 3263 participants; low certainty of evidence). Adding manually-generated paper reminders to one or more QI co-interventions may slightly improve professional practice measured as dichotomous process adherence outcomes (median improvement 4.24% (IQR −1.09% to 5.50%); 12 comparisons, 25,359 participants; low certainty of evidence) and probably slightly improve professional practice measured as continuous outcomes (median improvement 0.28 (IQR 0.04 to 0.51); 2 comparisons, 12,372 participants; moderate certainty of evidence). Compared with other QI interventions, manually-generated reminders may slightly decrease professional practice measured as process adherence outcomes (median decrease 7.9% (IQR −0.7% to 11%); 14 comparisons, 21,274 participants; low certainty of evidence).\nWe are uncertain whether manually-generated reminders delivered on paper, compared with usual care or with other QI intervention, lead to better or worse patient outcomes (dichotomous or continuous), as the certainty of the evidence is very low (10 studies, 13 comparisons). Reminders added to other QI interventions may make little or no difference to patient outcomes (dichotomous or continuous) compared with the QI alone (2 studies, 2 comparisons).\nRegarding resource use, studies reported additional costs per additional point of effectiveness gained, but because of the different currencies and years used the relevance of those figures is uncertain.\nNone of the included studies reported outcomes related to harms or adverse effects.\nAuthors' conclusions\nManually-generated reminders delivered on paper as a single intervention probably lead to small to moderate increases in outcomes related to adherence to clinical recommendations, and they could be used as a single QI intervention. It is uncertain whether reminders should be added to other QI intervention already in place in the health system, although the effects may be positive. If other QI interventions, such as patient or computerised reminders, are available, they should be preferred over manually-generated reminders, but under close evaluation in order to decrease uncertainty about their potential effect.\n\nGiven the provided abstract, please respond to the following query: \"How up-to-date is this review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The review authors searched for studies published up to December 2018."
}
] |
query-laysum
|
23009
|
[
{
"role": "user",
"content": "Abstract: Background\nAnaemia is common among cancer patients and they may require red blood cell transfusions. Erythropoiesis-stimulating agents (ESAs) and iron might help in reducing the need for red blood cell transfusions. However, it remains unclear whether the combination of both drugs is preferable compared to using one drug.\nObjectives\nTo systematically review the effect of intravenous iron, oral iron or no iron in combination with or without ESAs to prevent or alleviate anaemia in cancer patients and to generate treatment rankings using network meta-analyses (NMAs).\nSearch methods\nWe identified studies by searching bibliographic databases (CENTRAL, MEDLINE, Embase; until June 2021). We also searched various registries, conference proceedings and reference lists of identified trials.\nSelection criteria\nWe included randomised controlled trials comparing intravenous, oral or no iron, with or without ESAs for the prevention or alleviation of anaemia resulting from chemotherapy, radiotherapy, combination therapy or the underlying malignancy in cancer patients.\nData collection and analysis\nTwo review authors independently extracted data and assessed risk of bias. Outcomes were on-study mortality, number of patients receiving red blood cell transfusions, number of red blood cell units, haematological response, overall mortality and adverse events. We conducted NMAs and generated treatment rankings. We assessed the certainty of the evidence using GRADE.\nMain results\nNinety-six trials (25,157 participants) fulfilled our inclusion criteria; 62 trials (24,603 participants) could be considered in the NMA (12 different treatment options). Here we present the comparisons of ESA with or without iron and iron alone versus no treatment. Further results and subgroup analyses are described in the full text.\nOn-study mortality\nWe estimated that 92 of 1000 participants without treatment for anaemia died up to 30 days after the active study period. Evidence from NMA (55 trials; 15,074 participants) suggests that treatment with ESA and intravenous iron (12 of 1000; risk ratio (RR) 0.13, 95% confidence interval (CI) 0.01 to 2.29; low certainty) or oral iron (34 of 1000; RR 0.37, 95% CI 0.01 to 27.38; low certainty) may decrease or increase and ESA alone (103 of 1000; RR 1.12, 95% CI 0.92 to 1.35; moderate certainty) probably slightly increases on-study mortality. Additionally, treatment with intravenous iron alone (271 of 1000; RR 2.95, 95% CI 0.71 to 12.34; low certainty) may increase and oral iron alone (24 of 1000; RR 0.26, 95% CI 0.00 to 19.73; low certainty) may increase or decrease on-study mortality.\nHaematological response\nWe estimated that 90 of 1000 participants without treatment for anaemia had a haematological response. Evidence from NMA (31 trials; 6985 participants) suggests that treatment with ESA and intravenous iron (604 of 1000; RR 6.71, 95% CI 4.93 to 9.14; moderate certainty), ESA and oral iron (527 of 1000; RR 5.85, 95% CI 4.06 to 8.42; moderate certainty), and ESA alone (467 of 1000; RR 5.19, 95% CI 4.02 to 6.71; moderate certainty) probably increases haematological response. Additionally, treatment with oral iron alone may increase haematological response (153 of 1000; RR 1.70, 95% CI 0.69 to 4.20; low certainty).\nRed blood cell transfusions\nWe estimated that 360 of 1000 participants without treatment for anaemia needed at least one transfusion. 
Evidence from NMA (69 trials; 18,684 participants) suggests that treatment with ESA and intravenous iron (158 of 1000; RR 0.44, 95% CI 0.31 to 0.63; moderate certainty), ESA and oral iron (144 of 1000; RR 0.40, 95% CI 0.24 to 0.66; moderate certainty) and ESA alone (212 of 1000; RR 0.59, 95% CI 0.51 to 0.69; moderate certainty) probably decreases the need for transfusions. Additionally, treatment with intravenous iron alone (268 of 1000; RR 0.74, 95% CI 0.43 to 1.28; low certainty) and with oral iron alone (333 of 1000; RR 0.92, 95% CI 0.54 to 1.57; low certainty) may decrease or increase the need for transfusions.\nOverall mortality\nWe estimated that 347 of 1000 participants without treatment for anaemia died overall. Low-certainty evidence from NMA (71 trials; 21,576 participants) suggests that treatment with ESA and intravenous iron (507 of 1000; RR 1.46, 95% CI 0.87 to 2.43) or oral iron (482 of 1000; RR 1.39, 95% CI 0.60 to 3.22) and intravenous iron alone (521 of 1000; RR 1.50, 95% CI 0.63 to 3.56) or oral iron alone (534 of 1000; RR 1.54, 95% CI 0.66 to 3.56) may decrease or increase overall mortality. Treatment with ESA alone may lead to little or no difference in overall mortality (357 of 1000; RR 1.03, 95% CI 0.97 to 1.10; low certainty).\nThromboembolic events\nWe estimated that 36 of 1000 participants without treatment for anaemia developed thromboembolic events. Evidence from NMA (50 trials; 15,408 participants) suggests that treatment with ESA and intravenous iron (66 of 1000; RR 1.82, 95% CI 0.98 to 3.41; moderate certainty) probably slightly increases and with ESA alone (66 of 1000; RR 1.82, 95% CI 1.34 to 2.47; high certainty) slightly increases the number of thromboembolic events. None of the trials reported results on the other comparisons.\nThrombocytopenia or haemorrhage\nWe estimated that 76 of 1000 participants without treatment for anaemia developed thrombocytopenia/haemorrhage. Evidence from NMA (13 trials, 2744 participants) suggests that treatment with ESA alone probably leads to little or no difference in thrombocytopenia/haemorrhage (76 of 1000; RR 1.00, 95% CI 0.67 to 1.48; moderate certainty). None of the trials reported results on other comparisons.\nHypertension\nWe estimated that 10 of 1000 participants without treatment for anaemia developed hypertension. Evidence from NMA (24 trials; 8383 participants) suggests that treatment with ESA alone probably increases the number of hypertensions (29 of 1000; RR 2.93, 95% CI 1.19 to 7.25; moderate certainty). None of the trials reported results on the other comparisons.\nAuthors' conclusions\nWhen considering ESAs with iron as prevention for anaemia, one has to balance between efficacy and safety. Results suggest that treatment with ESA and iron probably decreases number of blood transfusions, but may increase mortality and the number of thromboembolic events. For most outcomes the different comparisons within the network were not fully connected, so ranking of all treatments together was not possible. More head-to-head comparisons including all evaluated treatment combinations are needed to fill the gaps and prove results of this review.\n\nGiven the provided abstract, please respond to the following query: \"How up to date is the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up-to-date to June 2021."
}
] |
query-laysum
|
23010
|
[
{
"role": "user",
"content": "Abstract: Background\nAntiplatelet agents are recommended for people with myocardial infarction and acute coronary syndromes, transient ischaemic attack or stroke, and for those in whom coronary stents have been inserted. People who take antiplatelet agents are at increased risk of adverse events when undergoing non-cardiac surgery because of these indications. However, taking antiplatelet therapy also introduces risk to the person undergoing surgery because the likelihood of bleeding is increased. Discontinuing antiplatelet therapy before surgery might reduce this risk but subsequently it might make thrombotic problems, such as myocardial infarction, more likely.\nObjectives\nTo compare the effects of continuation versus discontinuation for at least five days of antiplatelet therapy on the occurrence of bleeding and ischaemic events in adults undergoing non-cardiac surgery under general, spinal or regional anaesthesia.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 1), MEDLINE (1946 to January 2018), and Embase (1974 to January 2018). We searched clinical trials registers for ongoing studies, and conducted backward and forward citation searching of relevant articles.\nSelection criteria\nWe included randomized controlled trials of adults who were taking single or dual antiplatelet therapy, for at least two weeks, and were scheduled for elective non-cardiac surgery. Included participants had at least one cardiac risk factor. We planned to include quasi-randomized studies.\nWe excluded people scheduled for minor surgeries under local anaesthetic or sedation in which bleeding that required transfusion or additional surgery was unlikely. We included studies which compared perioperative continuation of antiplatelet therapy versus discontinuation of antiplatelet therapy or versus substitution of antiplatelet therapy with a placebo for at least five days before surgery.\nData collection and analysis\nTwo review authors independently assessed studies for inclusion, extracted data, assessed risk of bias and synthesized findings. Our primary outcomes were: all-cause mortality at longest follow-up (up to six months); all-cause mortality (up to 30 days). Secondary outcomes included: blood loss requiring transfusion of blood products; blood loss requiring further surgical intervention; risk of ischaemic events. We used GRADE to assess the quality of evidence for each outcome\nMain results\nWe included five RCTs with 666 randomized adults. We identified three ongoing studies.\nAll study participants were scheduled for elective general surgery (including abdominal, urological, orthopaedic and gynaecological surgery) under general, spinal or regional anaesthesia. Studies compared continuation of single or dual antiplatelet therapy (aspirin or clopidogrel) with discontinuation of therapy for at least five days before surgery.\nThree studies reported adequate methods of randomization, and two reported methods to conceal allocation. Three studies were placebo-controlled trials and were at low risk of performance bias, and three studies reported adequate methods to blind outcome assessors to group allocation. 
Attrition was limited in four studies and two studies had reported prospective registration with clinical trial registers and were at low risk of selective outcome reporting bias.\nWe reported mortality at two time points: the longest follow-up reported by study authors up to six months, and the time point reported by study authors up to 30 days. Five studies reported mortality up to six months (of which four studies had a longest follow-up at 30 days, and one study at 90 days) and we found that either continuation or discontinuation of antiplatelet therapy may make little or no difference to mortality up to six months (risk ratio (RR) 1.21, 95% confidence interval (CI) 0.34 to 4.27; 659 participants; low-certainty evidence); the absolute effect is three more deaths per 1000 with continuation of antiplatelets (ranging from eight fewer to 40 more). Combining the four studies with a longest follow-up at 30 days alone showed the same effect estimate, and we found that either continuation or discontinuation of antiplatelet therapy may make little or no difference to mortality at 30 days after surgery (RR 1.21, 95% CI 0.34 to 4.27; 616 participants; low-certainty evidence); the absolute effect is three more deaths per 1000 with continuation of antiplatelets (ranging from nine fewer to 42 more).\nWe found that either continuation or discontinuation of antiplatelet therapy probably makes little or no difference in incidences of blood loss requiring transfusion (RR 1.37, 95% CI 0.83 to 2.26; 368 participants; absolute effect of 42 more participants per 1000 requiring transfusion in the continuation group, ranging from 19 fewer to 119 more; four studies; moderate-certainty evidence); and may make little or no difference in incidences of blood loss requiring additional surgery (RR 1.54, 95% CI 0.31 to 7.58; 368 participants; absolute effect of six more participants per 1000 requiring additional surgery in the continuation group, ranging from seven fewer to 71 more; four studies; low-certainty evidence). We found that either continuation or discontinuation of antiplatelet therapy may make little or no difference to incidences of ischaemic events (including peripheral ischaemia, cerebral infarction, and myocardial infarction) within 30 days of surgery (RR 0.67, 95% CI 0.25 to 1.77; 616 participants; absolute effect of 17 fewer participants per 1000 with an ischaemic event in the continuation group, ranging from 39 fewer to 40 more; four studies; low-certainty evidence).\nWe used the GRADE approach to downgrade evidence for all outcomes owing to limited evidence from few studies. We noted wide confidence intervals around the effect estimates for mortality at the end of follow-up and at 30 days, and for blood loss requiring transfusion, which suggested imprecision. We noted visual differences in study results for ischaemic events, which suggested inconsistency.\nAuthors' conclusions\nWe found low-certainty evidence that either continuation or discontinuation of antiplatelet therapy before non-cardiac surgery may make little or no difference to mortality, bleeding requiring surgical intervention, or ischaemic events. We found moderate-certainty evidence that either continuation or discontinuation of antiplatelet therapy before non-cardiac surgery probably makes little or no difference to bleeding requiring transfusion. Evidence was limited to few studies with few participants, and with few events. 
The three ongoing studies may alter the conclusions of the review once published and assessed.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Some studies had low risk of bias because they had clearly reported their methods for randomizing people to each group, and three studies used a placebo agent so that people did not know whether or not they were continuing their usual antiplatelet therapy. However, we found few studies with few events, with wide variation in results. To continue or stop taking antiplatelet drugs for a few days before non-cardiac surgery might make little or no difference to the number of people who died, who had bleeding that needed further surgery or who had ischaemic events, and it probably makes little or no difference to bleeding that needed a blood transfusion. We found three ongoing studies which will increase certainty in the effect in future updates of the review."
}
] |
query-laysum
|
23011
|
[
{
"role": "user",
"content": "Abstract: Background\nFeeding practices around the time of packed red blood cell transfusion have been implicated in the subsequent development of necrotising enterocolitis (NEC) in preterm infants. Specifically, it has been suggested that withholding feeds around the time of transfusion may reduce the risk of subsequent NEC. It is important to determine if withholding feeds around transfusion reduces the risk of subsequent NEC and associated mortality.\nObjectives\n• To assess the benefits and risks of stopping compared to continuing feed management before, during, and after blood transfusion in preterm infants\n• To assess the effects of stopping versus continuing feeds in the following subgroups of infants: infants of different gestations; infants with symptomatic and asymptomatic anaemia; infants who received different feeding schedules, types of feed, and methods of feed delivery; infants who were transfused with different blood products, at different blood volumes, via different routes of delivery; and those who received blood transfusion with and without co-interventions such as use of diuretics\n• To determine the effectiveness and safety of stopping feeds around the time of a blood transfusion in reducing the risk of subsequent necrotising enterocolitis (NEC) in preterm infants\nSearch methods\nWe used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 11), in the Cochrane Library; MEDLINE (1966 to 14 November 2018); Embase (1980 to 14 November 2018); and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to 14 November 2018). We also searched clinical trials databases, conference proceedings, and reference lists of retrieved articles for randomised controlled trials (RCTs), cluster-RCTs, and quasi-RCTs.\nSelection criteria\nRandomised and quasi-randomised controlled trials that compared stopping feeds versus continuing feeds around the time of blood transfusion in preterm infants.\nData collection and analysis\nTwo review authors independently selected trials, assessed trial quality, and extracted data from the included studies.\nMain results\nThe search revealed seven studies that assessed effects of stopping feeds during blood transfusion. However, only one RCT involving 22 preterm infants was eligible for inclusion in the review. This RCT had low risk of selection bias but high risk of performance bias, as care personnel were not blinded to the study allocation. The primary objective of this trial was to investigate changes in mesenteric blood flow, and no cases of NEC were reported in any of the infants included in the trial. We were unable to draw any conclusions from this single study. The overall GRADE rating for quality of evidence was very low.\nAuthors' conclusions\nRandomised controlled trial evidence is insufficient to show whether stopping feeds has an effect on the incidence of subsequent NEC or death. Large, adequately powered RCTs are needed to address this issue.\n\nGiven the provided abstract, please respond to the following query: \"Quality of evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Data were insufficient to allow any meaningful conclusions based on the very low quality of evidence according to the GRADE rating. Large randomised controlled trials are needed to answer the review question."
}
] |
query-laysum
|
23012
|
[
{
"role": "user",
"content": "Abstract: Background\nOphthalmia neonatorum is an infection of the eyes in newborns that can lead to blindness, particularly if the infection is caused by Neisseria gonorrhoeae. Antiseptic or antibiotic medication is dispensed into the eyes of newborns, or dispensed systemically, soon after delivery to prevent neonatal conjunctivitis and potential vision impairment.\nObjectives\n1. To determine if any type of systemic or topical eye medication is better than placebo or no prophylaxis in preventing ophthalmia neonatorum.\n2. To determine if any one systemic or topical eye medication is better than any other medication in preventing ophthalmia neonatorum.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, LILACS, and three trials registers, date of last search 4 October 2019. We also searched references of included studies and contacted pharmaceutical companies.\nSelection criteria\nWe included randomised and quasi-randomised controlled trials of any topical, systemic, or combination medical interventions used to prevent ophthalmia neonatorum in newborns compared with placebo, no prophylaxis, or with each other.\nData collection and analysis\nWe used standard methods expected by Cochrane. Outcomes were: blindness or any adverse visual outcome at 12 months, conjunctivitis at 1 month (gonococcal (GC), chlamydial (CC), bacterial (BC), any aetiology (ACAE), or unknown aetiology (CUE)), and adverse effects.\nMain results\nWe included 30 trials with a total of 79,198 neonates. Eighteen studies were conducted in high-income settings (the USA, Europe, Israel, Canada), and 12 were conducted in low- and middle-income settings (Africa, Iran, China, Indonesia, Mexico). Fifteen of the 30 studies were quasi-randomised. We judged every study to be at high risk of bias in at least one domain. Ten studies included a comparison arm with no prophylaxis. There were 14 different prophylactic regimens and 12 different medications in the 30 included studies.\nAny prophylaxis compared to no prophylaxis\nUnless otherwise indicated, the following evidence comes from studies assessing one or more of the following interventions: tetracycline 1%, erythromycin 0.5%, povidone-iodine 2.5%, silver nitrate 1%. None of the studies reported data on the primary outcomes: blindness or any adverse visual outcome at any time point. There was only very low-certainty evidence on the risk of GC with prophylaxis (4/5340 newborns) compared to no prophylaxis (5/2889) at one month (risk ratio (RR) 0.79, 95% confidence interval (CI) 0.24 to 2.65, 3 studies). Low-certainty evidence suggested there may be little or no difference in effect on CC (RR 0.96, 95% CI 0.57 to 1.61, 4874 newborns, 2 studies) and BC (RR 0.84, 95% CI 0.37 to 1.93, 3685 newborns, 2 studies). Moderate-certainty evidence suggested a probable reduction in risk of ACAE at one month (RR 0.65, 95% 0.54 to 0.78, 9666 newborns, 8 studies assessing tetracycline 1%, erythromycin 0.5%, povidone-iodine 2.5%, silver nitrate 1%, colostrum, bacitracin-phenacaine ointment). There was only very low-certainty evidence on CUE (RR 1.75, 95% CI 0.37 to 8.28, 330 newborns, 1 study). 
Very low-certainty evidence on adverse effects suggested no increased nasolacrimal duct obstruction (RR 0.93, 95% CI 0.68 to 1.28, 404 newborns, 1 study of erythromycin 0.5% and silver nitrate 1%) and no increased keratitis (single study of 40 newborns assessing silver nitrate 1% with no events).\nAny prophylaxis compared to another prophylaxis\nOverall, evidence comparing different interventions did not suggest any consistently superior intervention. However, most of this evidence was of low-certainty and was extremely limited.\nAuthors' conclusions\nThere are no data on whether prophylaxis for ophthalmia neonatorum prevents serious outcomes such as blindness or any adverse visual outcome. Moderate-certainty evidence suggests that the use of prophylaxis may lead to a reduction in the incidence of ACAE in newborns but the evidence for effect on GC, CC or BC was less certain. Comparison of individual interventions did not suggest any consistently superior intervention, but data were limited. A trial comparing tetracycline, povidone-iodine (single administration), and chloramphenicol for GC and CC could potentially provide the community with an effective, universally applicable prophylaxis against ophthalmia neonatorum.\n\nGiven the provided abstract, please respond to the following query: \"What was the aim of the review?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The aim of this Cochrane Review was to determine if any medication is better than placebo or no preventive action in preventing ophthalmia neonatorum. Cochrane Review authors collected and analysed all relevant studies to answer this question and found 30 studies."
}
] |
query-laysum
|
23013
|
[
{
"role": "user",
"content": "Abstract: Background\nSurgical treatment of chronic rhinosinusitis with nasal polyps is an established treatment for medically resistant nasal polyp disease. Whether a nasal polypectomy with additional sinus dissection offers any advantage over an isolated nasal polypectomy has not been systematically reviewed.\nObjectives\nTo assess the effectiveness of simple polyp surgery versus more extensive surgical clearance in chronic rhinosinusitis with nasal polyps.\nSearch methods\nWe searched the Cochrane Ear, Nose and Throat Disorders Group Trials Register; the Cochrane Central Register of Controlled Trials (CENTRAL 2014, Issue 1); PubMed; EMBASE; CINAHL; Web of Science; Cambridge Scientific Abstracts; ICTRP and additional sources for published and unpublished trials. The date of the search was 20 February 2014.\nSelection criteria\nRandomised and quasi-randomised controlled trials in patients over 16 with chronic rhinosinusitis with nasal polyps, who have failed a course of medical management and who have not previously undergone any previous surgical intervention for their nasal disease. Studies compared nasal polypectomy with more extensive sinus clearance in this patient cohort.\nData collection and analysis\nWe used the standard methodological procedures expected by The Cochrane Collaboration.\nMain results\nWe identified no trials which met our inclusion criteria. Six controlled trials (five randomised) met some but not all of the inclusion criteria and were therefore excluded from the review.\nAuthors' conclusions\nWe are unable to reach any conclusions as to whether isolated nasal polypectomy or more extensive sinus surgery is a superior surgical treatment modality for chronic rhinosinusitis with nasal polyps. There is a need for high-quality randomised controlled trials to assess whether additional sinus surgery confers any benefit when compared to nasal polypectomy performed in isolation.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "No trials met our inclusion criteria. We did identify six controlled trials (five of which were randomised) in which some but not all of our inclusion criteria were met, but we had to exclude these studies from the review."
}
] |
query-laysum
|
23014
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an updated version of the original Cochrane review published in Issue 5, 2015.\nCervical cancer is the fourth most common cancer among women worldwide, with estimated 569,847 new diagnoses and 311,365 deaths per year. However, incidence and stage at diagnosis vary greatly between geographic areas and are largely dependent on the availability of a robust population screening programme. For example, in Nigeria, advanced-stage disease at presentation is common (86% to 89.3% of new cases), whereas in the UK, only 21.9% of women present with International Federation of Gynaecology and Obstetrics (FIGO) stage II+ disease. Women with advanced cancer of the cervix often need palliation for distressing symptoms, such as vaginal bleeding. Vaginal bleeding can be life threatening in advanced disease, with an incidence ranging from 0.7% to 100%. Bleeding is the immediate cause of death in 6% of women with cervical cancer and its management often poses a challenge.\nThus, vaginal bleeding remains a common consequence of advanced cervical cancer. Currently, there is no systematic review that addresses palliative interventions for controlling vaginal bleeding caused by advanced cervical cancer. A systematic evaluation of the available palliative interventions is needed to inform decision-making.\nObjectives\nTo evaluate the efficacy and safety of tranexamic acid, vaginal packing (with or without formalin-soaked packs), interventional radiology or other interventions compared with radiotherapy for palliative treatment of vaginal bleeding in women with advanced cervical cancer.\nSearch methods\nThe search for the original review was run in 23 March 2015, and subsequent searches for this update were run 21 March 2018. We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2018, Issue 3) in the Cochrane Library; MEDLINE via Ovid to March week 2, 2018; and Embase via Ovid to March week 12, 2018. We also searched registers of clinical trials, abstracts of scientific meetings and reference lists of review articles, and contacted experts in the field. We handsearched citation lists of relevant studies.\nSelection criteria\nWe searched for randomised and non-randomised comparative studies that evaluated the efficacy and safety of tranexamic acid, vaginal packing (with or without formalin-soaked packs), interventional radiology or other interventions compared with radiotherapy techniques for palliative treatment of vaginal bleeding in women with advanced cervical cancer (with or without metastasis), irrespective of publication status, year of publication or language in the review.\nData collection and analysis\nTwo review authors independently assessed whether potentially relevant studies met the inclusion criteria. We found no studies for inclusion and, therefore, we analysed no data.\nMain results\nThe search strategy identified 1522 unique references of which we excluded 1330 on the basis of title and abstract. We retrieved the remaining 22 articles in full, but none satisfied the inclusion criteria. We identified only observational data from single-arm studies of women treated with formalin-soaked packs, interventional radiology or radiotherapy techniques for palliative control of vaginal bleeding in women with cervical cancer.\nAuthors' conclusions\nSince the last version of this review we found no new studies. There is no evidence from controlled trials to support or refute the use of any of the proposed interventions compared with radiotherapy. 
Therefore, the choice of intervention will be based on local resources. Radiotherapy techniques for managing vaginal bleeding are not readily available in resource-poor settings, where advanced cases of cervical cancer are predominant. Thus, this systematic review identified the need for a randomised controlled trial assessing the benefits and risks of palliative treatments for vaginal bleeding in women with advanced cervical cancer.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "cervical cancer (cancer of the neck of the womb) is the fourth most common cancer among women throughout the world, accounting for about 569,847 new detected cases and 311,365 deaths every year. Women more commonly present with advanced disease in the developing (low-income) world, where access to cervical screening programmes is limited. Advanced cancer of the cervix may not be curable and women often need treatment to control distressing symptoms (palliation), such as vaginal bleeding. Bleeding can be severe enough to be life threatening in women with advanced cervical cancer. Management of vaginal bleeding often poses a challenge, especially in the developing world, where access to radiotherapy is limited. Options for palliative treatment of severe vaginal bleeding include interventional radiology treatment (using x-rays to guide the insertion of 'plugs' into blood vessels supplying the cancer) or vaginal packing (where gauze is compacted into the vagina to absorb the blood and apply pressure directly to the cervix), although these are often only partly effective and may cause harm. Vaginal packs can be soaked with formalin, which is a preservative chemical. Other options for treating severe vaginal bleeding include tranexamic acid (a medicine that reduces bleeding that can be given by mouth or by injection) and radiotherapy (high-energy x-ray treatment)."
}
] |
query-laysum
|
23015
|
[
{
"role": "user",
"content": "Abstract: Background\nActive smoking increases the risk of tuberculosis (TB) infection 2 to 2.5 times and is significantly associated with recurrent TB and TB mortality. Observational studies have shown associations between smoking and poor TB treatment outcomes such as increased loss to follow-up rate, severity of disease, drug resistance and slow smear conversion. Since most smoking-related immunologic abnormalities are reversible within six weeks of stopping smoking, smoking cessation may have substantial positive effects on TB treatment outcomes, TB relapse and future lung disease.\nObjectives\nTo analyse the effect of tobacco smoking cessation interventions (SCIs) on the treatment outcomes of people with adult pulmonary TB.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group Specialised Register using free-text and MeSH terms for TB and antitubercular treatment. We also searched MEDLINE and EMBASE using the same topic-related terms, combined with the search terms used to identify trials of tobacco cessation interventions from the Specialised Register. We also searched reference list of articles and reviews, the Conference Paper Index, clinicaltrials.gov and grey literature. The searches are current to 29th July 2015.\nSelection criteria\nIndividual and cluster-randomised controlled trials (RCTs), regardless of date, language and publication status, studies of adults with pulmonary TB on first-line anti-tubercular drugs, with interventions at either an individual or a population level, delivered separately or as part of a larger tobacco control package. This included any type of behavioural or pharmaceutical intervention or both for smoking cessation.\nData collection and analysis\nUsing the eligibility criteria, two authors independently checked the abstracts of retrieved studies for relevance, and acquired full trial reports of candidates for inclusion. The authors resolved any disagreements on eligibility by mutual consent, or by recourse to a third author. Two authors intended to independently extract study data from eligible studies into a data extraction form and compare the findings, synthesise data using risk ratios, and assess risk of bias using standard Cochrane methodologies. However, we found no eligible trials.\nMain results\nThere were no randomised controlled trials that met the eligibility criteria. A number of potentially eligible studies are underway, and we will assess them for inclusion in the next update of this review.\nAuthors' conclusions\nThere is a lack of high-quality evidence, i.e. RCTs, that tests the effectiveness of cessation interventions in improving TB treatment outcomes. There is a need for good-quality randomised controlled trials that assess the effect of SCIs on TB treatment outcomes in both the short and long term. Establishing such an evidence base would be an essential step towards the implementation of SCIs in TB control programmes worldwide.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Do treatments to help people with tuberculosis (TB) of the lungs to stop smoking also improve how they respond to treatment for their TB?"
}
] |
query-laysum
|
23016
|
[
{
"role": "user",
"content": "Abstract: Background\nLong-term mechanical ventilation is the most common situation for which tracheostomy is indicated for patients in intensive care units (ICUs). 'Early' and 'late' tracheostomies are two categories of the timing of tracheostomy. Evidence on the advantages attributed to early versus late tracheostomy is somewhat conflicting but includes shorter hospital stays and lower mortality rates.\nObjectives\nTo evaluate the effectiveness and safety of early (≤ 10 days after tracheal intubation) versus late tracheostomy (> 10 days after tracheal intubation) in critically ill adults predicted to be on prolonged mechanical ventilation with different clinical conditions.\nSearch methods\nThis is an update of a review last published in 2012 (Issue 3, The Cochrane Library) with previous searches run in December 2010. In this version, we searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2013, Issue 8); MEDLINE (via PubMed) (1966 to August 2013); EMBASE (via Ovid) (1974 to August 2013); LILACS (1986 to August 2013); PEDro (Physiotherapy Evidence Database) at www.pedro.fhs.usyd.edu.au (1999 to August 2013) and CINAHL (1982 to August 2013). We reran the search in October 2014 and will deal with any studies of interest when we update the review.\nSelection criteria\nWe included all randomized and quasi-randomized controlled trials (RCTs or QRCTs) comparing early tracheostomy (two to 10 days after intubation) against late tracheostomy (> 10 days after intubation) for critically ill adult patients expected to be on prolonged mechanical ventilation.\nData collection and analysis\nTwo review authors extracted data and conducted a quality assessment. Meta-analyses with random-effects models were conducted for mortality, time spent on mechanical ventilation and time spent in the ICU.\nMain results\nWe included eight RCTs (N = 1977 participants). At the longest follow-up time available in these studies, evidence of moderate quality from seven RCTs (n = 1903) showed lower mortality rates in the early as compared with the late tracheostomy group (risk ratio (RR) 0.83, 95% confidence interval (CI) 0.70 to 0.98; P value 0.03; number needed to treat for an additional beneficial outcome (NNTB) ≅ 11). Divergent results were reported on the time spent on mechanical ventilation and no differences were noted for pneumonia, but the probability of discharge from the ICU was higher at day 28 in the early tracheostomy group (RR 1.29, 95% CI 1.08 to 1.55; P value 0.006; NNTB ≅ 8).\nAuthors' conclusions\nThe whole findings of this systematic review are no more than suggestive of the superiority of early over late tracheostomy because no information of high quality is available for specific subgroups with particular characteristics.\n\nGiven the provided abstract, please respond to the following query: \"Key results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Patients receiving early tracheostomy had lower risk of mortality at the longest follow-up time available in seven studies that measured mortality (ranging from 28 days to two years of follow-up), as compared with patients subjected to a late tracheostomy. However, the available evidence should be considered with caution because information is insufficient regarding any subgroup(s) or individual characteristic(s) potentially associated with the best indications for early or late tracheostomy. According to available results, approximately 11 patients would need to be treated with an early instead of a late tracheostomy to prevent one death. Results concerning the time spent on mechanical ventilation are not definitive, but they suggest benefits associated with early tracheostomy. Two studies show a significantly higher probability of discharge from the ICU at 28 days of follow-up in the early tracheostomy group and no significant differences for pneumonia. Possible differences between early and late tracheostomy have yet to be adequately investigated in high-quality studies because no information is available on the best indication for either early or late tracheostomy in patients with specific characteristics."
}
] |
query-laysum
|
23017
|
[
{
"role": "user",
"content": "Abstract: Background\nConstipation within childhood is an extremely common problem. Despite the widespread use of osmotic and stimulant laxatives by health professionals to manage constipation in children, there has been a long standing paucity of high quality evidence to support this practice.\nObjectives\nWe set out to evaluate the efficacy and safety of osmotic and stimulant laxatives used to treat functional childhood constipation.\nSearch methods\nWe searched MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, and the Cochrane IBD Group Specialized Trials Register from inception to 10 March 2016. There were no language restrictions. We also searched the references of all included studies, personal contacts and drug companies to identify studies.\nSelection criteria\nRandomised controlled trials (RCTs) which compared osmotic or stimulant laxatives to placebo or another intervention, with participants aged 0 to 18 years old were considered for inclusion. The primary outcome was frequency of defecation. Secondary endpoints included faecal incontinence, disimpaction, need for additional therapies and adverse events.\nData collection and analysis\nRelevant papers were identified and two authors independently assessed the eligibility of trials, extracted data and assessed methodological quality using the Cochrane risk of bias tool. The primary outcome was frequency of defecation. Secondary endpoints included faecal incontinence, disimpaction, need for additional therapies and adverse events. For continuous outcomes we calculated the mean difference (MD) and 95% confidence interval (CI) using a fixed-effect model. For dichotomous outcomes we calculated the risk ratio (RR) and 95% CI using a fixed-effect model. The Chi2 and I2 statistics were used to assess statistical heterogeneity. A random-effects model was used in situations of unexplained heterogeneity. We assessed the overall quality of the evidence supporting the primary and secondary outcomes using the GRADE criteria.\nMain results\nTwenty-five RCTs (2310 participants) were included in the review. Fourteen studies were judged to be at high risk of bias due to lack of blinding, incomplete outcome data and selective reporting. Meta-analysis of two studies (101 patients) comparing polyethylene glycol (PEG) with placebo showed a significantly increased number of stools per week with PEG (MD 2.61 stools per week, 95% CI 1.15 to 4.08). Common adverse events in the placebo-controlled studies included flatulence, abdominal pain, nausea, diarrhoea and headache. Participants receiving high dose PEG (0.7 g/kg) had significantly more stools per week than low dose PEG (0.3 g/kg) participants (1 study, 90 participants, MD 1.30, 95% 0.76 to 1.84). Meta-analysis of 6 studies with 465 participants comparing PEG with lactulose showed a significantly greater number of stools per week with PEG (MD 0.70 , 95% CI 0.10 to 1.31), although follow-up was short. Patients who received PEG were significantly less likely to require additional laxative therapies. Eighteen per cent (27/154) of PEG patients required additional therapies compared to 31% (47/150) of lactulose patients (RR 0.55, 95% CI 0.36 to 0.83). No serious adverse events were reported with either agent. Common adverse events in these studies included diarrhoea, abdominal pain, nausea, vomiting and pruritis ani. 
Meta-analysis of 3 studies with 211 participants comparing PEG with milk of magnesia showed that the stools per week were significantly greater with PEG (MD 0.69, 95% CI 0.48 to 0.89). However, the magnitude of this difference was quite small and may not be clinically significant. One child was noted to be allergic to PEG, but there were no other serious adverse events reported. One study found a significant difference in stools per week favouring milk of magnesia over lactulose (MD -1.51, 95% CI -2.63 to -0.39, 50 patients), Meta-analysis of 2 studies with 287 patients comparing liquid paraffin (mineral oil) with lactulose revealed a relatively large statistically significant difference in the number of stools per week favouring liquid paraffin (MD 4.94 , 95% CI 4.28 to 5.61). No serious adverse events were reported. Adverse events included abdominal pain, distention and watery stools. No statistically significant differences in the number of stools per week were found between PEG and enemas (1 study, 90 patients, MD 1.00, 95% CI -1.58 to 3.58), dietary fibre mix and lactulose (1 study, 125 patients, P = 0.481), senna and lactulose (1 study, 21 patients, P > 0.05), lactitol and lactulose (1 study, 51 patients, MD -0.80, 95% CI -2.63 to 1.03), hydrolyzed guar gum and lactulose (1 study, 61 patients, MD 1.00, 95% CI -1.80 to 3.80), PEG and flixweed (1 study, 109 patients, MD 0.00, 95% CI -0.33 to 0.33), PEG and dietary fibre (1 study, 83 patients, MD 0.20, 95% CI -0.64 to 1.04), and PEG and liquid paraffin (2 studies, 261 patients, MD 0.35, 95% CI -0.24 to 0.95).\nAuthors' conclusions\nThe pooled analyses suggest that PEG preparations may be superior to placebo, lactulose and milk of magnesia for childhood constipation. GRADE analyses indicated that the overall quality of the evidence for the primary outcome (number of stools per week) was low or very low due to sparse data, inconsistency (heterogeneity), and high risk of bias in the studies in the pooled analyses. Thus, the results of the pooled analyses should be interpreted with caution because of quality and methodological concerns, as well as clinical heterogeneity, and short follow-up. There is also evidence suggesting the efficacy of liquid paraffin (mineral oil). There is no evidence to demonstrate the superiority of lactulose when compared to the other agents studied, although there is a lack of placebo controlled studies. Further research is needed to investigate the long term use of PEG for childhood constipation, as well as the role of liquid paraffin. The optimal dose of PEG also warrants further investigation.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The primary objective was to evaluate the effectiveness and side effects of osmotic and stimulant laxatives used for the treatment of functional childhood constipation."
}
] |
query-laysum
|
23018
|
[
{
"role": "user",
"content": "Abstract: Background\nExercise training is commonly recommended for individuals with fibromyalgia. This review examined the effects of supervised group aquatic training programs (led by an instructor). We defined aquatic training as exercising in a pool while standing at waist, chest, or shoulder depth. This review is part of the update of the 'Exercise for treating fibromyalgia syndrome' review first published in 2002, and previously updated in 2007.\nObjectives\nThe objective of this systematic review was to evaluate the benefits and harms of aquatic exercise training in adults with fibromyalgia.\nSearch methods\nWe searched The Cochrane Library 2013, Issue 2 (Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Cochrane Central Register of Controlled Trials, Health Technology Assessment Database, NHS Economic Evaluation Database), MEDLINE, EMBASE, CINAHL, PEDro, Dissertation Abstracts, WHO international Clinical Trials Registry Platform, and AMED, as well as other sources (i.e., reference lists from key journals, identified articles, meta-analyses, and reviews of all types of treatment for fibromyalgia) from inception to October 2013. Using Cochrane methods, we screened citations, abstracts, and full-text articles. Subsequently, we identified aquatic exercise training studies.\nSelection criteria\nSelection criteria were: a) full-text publication of a randomized controlled trial (RCT) in adults diagnosed with fibromyalgia based on published criteria, and b) between-group data for an aquatic intervention and a control or other intervention. We excluded studies if exercise in water was less than 50% of the full intervention.\nData collection and analysis\nWe independently assessed risk of bias and extracted data (24 outcomes), of which we designated seven as major outcomes: multidimensional function, self reported physical function, pain, stiffness, muscle strength, submaximal cardiorespiratory function, withdrawal rates and adverse effects. We resolved discordance through discussion. We evaluated interventions using mean differences (MD) or standardized mean differences (SMD) and 95% confidence intervals (95% CI). Where two or more studies provided data for an outcome, we carried out meta-analysis. In addition, we set and used a 15% threshold for calculation of clinically relevant differences.\nMain results\nWe included 16 aquatic exercise training studies (N = 881; 866 women and 15 men). Nine studies compared aquatic exercise to control, five studies compared aquatic to land-based exercise, and two compared aquatic exercise to a different aquatic exercise program.\nWe rated the risk of bias related to random sequence generation (selection bias), incomplete outcome data (attrition bias), selective reporting (reporting bias), blinding of outcome assessors (detection bias), and other bias as low. We rated blinding of participants and personnel (selection and performance bias) and allocation concealment (selection bias) as low risk and unclear. The assessment of the evidence showed limitations related to imprecision, high statistical heterogeneity, and wide confidence intervals.\nAquatic versus control\nWe found statistically significant improvements (P value < 0.05) in all of the major outcomes. 
Based on a 100-point scale, multidimensional function improved by six units (MD -5.97, 95% CI -9.06 to -2.88; number needed to treat (NNT) 5, 95% CI 3 to 9), self reported physical function by four units (MD -4.35, 95% CI -7.77 to -0.94; NNT 6, 95% CI 3 to 22), pain by seven units (MD -6.59, 95% CI -10.71 to -2.48; NNT 5, 95% CI 3 to 8), and stiffness by 18 units (MD -18.34, 95% CI -35.75 to -0.93; NNT 3, 95% CI 2 to 24) more in the aquatic than the control groups. The SMD for muscle strength as measured by knee extension and hand grip was 0.63 standard deviations higher compared to the control group (SMD 0.63, 95% CI 0.20 to 1.05; NNT 4, 95% CI 3 to 12) and cardiovascular submaximal function improved by 37 meters on six-minute walk test (95% CI 4.14 to 69.92). Only two major outcomes, stiffness and muscle strength, met the 15% threshold for clinical relevance (improved by 27% and 37% respectively). Withdrawals were similar in the aquatic and control groups and adverse effects were poorly reported, with no serious adverse effects reported.\nAquatic versus land-based\nThere were no statistically significant differences between interventions for multidimensional function, self reported physical function, pain or stiffness: 0.91 units (95% CI -4.01 to 5.83), -5.85 units (95% CI -12.33 to 0.63), -0.75 units (95% CI -10.72 to 9.23), and two units (95% CI -8.88 to 1.28) respectively (all based on a 100-point scale), or in submaximal cardiorespiratory function (three seconds on a 100-meter walk test, 95% CI -1.77 to 7.77). We found a statistically significant difference between interventions for strength, favoring land–based training (2.40 kilo pascals grip strength, 95% CI 4.52 to 0.28). None of the outcomes in the aquatic versus land comparison reached clinically relevant differences of 15%. Withdrawals were similar in the aquatic and land groups and adverse effects were poorly reported, with no serious adverse effects in either group.\nAquatic versus aquatic (Ai Chi versus stretching in the water, exercise in pool water versus exercise in sea water)\nAmong the major outcomes the only statistically significant difference between interventions was for stiffness, favoring Ai Chi (1.00 on a 100-point scale, 95% CI 0.31 to 1.69).\nAuthors' conclusions\nLow to moderate quality evidence relative to control suggests that aquatic training is beneficial for improving wellness, symptoms, and fitness in adults with fibromyalgia. Very low to low quality evidence suggests that there are benefits of aquatic and land-based exercise, except in muscle strength (very low quality evidence favoring land). No serious adverse effects were reported.\n\nGiven the provided abstract, please respond to the following query: \"Key results: for those who did aquatic training compared to people who did land-based exercise\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People who did both programs had similar results for overall well-being, physical function, pain, and stiffness. However, people who exercise on land improved their muscle strength by 9% more than those who did aquatic training. About the same number of people from both groups dropped out."
}
] |
query-laysum
|
23019
|
[
{
"role": "user",
"content": "Abstract: Background\nThe diagnosis of Alzheimer's disease dementia and other dementias relies on clinical assessment. There is a high prevalence of cognitive disorders, including undiagnosed dementia in secondary care settings. Short cognitive tests can be helpful in identifying those who require further specialist diagnostic assessment; however, there is a lack of consensus around the optimal tools to use in clinical practice. The Mini-Cog is a short cognitive test comprising three-item recall and a clock-drawing test that is used in secondary care settings.\nObjectives\nThe primary objective was to determine the accuracy of the Mini-Cog for detecting dementia in a secondary care setting. The secondary objectives were to investigate the heterogeneity of test accuracy in the included studies and potential sources of heterogeneity. These potential sources of heterogeneity will include the baseline prevalence of dementia in study samples, thresholds used to determine positive test results, the type of dementia (Alzheimer's disease dementia or all causes of dementia), and aspects of study design related to study quality.\nSearch methods\nWe searched the following sources in September 2012, with an update to 12 March 2019: Cochrane Dementia Group Register of Diagnostic Test Accuracy Studies, MEDLINE (OvidSP), Embase (OvidSP), BIOSIS Previews (Web of Knowledge), Science Citation Index (ISI Web of Knowledge), PsycINFO (OvidSP), and LILACS (BIREME). We made no exclusions with regard to language of Mini-Cog administration or language of publication, using translation services where necessary.\nSelection criteria\nWe included cross-sectional studies and excluded case-control designs, due to the risk of bias. We selected those studies that included the Mini-Cog as an index test to diagnose dementia where dementia diagnosis was confirmed with reference standard clinical assessment using standardised dementia diagnostic criteria. We only included studies in secondary care settings (including inpatient and outpatient hospital participants).\nData collection and analysis\nWe screened all titles and abstracts generated by the electronic database searches. Two review authors independently checked full papers for eligibility and extracted data. We determined quality assessment (risk of bias and applicability) using the QUADAS-2 tool. We extracted data into two-by-two tables to allow calculation of accuracy metrics for individual studies, reporting the sensitivity, specificity, and 95% confidence intervals of these measures, summarising them graphically using forest plots.\nMain results\nThree studies with a total of 2560 participants fulfilled the inclusion criteria, set in neuropsychology outpatient referrals, outpatients attending a general medicine clinic, and referrals to a memory clinic. Only n = 1415 (55.3%) of participants were included in the analysis to inform evaluation of Mini-Cog test accuracy, due to the selective use of available data by study authors. There were concerns related to high risk of bias with respect to patient selection, and unclear risk of bias and high concerns related to index test conduct and applicability. In all studies, the Mini-Cog was retrospectively derived from historic data sets. No studies included acute general hospital inpatients. The prevalence of dementia ranged from 32.2% to 87.3%. 
The sensitivities of the Mini-Cog in the individual studies were reported as 0.67 (95% confidence interval (CI) 0.63 to 0.71), 0.60 (95% CI 0.48 to 0.72), and 0.87 (95% CI 0.83 to 0.90). The specificity of the Mini-Cog for each individual study was 0.87 (95% CI 0.81 to 0.92), 0.65 (95% CI 0.57 to 0.73), and 1.00 (95% CI 0.94 to 1.00). We did not perform meta-analysis due to concerns related to risk of bias and heterogeneity.\nAuthors' conclusions\nThis review identified only a limited number of diagnostic test accuracy studies using Mini-Cog in secondary care settings. Those identified were at high risk of bias related to patient selection and high concerns related to index test conduct and applicability. The evidence was indirect, as all studies evaluated Mini-Cog differently from the review question, where it was anticipated that studies would conduct Mini-Cog and independently but contemporaneously perform a reference standard assessment to diagnose dementia. The pattern of test accuracy varied across the three studies. Future research should evaluate Mini-Cog as a test in itself, rather than derived from other neuropsychological assessments. There is also a need for evaluation of the feasibility of the Mini-Cog for the detection of dementia to help adequately determine its role in the clinical pathway.\n\nGiven the provided abstract, please respond to the following query: \"To whom do the results of this review apply?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The studies included in the review were conducted in the USA, Germany, and Brazil. Two studies included those patients referred to specialists evaluating memory and thinking skills, and one study recruited individuals attending a medical outpatient clinic. The percentage of people with a final diagnosis of dementia was between 32% and 87% (an average of 64%)."
}
] |
query-laysum
|
23020
|
[
{
"role": "user",
"content": "Abstract: Background\nChronic inflammatory joint diseases (IJDs) affect 1% to 2% of the population in developed countries. IJDs include rheumatoid arthritis (RA), ankylosing spondylitis (AS), psoriatic arthritis (PsA), and other forms of spondyloarthritis (SpA). Tobacco smoking is considered a significant environmental risk factor for developing IJDs. There are indications that smoking exacerbates the symptoms and worsens disease outcomes.\nObjectives\nThe objective of this review was to investigate the evidence for effects of smoking cessation interventions on smoking cessation and disease activity in smokers with IJD.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL), the Cochrane Library; PubMed/MEDLINE; Embase; PsycINFO; the Cumulative Index to Nursing and Allied Health Literature (CINAHL); and three trials registers to October 2018.\nSelection criteria\nWe included randomised controlled trials testing any form of smoking cessation intervention for adult daily smokers with a diagnosis of IJD, and measuring smoking cessation at least six months after baseline.\nData collection and analysis\nWe used standard methodological procedures as expected by Cochrane.\nMain results\nWe included two studies with 57 smokers with a diagnosis of rheumatoid arthritis (RA). We identified no studies including other IJDs. One pilot study compared a smoking cessation intervention specifically for people with RA with a less intensive, generic smoking cessation intervention. People included in the study had a mean age of 56.5 years and a disease duration of 7.7 years (mean). The second study tested effects of an eight-week cognitive-behavioural patient education intervention on cardiovascular disease (CVD) risk for people with RA and compared this with information on CVD risk only. The intervention encouraged participants to address multiple behaviours impacting CVD risk, including smoking cessation, but did not target smoking cessation alone. People included in the study had a mean age of 62.2 years (intervention group) and 60.8 years (control group), and disease duration of 11.6 years (intervention group) and 14.1 years (control group). It was not appropriate to perform a meta-analysis of abstinence data from the two studies due to clinical heterogeneity between interventions. Neither of the studies individually provided evidence to show benefit of the interventions tested. Only one study reported on adverse effects. These effects were non-serious, and numbers were comparable between trial arms. Neither of the studies assessed or reported disease activity or any of the predefined secondary outcomes. We assessed the overall certainty of evidence as very low due to indirectness, imprecision, and high risk of detection bias based on GRADE.\nAuthors' conclusions\nWe found very little research investigating the efficacy of smoking cessation intervention specifically in people with IJD. Included studies are limited by imprecision, risk of bias, and indirectness. Neither of the included studies investigated whether smoking cessation intervention reduced disease activity among people with IJD. High-quality, adequately powered studies are warranted. In particular, researchers should ensure that they measure disease markers and quality of life, in addition to long-term smoking cessation.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". 
Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We searched the literature in October 2018. We included two studies with a total of 57 adult smokers - both men and women - with rheumatoid arthritis. One of the studies tested an intervention to help people with rheumatoid arthritis to quit smoking. This study recruited only smokers and compared this specialist, stop smoking programme with a standard, less intensive stop smoking programme. The other study tested an intervention to reduce the risk of heart disease and stroke in people with rheumatoid arthritis. Researchers recruited non-smokers and smokers and compared this programme to a brief factual information leaflet about risks of heart disease. Both studies followed study participants for six months.\nThese studies were funded by Arthritis Research UK Educational Research Fellowship, Arthritis Research UK, the New Zealand Health Research Council, Arthritis New Zealand, and the University of Otago Research Fund."
}
] |
query-laysum
|
23021
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an updated review concerning 'Higher versus lower fractions of inspired oxygen or targets of arterial oxygenation for adults admitted to the intensive care unit'.\nSupplementary oxygen is provided to most patients in intensive care units (ICUs) to prevent global and organ hypoxia (inadequate oxygen levels). Oxygen has been administered liberally, resulting in high proportions of patients with hyperoxemia (exposure of tissues to abnormally high concentrations of oxygen). This has been associated with increased mortality and morbidity in some settings, but not in others. Thus far, only limited data have been available to inform clinical practice guidelines, and the optimum oxygenation target for ICU patients is uncertain. Because of the publication of new trial evidence, we have updated this review.\nObjectives\nTo update the assessment of benefits and harms of higher versus lower fractions of inspired oxygen (FiO2) or targets of arterial oxygenation for adults admitted to the ICU.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Science Citation Index Expanded, BIOSIS Previews, and LILACS. We searched for ongoing or unpublished trials in clinical trial registers and scanned the reference lists and citations of included trials. Literature searches for this updated review were conducted in November 2022.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared higher versus lower FiO2 or targets of arterial oxygenation (partial pressure of oxygen (PaO2), peripheral or arterial oxygen saturation (SpO2 or SaO2)) for adults admitted to the ICU. We included trials irrespective of publication type, publication status, and language.\nWe excluded trials randomising participants to hypoxaemia (FiO2 below 0.21, SaO2/SpO2 below 80%, or PaO2 below 6 kPa) or to hyperbaric oxygen, and cross-over trials and quasi-randomised trials.\nData collection and analysis\nFour review authors independently, and in pairs, screened the references identified in the literature searches and extracted the data. Our primary outcomes were all-cause mortality, the proportion of participants with one or more serious adverse events (SAEs), and quality of life. We analysed all outcomes at maximum follow-up. Only three trials reported the proportion of participants with one or more SAEs as a composite outcome. However, most trials reported on events categorised as SAEs according to the International Conference on Harmonisation Good Clinical Practice (ICH-GCP) criteria. We, therefore, conducted two analyses of the effect of higher versus lower oxygenation strategies using 1) the single SAE with the highest reported proportion in each trial, and 2) the cumulated proportion of participants with an SAE in each trial. Two trials reported on quality of life.\nSecondary outcomes were lung injury, myocardial infarction, stroke, and sepsis. No trial reported on lung injury as a composite outcome, but four trials reported on the occurrence of acute respiratory distress syndrome (ARDS) and five on pneumonia. We, therefore, conducted two analyses of the effect of higher versus lower oxygenation strategies using 1) the single lung injury event with the highest reported proportion in each trial, and 2) the cumulated proportion of participants with ARDS or pneumonia in each trial.\nWe assessed the risk of systematic errors by evaluating the risk of bias in the included trials using the Risk of Bias 2 tool. 
We used the GRADEpro tool to assess the overall certainty of the evidence. We also evaluated the risk of publication bias for outcomes reported by 10 or more trials.\nMain results\nWe included 19 RCTs (10,385 participants), of which 17 reported relevant outcomes for this review (10,248 participants). For all-cause mortality, 10 trials were judged to be at overall low risk of bias, and six at overall high risk of bias. For the reported SAEs, 10 trials were judged to be at overall low risk of bias, and seven at overall high risk of bias. Two trials reported on quality of life, of which one was judged to be at overall low risk of bias and one at high risk of bias for this outcome.\nMeta-analysis of all trials, regardless of risk of bias, indicated no significant difference between higher and lower oxygenation strategies at maximum follow-up with regard to mortality (risk ratio (RR) 1.01, 95% confidence interval (CI) 0.96 to 1.06; I2 = 14%; 16 trials; 9408 participants; very low-certainty evidence); the occurrence of SAEs (the highest proportion of any specific SAE in each trial: RR 1.01, 95% CI 0.96 to 1.06; I2 = 36%; 9466 participants; 17 trials; very low-certainty evidence); or quality of life (mean difference (MD) 0.5 points in participants assigned to higher oxygenation strategies (95% CI -2.75 to 1.75; I2 = 34%, 1649 participants; 2 trials; very low-certainty evidence)). Meta-analysis of the cumulated number of SAEs suggested benefit of a lower oxygenation strategy (RR 1.04 (95% CI 1.02 to 1.07; I2 = 74%; 9489 participants; 17 trials; very low-certainty evidence)). However, trial sequential analyses, with correction for sparse data and repetitive testing, could reject a relative risk increase or reduction of 10% for mortality and the highest proportion of SAEs, and 20% for both the cumulated number of SAEs and quality of life. Given the very low certainty of the evidence, it is necessary to interpret these findings with caution.\nMeta-analysis of all trials indicated no statistically significant evidence of a difference between higher and lower oxygenation strategies on the occurrence of lung injuries at maximum follow-up (the highest reported proportion of lung injury RR 1.08, 95% CI 0.85 to 1.38; I2 = 0%; 2048 participants; 8 trials; very low-certainty evidence).\nMeta-analysis of all trials indicated harm from higher oxygenation strategies as compared with lower strategies on the occurrence of sepsis at maximum follow-up (RR 1.85, 95% CI 1.17 to 2.93; I2 = 0%; 752 participants; 3 trials; very low-certainty evidence). Meta-analysis indicated no differences regarding the occurrences of myocardial infarction or stroke.\nAuthors' conclusions\nIn adult ICU patients, it is still not possible to draw clear conclusions about the effects of higher versus lower oxygenation strategies on all-cause mortality, SAEs, quality of life, lung injuries, myocardial infarction, stroke, and sepsis at maximum follow-up. This is due to low or very low-certainty evidence.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We set out to update the assessment on whether more supplemental oxygen is better than less supplemental oxygen for adults admitted to the intensive care unit (ICU)."
}
] |
query-laysum
|
23022
|
[
{
"role": "user",
"content": "Abstract: Background\nVitamin C supplementation may help reduce the risk of pregnancy complications such as pre-eclampsia, intrauterine growth restriction and maternal anaemia. There is a need to evaluate the efficacy and safety of vitamin C supplementation in pregnancy.\nObjectives\nTo evaluate the effects of vitamin C supplementation, alone or in combination with other separate supplements on pregnancy outcomes, adverse events, side effects and use of health resources.\nSearch methods\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 March 2015) and reference lists of retrieved studies.\nSelection criteria\nAll randomised or quasi-randomised controlled trials evaluating vitamin C supplementation in pregnant women. Interventions using a multivitamin supplement containing vitamin C or where the primary supplement was iron were excluded.\nData collection and analysis\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\nMain results\nTwenty-nine trials involving 24,300 women are included in this review. Overall, 11 trials were judged to be of low risk of bias, eight were high risk of bias and for 10 trials it was unclear. No clear differences were seen between women supplemented with vitamin C alone or in combination with other supplements compared with placebo or no control for the risk of stillbirth (risk ratio (RR) 1.15, 95% confidence intervals (CI) 0.89 to 1.49; 20,038 participants; 11 studies; I² = 0%; moderate quality evidence), neonatal death (RR 0.79, 95% CI 0.58 to 1.08; 19,575 participants; 11 studies; I² = 0%), perinatal death (average RR 1.07, 95% CI 0.77 to 1.49; 17,105 participants; seven studies; I² = 35%), birthweight (mean difference (MD) 26.88 g, 95% CI -18.81 to 72.58; 17,326 participants; 13 studies; I² = 69%), intrauterine growth restriction (RR 0.98, 95% CI 0.91 to 1.06; 20,361 participants; 12 studies; I² = 15%; high quality evidence), preterm birth (average RR 0.99, 95% CI 0.90 to 1.10; 22,250 participants; 16 studies; I² = 49%; high quality evidence), preterm PROM (prelabour rupture of membranes) (average RR 0.98, 95% CI 0.70 to 1.36; 16,825 participants; 10 studies; I² = 70%; low quality evidence), term PROM (average RR 1.26, 95% CI 0.62 to 2.56; 2674 participants; three studies; I² = 87%), and clinical pre-eclampsia (average RR 0.92, 95% CI 0.80 to 1.05; 21,956 participants; 16 studies; I² = 41%; high quality evidence).\nWomen supplemented with vitamin C alone or in combination with other supplements compared with placebo or no control were at decreased risk of having a placental abruption (RR 0.64, 95% CI 0.44 to 0.92; 15,755 participants; eight studies; I² = 0%; high quality evidence) and had a small increase in gestational age at birth (MD 0.31, 95% CI 0.01 to 0.61; 14,062 participants; nine studies; I² = 65%), however they were also more likely to self-report abdominal pain (RR 1.66, 95% CI 1.16 to 2.37; 1877 participants; one study). In the subgroup analyses based on the type of supplement, vitamin C supplementation alone was associated with a reduced risk of preterm PROM (average RR 0.66, 95% CI 0.48 to 0.91; 1282 participants; five studies; I² = 0%) and term PROM (average RR 0.55, 95% CI 0.32 to 0.94; 170 participants; one study). Conversely, the risk of term PROM was increased when supplementation included vitamin C and vitamin E (average RR 1.73, 95% CI 1.34 to 2.23; 3060 participants; two studies; I² = 0%). 
There were no differences in the effects of vitamin C on other outcomes in the subgroup analyses examining the type of supplement. There were no differing patterns in other subgroups of women based on underlying risk of pregnancy complications, timing of commencement of supplementation or dietary intake of vitamin C prior to trial entry. The GRADE quality of the evidence was high for intrauterine growth restriction, preterm birth, and placental abruption, moderate for stillbirth and clinical pre-eclampsia, low for preterm PROM.\nAuthors' conclusions\nThe data do not support routine vitamin C supplementation alone or in combination with other supplements for the prevention of fetal or neonatal death, poor fetal growth, preterm birth or pre-eclampsia. Further research is required to elucidate the possible role of vitamin C in the prevention of placental abruption and prelabour rupture of membranes. There was no convincing evidence that vitamin C supplementation alone or in combination with other supplements results in other important benefits or harms.\n\nGiven the provided abstract, please respond to the following query: \"Why is this important?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Having a low intake of vitamin C could be associated with complications in pregnancy such as high blood pressure with swelling of the hands, feet and face (pre-eclampsia), anaemia and having a small baby."
}
] |
query-laysum
|
23023
|
[
{
"role": "user",
"content": "Abstract: Background\nPrior to introducing pneumococcal conjugate vaccines (PCVs), Streptococcus pneumoniae was most commonly isolated from the middle ear fluid of children with acute otitis media (AOM). Reducing nasopharyngeal colonisation of this bacterium by PCVs may lead to a decline in AOM. The effects of PCVs deserve ongoing monitoring since studies from the post-PCV era report a shift in causative otopathogens towards non-vaccine serotypes and other bacteria. This updated Cochrane Review was first published in 2002 and updated in 2004, 2009, 2014, and 2019.\nObjectives\nTo assess the effect of PCVs in preventing AOM in children up to 12 years of age.\nSearch methods\nWe searched CENTRAL, MEDLINE, Embase, CINAHL, LILACS, Web of Science, and two trials registers, ClinicalTrials.gov and WHO ICTRP, to 11 June 2020.\nSelection criteria\nRandomised controlled trials of PCV versus placebo or control vaccine.\nData collection and analysis\nWe used the standard methodological procedures expected by Cochrane. The primary outcomes were frequency of all-cause AOM and adverse effects. Secondary outcomes included frequency of pneumococcal AOM and frequency of recurrent AOM (defined as three or more AOM episodes in six months or four or more in one year). We used GRADE to assess the certainty of the evidence.\nMain results\nWe included 15 publications of 11 trials (60,733 children, range 74 to 37,868 per trial) of 7- to 11-valent PCVs versus control vaccines (meningococcus type C vaccine in three trials, and hepatitis A or B vaccine in eight trials). We included one additional publication of a previously included trial for this 2020 update. We did not find any relevant trials with the newer 13-valent PCV. Most studies were funded by pharmaceutical companies. Overall, risk of bias was low. In seven trials (59,415 children), PCVs were administered in early infancy, whilst four trials (1318 children) included children aged one year and over who were either healthy or had a history of respiratory illness. There was considerable clinical heterogeneity across studies, therefore we reported results from individual studies.\nPCV administered in early infancy\nPCV7\nThe licenced 7-valent PCV with CRM197 as carrier protein (CRM197-PCV7) was associated with a 6% (95% confidence interval (CI) −4% to 16%; 1 trial; 1662 children) and 6% (95% CI 4% to 9%; 1 trial; 37,868 children) relative risk reduction (RRR) in low-risk infants (moderate-certainty evidence), but was not associated with a reduction in all-cause AOM in high-risk infants (RRR −5%, 95% CI −25% to 12%). PCV7 with the outer membrane protein complex of Neisseria meningitidis serogroup B as carrier protein (OMPC-PCV7) was not associated with a reduction in all-cause AOM (RRR −1%, 95% CI −12% to 10%; 1 trial; 1666 children; low-certainty evidence).\nCRM197-PCV7 and OMPC-PCV7 were associated with 20% (95% CI 7% to 31%) and 25% (95% CI 11% to 37%) RRR in pneumococcal AOM, respectively (2 trials; 3328 children; high-certainty evidence), and CRM197-PCV7 with 9% (95% CI −12% to 27%) and 10% (95% CI 7% to 13%) RRR in recurrent AOM (2 trials; 39,530 children; moderate-certainty evidence).\nPHiD-CV10/11\nThe effect of a licenced 10-valent PCV conjugated to protein D, a surface lipoprotein of Haemophilus influenzae, (PHiD-CV10) on all-cause AOM in healthy infants varied from 6% (95% CI −6% to 17%; 1 trial; 5095 children) to 15% (95% CI −1% to 28%; 1 trial; 7359 children) RRR (low-certainty evidence). 
PHiD-CV11 was associated with 34% (95% CI 21% to 44%) RRR in all-cause AOM (1 trial; 4968 children; moderate-certainty evidence).\nPHiD-CV10 and PHiD-CV11 were associated with 53% (95% CI 16% to 74%) and 52% (95% CI 37% to 63%) RRR in pneumococcal AOM (2 trials; 12,327 children; high-certainty evidence), and PHiD-CV11 with 56% (95% CI −2% to 80%) RRR in recurrent AOM (1 trial; 4968 children; low-certainty evidence).\nPCV administered at a later age\nPCV7\nWe found no evidence of a beneficial effect on all-cause AOM of administering CRM197-PCV7 in children aged 1 to 7 years with a history of respiratory illness or frequent AOM (2 trials; 457 children; moderate-certainty evidence) and CRM197-PCV7 combined with a trivalent influenza vaccine in children aged 18 to 72 months with a history of respiratory tract infections (1 trial; 597 children; moderate-certainty evidence).\nCRM197-PCV9\nIn 1 trial including 264 healthy daycare attendees aged 1 to 3 years, CRM197-PCV9 was associated with 17% (95% CI −2% to 33%) RRR in parent-reported all-cause otitis media (very low-certainty evidence).\nAdverse events\nNine trials reported on adverse effects (77,389 children; high-certainty evidence). Mild local reactions and fever were common in both groups, and occurred more frequently in PCV than in control vaccine groups: redness (< 2.5 cm): 5% to 20% versus 0% to 16%; swelling (< 2.5 cm): 5% to 12% versus 0% to 8%; and fever (< 39 °C): 15% to 44% versus 8% to 25%. More severe redness (> 2.5 cm), swelling (> 2.5 cm), and fever (> 39 °C) occurred less frequently (0% to 0.9%, 0.1% to 1.3%, and 0.4% to 2.5%, respectively) in children receiving PCV, and did not differ significantly between PCV and control vaccine groups. Pain or tenderness, or both, was reported more frequently in PCV than in control vaccine groups: 3% to 38% versus 0% to 8%. Serious adverse events judged to be causally related to vaccination were rare and did not differ significantly between groups, and no fatal serious adverse event judged causally related to vaccination was reported.\nAuthors' conclusions\nAdministration of the licenced CRM197-PCV7 and PHiD-CV10 during early infancy is associated with large relative risk reductions in pneumococcal AOM. However, the effects of these vaccines on all-cause AOM is far more uncertain based on low- to moderate-certainty evidence. We found no evidence of a beneficial effect on all-cause AOM of administering PCVs in high-risk infants, after early infancy, and in older children with a history of respiratory illness. Compared to control vaccines, PCVs were associated with an increase in mild local reactions (redness, swelling), fever, and pain and/or tenderness. There was no evidence of a difference in more severe local reactions, fever, or serious adverse events judged to be causally related to vaccination.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed the evidence for the effect of vaccination against Streptococcus pneumoniae (pneumococcus, a type of bacterium) for preventing acute middle ear infections in children."
}
] |
query-laysum
|
23024
|
[
{
"role": "user",
"content": "Abstract: Background\nSolid organ transplantation has seen improvements in both surgical techniques and immunosuppression, achieving prolonged survival. Essential to graft acceptance and post-transplant recovery, immunosuppressive medications are often accompanied by a high prevalence of gastrointestinal (GI) symptoms and side effects. Apart from GI side effects, long-term exposure to immunosuppressive medications has seen an increase in drug-related morbidities such as diabetes mellitus, hyperlipidaemia, hypertension, and malignancy. Non-adherence to immunosuppression can lead to an increased risk of graft failure.\nRecent research has indicated that any microbial imbalances (otherwise known as gut dysbiosis or leaky gut) may be associated with cardiometabolic diseases in the long term. Current evidence suggests a link between the gut microbiome and the production of putative uraemic toxins, increased gut permeability, and transmural movement of bacteria and endotoxins and inflammation. Early observational and intervention studies have been investigating food-intake patterns, various synbiotic interventions (antibiotics, prebiotics, or probiotics), and faecal transplants to measure their effects on microbiota in treating cardiometabolic diseases. It is believed high doses of synbiotics, prebiotics and probiotics are able to modify and improve dysbiosis of gut micro-organisms by altering the population of the micro-organisms. With the right balance in the gut flora, a primary benefit is believed to be the suppression of pathogens through immunostimulation and gut barrier enhancement (less permeability of the gut).\nObjectives\nTo assess the benefits and harms of synbiotics, prebiotics, and probiotics for recipients of solid organ transplantation.\nSearch methods\nWe searched the Cochrane Kidney and Transplant Specialised Register up to 9 March 2022 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal and ClinicalTrials.gov.\nSelection criteria\nWe included randomised controlled trials measuring and reporting the effects of synbiotics, prebiotics, or probiotics, in any combination and any formulation given to solid organ transplant recipients (any age and setting). Two authors independently assessed the retrieved titles and abstracts and, where necessary, the full text to determine which satisfied the inclusion criteria.\nData collection and analysis\nData extraction was independently carried out by two authors using a standard data extraction form. The methodological quality of included studies was assessed using the Cochrane risk of bias tool. Data entry was carried out by one author and cross-checked by another. Confidence in the evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.\nMain results\nFive studies (250 participants) were included in this review. Study participants were adults with a kidney (one study) or liver (four studies) transplant. One study compared a synbiotic to placebo, two studies compared a probiotic to placebo, and two studies compared a synbiotic to a prebiotic.\nOverall, the quality of the evidence is poor. Most studies were judged to have unclear (or high) risk of bias across most domains. 
Of the available evidence, meta-analyses undertaken were of limited data from small studies. Across all comparisons, GRADE evaluations for all outcomes were judged to be very low certainty evidence. Very low certainty evidence implies that we are very uncertain about results (not estimable due to lack of data or poor quality).\nSynbiotics had uncertain effects on the change in microbiota composition (total plasma p-cresol), faecal characteristics, adverse events, kidney function or albumin concentration (1 study, 34 participants) compared to placebo.\nProbiotics had uncertain effects on GI side effects, infection rates immediately post-transplant, liver function, blood pressure, change in fatty liver, and lipids (1 study, 30 participants) compared to placebo.\nSynbiotics had uncertain effects on graft health (acute liver rejection) (2 studies, 129 participants: RR 0.73, 95% CI 0.43 to 1.25; I² = 0%), the use of immunosuppression, infection (2 studies, 129 participants: RR 0.18, 95% CI 0.03 to 1.17; I² = 66%), GI function (time to first bowel movement), adverse events (2 studies, 129 participants: RR 0.79, 95% CI 0.40 to 1.59; I² = 20%), serious adverse events (2 studies, 129 participants: RR 1.49, 95% CI 0.42 to 5.36; I² = 81%), death (2 studies, 129 participants), and organ function measures (2 studies, 129 participants) compared to prebiotics.\nAuthors' conclusions\nThis review highlights the severe lack of high-quality RCTs testing the efficacy of synbiotics, prebiotics or probiotics in solid organ transplant recipients. We have identified significant gaps in the evidence.\nDespite GI symptoms and postoperative infection being the most common reasons for high antibiotic use in this patient population, along with increased morbidity and the growing antimicrobial resistance, we found very few studies that adequately tested these as alternative treatments.\nThere is currently no evidence to support or refute the use of synbiotics, prebiotics, or probiotics in solid organ transplant recipients, and findings should be viewed with caution.\nWe have identified an area of significant uncertainty about the efficacy of synbiotics, prebiotics, or probiotics in solid organ transplant recipients. Future research in this field requires adequately powered RCTs comparing synbiotics, prebiotics, and probiotics separately and with placebo measuring a standard set of core transplant outcomes. Six studies are currently ongoing (822 proposed participants); therefore, it is possible that findings may change with their inclusion in future updates.\n\nGiven the provided abstract, please respond to the following query: \"What did we do?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We reviewed all of the evidence on synbiotics, prebiotics and probiotics to see whether they can improve bowel and gut problems in people who have received an organ transplant (heart, kidney, liver, lung, or pancreas)."
}
] |
query-laysum
|
23025
|
[
{
"role": "user",
"content": "Abstract: Background\nAbortion is common worldwide and increasingly abortions are performed at less than 14 weeks’ gestation using medical methods, specifically using a combination of mifepristone and misoprostol. Medical abortion is known to be a painful process, but the optimal method of pain management is unclear. We sought to identify and compare pain management regimens for medical abortion before 14 weeks’ gestation.\nObjectives\nPrimary objective\nTo determine if there is evidence of superiority of any particular pain relief regimen in the management of combination medical abortion (mifepristone + misoprostol) under 14 weeks' gestation (i.e. up to 13 + 6 weeks or 97 days).\nSecondary objectives\nTo compare the rate of gastrointestinal side effects resulting from different methods of analgesia\nTo compare the rate of complete abortion resulting from different methods of analgesia during medical abortion\nTo determine if the induction-to-abortion interval is associated with different methods of analgesia\nTo determine if any method of analgesia is associated with unscheduled contact with the care provider in relation to pain.\nSearch methods\nOn 21 August 2019 we searched CENTRAL, MEDLINE, Embase, CINAHL, LILACs, PsycINFO, the World Health Organization International Clinical Trials Registry and ClinicalTrials.gov together with reference checking and handsearching of conference abstracts of relevant learned societies and professional organisations to identify further studies.\nSelection criteria\nWe included randomised controlled trials (RCTs) and observational studies (non-randomised studies of interventions (NRSIs)) of any pain relief intervention (pharmacological and non-pharmacological) for mifepristone-misoprostol combination medical abortion of pregnancies less than 14 weeks’ gestation.\nData collection and analysis\nTwo review authors (JRW and MA) independently assessed all identified papers for inclusion and risks of bias, resolving any discrepancies through discussion with a third and fourth author as required (CM and SC). Two review authors independently conducted data extraction, including calculations of pain relief scores, and checked for accuracy. We assessed the certainty of the evidence using the GRADE approach.\nMain results\nWe included four RCTs and one NRSI. 
Due to the heterogeneity of study designs, interventions and outcome reporting, we were unable to perform meta-analysis for any of the primary or secondary outcomes in this review.\nOnly one study found evidence of an effect between interventions on pain score: a prophylactic dose of ibuprofen 1600 mg likely reduces the pain score when compared to a dose of paracetamol 2000 mg (mean difference (MD) 2.26 out of 10 lower, 95% confidence interval (CI) 3.00 to 1.52 lower; 1 RCT, 108 women; moderate-certainty evidence).\nThere may be little to no difference in pain score when comparing pregabalin 300 mg with placebo (MD 0.5 out of 10 lower, 95% CI 1.41 lower to 0.41 higher; 1 RCT, 107 women; low-certainty evidence).\nThere may be little to no difference in pain score when comparing ibuprofen 800 mg with placebo (MD 1.4 out of 10 lower, 95% CI 3.33 lower to 0.53 higher; 1 RCT, 61 women; low-certainty evidence).\nAmbulation or non-ambulation during medical abortion treatment may have little to no effect on pain score, but the evidence is very uncertain (MD 0.1 out of 5 higher, 95% CI 0.26 lower to 0.46 higher; 1 NRSI, 130 women; very low-certainty evidence).\nThere may be little to no difference in pain score when comparing therapeutic versus prophylactic administration of ibuprofen 800 mg (MD 0.2 out of 10 higher, 95% CI 0.41 lower to 0.81 higher; 1 RCT, 228 women; low-certainty evidence).\nOther outcomes of interest were reported inconsistently across studies. Where these outcomes were reported, there was no evidence of a difference in incidence of gastrointestinal side effects, complete abortion rate, interval from misoprostol administration to pregnancy expulsion, unscheduled contact with a care provider, patient satisfaction with analgesia regimen, or patient satisfaction with abortion experience overall. However, the certainty of evidence was very low to low.\nAuthors' conclusions\nThe findings of this review provide some support for the use of ibuprofen as a single dose given with misoprostol prophylactically, or in response to pain as needed. The optimal dosing of ibuprofen is unclear, but a single dose of ibuprofen 1600 mg was shown to be effective, and it was less certain whether 800 mg was effective. Paracetamol 2000 mg does not improve pain scores as much as ibuprofen 1600 mg; however, its use does not appear to cause greater frequency of side effects or reduce the success of the abortion.\nA single dose of pregabalin 300 mg does not affect pain scores during medical abortion, but like paracetamol, does not appear to cause harm. Ambulation or non-ambulation during the medical abortion procedure does not appear to affect pain scores, outcomes, or duration of treatment and so women can be advised to mobilise or not, as they wish.\nThe majority of outcomes in this review had low- to very low-certainty evidence, primarily due to small sample sizes and two studies at high risk of bias. High-quality, large-scale RCT research is needed for pain management during medical abortion at gestations less than 14 weeks. Consistent recording of pain with a validated measure would be of value to the field going forward.\n\nGiven the provided abstract, please respond to the following query: \"Main results\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found some evidence for the use of ibuprofen given either routinely with misoprostol, or in response to pain as needed.\nThe best dose of ibuprofen is unclear, but a single dose of ibuprofen 1600 mg was likely effective, and it was less certain whether 800 mg was effective. Paracetamol 2000 mg was less likely to improve pain scores as much as ibuprofen 1600 mg, however its use did not appear to cause harm, and it did not affect the success of the abortion.\nA single dose of pregabalin 300mg may not affect pain scores during medical abortion, but like paracetamol, it did not result in any known harm. Being mobile or resting during the medical abortion procedure may not affect pain scores, or the success of the abortion or the time taken to pass the pregnancy.\n"
}
] |
query-laysum
|
23026
|
[
{
"role": "user",
"content": "Abstract: Background\nBurn injuries are the fourth most common traumatic injury, causing an estimated 180,000 deaths annually worldwide. Superficial burns can be managed with dressings alone, but deeper burns or those that fail to heal promptly are usually treated surgically. Acute burns surgery aims to debride burnt skin until healthy tissue is reached, at which point skin grafts or temporising dressings are applied. Conventional debridement is performed with an angled blade, tangentially shaving burned tissue until healthy tissue is encountered. Hydrosurgery, an alternative to conventional blade debridement, simultaneously debrides, irrigates, and removes tissue with the aim of minimising damage to uninjured tissue. Despite the increasing use of hydrosurgery, its efficacy and the risk of adverse events following surgery for burns is unclear.\nObjectives\nTo assess the effects of hydrosurgical debridement and skin grafting versus conventional surgical debridement and skin grafting for the treatment of acute partial-thickness burns.\nSearch methods\nIn December 2019 we searched the Cochrane Wounds Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL); Ovid MEDLINE (including In-Process & Other Non-Indexed Citations); Ovid Embase and EBSCO CINAHL Plus. We also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses and health technology reports to identify additional studies. There were no restrictions with respect to language, date of publication or study setting.\nSelection criteria\nWe included randomised controlled trials (RCTs) that enrolled people of any age with acute partial-thickness burn injury and assessed the use of hydrosurgery.\nData collection and analysis\nTwo review authors independently performed study selection, data extraction, 'Risk of bias' assessment, and GRADE assessment of the certainty of the evidence.\nMain results\nOne RCT met the inclusion criteria of this review. The study sample size was 61 paediatric participants with acute partial-thickness burns of 3% to 4% total burn surface area. Participants were randomised to hydrosurgery or conventional debridement. There may be little or no difference in mean time to complete healing (mean difference (MD) 0.00 days, 95% confidence interval (CI) −6.25 to 6.25) or postoperative infection risk (risk ratio 1.33, 95% CI 0.57 to 3.11). These results are based on very low-certainty evidence, which was downgraded twice for risk of bias, once for indirectness, and once for imprecision.\nThere may be little or no difference in operative time between hydrosurgery and conventional debridement (MD 0.2 minutes, 95% CI −12.2 to 12.6); again, the certainty of the evidence is very low, downgraded once for risk of bias, once for indirectness, and once for imprecision. There may be little or no difference in scar outcomes at six months. Health-related quality of life, resource use, and other adverse outcomes were not reported.\nAuthors' conclusions\nThis review contains one randomised trial of hydrosurgery versus conventional debridement in a paediatric population with low percentage of total body surface area burn injuries. Based on the available trial data, there may be little or no difference between hydrosurgery and conventional debridement in terms of time to complete healing, postoperative infection, operative time, and scar outcomes at six months. 
These results are based on very low-certainty evidence. Further research evaluating these outcomes as well as health-related quality of life, resource use, and other adverse event outcomes is required.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Burns are common injuries worldwide and can cause illness, lifelong disability and even death. Deep burns often require surgery because the skin is too damaged to heal on its own. The damaged, burnt skin must therefore be cut away (debridement) and replaced with healthy skin, which is typically a very thin layer of healthy skin (graft) taken from another part of the body. Debridement is normally done with a specific surgical knife.\nRecently, a high-pressure, water-based jet system has been developed, known as hydrosurgery. This tool removes burnt skin only, leaving behind the unburned, healthy skin. Hydrosurgery may be more accurate than a knife in terms of removing burned skin, which may lead to better healing.\nAll open wounds, including burns, are at risk of infection so adequate debridement is important to reduce the risk of infection. If the wound is closed quickly, it will heal better, with less scarring and less risk of infection."
}
] |
query-laysum
|
23027
|
[
{
"role": "user",
"content": "Abstract: Background\nRoot canal therapy is a sequence of treatments involving root canal cleaning, shaping, decontamination, and obturation. It is conventionally performed through a hole drilled into the crown of the affected tooth, namely orthograde root canal therapy. When it fails, retrograde filling, which seals the root canal from the root apex, is a good alternative. Many materials are used for retrograde filling. Since none meets all the criteria an ideal material should possess, selecting the most efficacious material is of utmost importance. This is an update of a Cochrane Review first published in 2016.\nObjectives\nTo determine the effects of different materials used for retrograde filling in children and adults for whom retrograde filling is necessary in order to save the tooth.\nSearch methods\nAn Information Specialist searched five bibliographic databases up to 21 April 2021 and used additional search methods to identify published, unpublished, and ongoing studies. We also searched four databases in the Chinese language.\nSelection criteria\nWe selected randomised controlled trials (RCTs) that compared different retrograde filling materials, with the reported success rate that was assessed by clinical or radiological methods for which the follow-up period was at least 12 months.\nData collection and analysis\nRecords were screened in duplicate by independent screeners. Two review authors extracted data independently and in duplicate. Original trial authors were contacted for any missing information. Two review authors independently assessed the risk of bias of the included studies. We followed Cochrane's statistical guidelines and assessed the certainty of the evidence using GRADE.\nMain results\nWe included eight studies, all at high risk of bias, involving 1399 participants with 1471 teeth, published between 1995 and 2019, and six comparisons of retrograde filling materials.\n- Mineral trioxide aggregate (MTA) versus intermediate restorative material (IRM): there may be little to no effect of MTA compared to IRM on success rate at one year, but the evidence is very uncertain (risk ratio (RR) 1.09, 95% confidence interval (CI) 0.97 to 1.22; I2 = 0%; 2 studies; 222 teeth; very low-certainty evidence).\n- MTA versus super ethoxybenzoic acid (Super-EBA): there may be little to no effect of MTA compared to Super-EBA on success rate at one year, but the evidence is very uncertain (RR 1.03, 95% CI 0.96 to 1.10; 1 study; 192 teeth; very low-certainty evidence).\n- Super-EBA versus IRM: the evidence is very uncertain about the effect of Super-EBA compared with IRM on success rate at 1 year, with results indicating Super-EBA may reduce or have no effect on success rate (RR 0.90, 95% CI 0.80 to 1.01; 1 study; 194 teeth; very low-certainty evidence).\n- Dentine-bonded resin composite versus glass ionomer cement: compared to glass ionomer cement, dentine-bonded resin composite may increase the success rate of the treatment at 1 year, but the evidence is very uncertain (RR 2.39, 95% CI 1.60 to 3.59; 1 study; 122 teeth; very low-certainty evidence). 
The same result was obtained when considering the root as the unit of analysis at one year (RR 1.59, 95% CI 1.20 to 2.09; 1 study; 127 roots; very low-certainty evidence).\n- Glass ionomer cement versus amalgam: the evidence is very uncertain about the effect of glass ionomer cement compared with amalgam on success rate at one year, with results indicating glass ionomer cement may reduce or have no effect on success rate (RR 0.98, 95% CI 0.86 to 1.12; 1 study; 105 teeth; very low-certainty evidence).\n- MTA versus root repair material (RRM): there may be little to no effect of MTA compared to RRM on success rate at one year, but the evidence is very uncertain (RR 1.00, 95% CI 0.94 to 1.07; I2 = 0%; 2 studies; 278 teeth; very low-certainty evidence).\nAdverse events were not assessed by any of the included studies.\nAuthors' conclusions\nBased on the present limited evidence, there is insufficient evidence to draw any conclusion as to the benefits of any one material over another for retrograde filling in root canal therapy. We conclude that more high-quality RCTs are required.\n\nGiven the provided abstract, please respond to the following query: \"What are the limitations of the evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The main limitations of the evidence are that studies:\n- were very small;\n- were conducted in ways that may have introduced errors into their results; and\n- there were not enough studies to be certain about the results.\nDue to these limitations, we have little confidence in the evidence."
}
] |
query-laysum
|
23028
|
[
{
"role": "user",
"content": "Abstract: Background\nMany workers suffer from work-related stress and are at increased risk of work-related cardiovascular, musculoskeletal, or mental disorders. In the European Union the prevalence of work-related stress was estimated at about 22%. There is consensus that stress, absenteeism, and well-being of employees can be influenced by leadership behaviour. Existing reviews predominantly included cross-sectional and non-experimental studies, which have limited informative value in deducing causal relationships between leadership interventions and health outcomes.\nObjectives\nTo assess the effect of four types of human resource management (HRM) training for supervisors on employees' psychomental stress, absenteeism, and well-being. We included training aimed at improving supervisor-employee interaction, either off-the-job or on-the-job training, and training aimed at improving supervisors' capability of designing the work environment, either off-the-job or on-the-job training.\nSearch methods\nIn May 2019 we searched CENTRAL, MEDLINE, four other databases, and most relevant trials registers (ICTRP, TroPHI, ClinicalTrials.gov). We did not impose any language restrictions on the searches.\nSelection criteria\nWe included randomised controlled trials (RCT), cluster-randomised controlled trials (cRCT), and controlled before-after studies (CBA) with at least two intervention and control sites, which examined the effects of supervisor training on psychomental stress, absenteeism, and well-being of employees within natural settings of organisations by means of validated measures.\nData collection and analysis\nAt least two authors independently screened abstracts and full texts, extracted data and assessed the risk of bias of included studies. We analysed study data from intervention and control groups with respect to different comparisons, outcomes, follow-up time, study designs, and intervention types. We pooled study results by use of standardised mean differences (SMD) with 95% confidence intervals when possible. We assessed the quality of evidence for each outcome using the GRADE approach.\nMain results\nWe included 25 studies of which 4 are awaiting assessment. The 21 studies that could be analysed were 1 RCT, 14 cRCTs and 6 CBAs with a total of at least 3479 employees in intervention and control groups. We judged 12 studies to have an unclear risk of bias and the remaining nine studies to have a high risk of bias. Sixteen studies focused on improving supervisor-employee interaction, whereas five studies aimed at improving the design of working environments by means of supervisor training.\nTraining versus no intervention\nWe found very low-quality evidence that supervisor training does not reduce employees' stress levels (6 studies) or absenteeism (1 study) when compared to no intervention, regardless of intervention type or follow-up. We found inconsistent, very low-quality evidence that supervisor training aimed at employee interaction may (2 studies) or may not (7 studies) improve employees' well-being when compared to no intervention. Effects from two studies were not estimable due to missing data.\nTraining versus placebo\nWe found moderate-quality evidence (2 studies) that supervisor training off the job aimed at employee interaction does not reduce employees' stress levels more than a placebo training at mid-term follow-up. 
We found low-quality evidence in one study that supervisor training on the job aimed at employee interaction does not reduce employees' absenteeism more than placebo training at long-term follow-up. Effects from one study were not estimable due to insufficient data.\nTraining versus other training\nOne study compared the effects of supervisor training off the job aimed at employee interaction on employees' stress levels to training off the job aimed at working conditions at long-term follow-up but due to insufficient data, effects were not estimable.\nAuthors' conclusions\nBased on a small and heterogeneous sample of controlled intervention studies and in contrast to prevailing consensus that supervisor behaviour influences employees' health and well-being, we found inconsistent evidence that supervisor training may or may not improve employees' well-being when compared to no intervention. For all other types of interventions and outcomes, there was no evidence of a considerable effect. However, due to the very low- to moderate-quality of the evidence base, clear conclusions are currently unwarranted. Well-designed studies are needed to clarify effects of supervisor training on employees' stress, absenteeism, and well-being.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We included 25 studies of which 4 studies are awaiting assessment. The 21 studies that could be analysed included a total of 3479 employees. Sixteen studies trained supervisor-employee interaction, either off-the-job (9 studies) or on-the job (7 studies). Five studies trained the design of working environments, off-the-job in 2 studies and on-the job in 3 studies. The 21 studies compared 23 interventions with no training, sham training or other training at various times of follow-up."
}
] |
query-laysum
|
23029
|
[
{
"role": "user",
"content": "Abstract: Background\nOesophageal cancer is the seventh leading cause of cancer death worldwide. Traditional Chinese herbal medicine is sometimes used as an adjunct to radiotherapy or chemotherapy for this type of cancer. This review was first published in 2007 and updated in 2009; this 2016 update is the latest version of the review.\nObjectives\nTo assess the efficacy and possible adverse effects of the addition of Chinese herbal medicine to treatment with radiotherapy or chemotherapy for oesophageal cancer.\nSearch methods\nWe searched the Cochrane Upper Gastrointestinal and Pancreatic Diseases Group Trials Register, the Cochrane Library, MEDLINE, EMBASE, Allied and Complementary Medicine Database (AMED), China National Knowledge Infrastructure (CNKI), VIP database, Wanfang database and the Chinese Cochrane Centre Controlled Trials Register up to 1 October, 2015. We also searched databases of ongoing trials, the Internet and reference lists.\nSelection criteria\nRandomised controlled trials (RCTs) comparing the use of radiotherapy or chemotherapy with and without the addition of Chinese herbal medicine.\nData collection and analysis\nAt least two review authors independently extracted data and assessed trial quality.\nMain results\nWe tried to contact the 142 study authors by telephone, and finally included nine studies with 490 participants. All included studies were conducted in China, and allocated advanced oesophageal cancer patients to radiotherapy or chemotherapy groups, with and without additional Chinese herbal medicine. Quality of life, short-term therapeutic effects, TCM symptoms and adverse events caused by radiotherapy or chemotherapy were reported in these studies. Overall, we considered the trials to be at unclear or high risk of bias.\nThe quality of life measure was conducted before and after the intervention; our analysis showed a beneficial effect, both in number of participants experiencing an improvement (risk ratio (RR) 2.20, 95% confidence interval (CI) 1.42 to 3.39; 5 RCTs, 233 participants, change of performance status score ≥ 10) and number of participants experiencing a deterioration (RR 0.41, 95% CI 0.27 to 0.62; 6 RCTs, 287 participants, change of performance status score ≤ 10). We judged this to have low quality evidence, downgrading quality of evidence for risk of bias and imprecision, and upgrading quality of evidence for the large effect.\nFor short-term therapeutic effects, the results suggest that traditional Chinese medicine (TCM) has a positive impact on improvement (complete response + partial response) (RR 1.17, 95% CI 1.02 to 1.35; 8 RCTs, 450 participants), moderate quality evidence and downgrading for risk of bias. There was no significant difference for progressive disease (RR 0.73, 95% CI 0.52 to 1.01; 8 RCTs, 450 participants), low quality evidence and downgrading for risk of bias and imprecision. Three studies assessed this outcome after four weeks or three months' follow-up, the remaining studies gave no detailed information for this outcome. TCM symptoms, which was similar to short-term therapeutic effects evaluated with TCM clinical criteria, was diagnosed in two studies of 88 people at the end of the intervention. 
The results suggest that TCM has a positive impact on both total effectiveness (RR 1.84, 95% CI 1.20 to 2.81) and ineffectiveness (RR 0.22, 95% CI 0.05 to 0.93); we judged the studies to have very low quality evidence, downgrading for risk of bias and imprecision.\nNine studies reported a series of adverse events caused by radiotherapy or chemotherapy at the end of the intervention, including mucositis, radiation oesophagitis, arrest of bone marrow, gastrointestinal reactions, renal and hepatic impairment, white blood cell descent, neurotoxicity, cardiac toxicity and anaemia. For those containing multiple studies, we conducted a pooled analysis. As a result, TCM showed a significant effect on radiation oesophagitis (RR 0.66, 95% CI 0.47 to 0.94; 2 RCTs, 90 participants), gastrointestinal reactions (RR 0.54, 95% CI 0.36 to 0.81; 4 RCTs, 268 participants) and white blood cell descent (RR 0.60, 95% CI 0.44 to 0.83; 4 RCTs, 224 participants). The quality of evidence was low or very low, downgrading for risk of bias and imprecision.\nAuthors' conclusions\nWe currently find no evidence to determine whether TCM is an effective treatment for oesophageal cancer. The effect of TCM on short-term therapeutic effects is uncertain. TCM probably has positive effects on quality of life and on some adverse events caused by radiotherapy or chemotherapy in advanced oesophageal cancer patients undergoing radiotherapy or chemotherapy. The results of the review need to be interpreted cautiously owing to overall low quality evidence. Future trials should be large and correctly designed to detect important clinical effects and minimise risk of bias.\n\nGiven the provided abstract, please respond to the following query: \"Review question\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Chinese herbal medicine is widely used as adjunct therapy to chemo- or radiotherapy in patients being treated for cancer of the oesophagus. As yet there is no evidence that Chinese herbal medicine is effective or not in this role."
}
] |
query-laysum
|
23030
|
[
{
"role": "user",
"content": "Abstract: Background\nPsoriasis is a chronic inflammatory skin condition that can markedly reduce life quality. Several systemic therapies exist for moderate to severe psoriasis, including oral fumaric acid esters (FAE). These contain dimethyl fumarate (DMF), the main active ingredient, and monoethyl fumarate. FAE are licensed for psoriasis in Germany but used off-licence in many countries.\nObjectives\nTo assess the effects and safety of oral fumaric acid esters for psoriasis.\nSearch methods\nWe searched the following databases up to 7 May 2015: the Cochrane Skin Group Specialised Register, CENTRAL in the Cochrane Library (Issue 4, 2015), MEDLINE (from 1946), EMBASE (from 1974), and LILACS (from 1982). We searched five trials registers and checked the reference lists of included and excluded studies for further references to relevant randomised controlled trials. We handsearched six conference proceedings that were not already included in the Cochrane Skin Group Specialised Register.\nSelection criteria\nRandomised controlled trials (RCTs) of FAE, including DMF monotherapy, in individuals of any age and sex with a clinical diagnosis of psoriasis.\nData collection and analysis\nTwo review authors independently assessed trial quality and extracted data. Primary outcomes were improvement in Psoriasis Area and Severity Index (PASI) score and the proportion of participants discontinuing treatment due to adverse effects.\nMain results\nWe included 6 studies (2 full reports, 2 abstracts, 1 brief communication, and 1 letter), with a total of 544 participants. Risk of bias was unclear in several studies because of insufficient reporting. Five studies compared FAE with placebo, and one study compared FAE with methotrexate. All studies reported data at 12 to 16 weeks, and we identified no longer-term studies. When FAE were compared with placebo, we could not perform meta-analysis for the primary outcome of PASI score because the three studies that assessed this outcome reported the data differently, although all studies reported a significant reduction in PASI scores with FAE. Only 1 small study designed for psoriatic arthritis reported on the other primary outcome of participants discontinuing treatment due to adverse effects (2 of 13 participants on FAE compared with none of the 14 participants on placebo; risk ratio (RR) 5.36, 95% confidence interval (CI) 0.28 to 102.1; 27 participants; very low-quality evidence). However, these findings are uncertain due to indirectness and a very wide confidence interval. Two studies, containing 247 participants and both only reported as abstracts, allowed meta-analysis for PASI 50, which showed superiority of FAE over placebo (RR 4.55, 95% CI 2.80 to 7.40; low-quality evidence), with a combined PASI 50 of 64% in those given FAE compared with a PASI 50 of 14% for those on placebo, representing a number needed to treat to benefit of 2. The same studies reported more participants achieving PASI 75 with FAE, but we did not pool the data because of significant heterogeneity; none of the studies measured PASI 90. One study reported significant improvement in participants' quality of life (QoL) with FAE, measured with Skindex-29. However, we could not compute the mean difference because of insufficient reporting in the abstract. 
More participants experienced adverse effects, mainly gastrointestinal disturbance and flushing, on FAE (RR 4.72, 95% CI 2.45 to 9.08; 1 study, 99 participants; moderate-quality evidence), affecting 76% of participants given FAE and 16% of the placebo group (representing a number needed to treat to harm of 2). The other studies reported similar findings or did not report adverse effects fully.\nOne study of 54 participants compared methotrexate (MTX) with FAE. PASI score at follow-up showed superiority of MTX (mean Difference (MD) 3.80, 95% CI 0.68 to 6.92; 51 participants; very low-quality evidence), but the difference was not significant after adjustment for baseline disease severity. The difference between groups for the proportion of participants who discontinued treatment due to adverse effects was uncertain because of imprecision (RR 0.19, 95% CI 0.02 to 1.53; 1 study, 51 participants; very low-quality evidence). Overall, the number of participants experiencing common nuisance adverse effects was not significantly different between the 2 groups, with 89% of the FAE group affected compared with 100% of the MTX group (RR 0.89, 95% CI 0.77 to 1.03; 54 participants; very low-quality evidence). Flushing was more frequent in those on FAE, with 13 out of 27 participants affected compared with 2 out of 27 given MTX. There was no significant difference in the number of participants who attained PASI 50, 75, and 90 in the 2 groups (very low-quality evidence) whereas this study did not measure the effect of treatments on QoL. The included studies reported no serious adverse effects of FAE and were too small and of limited duration to provide evidence about rare or delayed effects.\nAuthors' conclusions\nEvidence suggests that FAE are superior to placebo and possibly similar in efficacy to MTX for psoriasis; however, the evidence provided in this review was limited, and it must be noted that four out of six included studies were abstracts or brief reports, restricting study reporting. FAE are associated with nuisance adverse effects, including flushing and gastrointestinal disturbance, but short-term studies reported no serious adverse effects.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The risk of study bias, which means any factors that may systematically deviate away from the true findings, was unclear in most studies. This may be because most of the studies were conducted decades ago or were incompletely reported. Several analyses comparing FAE with placebo and methotrexate were limited because the studies were small or did not provide enough information to establish how these treatments compare with each other. Therefore, the overall quality of the evidence was low when comparing FAE with placebo and very low when comparing FAE with methotrexate.\nFuture RCTs should use standard psoriasis outcome measures, including a validated quality of life scale, to enable the comparison and combination of results. They should be longer in duration or have longer follow-up phases to provide evidence about any delayed adverse effects."
}
] |
query-laysum
|
23031
|
[
{
"role": "user",
"content": "Abstract: Background\nLengthy duration of use may be a risk factor for umbilical venous catheter-associated bloodstream infection in newborn infants. Early planned removal of umbilical venous catheters (UVCs) is recommended to reduce the incidence of infection and associated morbidity and mortality.\nObjectives\nTo compare the effectiveness of early planned removal of UVCs (up to two weeks after insertion) versus an expectant approach or a longer fixed duration in preventing bloodstream infection and other complications in newborn infants.\nTo perform subgroup analyses by gestational age at birth and prespecified planned duration of UVC placement (see \"Subgroup analysis and investigation of heterogeneity\").\nSearch methods\nWe used the standard Cochrane Neonatal search strategy including electronic searches of the Cochrane Central Register of Controlled Trials (CENTRAL; 2017, Issue 4), Ovid MEDLINE, Embase, and the Maternity & Infant Care Database (until May 2017), as well as conference proceedings and previous reviews.\nSelection criteria\nRandomised and quasi-randomised controlled trials that compared effects of early planned removal of UVCs (up to two weeks after insertion) versus an expectant approach or a longer fixed duration in preventing bloodstream infection and other complications in newborn infants.\nData collection and analysis\nTwo review authors assessed trial eligibility and risk of bias and independently undertook data extraction. We analysed treatment effects and reported risk ratio (RR) and risk difference (RD) for dichotomous data, and mean difference (MD) for continuous data, with respective 95% confidence intervals (CIs). We planned to use a fixed-effect model in meta-analyses and to explore potential causes of heterogeneity in sensitivity analyses. We assessed the quality of evidence for the main comparison at the outcome level using GRADE methods.\nMain results\nWe found one eligible trial. Participants were 210 newborn infants with birth weight less than 1251 grams. The trial was unblinded but otherwise of good methodological quality. This trial compared removal of an umbilical venous catheter within 10 days after insertion (and replacement with a peripheral cannula or a percutaneously inserted central catheter as required) versus expectant management (UVC in place up to 28 days). More infants in the early planned removal group than in the expectant management group (83 vs 33) required percutaneous insertion of a central catheter (PICC). Trial results showed no difference in the incidence of catheter-related bloodstream infection (RR 0.65, 95% CI 0.35 to 1.22), in hospital mortality (RR 1.12, 95% CI 0.42 to 2.98), in catheter-associated thrombus necessitating removal (RR 0.33, 95% confidence interval 0.01 to 7.94), or in other morbidity. GRADE assessment indicated that the quality of evidence was \"low\" at outcome level principally as the result of imprecision and risk of surveillance bias due to lack of blinding in the included trial.\nAuthors' conclusions\nCurrently available trial data are insufficient to show whether early planned removal of umbilical venous catheters reduces risk of infection, mortality, or other morbidity in newborn infants. A large, simple, and pragmatic randomised controlled trial is needed to resolve this ongoing uncertainty.\n\nGiven the provided abstract, please respond to the following query: \"Study characteristics\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found only one small randomised controlled trial (including 210 very low birth weight newborn infants) that addressed this question."
}
] |
query-laysum
|
23032
|
[
{
"role": "user",
"content": "Abstract: Background\nThis is an update of the review originally published in 2011 and first updated in 2015.\nIn most people with low-grade gliomas (LGG), the primary treatment regimen remains a combination of surgery followed by postoperative radiotherapy. However, the optimal timing of radiotherapy is controversial. It is unclear whether to use radiotherapy in the early postoperative period, or whether radiotherapy should be delayed until tumour progression occurs.\nObjectives\nTo assess the effects of early postoperative radiotherapy versus radiotherapy delayed until tumour progression for low-grade intracranial gliomas in people who had initial biopsy or surgical resection.\nSearch methods\nOriginal searches were run up to September 2014. An updated literature search from September 2014 through November 2019 was performed on the following electronic databases: the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 11), MEDLINE via Ovid (September 2014 to November week 2 2019), and Embase via Ovid (September 2014 to 2019 week 46) to identify trials for inclusion in this Cochrane review update.\nSelection criteria\nWe included randomised controlled trials (RCTs) that compared early versus delayed radiotherapy following biopsy or surgical resection for the treatment of people with newly diagnosed intracranial LGG (astrocytoma, oligodendroglioma, mixed oligoastrocytoma, astroblastoma, xanthoastrocytoma, or ganglioglioma). Radiotherapy may include conformal external beam radiotherapy (EBRT) with linear accelerator or cobalt-60 sources, intensity-modulated radiotherapy (IMRT), or stereotactic radiosurgery (SRS).\nData collection and analysis\nThree review authors independently assessed the trials for inclusion and risk of bias, and extracted study data. We resolved any differences between review authors by discussion. Adverse effects were also extracted from the study report. We performed meta-analyses using a random-effects model with inverse variance weighting.\nMain results\nWe included one large, multi-institutional, prospective RCT, involving 311 participants; the risk of bias in this study was unclear. This study found that early postoperative radiotherapy was associated with an increase in time to progression compared to observation (and delayed radiotherapy upon disease progression) for people with LGG but did not significantly improve overall survival (OS). The median progression-free survival (PFS) was 5.3 years in the early radiotherapy group and 3.4 years in the delayed radiotherapy group (hazard ratio (HR) 0.59, 95% confidence interval (CI) 0.45 to 0.77; P < 0.0001; 311 participants; 1 trial; low-quality evidence). The median OS in the early radiotherapy group was 7.4 years, while the delayed radiotherapy group experienced a median overall survival of 7.2 years (HR 0.97, 95% CI 0.71 to 1.33; P = 0.872; 311 participants; 1 trial; low-quality evidence). The total dose of radiotherapy given was 54 Gy; five fractions of 1.8 Gy per week were given for six weeks. Adverse effects following radiotherapy consisted of skin reactions, otitis media, mild headache, nausea, and vomiting. Rescue therapy was provided to 65% of the participants randomised to delayed radiotherapy. People in both cohorts who were free from tumour progression showed no differences in cognitive deficit, focal deficit, performance status, and headache after one year. 
However, participants randomised to the early radiotherapy group experienced significantly fewer seizures than participants in the delayed postoperative radiotherapy group at one year (25% versus 41%, P = 0.0329, respectively).\nAuthors' conclusions\nGiven the high risk of bias in the included study, the results of this analysis must be interpreted with caution. Early radiation therapy was associated with the following adverse effects: skin reactions, otitis media, mild headache, nausea, and vomiting. People with LGG who underwent early radiotherapy showed an increase in time to progression compared with people who were observed and had radiotherapy at the time of progression. There was no significant difference in overall survival between people who had early versus delayed radiotherapy; however, this finding may be due to the effectiveness of rescue therapy with radiation in the control arm. People who underwent early radiation had better seizure control at one year than people who underwent delayed radiation. There were no cases of radiation-induced malignant transformation of LGG. However, it remained unclear whether there were differences in memory, executive function, cognitive function, or quality of life between the two groups since these measures were not evaluated.\n\nGiven the provided abstract, please respond to the following query: \"What are the main findings?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "From the literature search in September 2014, we included one randomised controlled trial, involving 311 participants, that looked at early or delayed radiotherapy given at the time of disease progression in people with LGG. This study was well-designed and reported useful data on survival, but did not include other clinically important information, such as functional independent survival (functional, or neurological impairment, or both) and quality of life. Therefore, we felt that the trial was of unclear quality. People who received early (soon after surgery) radiotherapy had a longer time until their disease progressed than people who only had radiotherapy once the disease had progressed. However, the people that were initially observed had similar survival to the people who had early radiotherapy. Quality of life measures such as memory, executive function, and cognitive deterioration differences were not evaluated in either group. The findings did not suggest that people who received early radiotherapy lived longer than those had delayed radiotherapy. However, people who had early radiotherapy had better control over their seizures than those who had delayed radiotherapy. The toxic effects of radiotherapy were rated as minimal in both groups using a grading system which measured severity and included skin reactions, ear inflammation, mild headache, nausea, and vomiting.\nUpdate searches in November 2019 found no new articles which met the inclusion criteria. No articles were considered eligible for inclusion in this review update."
}
] |
query-laysum
|
23033
|
[
{
"role": "user",
"content": "Abstract: Background\nTraditionally, cavitated carious lesions and those extending into dentine have been treated by 'complete' removal of carious tissue, i.e. non-selective removal and conventional restoration (CR). Alternative strategies for managing cavitated or dentine carious lesions remove less or none of the carious tissue and include selective carious tissue removal (or selective excavation (SE)), stepwise carious tissue removal (SW), sealing carious lesions using sealant materials, sealing using preformed metal crowns (Hall Technique, HT), and non-restorative cavity control (NRCC).\nObjectives\nTo determine the comparative effectiveness of interventions (CR, SE, SW, sealing of carious lesions using sealant materials or preformed metal crowns (HT), or NRCC) to treat carious lesions conventionally considered to require restorations (cavitated or micro-cavitated lesions, or occlusal lesions that are clinically non-cavitated but clinically/radiographically extend into dentine) in primary or permanent teeth with vital (sensitive) pulps.\nSearch methods\nAn information specialist searched four bibliographic databases to 21 July 2020 and used additional search methods to identify published, unpublished and ongoing studies.\nSelection criteria\nWe included randomised clinical trials comparing different levels of carious tissue removal, as listed above, against each other, placebo, or no treatment. Participants had permanent or primary teeth (or both), and vital pulps (i.e. no irreversible pulpitis/pulp necrosis), and carious lesions conventionally considered to need a restoration (i.e. cavitated lesions, or non- or micro-cavitated lesions radiographically extending into dentine). The primary outcome was failure, a composite measure of pulp exposure, endodontic therapy, tooth extraction, and restorative complications (including resealing of sealed lesions).\nData collection and analysis\nPairs of review authors independently screened search results, extracted data, and assessed the risk of bias in the studies and the overall certainty of the evidence using GRADE criteria. We measured treatment effects through analysing dichotomous outcomes (presence/absence of complications) and expressing them as odds ratios (OR) with 95% confidence intervals (CI). For failure in the subgroup of deep lesions, we used network meta-analysis to assess and rank the relative effectiveness of different interventions.\nMain results\nWe included 27 studies with 3350 participants and 4195 teeth/lesions, which were conducted in 11 countries and published between 1977 and 2020. Twenty-four studies used a parallel-group design and three were split-mouth. Two studies included adults only, 20 included children/adolescents only and five included both. Ten studies evaluated permanent teeth, 16 evaluated primary teeth and one evaluated both. Three studies treated non-cavitated lesions; 12 treated cavitated, deep lesions, and 12 treated cavitated but not deep lesions or lesions of varying depth.\nSeventeen studies compared conventional treatment (CR) with a less invasive treatment: SE (8), SW (4), two HT (2), sealing with sealant materials (4) and NRCC (1). 
Other comparisons were: SE versus HT (2); SE versus SW (4); SE versus sealing with sealant materials (2); sealant materials versus no sealing (2).\nFollow-up times varied from no follow-up (pulp exposure during treatment) to 120 months, the most common being 12 to 24 months.\nAll studies were at overall high risk of bias.\nEffect of interventions\nSealing using sealants versus other interventions for non-cavitated or cavitated but not deep lesions\nThere was insufficient evidence of a difference between sealing with sealants and CR (OR 5.00, 95% CI 0.51 to 49.27; 1 study, 41 teeth, permanent teeth, cavitated), sealing versus SE (OR 3.11, 95% CI 0.11 to 85.52; 2 studies, 82 primary teeth, cavitated) or sealing versus no treatment (OR 0.05, 95% CI 0.00 to 2.71; 2 studies, 103 permanent teeth, non-cavitated), but we assessed all as very low-certainty evidence.\nHT, CR, SE, NRCC for cavitated, but not deep lesions in primary teeth\nThe odds of failure may be higher for CR than HT (OR 8.35, 95% CI 3.73 to 18.68; 2 studies, 249 teeth; low-certainty evidence) and lower for HT than NRCC (OR 0.19, 95% CI 0.05 to 0.74; 1 study, 84 teeth, very low-certainty evidence). There was insufficient evidence of a difference between SE versus HT (OR 8.94, 95% CI 0.57 to 139.67; 2 studies, 586 teeth) or CR versus NRCC (OR 1.16, 95% CI 0.50 to 2.71; 1 study, 102 teeth), both very low-certainty evidence.\nCR, SE, SW for deep lesions\nThe odds of failure were higher for CR than SW in permanent teeth (OR 2.06, 95% CI 1.34 to 3.17; 3 studies, 398 teeth; moderate-certainty evidence), but not primary teeth (OR 2.43, 95% CI 0.65 to 9.12; 1 study, 63 teeth; very low-certainty evidence).\nThe odds of failure may be higher for CR than SE in permanent teeth (OR 11.32, 95% CI 1.97 to 65.02; 2 studies, 179 teeth) and primary teeth (OR 4.43, 95% CI 1.04 to 18.77; 4 studies, 265 teeth), both very low-certainty evidence. Notably, two studies compared CR versus SE in cavitated, but not deep lesions, with insufficient evidence of a difference in outcome (OR 0.62, 95% CI 0.21 to 1.88; 204 teeth; very low-certainty evidence).\nThe odds of failure were higher for SW than SE in permanent teeth (OR 2.25, 95% CI 1.33 to 3.82; 3 studies, 371 teeth; moderate-certainty evidence), but not primary teeth (OR 2.05, 95% CI 0.49 to 8.62; 2 studies, 126 teeth; very low-certainty evidence).\nFor deep lesions, a network meta-analysis showed the probability of failure to be greatest for CR compared with SE, SW and HT.\nAuthors' conclusions\nCompared with CR, there were lower numbers of failures with HT and SE in the primary dentition, and with SE and SW in the permanent dentition. Most studies showed high risk of bias and limited precision of estimates due to small sample size and typically limited numbers of failures, resulting in assessments of low or very low certainty of evidence for most comparisons.\n\nGiven the provided abstract, please respond to the following query: \"How is the condition treated?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Dentine/cavitated carious lesions can be treated by complete removal of decayed parts of the tooth and placement of a filling. Dentists call this non-selective carious tissue removal and conventional restoration (CR). Although effective, this risks weakening the tooth or causing problems if the pulp of the tooth is exposed. Newer alternatives involve removing less or none of the carious tissue:\n1. Selective carious tissue removal (or selective excavation (SE)): decay around the edges of the cavity is fully removed, but close to the pulp, some softened dentine is left. A filling is then put in.\n2. Stepwise carious tissue removal (SW): as with SE, most of the decayed parts are removed, but soft dentine is left in areas close to the pulp and the cavity is filled using materials such as composite. After a gap of several months, more of the softened dentine is removed.\n3. Sealing carious lesions using sealant materials: a thin coating made from resin or glass ionomer is painted over the decayed tooth, hardens in a few minutes and makes the caries inactive by stopping bacteria reaching it.\n4. Sealing using preformed metal crowns (Hall Technique, HT): a preformed metal crown (i.e. chosen from a selection of sizes to closely fit the tooth, but not moulded for the particular tooth) is pushed over the decayed tooth to seal in the carious lesion.\n5. Non-restorative cavity control (NRCC): cavities are made easier to clean, and patients helped to develop good tooth care and eating habits to reduce the risk of the decay progressing."
}
] |
query-laysum
|
23034
|
[
{
"role": "user",
"content": "Abstract: Background\nWith potential antiviral and anti-inflammatory properties, Janus kinase (JAK) inhibitors represent a potential treatment for symptomatic severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. They may modulate the exuberant immune response to SARS-CoV-2 infection. Furthermore, a direct antiviral effect has been described. An understanding of the current evidence regarding the efficacy and safety of JAK inhibitors as a treatment for coronavirus disease 2019 (COVID-19) is required.\nObjectives\nTo assess the effects of systemic JAK inhibitors plus standard of care compared to standard of care alone (plus/minus placebo) on clinical outcomes in individuals (outpatient or in-hospital) with any severity of COVID-19, and to maintain the currency of the evidence using a living systematic review approach.\nSearch methods\nWe searched the Cochrane COVID-19 Study Register (comprising MEDLINE, Embase, ClinicalTrials.gov, World Health Organization (WHO) International Clinical Trials Registry Platform, medRxiv, and Cochrane Central Register of Controlled Trials), Web of Science, WHO COVID-19 Global literature on coronavirus disease, and the US Department of Veterans Affairs Evidence Synthesis Program (VA ESP) Covid-19 Evidence Reviews to identify studies up to February 2022. We monitor newly published randomised controlled trials (RCTs) weekly using the Cochrane COVID-19 Study Register, and have incorporated all new trials from this source until the first week of April 2022.\nSelection criteria\nWe included RCTs that compared systemic JAK inhibitors plus standard of care to standard of care alone (plus/minus placebo) for the treatment of individuals with COVID-19. We used the WHO definitions of illness severity for COVID-19.\nData collection and analysis\nWe assessed risk of bias of primary outcomes using Cochrane's Risk of Bias 2 (RoB 2) tool. We used GRADE to rate the certainty of evidence for the following primary outcomes: all-cause mortality (up to day 28), all-cause mortality (up to day 60), improvement in clinical status: alive and without need for in-hospital medical care (up to day 28), worsening of clinical status: new need for invasive mechanical ventilation or death (up to day 28), adverse events (any grade), serious adverse events, secondary infections.\nMain results\nWe included six RCTs with 11,145 participants investigating systemic JAK inhibitors plus standard of care compared to standard of care alone (plus/minus placebo). Standard of care followed local protocols and included the application of glucocorticoids (five studies reported their use in a range of 70% to 95% of their participants; one study restricted glucocorticoid use to non-COVID-19 specific indications), antibiotic agents, anticoagulants, and antiviral agents, as well as non-pharmaceutical procedures. At study entry, about 65% of participants required low-flow oxygen, about 23% required high-flow oxygen or non-invasive ventilation, about 8% did not need any respiratory support, and only about 4% were intubated. 
We also identified 13 ongoing studies, and 9 studies that are completed or terminated and where classification is pending.\nIndividuals with moderate to severe disease\nFour studies investigated the single agent baricitinib (10,815 participants), one tofacitinib (289 participants), and one ruxolitinib (41 participants).\nSystemic JAK inhibitors probably decrease all-cause mortality at up to day 28 (95 of 1000 participants in the intervention group versus 131 of 1000 participants in the control group; risk ratio (RR) 0.72, 95% confidence interval (CI) 0.57 to 0.91; 6 studies, 11,145 participants; moderate-certainty evidence), and decrease all-cause mortality at up to day 60 (125 of 1000 participants in the intervention group versus 181 of 1000 participants in the control group; RR 0.69, 95% CI 0.56 to 0.86; 2 studies, 1626 participants; high-certainty evidence).\nSystemic JAK inhibitors probably make little or no difference in improvement in clinical status (discharged alive or hospitalised, but no longer requiring ongoing medical care) (801 of 1000 participants in the intervention group versus 778 of 1000 participants in the control group; RR 1.03, 95% CI 1.00 to 1.06; 4 studies, 10,802 participants; moderate-certainty evidence). They probably decrease the risk of worsening of clinical status (new need for invasive mechanical ventilation or death at day 28) (154 of 1000 participants in the intervention group versus 172 of 1000 participants in the control group; RR 0.90, 95% CI 0.82 to 0.98; 2 studies, 9417 participants; moderate-certainty evidence).\nSystemic JAK inhibitors probably make little or no difference in the rate of adverse events (any grade) (427 of 1000 participants in the intervention group versus 441 of 1000 participants in the control group; RR 0.97, 95% CI 0.88 to 1.08; 3 studies, 1885 participants; moderate-certainty evidence), and probably decrease the occurrence of serious adverse events (160 of 1000 participants in the intervention group versus 202 of 1000 participants in the control group; RR 0.79, 95% CI 0.68 to 0.92; 4 studies, 2901 participants; moderate-certainty evidence). JAK inhibitors may make little or no difference to the rate of secondary infection (111 of 1000 participants in the intervention group versus 113 of 1000 participants in the control group; RR 0.98, 95% CI 0.89 to 1.09; 4 studies, 10,041 participants; low-certainty evidence). Subgroup analysis by severity of COVID-19 disease or type of JAK inhibitor did not identify specific subgroups which benefit more or less from systemic JAK inhibitors.\nIndividuals with asymptomatic or mild disease\nWe did not identify any trial for this population.\nAuthors' conclusions\nIn hospitalised individuals with moderate to severe COVID-19, moderate-certainty evidence shows that systemic JAK inhibitors probably decrease all-cause mortality. Baricitinib was the most often evaluated JAK inhibitor. Moderate-certainty evidence suggests that they probably make little or no difference in improvement in clinical status. Moderate-certainty evidence indicates that systemic JAK inhibitors probably decrease the risk of worsening of clinical status and make little or no difference in the rate of adverse events of any grade, whilst they probably decrease the occurrence of serious adverse events. Based on low-certainty evidence, JAK inhibitors may make little or no difference in the rate of secondary infection. 
Subgroup analysis by severity of COVID-19 or type of agent failed to identify specific subgroups which benefit more or less from systemic JAK inhibitors.\nCurrently, there is no evidence on the efficacy and safety of systemic JAK inhibitors for individuals with asymptomatic or mild disease (non-hospitalised individuals).\n\nGiven the provided abstract, please respond to the following query: \"How up to date is this evidence?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The evidence is up to date to February 2022. We monitor newly published studies weekly using the Cochrane COVID-19 Study Register, and have incorporated all new trials identified from this source until the first week of April 2022."
}
] |
query-laysum
|
23035
|
[
{
"role": "user",
"content": "Abstract: Background\nThalassaemia is a quantitative abnormality of haemoglobin caused by mutations in genes controlling production of alpha or beta globins. Abnormally unpaired globin chains cause membrane damage and cell death within organ systems and destruction of erythroid precursors in the bone marrow, leading to haemolytic anaemia. The life-long management of the general health effects of thalassaemia is highly challenging, and failure to deal with dental and orthodontic complications exacerbates the public health, financial and personal burden of the condition. There is a lack of evidence-based guidelines to help care seekers and providers manage such dental and orthodontic complications. This review aimed to evaluate the available evidence on methods for treating dental and orthodontic complications in people with thalassaemia to inform future recommendations. This is an update of a Cochrane Review first published in 2019.\nObjectives\nTo assess different methods for treating dental and orthodontic complications in people with thalassaemia.\nSearch methods\nWe searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register in September 2022, and we searched nine online databases and trials registries in January 2022. We searched the reference lists of relevant articles and reviews and contacted haematologists, experts in fields of dentistry, organisations, pharmaceutical companies and researchers working in this field.\nSelection criteria\nWe searched for published or unpublished randomised controlled trials (RCTs) that evaluated treatment of dental and orthodontic complications in individuals diagnosed with thalassaemia, irrespective of phenotype, severity, age, sex and ethnic origin.\nData collection and analysis\nTwo review authors independently screened the 37,242 titles retrieved by the search. After deduplication, we identified two potentially relevant RCTs. On assessing their eligibility against our inclusion and exclusion criteria, we excluded one and included the other.\nMain results\nWe included one parallel-design RCT conducted in Saudi Arabia and involving 29 participants (19 males, 10 females) with thalassaemia. It aimed to assess the effectiveness of photodynamic therapy as an adjuvant to conventional full-mouth ultrasonic scaling for the treatment of gingivitis. The average age of participants was around 23 years.\nThere is very low-certainty evidence from this trial that full-mouth ultrasonic scaling plus photodynamic therapy compared to full-mouth ultrasonic scaling alone may improve gingival index score and bleeding on probing after 12 weeks in people with thalassaemia.\nWe found no studies that assessed other interventions for the various dental or orthodontic complications of thalassaemia.\nAuthors' conclusions\nAlthough the included study showed greater reduction in gingivitis in the group treated with full-mouth ultrasonic scaling plus photodynamic therapy, the evidence is of very low certainty. The study had unclear risk of bias, a short follow-up period and no data on safety or adverse effects. We cannot make definitive recommendations for clinical practice based on the limited evidence of a single trial. Future studies will very likely affect the conclusions of this review.\nThis review highlights the need for high-quality RCTs that investigate the effectiveness of various treatment modalities for dental and orthodontic complications in people with thalassaemia. 
It is crucial that future trials assess adverse effects of interventions.\n\nGiven the provided abstract, please respond to the following query: \"How does thalassaemia cause dental and orthodontic problems?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "When people inherit two copies of the mutated gene, the defective haemoglobin in the red blood cells does not release oxygen normally into the body. Defective cells build up in the body's organs and bone marrow cells, causing tissue damage and cell death, which leads to a decrease in red blood cells (anaemia). The lack of oxygen due to anaemia can stop organs from working normally, and people often need blood transfusions to correct the decrease in red blood cells. The body tries to compensate for the anaemia naturally by making more red blood cells, which causes expansion of the bone marrow spaces. In the skull, cheekbones and jawbones, this expanding bone marrow causes abnormal bony swellings, which can result in jaw deformity and teeth not being in their correct position (known as malocclusion). Severe changes to the face and jaw lead to difficulties with speech, eating and appearance. These visible features can be distressing for people with thalassaemia and have a negative impact on their quality of life.\nTo survive, people with thalassaemia have to focus on dealing with the serious impact of anaemia on their general health and on managing complications related to the long-term treatment of the condition. Consequently, they may neglect dental problems like tooth decay, gum disease and infections, and ordinary dental problems can become more severe and require advanced treatment. Before dental practitioners start any kind of dental treatment in people with thalassaemia, they need to consider both the underlying condition and the effects of the resulting anaemia or its treatment. Dental treatment may be particularly risky in people with thalassaemia who have had their spleen removed, as this can make them more prone to infections. There are no guidelines describing the best treatment plan, due to a lack of information in the scientific literature."
}
] |
query-laysum
|
23036
|
[
{
"role": "user",
"content": "Abstract: Background\nMost people who stop smoking gain weight. This can discourage some people from making a quit attempt and risks offsetting some, but not all, of the health advantages of quitting. Interventions to prevent weight gain could improve health outcomes, but there is a concern that they may undermine quitting.\nObjectives\nTo systematically review the effects of: (1) interventions targeting post-cessation weight gain on weight change and smoking cessation (referred to as 'Part 1') and (2) interventions designed to aid smoking cessation that plausibly affect post-cessation weight gain (referred to as 'Part 2').\nSearch methods\nPart 1 - We searched the Cochrane Tobacco Addiction Group's Specialized Register and CENTRAL; latest search 16 October 2020.\nPart 2 - We searched included studies in the following 'parent' Cochrane reviews: nicotine replacement therapy (NRT), antidepressants, nicotine receptor partial agonists, e-cigarettes, and exercise interventions for smoking cessation published in Issue 10, 2020 of the Cochrane Library. We updated register searches for the review of nicotine receptor partial agonists.\nSelection criteria\nPart 1 - trials of interventions that targeted post-cessation weight gain and had measured weight at any follow-up point or smoking cessation, or both, six or more months after quit day.\nPart 2 - trials included in the selected parent Cochrane reviews reporting weight change at any time point.\nData collection and analysis\nScreening and data extraction followed standard Cochrane methods. Change in weight was expressed as difference in weight change from baseline to follow-up between trial arms and was reported only in people abstinent from smoking. Abstinence from smoking was expressed as a risk ratio (RR). Where appropriate, we performed meta-analysis using the inverse variance method for weight, and Mantel-Haenszel method for smoking.\nMain results\nPart 1: We include 37 completed studies; 21 are new to this update. We judged five studies to be at low risk of bias, 17 to be at unclear risk and the remainder at high risk.\nAn intermittent very low calorie diet (VLCD) comprising full meal replacement provided free of charge and accompanied by intensive dietitian support significantly reduced weight gain at end of treatment compared with education on how to avoid weight gain (mean difference (MD) −3.70 kg, 95% confidence interval (CI) −4.82 to −2.58; 1 study, 121 participants), but there was no evidence of benefit at 12 months (MD −1.30 kg, 95% CI −3.49 to 0.89; 1 study, 62 participants). The VLCD increased the chances of abstinence at 12 months (RR 1.73, 95% CI 1.10 to 2.73; 1 study, 287 participants). However, a second study found that no-one completed the VLCD intervention or achieved abstinence.\nInterventions aimed at increasing acceptance of weight gain reported mixed effects at end of treatment, 6 months and 12 months with confidence intervals including both increases and decreases in weight gain compared with no advice or health education. Due to high heterogeneity, we did not combine the data. These interventions increased quit rates at 6 months (RR 1.42, 95% CI 1.03 to 1.96; 4 studies, 619 participants; I2 = 21%), but there was no evidence at 12 months (RR 1.25, 95% CI 0.76 to 2.06; 2 studies, 496 participants; I2 = 26%).\nSome pharmacological interventions tested for limiting post-cessation weight gain (PCWG) reduced weight gain at the end of treatment (dexfenfluramine, phenylpropanolamine, naltrexone). 
The effects of ephedrine and caffeine combined, lorcaserin, and chromium were too imprecise to give useful estimates of treatment effects. There was very low-certainty evidence that personalized weight management support reduced weight gain at end of treatment (MD −1.11 kg, 95% CI −1.93 to −0.29; 3 studies, 121 participants; I2 = 0%), but no evidence in the longer-term 12 months (MD −0.44 kg, 95% CI −2.34 to 1.46; 4 studies, 530 participants; I2 = 41%). There was low to very low-certainty evidence that detailed weight management education without personalized assessment, planning and feedback did not reduce weight gain and may have reduced smoking cessation rates (12 months: MD −0.21 kg, 95% CI −2.28 to 1.86; 2 studies, 61 participants; I2 = 0%; RR for smoking cessation 0.66, 95% CI 0.48 to 0.90; 2 studies, 522 participants; I2 = 0%).\nPart 2: We include 83 completed studies, 27 of which are new to this update.\nThere was low certainty that exercise interventions led to minimal or no weight reduction compared with standard care at end of treatment (MD −0.25 kg, 95% CI −0.78 to 0.29; 4 studies, 404 participants; I2 = 0%). However, weight was reduced at 12 months (MD −2.07 kg, 95% CI −3.78 to −0.36; 3 studies, 182 participants; I2 = 0%).\nBoth bupropion and fluoxetine limited weight gain at end of treatment (bupropion MD −1.01 kg, 95% CI −1.35 to −0.67; 10 studies, 1098 participants; I2 = 3%); (fluoxetine MD −1.01 kg, 95% CI −1.49 to −0.53; 2 studies, 144 participants; I2 = 38%; low- and very low-certainty evidence, respectively). There was no evidence of benefit at 12 months for bupropion, but estimates were imprecise (bupropion MD −0.26 kg, 95% CI −1.31 to 0.78; 7 studies, 471 participants; I2 = 0%). No studies of fluoxetine provided data at 12 months.\nThere was moderate-certainty that NRT reduced weight at end of treatment (MD −0.52 kg, 95% CI −0.99 to −0.05; 21 studies, 2784 participants; I2 = 81%) and moderate-certainty that the effect may be similar at 12 months (MD −0.37 kg, 95% CI −0.86 to 0.11; 17 studies, 1463 participants; I2 = 0%), although the estimates are too imprecise to assess long-term benefit.\nThere was mixed evidence of the effect of varenicline on weight, with high-certainty evidence that weight change was very modestly lower at the end of treatment (MD −0.23 kg, 95% CI −0.53 to 0.06; 14 studies, 2566 participants; I2 = 32%); a low-certainty estimate gave an imprecise estimate of higher weight at 12 months (MD 1.05 kg, 95% CI −0.58 to 2.69; 3 studies, 237 participants; I2 = 0%).\nAuthors' conclusions\nOverall, there is no intervention for which there is moderate certainty of a clinically useful effect on long-term weight gain. There is also no moderate- or high-certainty evidence that interventions designed to limit weight gain reduce the chances of people achieving abstinence from smoking.\n\nGiven the provided abstract, please respond to the following query: \"Programmes aimed at limiting weight gain\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Compared with no programme or brief advice only, a personalized weight-management programme may reduce weight gain at the end of treatment, after six months and after 12 months. However, a weight-management programme without personalized assessment, planning and feedback may not reduce weight gain, and may reduce the number of people who stop smoking.\nCompared with no programme, acceptance-based programmes for weight:"
}
] |
query-laysum
|
23037
|
[
{
"role": "user",
"content": "Abstract: Background\nWhile cigarette smoking has declined globally, waterpipe smoking is rising, especially among youth. The impact of this rise is amplified by mounting evidence of its addictive and harmful nature. Waterpipe smoking is influenced by multiple factors, including appealing flavors, marketing, use in social settings, and misperceptions that waterpipe is less harmful or addictive than cigarettes. People who use waterpipes are interested in quitting, but are often unsuccessful at doing so on their own. Therefore, developing and testing waterpipe cessation interventions to help people quit was identified as a priority for global tobacco control efforts.\nObjectives\nTo evaluate the effectiveness of tobacco cessation interventions for people who smoke waterpipes.\nSearch methods\nWe searched the Cochrane Tobacco Addiction Review Group Specialized Register from database inception to 29 July 2022, using variant terms and spellings ('waterpipe' or 'narghile' or 'arghile' or 'shisha' or 'goza' or 'narkeela' or 'hookah' or 'hubble bubble'). We searched for trials, published or unpublished, in any language.\nSelection criteria\nWe sought randomized controlled trials (RCTs), quasi-RCTs, or cluster-RCTs of any smoking cessation interventions for people who use waterpipes, of any age or gender. In order to be included, studies had to measure waterpipe abstinence at a three-month follow-up or longer.\nData collection and analysis\nWe used standard Cochrane methods. Our primary outcome was abstinence from waterpipe use at least three months after baseline. We also collected data on adverse events. Individual study effects and pooled effects were summarized as risk ratios (RR) and 95% confidence intervals (95% CI), using Mantel-Haenszel random-effects models to combine studies, where appropriate. We assessed statistical heterogeneity with the I2 statistic. We summarized secondary outcomes narratively. We used the five GRADE considerations (risk of bias, inconsistency of effect, imprecision, indirectness, and publication bias) to assess the certainty of the body of evidence for our primary outcome in four categories high, moderate, low, or very low.\nMain results\nThis review included nine studies, involving 2841 participants. All studies were conducted in adults, and were carried out in Iran, Vietnam, Syria, Lebanon, Egypt, Pakistan, and the USA. Studies were conducted in several settings, including colleges/universities, community healthcare centers, tuberculosis hospitals, and cancer treatment centers, while two studies tested e-health interventions (online web-based educational intervention, text message intervention). Overall, we judged three studies to be at low risk of bias, and six studies at high risk of bias.\nWe pooled data from five studies (1030 participants) that tested intensive face-to-face behavioral interventions compared with brief behavioral intervention (e.g. one behavioral counseling session), usual care (e.g. self-help materials), or no intervention. In our meta-analysis, we included people who used waterpipe exclusively, or with another form of tobacco. Overall, we found low-certainty evidence of a benefit of behavioral support for waterpipe abstinence (RR 3.19 95% CI 2.17 to 4.69; I2 = 41%; 5 studies, N = 1030). We downgraded the evidence because of imprecision and risk of bias.\nWe pooled data from two studies (N = 662 participants) that tested varenicline combined with behavioral intervention compared with placebo combined with behavioral intervention. 
Although the point estimate favored varenicline, 95% CIs were imprecise, and incorporated the potential for no difference and lower quit rates in the varenicline groups, as well as a benefit as large as that found in cigarette smoking cessation (RR 1.24, 95% CI 0.69 to 2.24; I2 = 0%; 2 studies, N = 662; low-certainty evidence). We downgraded the evidence because of imprecision. We found no clear evidence of a difference in the number of participants experiencing adverse events (RR 0.98, 95% CI 0.67 to 1.44; I2 = 31%; 2 studies, N = 662). The studies did not report serious adverse events.\nOne study tested the efficacy of seven weeks of bupropion therapy combined with behavioral intervention. There was no clear evidence of benefit for waterpipe cessation when compared with behavioral support alone (RR 0.77, 95% CI 0.42 to 1.41; 1 study, N = 121; very low-certainty evidence), or with self-help (RR 1.94, 95% CI 0.94 to 4.00; 1 study, N = 86; very low-certainty evidence).\nTwo studies tested e-health interventions. One study reported higher waterpipe quit rates among participants randomized to either a tailored mobile phone or untailored mobile phone intervention compared with those randomized to no intervention (RR 1.48, 95% CI 1.07 to 2.05; 2 studies, N = 319; very low-certainty evidence). Another study reported higher waterpipe abstinence rates following an intensive online educational intervention compared with a brief online educational intervention (RR 1.86, 95% CI 1.08 to 3.21; 1 study, N = 70; very low-certainty evidence).\nAuthors' conclusions\nWe found low-certainty evidence that behavioral waterpipe cessation interventions can increase waterpipe quit rates among waterpipe smokers. We found insufficient evidence to assess whether varenicline or bupropion increased waterpipe abstinence; available evidence is compatible with effect sizes similar to those seen for cigarette smoking cessation.\nGiven e-health interventions' potential reach and effectiveness for waterpipe cessation, trials with large samples and long follow-up periods are needed. Future studies should use biochemical validation of abstinence to prevent the risk of detection bias. Finally, there has been limited attention given to high-risk groups for waterpipe smoking, such as youth, young adults, pregnant women, and dual or poly tobacco users. These groups would benefit from targeted studies.\n\nGiven the provided abstract, please respond to the following query: \"What did we find?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We found nine studies that tested interventions to help waterpipe smokers to quit. Among these, five studies tested behavioral support; two studies tested a quit-smoking medicine, called varenicline; one study tested a quit-smoking medicine called bupropion; and two studies tested e-health support delivered over the internet or mobile phone."
}
] |
query-laysum
|
23038
|
[
{
"role": "user",
"content": "Abstract: Background\nThe detection and diagnosis of caries at the initial (non-cavitated) and moderate (enamel) levels of severity is fundamental to achieving and maintaining good oral health and prevention of oral diseases. An increasing array of methods of early caries detection have been proposed that could potentially support traditional methods of detection and diagnosis. Earlier identification of disease could afford patients the opportunity of less invasive treatment with less destruction of tooth tissue, reduce the need for treatment with aerosol-generating procedures, and potentially result in a reduced cost of care to the patient and to healthcare services.\nObjectives\nTo determine the diagnostic accuracy of different visual classification systems for the detection and diagnosis of non-cavitated coronal dental caries for different purposes (detection and diagnosis) and in different populations (children or adults).\nSearch methods\nCochrane Oral Health's Information Specialist undertook a search of the following databases: MEDLINE Ovid (1946 to 30 April 2020); Embase Ovid (1980 to 30 April 2020); US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov, to 30 April 2020); and the World Health Organization International Clinical Trials Registry Platform (to 30 April 2020). We studied reference lists as well as published systematic review articles.\nSelection criteria\nWe included diagnostic accuracy study designs that compared a visual classification system (index test) with a reference standard (histology, excavation, radiographs). This included cross-sectional studies that evaluated the diagnostic accuracy of single index tests and studies that directly compared two or more index tests. Studies reporting at both the patient or tooth surface level were included. In vitro and in vivo studies were considered. Studies that explicitly recruited participants with caries into dentine or frank cavitation were excluded. We also excluded studies that artificially created carious lesions and those that used an index test during the excavation of dental caries to ascertain the optimum depth of excavation.\nData collection and analysis\nWe extracted data independently and in duplicate using a standardised data extraction and quality assessment form based on QUADAS-2 specific to the review context. Estimates of diagnostic accuracy were determined using the bivariate hierarchical method to produce summary points of sensitivity and specificity with 95% confidence intervals (CIs) and regions, and 95% prediction regions. The comparative accuracy of different classification systems was conducted based on indirect comparisons. Potential sources of heterogeneity were pre-specified and explored visually and more formally through meta-regression.\nMain results\nWe included 71 datasets from 67 studies (48 completed in vitro) reporting a total of 19,590 tooth sites/surfaces. The most frequently reported classification systems were the International Caries Detection and Assessment System (ICDAS) (36 studies) and Ekstrand-Ricketts-Kidd (ERK) (15 studies). In reporting the results, no distinction was made between detection and diagnosis. Only two studies were at low risk of bias across all four domains, and 15 studies were at low concern for applicability across all three domains. The patient selection domain had the highest proportion of high risk of bias studies (49 studies). 
Four studies were assessed at high risk of bias for the index test domain, nine for the reference standard domain, and seven for the flow and timing domain. Due to the high number of studies on extracted teeth concerns regarding applicability were high for the patient selection and index test domains (49 and 46 studies respectively).\nStudies were synthesised using a hierarchical bivariate method for meta-analysis. There was substantial variability in the results of the individual studies: sensitivities ranged from 0.16 to 1.00 and specificities from 0 to 1.00. For all visual classification systems the estimated summary sensitivity and specificity point was 0.86 (95% CI 0.80 to 0.90) and 0.77 (95% CI 0.72 to 0.82) respectively, diagnostic odds ratio (DOR) 20.38 (95% CI 14.33 to 28.98). In a cohort of 1000 tooth surfaces with 28% prevalence of enamel caries, this would result in 40 being classified as disease free when enamel caries was truly present (false negatives), and 163 being classified as diseased in the absence of enamel caries (false positives). The addition of test type to the model did not result in any meaningful difference to the sensitivity or specificity estimates (Chi2(4) = 3.78, P = 0.44), nor did the addition of primary or permanent dentition (Chi2(2) = 0.90, P = 0.64). The variability of results could not be explained by tooth surface (occlusal or approximal), prevalence of dentinal caries in the sample, nor reference standard. Only one study intentionally included restored teeth in its sample and no studies reported the inclusion of sealants.\nWe rated the certainty of the evidence as low, and downgraded two levels in total for risk of bias due to limitations in the design and conduct of the included studies, indirectness arising from the in vitro studies, and inconsistency of results.\nAuthors' conclusions\nWhilst the confidence intervals for the summary points of the different visual classification systems indicated reasonable performance, they do not reflect the confidence that one can have in the accuracy of assessment using these systems due to the considerable unexplained heterogeneity evident across the studies. The prediction regions in which the sensitivity and specificity of a future study should lie are very broad, an important consideration when interpreting the results of this review. Should treatment be provided as a consequence of a false-positive result then this would be non-invasive, typically the application of fluoride varnish where it was not required, with low potential for an adverse event but healthcare resource and finance costs.\nDespite the robust methodology applied in this comprehensive review, the results should be interpreted with some caution due to shortcomings in the design and execution of many of the included studies. Studies to determine the diagnostic accuracy of methods to detect and diagnose caries in situ are particularly challenging. Wherever possible future studies should be carried out in a clinical setting, to provide a realistic assessment of performance within the oral cavity with the challenges of plaque, tooth staining, and restorations, and consider methods to minimise bias arising from the use of imperfect reference standards in clinical studies.\n\nGiven the provided abstract, please respond to the following query: \"Who do the results of this review apply to?\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Studies included in the review were carried out in Brazil, Europe, Japan, and Australia. A large number of studies performed the tests on extracted teeth, while clinical studies were completed in dental hospitals, general dental practices, or schools. Studies were from the years 1988 to 2019."
}
] |
query-laysum
|
23039
|
[
{
"role": "user",
"content": "Abstract: Background\nPerioperative hypertension requires careful management. Angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin II type 1 receptor blockers (ARBs) have shown efficacy in treating hypertension associated with surgery. However, there is lack of consensus about whether they can prevent mortality and morbidity.\nObjectives\nTo systematically assess the benefits and harms of administration of ACEIs or ARBs perioperatively for the prevention of mortality and morbidity in adults (aged 18 years and above) undergoing any type of surgery under general anaesthesia.\nSearch methods\nWe searched the current issue of the Cochrane Central Register of Controlled Trials (CENTRAL; 2014, Issue 12), Ovid MEDLINE (1966 to 8 December 2014), EMBASE (1980 to 8 December 2014), and references of the retrieved randomized trials, meta-analyses, and systematic reviews. We reran the search on February 3, 2017. Three potential new studies of interest were added to a list of 'Studies awaiting Classification' and will be incorporated into the formal review findings during the review update.\nSelection criteria\nWe included randomized controlled trials (RCTs) comparing perioperative administration of ACEIs or ARBs with placebo in adults (aged 18 years and above) undergoing any type of surgery under general anaesthesia. We excluded studies in which participants underwent procedures that required local anaesthesia only, or participants who had already been on ACEIs or ARBs.\nData collection and analysis\nTwo review authors independently performed study selection, assessed the risk of bias, and extracted data. We used standard methodological procedures expected by Cochrane.\nMain results\nWe included seven RCTs with a total of 571 participants in the review. Two of the seven trials involved 36 participants undergoing non-cardiac vascular surgery (infrarenal aortic surgery), and five involved 535 participants undergoing cardiac surgery, including valvular surgery, coronary artery bypass surgery, and cardiopulmonary bypass surgery. The intervention was started from 11 days to 25 minutes before surgery in six trials and during surgery in one trial. We considered all seven RCTs to carry a high risk of bias. The effects of ACEIs or ARBs on perioperative mortality and acute myocardial infarction were uncertain because the quality of the evidence was very low. The risk of death was 2.7% in the ACEIs or ARBs group and 1.6% in the placebo group (risk ratio (RR) 1.61; 95% confidence interval (CI) 0.44 to 5.85). The risk of acute myocardial infarction was 1.7% in the ACEIs or ARBs group and 3.0% in the placebo group (RR 0.55; 95% CI 0.14 to 2.26). ACEIs or ARBs may improve congestive heart failure (cardiac index) perioperatively (mean difference (MD) -0.60; 95% CI -0.70 to -0.50, very low-quality evidence). In terms of rate of complications, there was no difference in perioperative cerebrovascular complications (RR 0.48; 95% CI 0.18 to 1.28, very low-quality evidence) and hypotension (RR 1.95; 95% CI 0.86 to 4.41, very low-quality evidence). Cardiac surgery-related renal failure was not reported. ACEIs or ARBs were associated with shortened length of hospital stay (MD -0.54; 95% CI -0.93 to -0.16, P value = 0.005, very low-quality evidence). These findings should be interpreted cautiously due to likely confounding by the clinical backgrounds of the participants. 
ACEIs or ARBs may shorten the length of hospital stay (MD -0.54; 95% CI -0.93 to -0.16; very low-quality evidence). Two studies reported adverse events, and there was no evidence of a difference between the ACEIs or ARBs and control groups.\nAuthors' conclusions\nOverall, this review did not find evidence to support that perioperative ACEIs or ARBs can prevent mortality, morbidity, and complications (hypotension, perioperative cerebrovascular complications, and cardiac surgery-related renal failure). We found no evidence showing that the use of these drugs may reduce the rate of acute myocardial infarction. However, ACEIs or ARBs may increase cardiac output perioperatively. Due to the low and very low methodological quality, high risk of bias, and lack of power of the included studies, the true effect may be substantially different from the observed estimates. Perioperative (mainly elective cardiac surgery, according to included studies) initiation of ACEIs or ARBs therapy should be individualized.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "People with high blood pressure around the time of surgery are carefully treated as they have a higher risk of complications such as reduced blood flow to the heart muscle (myocardial ischaemia), heart attack, and even death. ACEIs or ARBs relax the blood vessels and are effective in treating high blood pressure associated with surgery, but the outcome is uncertain when these are used for the prevention of surgery-related complications."
}
] |
query-laysum
|
23040
|
[
{
"role": "user",
"content": "Abstract: Background\n18F-flutemetamol uptake by brain tissue, measured by positron emission tomography (PET), is accepted by regulatory agencies like the Food and Drug Administration (FDA) and the European Medicine Agencies (EMA) for assessing amyloid load in people with dementia. Its added value is mainly demonstrated by excluding Alzheimer's pathology in an established dementia diagnosis. However, the National Institute on Aging and Alzheimer's Association (NIA-AA) revised the diagnostic criteria for Alzheimer's disease and the confidence in the diagnosis of mild cognitive impairment (MCI) due to Alzheimer's disease may be increased when using some amyloid biomarkers tests like 18F-flutemetamol. These tests, added to the MCI core clinical criteria, might increase the diagnostic test accuracy (DTA) of a testing strategy. However, the DTA of 18F-flutemetamol to predict the progression from MCI to Alzheimer’s disease dementia (ADD) or other dementias has not yet been systematically evaluated.\nObjectives\nTo determine the DTA of the 18F-flutemetamol PET scan for detecting people with MCI at time of performing the test who will clinically progress to ADD, other forms of dementia (non-ADD) or any form of dementia at follow-up.\nSearch methods\nThe most recent search for this review was performed in May 2017. We searched MEDLINE (OvidSP), Embase (OvidSP), PsycINFO (OvidSP), BIOSIS Citation Index (Thomson Reuters Web of Science), Web of Science Core Collection, including the Science Citation Index (Thomson Reuters Web of Science) and the Conference Proceedings Citation Index (Thomson Reuters Web of Science), LILACS (BIREME), CINAHL (EBSCOhost), ClinicalTrials.gov (https://clinicaltrials.gov), and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) (http://www.who.int/ictrp/search/en/). We also searched ALOIS, the Cochrane Dementia & Cognitive Improvement Group’s specialised register of dementia studies (http://www.medicine.ox.ac.uk/alois/). We checked the reference lists of any relevant studies and systematic reviews, and performed citation tracking using the Science Citation Index to identify any additional relevant studies. No language or date restrictions were applied to the electronic searches.\nSelection criteria\nWe included studies that had prospectively defined cohorts with any accepted definition of MCI at time of performing the test and the use of 18F-flutemetamol scan to evaluate the DTA of the progression from MCI to ADD or other forms of dementia. In addition, we only selected studies that applied a reference standard for Alzheimer’s dementia diagnosis, for example, National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer’s Disease and Related Disorders Association (NINCDS-ADRDA) or Diagnostic and Statistical Manual of Mental Disorders-IV (DSM-IV) criteria.\nData collection and analysis\nWe screened all titles and abstracts identified in electronic-database searches. Two review authors independently selected studies for inclusion and extracted data to create two-by-two tables, showing the binary test results cross-classified with the binary reference standard. We used these data to calculate sensitivities, specificities, and their 95% confidence intervals. 
Two independent assessors performed quality assessment using the QUADAS-2 tool plus some additional items to assess the methodological quality of the included studies.\nMain results\nProgression from MCI to ADD was evaluated in 243 participants from two studies. The studies reported data on 19 participants with two years of follow-up and on 224 participants with three years of follow-up. Nine (47.4%) participants converted at two years follow-up and 81 (36.2%) converted at three years of follow-up.\nThere were concerns about participant selection and sampling in both studies. The index test domain in one study was considered unclear and in the second study it was considered at low risk of bias. For the reference standard domain, one study was considered at low risk and the second study was considered to have an unclear risk of bias. Regarding the domains of flow and timing, both studies were considered at high risk of bias.\nMCI to ADD;\nProgression from MCI to ADD at two years of follow-up had a sensitivity of 89% (95% CI 52 to 100) and a specificity of 80% (95% CI 44 to 97) by quantitative assessment by SUVR (n = 19, 1 study).\nProgression from MCI to ADD at three years of follow-up had a sensitivity of 64% (95% CI 53 to 75) and a specificity of 69% (95% CI 60 to 76) by visual assessment (n = 224, 1 study).\nThere was no information regarding the other two objectives in this systematic review (SR): progression from MCI to other forms of dementia and progression to any form of dementia at follow-up.\nAuthors' conclusions\nDue to the varying sensitivity and specificity for predicting the progression from MCI to ADD and the limited data available, we cannot recommend routine use of 18F-flutemetamol in clinical practice. 18F-flutemetamol has high financial costs; therefore, clearly demonstrating its DTA and standardising the process of the 18F-flutemetamol modality is important prior to its wider use.\n\nGiven the provided abstract, please respond to the following query: \"Aim\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "We aimed to evaluate the accuracy of the 18F-flutemetamol PET scan in identifying those people with MCI who clinically progress to ADD, other types of dementia, or any form of dementia over a period of time."
}
] |
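The abstract in the row above describes cross-classifying binary index-test results against the reference standard in two-by-two tables and deriving sensitivities, specificities and their 95% confidence intervals. As a rough illustration of that arithmetic only, the sketch below uses invented counts (loosely in the spirit of the small two-year cohort, not the review's actual data) and a Wilson score interval, which is one common choice for a binomial CI; the review does not state which interval method it used.

```python
# Rough illustration only: sensitivity/specificity with 95% Wilson CIs from a 2x2 table.
# The counts are invented and are NOT the data from the included studies.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n (one common interval choice)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical 2x2 table: scan result cross-classified with progression to ADD
tp, fn = 8, 1   # progressed to ADD: scan positive / scan negative
fp, tn = 2, 8   # did not progress:  scan positive / scan negative

sens, spec = tp / (tp + fn), tn / (tn + fp)
sens_lo, sens_hi = wilson_ci(tp, tp + fn)
spec_lo, spec_hi = wilson_ci(tn, tn + fp)
print(f"sensitivity {sens:.0%} (95% CI {sens_lo:.0%} to {sens_hi:.0%})")
print(f"specificity {spec:.0%} (95% CI {spec_lo:.0%} to {spec_hi:.0%})")
```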
query-laysum
|
23041
|
[
{
"role": "user",
"content": "Abstract: Background\nPlatinum-based therapy, including cisplatin, carboplatin, oxaliplatin or a combination of these, is used to treat a variety of paediatric malignancies. One of the most significant adverse effects is the occurrence of hearing loss or ototoxicity. In an effort to prevent this ototoxicity, different otoprotective medical interventions have been studied. This review is the third update of a previously published Cochrane Review.\nObjectives\nTo assess the efficacy of medical interventions to prevent hearing loss and to determine possible effects of these interventions on antitumour efficacy, toxicities other than hearing loss and quality of life in children with cancer treated with platinum-based therapy as compared to placebo, no additional treatment or another protective medical intervention.\nSearch methods\nWe searched the Cochrane Central Register of Controlled Trials, MEDLINE (PubMed) and Embase (Ovid) to 8 January 2019. We handsearched reference lists of relevant articles and assessed the conference proceedings of the International Society for Paediatric Oncology (2006 up to and including 2018), the American Society of Pediatric Hematology/Oncology (2007 up to and including 2018) and the International Conference on Long-Term Complications of Treatment of Children and Adolescents for Cancer (2010 up to and including 2015). We scanned ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP; apps.who.int/trialsearch) for ongoing trials (on 2 January 2019).\nSelection criteria\nRandomized controlled trials (RCTs) or controlled clinical trials (CCTs) evaluating platinum-based therapy with an otoprotective medical intervention versus platinum-based therapy with placebo, no additional treatment or another protective medical intervention in children with cancer.\nData collection and analysis\nTwo review authors independently performed the study selection, data extraction, risk of bias assessment and GRADE assessment of included studies, including adverse effects. We performed analyses according to the Cochrane Handbook for Systematic Reviews of Interventions.\nMain results\nWe identified two RCTs and one CCT (total number of participants 149) evaluating the use of amifostine versus no additional treatment in the original version of the review; the updates identified no additional studies. Two studies included children with osteosarcoma, and the other study included children with hepatoblastoma. Children received cisplatin only or a combination of cisplatin and carboplatin, either intra-arterially or intravenously. Pooling of results of the included studies was not possible. From individual studies the effect of amifostine on symptomatic ototoxicity only (i.e. National Cancer Institute Common Toxicity Criteria version 2 (NCICTCv2) or modified Brock grade 2 or higher) and combined asymptomatic and symptomatic ototoxicity (i.e. NCICTCv2 or modified Brock grade 1 or higher) were uncertain (low-certainty evidence). Only one study including children with osteosarcoma treated with intra-arterial cisplatin provided information on tumour response, defined as the number of participants with a good or partial remission. The available-data analysis (data were missing for one participant), best-case scenario analysis and worst-case scenario analysis showed a difference in favour of amifostine, although the certainty of evidence for this effect was low. 
There was no information on survival for any of the included studies. Only one study, including children with osteosarcoma treated with intra-arterial cisplatin, provided data on the number of participants with adverse effects other than ototoxicity grade 3 or higher (on NCICTCv2 scale). There was low-certainty evidence that grade 3 or 4 vomiting was higher with amifostine (risk ratio (RR) 9.04, 95% confidence interval (CI) 1.99 to 41.12). The effects on cardiotoxicity and renal toxicity grade 3 or 4 were uncertain (low-certainty evidence). None of the studies evaluated quality of life.\nIn the recent update, we also identified one RCT including 109 children with localized hepatoblastoma evaluating the use of sodium thiosulfate versus no additional treatment. Children received intravenous cisplatin only (one child also received carboplatin). There was moderate-certainty evidence that both symptomatic ototoxicity only (i.e. Brock criteria grade 2 or higher) and combined asymptomatic and symptomatic ototoxicity (i.e. Brock criteria grade 1 or higher) was lower with sodium thiosulfate (combined asymptomatic and symptomatic ototoxicity: RR 0.52, 95% CI 0.33 to 0.81; symptomatic ototoxicity only: RR 0.39, 95% CI 0.19 to 0.83). The effect of sodium thiosulfate on tumour response (defined as number of participants with a complete or partial response at the end of treatment), overall survival (calculated from time of randomization to death or last follow-up), event-free survival (calculated from time of randomization until disease progression, disease relapse, second primary cancer, death, or last follow-up, whichever came first) and adverse effects other than hearing loss and tinnitus grade 3 or higher (according to National Cancer Institute Common Toxicity Criteria Adverse Effects version 3 (NCICTCAEv3) criteria) was uncertain (low-certainty evidence for all these outcomes). Quality of life was not assessed.\nWe found no eligible studies for possible otoprotective medical interventions other than amifostine and sodium thiosulfate and for other types of malignancies.\nAuthors' conclusions\nAt the moment there is no evidence from individual studies in children with osteosarcoma or hepatoblastoma treated with different platinum analogues and dosage schedules that underscores the use of amifostine as an otoprotective intervention as compared to no additional treatment. Since pooling of results was not possible and the evidence was of low certainty, no definitive conclusions can be made. Since we found only one RCT evaluating the use of sodium thiosulfate in children with localized hepatoblastoma treated with cisplatin, no definitive conclusions on benefits and harms can be drawn. It should be noted that 'no evidence of effect', as identified in this review, is not the same as 'evidence of no effect'. We identified no eligible studies for other possible otoprotective medical interventions and other types of malignancies, so no conclusions can be made about their efficacy in preventing ototoxicity in children treated with platinum-based therapy. More high-quality research is needed.\n\nGiven the provided abstract, please respond to the following query: \"Quality of the evidence\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "The quality of the evidence was moderate (for hearing loss with sodium thiosulfate) to low (for all other outcomes (results)). The quality of the evidence was limited because of issues with the study design (for all outcomes) and small numbers of participants in each study (for all outcomes except hearing loss with sodium thiosulfate)."
}
] |
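The abstract above reports treatment effects as risk ratios with 95% confidence intervals (for example, RR 0.52, 95% CI 0.33 to 0.81 for combined ototoxicity with sodium thiosulfate). A minimal sketch of how such a ratio and its CI are typically obtained from raw event counts on the log scale is shown below; the counts are made up for illustration and are not taken from the included trials.

```python
# Hypothetical sketch: a risk ratio (RR) and its 95% CI from raw event counts,
# computed on the log scale as is standard for ratio measures.
from math import exp, log, sqrt

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se_log_rr = sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    return rr, exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr)

# Made-up counts: 18/55 children with hearing loss on the otoprotectant vs 33/54 controls
rr, lo, hi = risk_ratio_ci(18, 55, 33, 54)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```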
query-laysum
|
23042
|
[
{
"role": "user",
"content": "Abstract: Background\nEndometriosis is defined as the presence of endometrial tissue (glands and stroma) outside the uterine cavity. This condition is oestrogen-dependent and thus is seen primarily during the reproductive years. Owing to their antiproliferative effects in the endometrium, progesterone receptor modulators (PRMs) have been advocated for treatment of endometriosis.\nObjectives\nTo assess the effectiveness and safety of PRMs primarily in terms of pain relief as compared with other treatments or placebo or no treatment in women of reproductive age with endometriosis.\nSearch methods\nWe searched the following electronic databases, trial registers, and websites: the Cochrane Gynaecology and Fertility Group (CGFG) Specialised Register of Controlled Trials, the Central Register of Studies Online (CRSO), MEDLINE, Embase, PsycINFO, clinicaltrials.gov, and the World Health Organization (WHO) platform, from inception to 28 November 2016. We handsearched reference lists of articles retrieved by the search.\nSelection criteria\nWe included randomised controlled trials (RCTs) published in all languages that examined effects of PRMs for treatment of symptomatic endometriosis.\nData collection and analysis\nWe used standard methodological procedures as expected by the Cochrane Collaboration. Primary outcomes included measures of pain and side effects.\nMain results\nWe included 10 randomised controlled trials (RCTs) with 960 women. Two RCTs compared mifepristone versus placebo or versus a different dose of mifepristone, one RCT compared asoprisnil versus placebo, one compared ulipristal versus leuprolide acetate, and four compared gestrinone versus danazol, gonadotropin-releasing hormone (GnRH) analogues, or a different dose of gestrinone. The quality of evidence ranged from high to very low. The main limitations were serious risk of bias (associated with poor reporting of methods and high or unclear rates of attrition in most studies), very serious imprecision (associated with low event rates and wide confidence intervals), and indirectness (outcome assessed in a select subgroup of participants).\nMifepristone versus placebo\nOne study made this comparison and reported rates of painful symptoms among women who reported symptoms at baseline.\nAt three months, the mifepristone group had lower rates of dysmenorrhoea (odds ratio (OR) 0.08, 95% confidence interval (CI) 0.04 to 0.17; one RCT, n =352; moderate-quality evidence), suggesting that if 40% of women taking placebo experience dysmenorrhoea, then between 3% and 10% of women taking mifepristone will do so. The mifepristone group also had lower rates of dyspareunia (OR 0.23, 95% CI 0.11 to 0.51; one RCT, n = 223; low-quality evidence). However, the mifepristone group had higher rates of side effects: Nearly 90% had amenorrhoea and 24% had hot flushes, although the placebo group reported only one event of each (1%) (high-quality evidence). Evidence was insufficient to show differences in rates of nausea, vomiting, or fatigue, if present.\nMifepristone dose comparisons\nTwo studies compared doses of mifepristone and found insufficient evidence to show differences between different doses in terms of effectiveness or safety, if present. 
However, subgroup analysis of comparisons between mifepristone and placebo suggests that the 2.5 mg dose may be less effective than 5 mg or 10 mg for treating dysmenorrhoea or dyspareunia.\nGestrinone comparisons\nOne study compared gestrinone with danazol, and another study compared gestrinone with leuprolin.\nEvidence was insufficient to show differences, if present, between gestrinone and danazol in rate of pain relief (those reporting no or mild pelvic pain) (OR 0.71, 95% CI 0.33 to 1.56; two RCTs, n = 230; very low-quality evidence), dysmenorrhoea (OR 0.72, 95% CI 0.39 to 1.33; two RCTs, n = 214; very low-quality evidence), or dyspareunia (OR 0.83, 95% CI 0.37 to 1.86; two RCTs, n = 222; very low-quality evidence). The gestrinone group had a higher rate of hirsutism (OR 2.63, 95% CI 1.60 to 4.32; two RCTs, n = 302; very low-quality evidence) and a lower rate of decreased breast size (OR 0.62, 95% CI 0.38 to 0.98; two RCTs, n = 302; low-quality evidence). Evidence was insufficient to show differences between groups, if present, in rate of hot flushes (OR 0.79, 95% CI 0.50 to 1.26; two RCTs, n = 302; very low-quality evidence) or acne (OR 1.45, 95% CI 0.90 to 2.33; two RCTs, n = 302; low-quality evidence).\nWhen researchers compared gestrinone versus leuprolin through measurements on the 1 to 3 verbal rating scale (lower score denotes benefit), the mean dysmenorrhoea score was higher in the gestrinone group (MD 0.35 points, 95% CI 0.12 to 0.58; one RCT, n = 55; low-quality evidence), but the mean dyspareunia score was lower in this group (MD -0.33 points, 95% CI -0.62 to -0.04; low-quality evidence). The gestrinone group had lower rates of amenorrhoea (OR 0.04, 95% CI 0.01 to 0.38; one RCT, n = 49; low-quality evidence) and hot flushes (OR 0.20, 95% CI 0.06 to 0.63; one study, n = 55; low-quality evidence) but higher rates of spotting or bleeding (OR 22.92, 95% CI 2.64 to 198.66; one RCT, n = 49; low-quality evidence).\nEvidence was insufficient to show differences in effectiveness or safety between different doses of gestrinone, if present.\nAsoprisnil versus placebo\nOne study (n = 130) made this comparison but did not report data suitable for analysis.\nUlipristal versus leuprolide acetate\nOne study (n = 38) made this comparison but did not report data suitable for analysis.\nAuthors' conclusions\nAmong women with endometriosis, moderate-quality evidence shows that mifepristone relieves dysmenorrhoea, and low-quality evidence suggests that this agent relieves dyspareunia, although amenorrhoea and hot flushes are common side effects. Data on dosage were inconclusive, although they suggest that the 2.5 mg dose of mifepristone may be less effective than higher doses. We found insufficient evidence to permit firm conclusions about the safety and effectiveness of other progesterone receptor modulators.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Endometriosis is a disease of endometrial tissue (glands and stroma) outside the uterine cavity. It is oestrogen-dependent and thus is seen primarily during the reproductive years. It can cause pain in the abdomen, generally during periods (menstruation) or associated with sexual intercourse. Progesterone receptor modulators have been advocated as one of the hormonal treatments for endometriosis."
}
] |
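The abstract above translates an odds ratio into absolute terms ("if 40% of women taking placebo experience dysmenorrhoea, then between 3% and 10% of women taking mifepristone will do so"). That statement is the standard odds-to-risk conversion applied to the OR's confidence limits; the small sketch below reproduces it, with the 40% control risk taken from the abstract and everything else hypothetical plumbing.

```python
# Sketch of the odds-to-risk conversion behind the abstract's statement that an OR of
# 0.08 (95% CI 0.04 to 0.17) with a 40% placebo risk implies roughly 3% to 10% on mifepristone.
def risk_from_or(odds_ratio, control_risk):
    control_odds = control_risk / (1 - control_risk)
    treated_odds = odds_ratio * control_odds
    return treated_odds / (1 + treated_odds)

control_risk = 0.40  # assumed dysmenorrhoea risk with placebo, as stated in the abstract
for label, or_value in [("lower CI", 0.04), ("point estimate", 0.08), ("upper CI", 0.17)]:
    print(f"{label}: treated risk approx {risk_from_or(or_value, control_risk):.0%}")
# prints roughly 3%, 5% and 10% once rounded, matching the interpretation in the abstract
```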
query-laysum
|
23043
|
[
{
"role": "user",
"content": "Abstract: Background\nIntermediate hyperglycaemia (IH) is characterised by one or more measurements of elevated blood glucose concentrations, such as impaired fasting glucose (IFG), impaired glucose tolerance (IGT) and elevated glycosylated haemoglobin A1c (HbA1c). These levels are higher than normal but below the diagnostic threshold for type 2 diabetes mellitus (T2DM). The reduced threshold of 5.6 mmol/L (100 mg/dL) fasting plasma glucose (FPG) for defining IFG, introduced by the American Diabetes Association (ADA) in 2003, substantially increased the prevalence of IFG. Likewise, the lowering of the HbA1c threshold from 6.0% to 5.7% by the ADA in 2010 could potentially have significant medical, public health and socioeconomic impacts.\nObjectives\nTo assess the overall prognosis of people with IH for developing T2DM, regression from IH to normoglycaemia and the difference in T2DM incidence in people with IH versus people with normoglycaemia.\nSearch methods\nWe searched MEDLINE, Embase, ClincialTrials.gov and the International Clinical Trials Registry Platform (ICTRP) Search Portal up to December 2016 and updated the MEDLINE search in February 2018. We used several complementary search methods in addition to a Boolean search based on analytical text mining.\nSelection criteria\nWe included prospective cohort studies investigating the development of T2DM in people with IH. We used standard definitions of IH as described by the ADA or World Health Organization (WHO). We excluded intervention trials and studies on cohorts with additional comorbidities at baseline, studies with missing data on the transition from IH to T2DM, and studies where T2DM incidence was evaluated by documents or self-report only.\nData collection and analysis\nOne review author extracted study characteristics, and a second author checked the extracted data. We used a tailored version of the Quality In Prognosis Studies (QUIPS) tool for assessing risk of bias. We pooled incidence and incidence rate ratios (IRR) using a random-effects model to account for between-study heterogeneity. To meta-analyse incidence data, we used a method for pooling proportions. For hazard ratios (HR) and odds ratios (OR) of IH versus normoglycaemia, reported with 95% confidence intervals (CI), we obtained standard errors from these CIs and performed random-effects meta-analyses using the generic inverse-variance method. We used multivariable HRs and the model with the greatest number of covariates. We evaluated the certainty of the evidence with an adapted version of the GRADE framework.\nMain results\nWe included 103 prospective cohort studies. The studies mainly defined IH by IFG5.6 (FPG mmol/L 5.6 to 6.9 mmol/L or 100 mg/dL to 125 mg/dL), IFG6.1 (FPG 6.1 mmol/L to 6.9 mmol/L or 110 mg/dL to 125 mg/dL), IGT (plasma glucose 7.8 mmol/L to 11.1 mmol/L or 140 mg/dL to 199 mg/dL two hours after a 75 g glucose load on the oral glucose tolerance test, combined IFG and IGT (IFG/IGT), and elevated HbA1c (HbA1c5.7: HbA1c 5.7% to 6.4% or 39 mmol/mol to 46 mmol/mol; HbA1c6.0: HbA1c 6.0% to 6.4% or 42 mmol/mol to 46 mmol/mol). The follow-up period ranged from 1 to 24 years. Ninety-three studies evaluated the overall prognosis of people with IH measured by cumulative T2DM incidence, and 52 studies evaluated glycaemic status as a prognostic factor for T2DM by comparing a cohort with IH to a cohort with normoglycaemia. 
Participants were of Australian, European or North American origin in 41 studies; Latin American in 7; Asian or Middle Eastern in 50; and Islanders or American Indians in 5. Six studies included children and/or adolescents.\nCumulative incidence of T2DM associated with IFG5.6, IFG6.1, IGT and the combination of IFG/IGT increased with length of follow-up. Cumulative incidence was highest with IFG/IGT, followed by IGT, IFG6.1 and IFG5.6. Limited data showed a higher T2DM incidence associated with HbA1c6.0 compared to HbA1c5.7. We rated the evidence for overall prognosis as of moderate certainty because of imprecision (wide CIs in most studies). In the 47 studies reporting restitution of normoglycaemia, regression ranged from 33% to 59% within one to five years follow-up, and from 17% to 42% for 6 to 11 years of follow-up (moderate-certainty evidence).\nStudies evaluating the prognostic effect of IH versus normoglycaemia reported different effect measures (HRs, IRRs and ORs). Overall, the effect measures all indicated an elevated risk of T2DM at 1 to 24 years of follow-up. Taking into account the long-term follow-up of cohort studies, estimation of HRs for time-dependent events like T2DM incidence appeared most reliable. The pooled HR and the number of studies and participants for different IH definitions as compared to normoglycaemia were: IFG5.6: HR 4.32 (95% CI 2.61 to 7.12), 8 studies, 9017 participants; IFG6.1: HR 5.47 (95% CI 3.50 to 8.54), 9 studies, 2818 participants; IGT: HR 3.61 (95% CI 2.31 to 5.64), 5 studies, 4010 participants; IFG and IGT: HR 6.90 (95% CI 4.15 to 11.45), 5 studies, 1038 participants; HbA1c5.7: HR 5.55 (95% CI 2.77 to 11.12), 4 studies, 5223 participants; HbA1c6.0: HR 10.10 (95% CI 3.59 to 28.43), 6 studies, 4532 participants. In subgroup analyses, there was no clear pattern of differences between geographic regions. We downgraded the evidence for the prognostic effect of IH versus normoglycaemia to low-certainty evidence due to study limitations because many studies did not adequately adjust for confounders. Imprecision and inconsistency required further downgrading due to wide 95% CIs and wide 95% prediction intervals (sometimes ranging from negative to positive prognostic factor to outcome associations), respectively.\nThis evidence is up to date as of 26 February 2018.\nAuthors' conclusions\nOverall prognosis of people with IH worsened over time. T2DM cumulative incidence generally increased over the course of follow-up but varied with IH definition. Regression from IH to normoglycaemia decreased over time but was observed even after 11 years of follow-up. The risk of developing T2DM when comparing IH with normoglycaemia at baseline varied by IH definition. Taking into consideration the uncertainty of the available evidence, as well as the fluctuating stages of normoglycaemia, IH and T2DM, which may transition from one stage to another in both directions even after years of follow-up, practitioners should be careful about the potential implications of any active intervention for people 'diagnosed' with IH.\n\nGiven the provided abstract, please respond to the following query: \"Background\". Use language that can be easily comprehended by individuals without domain knowledge:\n"
},
{
"role": "assistant",
"content": "Type 2 diabetes is often diagnosed by blood sugar measurements like fasting blood glucose or glucose measurements after an oral glucose tolerance test (drinking 75 g of glucose on an empty stomach) or by measuring glycosylated haemoglobin A1c (HbA1c), a long-term marker of blood glucose levels. Type 2 diabetes can have bad effects on health in the long term (diabetic complications), like severe eye or kidney disease or diabetic feet, eventually resulting in foot ulcers.\nRaised blood glucose levels (hyperglycaemia), which are above normal ranges but below the limit of diagnosing type 2 diabetes, indicate prediabetes, or intermediate hyperglycaemia. The way prediabetes is defined has important effects on public health because some physicians treat people with prediabetes with medications that can be harmful. For example, reducing the threshold for defining impaired fasting glucose (after an overnight fast) from 6.1 mmol/L or 110 mg/dL to 5.6 mmol/L or 100 mg/dL, as done by the American Diabetes Association (ADA), dramatically increased the number of people diagnosed with prediabetes worldwide."
}
] |
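The abstract above describes recovering standard errors from reported 95% CIs and pooling hazard ratios with a generic inverse-variance random-effects model. The sketch below walks through that recipe using the DerSimonian-Laird estimator, a common default for this kind of pooling, although the review does not name the specific between-study variance estimator; the three input HRs and CIs are invented for illustration, not data from the review.

```python
# Minimal sketch of generic inverse-variance, random-effects (DerSimonian-Laird)
# pooling of hazard ratios. The three (HR, lower CI, upper CI) triples are hypothetical.
from math import exp, log, sqrt

studies = [(4.1, 2.5, 6.7), (5.0, 2.8, 8.9), (3.2, 1.6, 6.4)]

# Step 1: log-transform the HRs and recover each standard error from its 95% CI.
y = [log(hr) for hr, lo, hi in studies]
se = [(log(hi) - log(lo)) / (2 * 1.96) for hr, lo, hi in studies]

# Step 2: fixed-effect weights, then the DerSimonian-Laird between-study variance tau^2.
w = [1 / s**2 for s in se]
y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fe)**2 for wi, yi in zip(w, y))
tau2 = max(0.0, (q - (len(studies) - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))

# Step 3: random-effects weights, pooled log-HR and its 95% CI, back-transformed.
w_re = [1 / (s**2 + tau2) for s in se]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = sqrt(1 / sum(w_re))
print(f"pooled HR {exp(y_re):.2f} "
      f"(95% CI {exp(y_re - 1.96 * se_re):.2f} to {exp(y_re + 1.96 * se_re):.2f})")
```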